In an electrical circuit, why does the charge move after it passes through the last resistor, when its voltage is zero?

simulate this circuit – Schematic created using CircuitLab

From what I know, current is the flow of charges, and charges move because of the potential difference, i.e. voltage. The thing is, charges in a circuit are affected by voltage drops, and the total voltage drop is equal to the initial voltage. So, once the charge passes through the last resistor, its voltage is 0! There is no potential difference anymore, so why is it moving towards the positive terminal? I have two guesses: one is that those charges are still negatively charged, so there is still some potential difference, and voltage from the battery is just some sort of additional energy (is this correct?). My other guess is that they are being pushed away by incoming charges. You wanted me to sketch it so here it is, but I think it works for most circuits...

voltage current circuit-analysis electricity – M. Wother

It's really hard to understand the question as it is worded using some quasi-technical language. But apparently yes, the battery is an energy source. – Eugene Sh. Nov 18 '16 at 22:14

Please really use the circuit editor and draw the circuit. The question editor has a schematic editor button for a purpose! (@EugeneSh.: M. Wother asked this in a by-sentence in another question, so I asked her/him to ask this separately, but I also already explained that a good schematic is absolutely necessary when talking about something.) – Marcus Müller Nov 18 '16 at 22:20

Show your circuit and assumptions. The assumptions may be wrong. – Sunnyskyguy EE75 Nov 18 '16 at 22:37

In the presence of no external voltage, charges will try and move away from each other - look up the gold-leaf electroscope. This isn't a real question so I'm voting to close. – Andy aka Nov 18 '16 at 22:38

I don't understand everyone's issue with this question. The circuit can be a battery and a resistor. Between the resistor and the negative battery terminal in normal circuit analysis, it's assumed that the voltage is 0. He/she is asking how charge can move across the wire between the resistor and the battery if there's 0 volts to push the charge. Seems straightforward enough. The issue is the idealization of the wire having 0 ohms. If they represent it as 0.001 ohms, as it is closer to, then all of the confusion should go away. – horta Nov 18 '16 at 22:44

Look at this problem from a different perspective, because what you have here is similar to Zeno's arrow paradox. You have effectively stopped time and examined a static position without considering the dynamic situation. You don't have a stationary single electron; you always have billions of electrons in motion (either as a drift current or as random Fermi motion - see https://en.wikipedia.org/wiki/Drift_velocity). Even your single electron is always in motion. Let's assume our electron has the same potential as the 0V terminal of the battery and has no inclination to move (totally ignoring the fact that the electron would have momentum). If there is a drift current (the actual movement of electrons) then very soon a second electron will find itself in the same position near the first, then a third, then a fourth, etc.
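(For a rough sense of scale of that drift motion, under assumed values not stated in the answers here - a 1 A current in a 2 mm-diameter copper wire, cross-section \$A \approx 3.1\times10^{-6}\ \mathrm{m^2}\$, free-electron density \$n \approx 8.5\times10^{28}\ \mathrm{m^{-3}}\$ and elementary charge \$q \approx 1.6\times10^{-19}\ \mathrm{C}\$ - the drift speed works out to \$v_d = I/(nAq) \approx 1/(8.5\times10^{28}\times 3.1\times10^{-6}\times 1.6\times10^{-19}) \approx 2.3\times10^{-5}\ \mathrm{m/s}\$, only a few centimetres per hour.)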
This accumulation of electrons (increased electron density) constitutes a more negative potential than the 0V, creating an electric field that sweeps the electrons towards the 0V electrode.

Let's not ignore momentum - the electron has been subjected to an electric field (from +V to 0V when you complete the circuit) - it has been drifting in the direction of the 0V terminal at a speed of about 2.3 x 10^-5 m/s. It will continue to do so because there is nothing to stop it. Electron drift is not the same thing as electric current - see https://en.wikipedia.org/wiki/Speed_of_electricity.

Does this make sense in the real world? Kirchhoff's current law states that charge cannot accumulate at a node - current going in must equal current going out. In other words, what goes in must come out.

One final point - the charge on an electron is a universal constant (-1.60217662 x 10^-19 coulombs); it always has the same value. Charge cannot be created or destroyed; it is always conserved. – Jim Dearden

A potential difference sets up an electric force which causes conduction electrons to move. A battery is a piece of complicated chemistry, with anions and cations, and I don't want to delve into the details there. Plenty of better sources for that than me. But the fundamental problem with your imagination is illustrated where you write, "its voltage is 0!" Up until that point, you were talking about potential differences (appropriate). Suddenly, right there you skip tracks and conflate (confuse) the idea of a voltage at a single point with the idea of potential differences. These aren't even close to the same thing. A voltage at exactly one node is entirely arbitrary. I could look at your circuit and argue correctly that the node you identified as 0 V is really at 1,000,000 V. The number is completely arbitrary. You can make it up. So can I. And we are both right, so far as the idea goes. Such values depend only on your choice of reference. And you and I are allowed to make different choices there. (Of course, once you've chosen a reference point and assigned it an arbitrary value, you can't do that again. You only get to choose a reference point and assign it a value just once.) The only question here is whether or not there is a non-zero electric field intensity (which is a vector), \$\vec{\varepsilon}=-\left(\frac{\partial V}{\partial x}, \frac{\partial V}{\partial y}, \frac{\partial V}{\partial z}\right)\$, which provides a motive force. The battery itself has a field intensity direction inside it, as well. So the point you called "0" has a different potential to one side and to the other side, so there is still a field intensity direction at that point, as well. Everywhere in the circuit, in fact, there is an electric field gradient. So the electrons continue to be propelled by the force acting on them throughout the circuit. – jonk

So what would you answer if instead of "its voltage is zero" I wrote "the potential difference between the charge and the terminal is 0"? Are the other answers (which say that it is actually not 0) correct? – M. Wother Nov 19 '16 at 0:10

@M.Wother I would say that your statement is a matter of your choice of reference, which is entirely irrelevant to the question. The electric field gradient is still there, despite your choice of reference for assigning absolute values to points. Nature doesn't care about what you choose as a reference for making such statements.
Your choice of perspective isn't "reality." – jonk Nov 19 '16 at 0:12

So your point makes perfect sense, in an ideal world. In a schematic, a single node has the same voltage everywhere. In the real world, components aren't ideal, and that extends even to the wires that make up the nodes in the physical circuit. They have a resistance, but it is usually negligibly small. So there is, in fact, a voltage that is still causing charges to move in the wires after the charges have passed through all of the components. In most cases you won't find a significant difference in the voltage at the two ends of a wire unless there is a lot of current moving through that wire, or the wire is sufficiently long. For instance, 16 gauge copper wire has a resistance of approximately 4 \$\Omega\$ per 1000 feet (source: google 'resistance of copper wire'). With that, it would take about 250 amps to notice a 1 volt drop across a single foot of wire! Let me know if anything is unclear, or if I didn't actually answer your question. It felt a little like I was rambling in there. – ambitiose_sed_ineptum

I don't feel like it answered my question... see the schematic I drew in the question. So once a charge passes through the resistor, it has 0 volts. So why does it keep moving? I don't see how the fact that the wire has resistance helps here... doesn't the resistance resist the movement of charge even more? – M. Wother Nov 18 '16 at 23:02

The point is that it doesn't have 0 volts; the voltage of that charge is (with most instruments) immeasurably small, but that charge does have a potential difference when compared to the negative of the source, so the charge still needs to move. Does that make more sense? – ambitiose_sed_ineptum Nov 18 '16 at 23:05

@M, after it passes through the resistor, it doesn't have 0 V. It has maybe 0.001 V, and that is enough to move it through the wire, which has, as an example, 1 milliohm of resistance. – The Photon Nov 18 '16 at 23:05

@M, or look at it another way: if the wire is perfect (0 ohms resistance), it doesn't take any voltage to force a current to move through it. – The Photon Nov 18 '16 at 23:06

@M.Wother, or, let's try yet another approach: when the charges move into what is called the 0V node, there are actually some very small voltage fluctuations that take place as the charges try to move away from each other. The charges very quickly spread out "evenly" along the negative wire and the negative node of the battery, which results in a 0V difference and no further current flow. – bitsmack Nov 18 '16 at 23:29
CommonCrawl
Applying fuzzy logic to assess the biogeographical risk of dengue in South America

David Romero (ORCID: 0000-0003-4540-6349)1, Jesús Olivero2, Raimundo Real2 & José Carlos Guerrero1

Over the last decade, reports of dengue cases have increased worldwide, which is particularly worrisome in South America due to the historic record of dengue outbreaks from the seventeenth century until the first half of the twentieth century. Dengue is a viral disease that involves insect vectors, namely Aedes aegypti and Ae. albopictus, which implies that, to prevent and combat outbreaks, it is necessary to understand the set of ecological and biogeographical factors affecting both the vector species and the virus. We contribute a methodology based on fuzzy logic that helps disentangle the main factors that determine favorable environmental conditions for vectors and diseases. Using favorability functions as a fuzzy-logic modelling technique, and the fuzzy intersection, union and inclusion as fuzzy operators, we were able to specify the territories at biogeographical risk of dengue outbreaks in South America. Our results indicate that the distribution of Ae. aegypti mostly encompasses the biogeographical framework of dengue in South America, which suggests that this species is the principal vector responsible for the geographical extent of dengue cases in the continent. Nevertheless, the intersection between the favorability for dengue cases and the union of the favorability for any of the vector species provided a comprehensive map of the biogeographical risk for dengue. Fuzzy logic is an appropriate conceptual and operational tool to tackle the nuances of the vector-illness biogeographical interaction. The application of fuzzy logic may be useful in decision-making by the public health authorities to prevent, control and mitigate vector-borne diseases.

Dengue is one of the diseases with the greatest global epidemiological relevance in recent decades [1,2,3,4,5,6,7,8,9]. Over this century, dengue has become a growing public health problem and about half of the world's population is currently at risk of dengue infection [7, 8, 10,11,12]. This is especially a concern in South America, where historical records of dengue epidemic outbreaks report upsurges every three to five years from the seventeenth century until the first half of the twentieth century [13, 14]. Aedes mosquitoes, namely the yellow fever mosquito (Aedes aegypti) and the Asian tiger mosquito (Ae. albopictus), are the most important dengue vectors in the world [15,16,17]. The number of studies on the mosquitoes of the genus Aedes as transmission vectors of human infectious diseases has recently increased remarkably [7, 10, 18,19,20,21]. Different authors have studied relevant aspects of mosquito-dengue relationships from phylogenetic [22], ecological [17,18,19], physicochemical [20, 23], genetic [21] and biogeographical perspectives [7, 10, 24, 25]. A biogeographical approach to the study of zoonotic diseases, known as pathogeography, has contributed relevant advances in the knowledge of infectious disease macroecology and distribution [26,27,28,29]. It has also provided a proper analytical framework for the study of vector-illness interaction useful for management or surveillance.
Species distribution models (hereinafter SDM) have been particularly used to investigate the environmental drivers for the distribution of Aedes species [25], to map the global distribution of the Aedes species according to the effect of temperature [10, 25], precipitation, and some land cover variables [7, 10], or to forecast the possible effects of climate change scenarios for Aedes species distributions [10]. Other studies also took into account economic information [4], or focused on predicting and determining the global burden of dengue [4, 24]. However, the biogeographical framework of vector-illness interaction that could reveal the large-scale risk of dengue occurrence remains poorly understood. The current range occupied by Aedes mosquitoes (Ae. aegypti and Ae. albopictus) in South America is wider than the known dengue cases. For some reason not yet fully elucidated, there are territories occupied by Aedes vectors with and without dengue cases. This suggests that the relationship between the occurrence of Aedes mosquito populations and cases of dengue is not clear-cut, and that a fuzzy-logic approach is worth considering. In contrast to crisp logic, Zadeh [30] proposed the fuzzy set theory, which avoids the use of discrete true-or-false syllogisms, thus conferring a conceptual malleability suitable for real-life situations. Salski and Kandzia [31] emphasized the continuous character of nature, which implies that living beings are distributed in time and space essentially in a gradual and fuzzy manner. A fuzzy logic approach is consequently useful for processing and modelling environmental data [32]. Thus, the application of fuzzy logic could be helpful to recognize the biogeographical vector-illness interaction and the dynamism in the risk of dengue occurrence, and to establish the biogeographical framework in which the disease occurs. Fuzzy logic led to the notion of environmental favorability, a concept related to, but different from, probability of occurrence [33]. Favorability functions can be used in SDM, and are particularly helpful when models of several species are involved in the study, as they allow the comparison between models for species or cases differing in prevalence, using fuzzy logic tools [28, 34,35,36,37,38]. In this study, our aims were to establish the biogeographical context in which dengue cases occur in South America and to map the areas favourable for new cases to occur. We assessed vector-illness biogeographical relationship using fuzzy logic to determine the different environmental drivers that favor the occurrence of both Aedes vectors and of dengue cases. We aimed to identify the territories more at biogeographical risk of dengue outbreaks, which may be helpful in order to apply measures for the management and control of this recurrent disease. Study area and species range We analyzed Ae. aegypti, Ae. albopictus and dengue virus occurrences on a 0.5° × 0.5° grid (6430 cells of approximately 50 km × 50 km at the equator) to identify the biogeographical relationship between Aedes vectors and dengue cases. We used grids instead of geographical locations, thus solving a large part of the spatial autocorrelation problems derived from sampling bias or observation spatial clustering. Dengue virus occurrences were obtained from global occurrence records published from 1960 to 2012 in Messina et al. [5], with 731 grid cells with confirmed cases, which cover 11.37% of South America (Fig. 1). Aedes aegypti and Ae. 
albopictus occurrences were obtained from the global compendium of Aedes aegypti and Ae. albopictus occurrence [8] and from the Faculty of Science of the University of the Republic, Uruguay (inbuy.fcien.edu.uy, accessed in May 2017), with data spanning from 1960 to 2013, and from 1986 to 2014, respectively. Aedes aegypti was confirmed to occur in 1688 cells whereas Ae. albopictus presence was confirmed in 957 cells, covering 26.25% and 14.88% of South America, respectively (Fig. 1).

Fig. 1 Study area and distribution data: a the grid of 0.5° latitude × 0.5° longitude squares into which the study area was divided to represent the occurrence data; b occurrence data of vectors and dengue infection cases. The grid layer was created with the tool "Create grid" of the software QGIS (www.qgis.org). The country layer was obtained from https://www.naturalearthdata.com and licensed CC BY. The maps were developed using QGIS in the composer tool. The final composition was created using CorelDRAW X8

Environment predictors and distribution modelling

We modelled the distribution of the two vector species and of dengue virus occurrence (the target variables) on the basis of a set of explanatory variables that could potentially affect them at the spatial resolution here applied [39, 40] (Table 1). The explanatory variables were related to different environmental factors that could determine the area occupied by both Aedes species and the extent of dengue virus occurrence in South America: spatial configuration, topography, climate (rainfall and temperature), hydrology, land use and other human activities (Table 1).

Table 1 Explanatory variables used in Ae. aegypti, Ae. albopictus and dengue virus models in South America. Climate variables which do not have a pairwise correlation value above 0.80 according to Spearman's test are shown in bold italic

To define the spatial structure of each distribution, we considered a polynomial trend-surface analysis [41] that included quadratic and cubic effects of latitude and longitude and interactions between them. Spatial structure is known to be functional in biogeography, as purely spatial trends derive from biological processes such as history, spatial ecology and population dynamics [42]. For this, we performed a logistic regression of Ae. aegypti, Ae. albopictus and dengue virus distribution data on latitude (Lat), longitude (Lon), Lat², Lon², Lat³, Lon³, Lat × Lon, Lat² × Lon and Lat × Lon². Specifically, we performed a backward stepwise logistic regression with each event (both Aedes vectors and dengue cases) and those nine spatial terms as predictor variables in order to remove the non-significant spatial terms from the models [41], as sketched below. In this way, in the modelling procedure we included the resulting linear combination (ysp) as the spatial variable without non-significant spatial terms. We used this spatial linear combination (ysp) and the rest of the variables listed in Table 1 (environmental factors) to produce distribution models according to all the explanatory factors together.
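As an illustration only, a minimal base-R sketch of this trend-surface step (the data frame d and the columns presence, Lat and Lon are hypothetical names; the paper removed terms by statistical significance, whereas the stock step() call shown here selects terms by AIC instead):

full <- glm(presence ~ Lat + Lon + I(Lat^2) + I(Lon^2) + I(Lat^3) + I(Lon^3) +
              I(Lat * Lon) + I(Lat^2 * Lon) + I(Lat * Lon^2),
            family = binomial, data = d)          # logistic regression on the nine spatial terms
spatial <- step(full, direction = "backward", trace = 0)  # drop uninformative spatial terms
d$ysp <- predict(spatial, type = "link")          # linear combination kept as the spatial variable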
To do this, we first analysed the effect of each explanatory variable on each target variable on a bivariate basis, by performing a logistic regression of each target variable on each explanatory variable separately. As Miller et al. [43] indicated, separating the variation of the response variables into environmental and spatial components (the latter represented by a trend surface of geographical coordinates) is a way to quantify the spatial dependence in distribution models ([44, 45], among others). We controlled the false discovery rate (FDR) with the aim of avoiding the increase in type I errors arising from the number of variables used in the analyses [46]. An explanatory variable was selected only when it was significantly related to the target variable (P < 0.05) under a FDR of q < 0.05, with q being the false discovery rate. Then, we calculated Spearman correlation coefficients to control multicollinearity between the selected explanatory variables. Out of any group of explanatory variables whose pairwise correlation value was higher than 0.80, we retained the variable most significantly related with the distribution of the target variable. In this way we obtained a filtered set of potentially explanatory variables for each target variable. Finally, we performed a forward-backward stepwise logistic regression of the target variable on the polynomial combination of the spatial structure (ysp) and the filtered set of environmental variables, which produced increasingly more complex multivariate models while avoiding the inclusion of redundant variables. We used Akaike's information criterion (AIC) to select the multivariate model that best balanced information and parsimony [47]. All analyses mentioned so far were performed with the fuzzySim R package [38]. Then, we evaluated the relative weight of each variable included in the models through the Wald parameter [48] using the survey package [49, 50]. Variables whose coefficients were not significant (Chi-square test, P < 0.05) were removed from the model until we obtained a model with all the coefficients significantly different from zero, according to Crawley's [51] procedure. Then we used the favorability function according to Real et al. [33] and Acevedo and Real [52]:

$$F = \frac{P/(1-P)}{(n_1/n_0) + P/(1-P)}$$

where F is the environmental favorability (ranging between 0 and 1), P is the probability of occurrence obtained from the multivariate logistic regression performed for each target variable, n1 is the number of presences and n0 is the number of absences, in each case. This analysis was carried out with the fuzzySim R package [38]. Favorability values factor out the weight of the initial species presence/absence ratio, inherent to any probability function [33, 52], and thus depend exclusively on the effect of the environmental conditions of the territory on the distribution under analysis.
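For readers who wish to reproduce the transformation, a minimal base-R sketch of the favorability function above and of the fuzzy operators used in the following subsection (vector names such as p_dengue, f_dengue, f_aegypti and f_albopictus are hypothetical; the original analyses were run with the fuzzySim package):

favorability <- function(p, n1, n0) {
  # Real et al. (2006): F = [P/(1-P)] / [(n1/n0) + P/(1-P)]
  odds <- p / (1 - p)
  odds / (n1 / n0 + odds)
}
# e.g. f_dengue <- favorability(p_dengue, n1 = sum(obs), n0 = sum(1 - obs))

# Fuzzy operators on favorability vectors defined over the same grid cells:
f_int <- pmin(f_aegypti, f_dengue)                        # fuzzy intersection (minimum)
f_uni <- pmax(f_aegypti, f_albopictus)                    # fuzzy union (maximum)
incl  <- sum(pmin(f_dengue, f_aegypti)) / sum(f_dengue)   # inclusion of F-dengue in F-Ae. aegypti
risk  <- pmin(pmax(f_aegypti, f_albopictus), f_dengue)    # (F-Ae. aegypti U F-Ae. albopictus) ∩ F-dengue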
In addition, local favorability reflects the degree of membership of the locality in the fuzzy set of areas favorable for the occurrence of the event, so allowing the comparison between models through fuzzy logic tools [36, 52]. In this way, we obtained favorability models for the occurrence of the two Aedes vectors (Ae. aegypti and Ae. albopictus) and of dengue in South America, F-Ae. aegypti, F-Ae. albopictus and F-dengue, respectively. We evaluated the discrimination and classification capacity of the models with the modEvA R package [53]. The discrimination ability of the models was evaluated using the area under the curve (AUC) [54], and the classification capacity was estimated through the model sensitivity, specificity, kappa and correct classification rate (CCR), using the value of F = 0.5 as the classification threshold. We checked spatial autocorrelation using Moran's I spatial autocorrelation statistic on the residuals of the models [55]. According to the thresholds proposed by Muñoz and Real [56], we calculated the number of grid cells in each South American country classified as highly favorable (F ≥ 0.8), for which the favorability odds are more than 4:1 in favor, hereinafter at high risk, and of intermediate favorability (0.2 < F < 0.8), for which the odds are under 4:1 and above 1:4 in favor, hereinafter vulnerable, for Aedes vectors, for dengue cases, and for vector-dengue cases simultaneously (see below).

Biogeographical vector-dengue relationships and dengue risk maps

We used the fuzzy modelling approach to assess the vector-dengue biogeographical interaction in South America. The logic underlying fuzzy sets was applied to the favorability function to indicate to what degree each grid cell belongs to the set of favorable areas for the presence of each species [52]. Then we used fuzzy logic tools to analyze the fuzzy vector-dengue biogeographical interactions and to detect the territories at high risk or vulnerable to new dengue cases. Based on the values of the F-Ae. aegypti, F-Ae. albopictus and F-dengue models, we calculated the fuzzy intersection (minimum favorability value for two events at a given location) [30] to identify the fuzzy set of areas simultaneously favorable for dengue outbreaks and for either of the two species separately (i.e. F-Ae. aegypti ∩ F-dengue and F-Ae. albopictus ∩ F-dengue). Then, we analyzed how the favorability for each vector presence and for the occurrence of dengue cases changed along the gradient of favorability intersection (i.e. of shared favorability for both vector and disease). To this aim, we established 10 bins of equal-range F-Ae. aegypti ∩ F-dengue values and F-Ae. albopictus ∩ F-dengue values and calculated in each bin the average favorability values for the corresponding vector species and for dengue virus. If the vector species is a limiting factor in the distribution of the disease, then the favorability for dengue should be equal to or lower than that for the mosquito along the shared favorability range. We also calculated to what extent the favorable areas for dengue (F-dengue) are contained in those for the F-Ae. aegypti and F-Ae. albopictus models, by applying the fuzzy inclusion equation [57]:

$$I(A,B) = \frac{|A \cap B|}{|A|}$$

which indicates how much the set A is included in the set B. In this way, we calculated the inclusion of one into the other for the models F-Ae. aegypti, F-Ae. albopictus and F-dengue, and also for vector-dengue intersections (i.e. F-Ae.
aegypti ∩ F-dengue and F-Ae. albopictus ∩ F-dengue). Those fuzzy inclusion operations are defined in terms of the cardinal of each fuzzy set (i.e. the sum of the favorabilities values of all the grids). Thus, for example, the cardinal of F-Ae. aegypti ∩ F-dengue divided by the cardinal of F-dengue indicates the degree of inclusion of the distribution of dengue into that of Aedes aegypti. To obtain the comprehensive biogeographical risk map for dengue in South America in the current context of vector-dengue biogeographical relationship, we identified the fuzzy set of areas simultaneously favorable for dengue outbreaks and for any of the two vector species. To do this we first calculated the fuzzy union of the favorability for any vector species, F-Ae. aegypti ∪ F-Ae. albopictus (or maximum favorability value for any of them), which can identify the fuzzy set of areas favorable to either vector species [30]. Then, we calculated the fuzzy intersection between F-Ae. aegypti ∪ F-Ae. albopictus and F-dengue [(F-Ae. aegypti ∪ F-Ae. albopictus) ∩ F-dengue]. Favorable conditions for vectors and dengue cases The variables that were significantly associated with the occurrence of each vector species and with dengue cases are shown in Table 2 (Additional file 1: Table S1). All the factors explained to some extent the occurrence of both vectors and dengue, with the exception of the hydrology for Ae. albopictus and hydrology and land use for dengue. Table 2 Predictor variables included in Ae. aegypti, Ae. albopictus and dengue cases favorability models. Signs in brackets show the positive or negative relationship between favorability and the variables in the models. The Wald parameter indicates the relative weight of every variable in each model. Variable abbreviations are given in Table 1 The distribution of dengue was favored in territories of a certain elevation (435.06 m.a.s.l. on average), of predominant orientation towards the south, of high mean annual temperatures (23.13 °C on average), with low precipitation in the colder months (185.86 mm on average), few differences between maximum and minimum precipitations, high population density (221 inhabitants/km2 on average) and moderate distance to urban centers (7750 m on average). The distribution of both Aedes vectors was favored by similar variables with similar effect (positive or negative), except for the land use factor. A high proportion of crops was favorable for Ae. aegypti while it was unfavorable for Ae. albopictus. According to the Wald test, in both Ae. aegypti and dengue models, the three most explanatory variables were the spatial structure, closeness to urban centers and mean annual temperature (Table 2). The spatial structure, North-South orientation and proximity to urban centers were the three most explanatory variables for Ae. albopictus. In Fig. 2 we show the cartographic favorability models for the vector species and for dengue (F-Ae. aegypti, F-Ae. albopictus and F-dengue) separately, with values grouped in three favorability classes: F values lower than 0.2 indicate low favorability, values between 0.2 and 0.8 indicate vulnerable areas, and values higher than 0.8 indicate areas at high risk [58]. The F-Ae. aegypti model depicted a large principal nucleus of high risk in Brazil, and some dispersed high-risk cells in Venezuela, Colombia, Peru, Paraguay, Argentina and Uruguay. The F-Ae. albopictus map revealed a main high-risk nucleus in Brazil, and some dispersed high-risk cells in Colombia and Peru. 
The F-dengue model detected two main nuclei of high risk for the occurrence of dengue cases, one in Brazil and another, more dispersed, in Venezuela, Colombia and Ecuador. Some dispersed high-risk cells are also found in Peru, Guyana, Surinam, Paraguay and Uruguay. The F-dengue model detected at least one vulnerable grid cell (0.2 < F < 0.8) in every South American country. Although only 11.37% (731 squares) of the total analyzed squares (n = 6430) have recorded dengue cases, 60.14% (3867) of the squares showed at least vulnerable conditions (F > 0.2) according the F-dengue model, while 8.94% of the cells (575) were at high risk (F ≥ 0.8). Favorability models of: a Aedes aegypti, b Ae. albopictus and c dengue cases. Favorable areas are shown in black (favorability values or F ≥ 0.8), grey (0.2 < F < 0.8) and white (F ≤ 0.2). The arrows show inclusion values between the different models, one into the other. The maps were developed using QGIS (www.qgis.org) in the composer tool. The final composition was created using CorelDRAW X8 Both Aedes and dengue favorability models attained general acceptable scores according to the parameters considered to assess discrimination and classification capacities (Table 3). Discrimination (AUC) was always higher than 0.86, which is "excellent" according to Hosmer and Lemeshow [59]. Sensitivity values were always higher than 0.79, specificity was always higher than 0.74 and CCR was higher than 0.75. Kappa was higher than 0.6 for both Aedes vectors, which is "substantial" according to Landis and Kock [60]; it was 0.31, which is "fair", for dengue cases. On the other hand, according to the analysis of residuals, we detected a minor autocorrelation (Moran's I < 0.019), or approximately zero, below 1600 km and only in the Ae. albopictus model. These results indicate that there is no relevant spatial autocorrelation resulting from sampling bias with the grid system employed [55]. None of the Moran's I-values were significant in the Ae. aegypti or the dengue models. The residuals did not show problems of spatial autocorrelation in our models and therefore we did not find relevant effects of spatial autocorrelation that invalidate our results. Table 3 Comparative assessment of models for Aedes aegypti, Ae. albopictus and dengue cases, as well as the fuzzy intersection between the vector species and dengue cases, according to their discrimination and classification capacity Vector-dengue biogeographical interactions Compared to the F-dengue model, both vector-disease intersections improved classification capacity according to kappa, CCR and specificity, whereas sensitivity and discrimination capacity decreased (Table 3). In Fig. 3 we show the fuzzy intersection between the favorability for dengue and vector species for both Ae. aegypti and Ae. albopictus in South America. Aedes aegypti and dengue favorability values increased together until a fuzzy intersection of 0.5 was reached; then, both continued to increase with higher favorability values for the mosquito (Fig. 3a). The intersection between Ae. albopictus and dengue favorability values indicated that dengue cases had higher favorability values than the mosquito up to fuzzy intersection = 0.3; after that point, the vector showed higher favorability values than the disease. Plots and maps show the fuzzy intersection (simultaneous favorability) between the favorability for: a Ae. aegypti and dengue infection cases; and b favorability for Ae. albopictus and dengue infection cases. 
Fuzzy intersection values are shown on the horizontal axes (ranging from 0.1 to 1), grouped in 10 bins of values of equal favorability range. The average favorability values for both mosquito vectors in each bin are represented by solid lines and filled squares, and for dengue infection cases by dashed lines and blank circles, (on the left vertical axes ranging from 0 to 1). Columns represent the percentage of grid cells at each fuzzy intersection bin (on the right vertical axes). On maps, the arrows show inclusion values between both fuzzy intersection models (ranging from 0 to 1). The graphics were made using LibreOffice (https://es.libreoffice.org). The maps were developed using QGIS (www.qgis.org) in the composer tool. The final composition was created using CorelDRAW X8 According to the intersection between F-Ae. aegypti and F-dengue (Table 4), nine of the 14 South American countries have more than 50% of the country surface area at least vulnerable (F > 0.2) to dengue-cases occurrence transmitted by Ae. aegypti. In contrast, only Brazil has more than 50% of the country at least vulnerable (F > 0.2) to dengue-cases occurrence transmitted by Ae. albopictus. Seven South American countries have some locations at high risk (F ≥ 0.8) based on the intersection between F-Ae. aegypti and F-dengue. Two countries, Brazil and Peru, have locations at high risk of dengue occurrence due to Ae. albopictus, based on the intersection between the F-Ae. albopictus and F-dengue models (Table 4). Table 4 Percentages of the country surface with intermediate and high risk (F > 0.2, and F ≥ 0.8, respectively) of both vectors (Aedes aegypti and Ae. albopictus), of dengue cases, and of vector-dengue favorability intersection (with respect to the total number of grid cells per country in the leftmost column). Countries were ordered from highest to lowest percentage of the country surface of dengue cases detected in Messina et al. [5] Fuzzy-inclusion relationships between models In Fig. 2 we show the values for F-Ae. aegypti, F-Ae. albopictus and F-dengue inclusion into one another and in Fig. 3 the values of the inclusion of the two vector-dengue intersections one into the other. The main results were that F-dengue was included in a higher proportion into F-Ae. aegypti (0.75) than into F-Ae. albopictus (0.55), and that the intersection F-Ae. albopictus ∩ F-dengue was more included into the F-Ae. aegypti ∩ F-dengue (0.99) model than vice versa (0.69). Dengue risk map in the current biogeographical context of vector-dengue interaction In Fig. 4, we show the comprehensive map of areas at high risk and vulnerable to dengue cases due to the two vector species combined, resulting from F-dengue ∩ (F-Ae. aegypti ∪ F-Ae. albopictus). Dengue risk map in South America in the current biogeographical context of vector-dengue interaction. The map shows the intersection favorability values between the union of the favorability for the two mosquito species with the favorability for dengue (F-Ae. aegypti ∪ F-Ae. albopictus) ∩ F-dengue. The maps were developed using QGIS (www.qgis.org) in the composer tool. The final composition was created using CorelDRAW X8 Environmental drivers of Aedes vectors and dengue cases in South America Many studies have explained the occurrence of both Aedes species in South America in terms of only climate [10, 25], climate and some land cover variables [7], or climate and economic information [4]. 
In general, they detected that temperature was the main factor limiting the distribution of the two Aedes species. In contrast, our favorability models detected a more complex pattern of drivers for the presence of these vectors (Table 2). Spatial structure and closeness to urban centers were among the most relevant variables for both Aedes species, while mean annual temperature was more important for Ae. aegypti and topography was more relevant for Ae. albopictus. The most important drivers of dengue cases, according to our model, are the same as those favoring Ae. aegypti, including temperature (Table 2), which coincides with the conclusion of Capinha et al. [61]. As Campbell et al. [10] suspected, the requirements for the presence of Ae. aegypti in South America better reflect the risky environmental conditions for dengue occurrence than those for Ae. albopictus. Our results also concur with what Messina et al. [62] and Brady et al. [25] suggested, that the distribution of dengue occurrences is better modelled by incorporating drivers of a different nature, such as climate, topography and human activities.

Distribution of favourable areas

Although our explanatory models were more complex than those previously described, we detected favorable regions for both Aedes species coarsely similar to those described by other authors [7, 10, 25]. Areas highly favorable for Ae. albopictus were mostly located in Brazil and Paraguay (Fig. 2). High-favorability territories for Ae. aegypti were more concentrated in eastern South America, in Brazil, and some high-favorability territories reached further south than those indicated by Kraemer et al. [7], particularly in Uruguay (Fig. 2). The dengue favorability model obtained lower discrimination and classification scores than the vector models (Table 3). This may result from the fact that, as other authors have pointed out [63, 64], the accuracy of distribution models gets worse when a distribution is more poorly known. The published distribution data of this disease show a scattered pattern that points to some possible bias in the quality of dengue-virus infection reports, despite the effort of Messina et al. [14]. Nevertheless, Bhatt et al. [24], by using descriptors based on climate, vegetation and human variables, described a pattern of dengue risk in South America similar to our F-dengue model (Fig. 2). However, they did not define risk areas in southern countries such as Chile and Uruguay, while we obtained areas vulnerable or at high risk in these countries. These areas represent a risk for dengue that was hidden up to now (Fig. 2). In the case of Chile, the vulnerable zones are restricted to a few low-altitude grids that were also favorable for Ae. aegypti. It should be noted that we found areas at high risk and vulnerable in many squares neighboring those with reported cases. This suggests that, although these areas are apparently dengue-free, they are in fact at high risk, and extreme precautions and management, control and prevention plans should be applied there. The greatest risk for the disease in South America may be considered to occur in areas favourable to dengue (F-dengue) with reported presence of the vectors Aedes aegypti and/or Ae. albopictus (Figs. 2, 4): much of Brazil and scattered regions of Venezuela, Colombia, Peru and Paraguay for Ae. aegypti; and much of Brazil for Ae. albopictus. The case of Uruguay is particularly interesting.
In this country, the F-dengue model detected vulnerable locations in areas where no dengue cases had been reported for a century [11]. Uruguay was classified by Brady et al. [65] as having complete or good evidence consensus on dengue absence. However, in the summer of 2016, about 20 cases of indigenous dengue occurred in the city of Montevideo [11], specifically where our F-dengue model indicated a high risk of dengue occurrence (Fig. 4). Taking into account that these cases were not used as presences for model training in this work, this supports the predictive capacity of our model. According to Real et al. [66], the favorability function may be considered to be, for the distribution of species, analogous to what the wave function is for the distribution of quantum particles: a mathematical conceptualization of the forces that are behind the distribution of the species. This being the case, favorability values could be a better representation of the distribution of a species than the dataset of specific observations, which are always incomplete and contingent on the observation effort.

The biogeographical context of dengue risk

In agreement with the proposal of Messina et al. [62], our approach of biogeographical modelling and fuzzy logic applied to the interaction with Aedes vectors has proved to be a useful method for unravelling the biogeographical context of dengue cases. The maximum simultaneous vector-dengue favorability occurs in much of Brazil for both species (Fig. 3). Additionally, some scattered areas of Colombia, Venezuela, Paraguay, Peru and Uruguay are simultaneously favorable for Aedes aegypti and dengue occurrence, all of them areas where disease cases have been recorded. The F-dengue model was included to a greater degree in the F-Ae. aegypti model than in the F-Ae. albopictus model (Fig. 2), which indicates that the favorability for Ae. aegypti explained the dengue cases in South America to a greater extent than that for Ae. albopictus. Coinciding with Campbell et al. [10], these results indicate that the distribution of Ae. aegypti mostly encompasses the biogeographical framework of dengue in South America, which also suggests that this species is the principal vector responsible for the dengue cases in the continent. Some authors already reported that the increase in the cases of dengue in Brazil and Argentina, for example, was directly linked to the expansion of Ae. aegypti [67,68,69]. Brathwaite et al. [13] also found a relationship between an increase in dispersion of Ae. aegypti between 2001 and 2010 in America and a corresponding increase in dengue virus circulation. In our analyses, compared to the model built on dengue cases alone, the model based on the intersection between dengue and Ae. aegypti included 26% more no-case-record locations within areas of low risk (F ≤ 0.2), i.e. had a higher specificity (Table 3). This corroborates that incorporating vector information in the biogeographical analysis of disease drivers provides a more plausible explanation of the pattern of case occurrence [28, 29], which was previously suggested specifically for dengue as well [10, 62]. Although both mosquito species are known to act as vectors of dengue, 99% of the F-Ae. albopictus ∩ F-dengue model was included in the F-Ae. aegypti ∩ F-dengue model (Fig. 3). In addition, while the favorability for Ae. aegypti seems to effectively limit that for dengue (Fig. 3a), this does not happen with the favorability for Ae.
albopictus, particularly when the favorability for the mosquito is 0.3 or lower (Fig. 3b). Consequently, in South America, in order to manage the epidemiological risk of new dengue cases, the intersection between Ae. aegypti and dengue favorability should be used as the most parsimonious map of dengue risk. Nevertheless, a few territories at risk of dengue were attributed exclusively to the F-Ae. albopictus model. Therefore, the most appropriate risk map should include the interaction of all vectors and cases of dengue, which can be readily obtained using fuzzy logic. The fuzzy intersection between the favorability for dengue cases and the fuzzy union of the favorability for any of the vector species provided a comprehensive map of the biogeographical risk for dengue (Fig. 4). This proposal of a risk map for dengue in South America is based on the geographical-environmental, disease-trait and human profiling that constitute the starting point for risk assessments in pathogeography [29]. Our results corroborate that incorporating vector information in the biogeographical analysis of disease drivers provides a more plausible explanation of the pattern of case occurrence, and confirm that fuzzy logic is an appropriate conceptual and operational tool to deal with the nuances of the vector-illness biogeographical interactions. Thus, the application of fuzzy logic may help health authorities to better prevent, control and mitigate vector-borne diseases.

We have included the data table used as Additional file 1: Table S1.

SDM: species distribution models; AIC: Akaike's information criterion; F: environmental favorability; AUC: area under the curve; CCR: correct classification rate

Gubler DJ. Epidemic dengue/dengue hemorrhagic fever as a public health, social and economic problem in the 21st century. Trends Microbiol. 2002;10:100–3. Gubler DJ. The changing epidemiology of yellow fever and dengue, 1900 to 2003: full circle? Comp Immunol Microbiol Infect Dis. 2004;27:319–30. Delatte H, Gimonneau G, Triboire A, Fontenille D. Influence of temperature on immature development, survival, longevity, fecundity, and gonotrophic cycles of Aedes albopictus, vector of chikungunya and dengue in the Indian Ocean. J Med Entomol. 2009;46:33–41. Aström C, Rocklöv J, Hales S, Béguin A, Louis V, Sauerborn R. Potential distribution of dengue fever under scenarios of climate change and economic development. EcoHealth. 2013;9:448–54. Messina JP, Brady OJ, Pigott DM, Brownstein JS, Hoen AG, Hay SI. Global dengue occurrence database: 1960–2012. 2014. https://doi.org/10.6084/m9.figshare.902845. Accessed 1 Mar 2018. Morin CW, Comrie AC, Ernst K. Climate and dengue transmission: evidence and implications. Environ Health Perspect. 2013;121:1264–72. Kraemer MU, Sinka ME, Duda KA, Mylne AQ, Shearer FM, Barker CM, et al. The global distribution of the arbovirus vectors Aedes aegypti and A. albopictus. eLife. 2015;4:e08347. Kraemer MUG, Sinka ME, Duda KA, Mylne A, Shearer FM, Barker CM, et al. The global compendium of Aedes aegypti and Ae. albopictus occurrence. Sci Data. 2015;2:150035. Zeller H, Van Bortel W, Sudre B. Chikungunya: its history in Africa and Asia and its spread to new regions in 2013–2014. J Infect Dis. 2016;214:436–40. Campbell LP, Luther C, Moo-Llanes D, Ramsey JM, Danis-Lozano R, Peterson AT. Climate change influences on global distributions of dengue and chikungunya virus vector. Philos Trans R Soc London B Biol Sci. 2015;370:20140135. Smink V. Cómo llegó el dengue a Uruguay después de 100 años sin el virus. 2016.
https://www.bbc.com/mundo/noticias/2016/02/160225_uruguay_dengue_brote_vs. Accessed 1 June 2016. Gubler DJ, Vasilakis N, Musso D. History and emergence of Zika virus. J Infect Dis. 2017;216:860–7. Brathwaite DO, San Martín JL, Montoya RH, del Diego J, Zambrano B, Dayan GH. Review: the history of dengue outbreaks in the Americas. Am J Trop Med Hyg. 2012;87:584–93. Messina JP, Bradu OJ, Scott TW, Zou C, Pigott DM, Duda KA, et al. Global spread of dengue virus types: mapping the 70 year history. Trends Microbiol. 2014;22:138–46. Powell JR, Tabachnick WJ. Genetics and the origin of a vector population: Aedes aegypti, a case study. Science. 1980;208:1385–7. Braks MAH, Honório N, Lounibos L, Lourenço-de-Oliveira R, Juliano S. Interspecific competition between two invasive species of container mosquitoes, Aedes aegypti and Aedes albopictus (Diptera: Culicidae), in Brazil. Ann Entomol Soc Am. 2004;97:130–9. Ferguson NM, Kien DT, Clapham H, Aguas R, Trung VT, Chau TN, et al. Modeling the impact on virus transmission of Wolbachia-mediated blocking of dengue virus infection of Aedes aegypti. Sci Transl Med. 2015;7:279ra37. Brady OJ, Johansson MA, Guerra CA, Bhatt S, Golding N, Pigott DM, et al. Modelling adult Aedes aegypti and Aedes albopictus survival at different temperatures in laboratory and field settings. Parasites Vectors. 2013;6:351. Gama ZP, Nakagoshi N, Islamiyah M. Distribution patterns and relationship between elevation and the abundance of Aedes aegypti in Mojokerto city 2012. Open J Anim Sci. 2013;3:11–6. Goindin D, Delannay C, Ramdini C, Gustave J, Fouque F. Parity and longevity of Aedes aegypti according to temperatures in controlled conditions and consequences on dengue transmission risks. PLoS ONE. 2015;10:e0135489. Rašić G, Endersby-Harshman N, Tantowijoyo W, Goundar A, White V, Yang Q, et al. Aedes aegypti has spatially structured and seasonally stable populations in Yogyakarta, Indonesia. Parasites Vectors. 2015;8:610. Brown JE, Evans BR, Zheng W, Obas V, Barrera-Martinez L, Egizi A, et al. Human impacts have shaped historical and recent evolution in Aedes aegypti, the dengue and yellow fever mosquito. Evolution. 2014;68:14–25. Chatterjee S, Chakraborty A, Sinha SK. Spatial distribution and physicochemical characterization of the breeding habitats of Aedes aegypti in and around Kolkata, West Bengal, India. Indian J Med Res. 2015;142:79–86. Bhatt S, Gething PW, Brady OJ, Messina JP, Farlow AW, Moyes CL, et al. The global distribution and burden of dengue. Nature. 2013;496:504–7. Brady OJ, Golding N, Pigott DM, Kraemer MU, Messina JP, Reiner RC, et al. Global temperature constraints on Aedes aegypti and Ae. albopictus persistence and competence for dengue virus transmission. Parasites Vectors. 2014;7:338. Cliff AD, Haggett P. The epidemiological significance of islands. Health Place. 1995;1:199–209. Smith KF, Guégan J-F. Changing geographic distributions of human pathogens. Annu Rev Ecol Evol Syst. 2010;41:231–50. Olivero J, Fa JE, Real R, Farfán MA, Márquez AL, Vargas JM, et al. Mammalian biogeography and the Ebola virus in Africa. Mammal Rev. 2017;47:24–37. Murray KA, Olivero J, Roche B, Tiedt S, Guégan JF. Pathogeography: leveraging the biogeography of human infectious diseases for global health management. Ecography. 2018;41:1411–27. Zadeh LA. Fuzzy sets. Inform Control. 1965;8:338–53. Salski A, Kandzia P. Fuzzy sets and fuzzy logic in ecological modelling. EcoSys. 1996;4:85–97. Salski A. Ecological applications of fuzzy logic. In: Recknagel F, editor. 
QALY league table of Iran: a practical method for better resource allocation Reza Hashempour ORCID: orcid.org/0000-0002-7351-92391, Behzad Raei ORCID: orcid.org/0000-0001-8186-08001, Majid Safaei Lari ORCID: orcid.org/0000-0003-3190-29231, Nasrin Abolhasanbeigi Gallezan2 & Ali AkbariSari ORCID: orcid.org/0000-0002-3173-49691 Cost Effectiveness and Resource Allocation volume 19, Article number: 3 (2021) Limited health care resources cannot meet all the demands of society. Thus, decision makers have to choose feasible interventions and reject the others. We aimed to collect and summarize the results of all cost utility analysis studies conducted in Iran and to develop a Quality Adjusted Life Year (QALY) league table. A systematic mapping review was conducted to identify all cost utility analysis studies done in Iran and then map them in a table. PubMed, Embase, the Cochrane Library and Web of Science, as well as Iranian databases such as Iran Medex, SID, Magiran, and the Barakat Knowledge Network System, were all searched for articles published from the inception of the databases to January 2020. Additionally, the cost per QALY or Incremental Cost Utility Ratio (ICUR) was collected from all studies. The Joanna Briggs checklist was used for quality appraisal. In total, 51 cost-utility studies were included in the final analysis, out of which 14 were on cancer and six on coronary heart disease; two studies each addressed hemophilia, multiple sclerosis and rheumatoid arthritis, and the rest covered various other diseases. The Markov model was the commonest model, applied in 45% of the reviewed studies. Discount rates ranged from zero to 7.2%. The cost per QALY ranged from $0.144 for radiography in patients with some orthopedic problems to $4,551,521 for immune tolerance induction (ITI) therapy in hemophilia patients. High heterogeneity was revealed; therefore, it would be biased to rank interventions based on reported cost per QALY or ICUR. However, it is instructive and informative to collect all economic evaluation studies and summarize them in a table. The information in the table can in turn be used to redirect resources towards efficient allocation. In general, it was revealed that preventive programs are cost effective interventions from different perspectives in Iran. Limited healthcare resources cannot meet all the demands of society [1], so decision makers have to choose feasible interventions and reject the others [2]. Thus, health systems should prioritize and use their limited resources efficiently. Economic evaluation is the best tool for aiding priority setting and efficient resource allocation in the health sector [3, 4]. Many countries have adopted health technology assessment (HTA) systems for the evaluation of health interventions, and economic evaluation lies at the heart of any HTA [1, 3]. Cost effectiveness (CE) and cost utility (CU) analyses are the main methods used for economic evaluations in the healthcare sector [4]. However, economic evaluation studies alone cannot fully guide policy-makers towards the wide range of programs that might be a wise investment. To overcome this problem, cost-effectiveness threshold analysis has been developed to identify the level of cost per unit of outcome below which an intervention might be described as cost-effective [4].
In this regard, league tables are a useful instrument for policy-makers to determine threshold values and make the best use of resources, but this necessitates a comprehensible league table approach in which a list of ICURs is interpreted in the context of the specific costs and cost-effectiveness of competing interventions [7]. League tables rank health strategies, programs and interventions in terms of cost-effectiveness [5] for numerous diseases [6]. A league table places the intervention with the lowest ICUR or cost per QALY at the top and then moves down the list to interventions with sequentially higher ratios, until the budget is used up [6,7,8]. League tables are valuable tools for prioritizing health expenditure, especially of national health resources [9, 10], and have been used as a policy tool by high- [9], middle- and low-income countries [5]. They have also been used for public health by the WHO in the World Health Report since 2000 [11, 12]. A few regional league tables are available for some diseases; for example, there are tables for 60 different interventions in Africa [6], and league tables are available in other countries as well [13]. Results from one of the most important studies have provided more than 3600 ICERs for more than 2000 health programs and strategies [6]. In Iran's health system, the systematic use of economic evaluation started only a few years ago but has been expanding gradually. A league table related to public health interventions has not been developed in Iran to date. The main purpose of this paper was to systematically assemble all cost utility studies conducted in Iran, summarize their findings and thereby develop a QALY league table for the country. In doing so, decision makers would be able to identify and choose the most cost-effective interventions. In this study we aimed to gather all cost-utility studies based in Iran and summarize them in a table. PRISMA, the methodological guidance for reporting systematic reviews, was used in this study [14]. The study protocol was registered (R.H) in the international prospective register of systematic reviews database (PROSPERO); the registration number is CRD42019123313. PubMed, Embase, the Cochrane Library and Web of Science, as well as Iranian databases such as Iran Medex, SID, Magiran, and the Barakat Knowledge Network System, were searched (R.H and B.R) for articles published from the inception of the databases to January 2020. This review further searched the grey literature, that is, documents that are often not well represented in indexing databases and usually have not been peer reviewed. The National Institute for Health Research (NIHR), Google, Google Scholar and the ministry of health webpage were reviewed for grey literature (R.H and B.R). We performed iterative reviews of the reference lists attached to all papers selected for inclusion (R.H). The search process had no time or language restrictions. The key words, including cost utility, cost effectiveness, health technology assessment, HTA, economic evaluation, QALY and Iran, were identified from our reading of the literature on economic evaluation and health technology assessment, and we then searched the aforementioned electronic bibliographic databases using these key words.
The complete search strategy in the PubMed database was as follows: cost utility [title/abstract] OR cost effectiveness [title/abstract] OR health technology assessment [title/abstract] OR HTA [title/abstract] OR economic evaluation [title/abstract] OR cost per QALY [title/abstract] AND Iran [title/abstract]. The same search strategy was adapted to the other international databases using Boolean operators such as OR and AND. All cost utility studies conducted in Iran that reported cost per QALY or ICUR and were published by January 2020 were eligible for inclusion in this review. On the contrary, all letters to the editor, conference papers, review articles, cost-effectiveness, cost-minimization and cost-consequence studies, and studies with low methodological standards were excluded. In addition, all cost utility studies not done in Iran were omitted. Selection of articles After removing duplicates, titles and abstracts were screened to eliminate irrelevant papers. All steps were performed by two authors independently (R.H and B.R). Discrepancies between the reviewers were resolved by discussion or consultation with a third author (A.A). Then, the full text of the remaining papers was reviewed by the two reviewers separately (R.H and B.R), and persisting discrepancies were resolved by the third author (A.A). All studies reporting either cost per QALY or ICUR in Iran were included. All low-quality studies (scoring less than 6) did not match our inclusion criteria and were excluded. Detailed information was extracted from each included study using a pre-structured data extraction form by two authors (M.S and N.A) separately. Any discrepancies between them were resolved through discussion; otherwise, they were resolved by the third author (R.H). Data on publication year, type of intervention (drug, screening, technology, surgery, vaccine, follow-up), costing year, sensitivity analysis (one-way, two-way, probabilistic sensitivity analysis and so on), perspective adopted (society, health system, health insurance organization and so forth), discount rate, outcomes (ICUR or cost per QALY) and the studies' recommendations were extracted. To standardize the results of studies conducted in different years, costs were adjusted using the formula below: $$Future\,value = Present\,value \times (1+r)^{n}$$ where n and r are the number of years and the inflation rate, respectively. If an article did not report the costing year, the year of publication was taken as the base for cost adjustment. Moreover, if an article used the Rial for its calculations, we converted Rial to USD to reduce heterogeneity.
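As a concrete illustration of this cost-standardisation step, the short sketch below (a minimal example, not the authors' code; the cost, inflation rate and Rial–USD exchange rate are purely hypothetical) applies the future-value formula and an optional currency conversion:

```python
# Minimal sketch of the cost-standardisation step: a cost reported for an earlier
# year is carried forward to the common base year with
# Future value = Present value * (1 + r)^n, then optionally converted to USD.
def standardise_cost(cost, report_year, base_year, inflation_rate, rial_to_usd=None):
    n = base_year - report_year                      # number of years to adjust over
    adjusted = cost * (1 + inflation_rate) ** n      # future-value adjustment
    if rial_to_usd is not None:                      # optional Rial -> USD conversion
        adjusted *= rial_to_usd
    return adjusted

# Illustrative (hypothetical) numbers only: a cost of 50,000,000 Rial reported in
# 2015, adjusted to 2020 with 20% average annual inflation and an assumed exchange rate.
print(round(standardise_cost(50_000_000, 2015, 2020, 0.20, rial_to_usd=1 / 42_000), 2))
```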
The quality of the eligible studies was determined by two independent investigators (M.S and N.A) according to the Joanna Briggs Institute (JBI) quality assessment checklist [15]. The quality appraisal results were checked by a third reviewer (R.H). The JBI tool consists of the following 11 appraisal items: (1) well-defined question, (2) description of alternatives, (3) relevant costs and outcomes, (4) effectiveness, (5) outcome and cost measured accurately, (6) cost and outcome valued credibly, (7) adjusting cost and outcome for different timing, (8) incremental analysis, (9) sensitivity analysis, (10) including all concerns, and (11) generalizability. Each item was scored as 1 if the study met the criterion, and the scores were summed to a total score ranging from 0 (lowest possible quality) to 11 (highest possible quality). The studies were then categorized into three levels: studies scoring 10 or 11 were considered of excellent quality, studies scoring eight or nine of good quality, and studies scoring six or seven of medium quality. In total, our initial search yielded 2619 papers, of which 808 were duplicates and were removed. The titles and abstracts of the remaining papers were then reviewed; 1678 were excluded and 133 articles were selected. The full texts of these 133 articles were reviewed and 51 cost utility analysis studies were found eligible for inclusion in the final analysis (Fig. 1). Fig. 1 Flowchart of study selection and inclusion. Since there was heterogeneity in the studies' results and methods, for instance in the costing year and the study perspective (viewpoint), their results could not be combined or synthesized; however, the QALY was the most common outcome in all included studies. Out of the 51 studies, 13 were based on different cancer-related interventions (e.g. screening, chemotherapy and other interventions), followed by seven studies on different programs for heart diseases; six studies were carried out on orthopedic interventions, four on multiple sclerosis interventions and two on different strategies for hepatitis. Table 1 shows the studies and diseases. Table 1 The main characteristics of the studies included in the present review The societal perspective (n = 19), health system (n = 10) and payer (n = 6) were the most common perspectives taken. The health insurance organization perspective was considered in four studies. The provider, third-party, ministry of health and government perspectives were each adopted in two studies. Three studies did not state a perspective. Most articles used a Markov model (n = 23), followed by a decision tree (n = 14), and two studies used both Markov and decision tree models. A quarter of the studies did not use any modeling (n = 12). Discounting was used in 29 studies. Discount rates for costs ranged from 0 to 7.2%; rates of 3%, 5%, 7.2% and 6% were used in thirteen, eight, four and two studies, respectively. After performing sensitivity analysis, the results of two studies were reported with discount rates of zero, 3% and 7.2%; a rate of 20% was also reported among studies which used discounting for outcomes. The discount rates for outcomes were 3%, 5%, 6% and 7.2% in seventeen, six, two and four studies, respectively. Two studies used different discount rates of zero, 3% and 7.2% to perform the sensitivity analysis. The majority of studies undertook a sensitivity analysis (n = 47). Some papers used several techniques, but probabilistic sensitivity analysis (n = 16) was the predominant technique, followed by one-way sensitivity analysis (n = 15). Sensitivity analysis was reported in two studies, but its type was not mentioned. Result of ICUR and cost per QALY for interventions Multiple measures have been used for evaluating outcome data, including incremental cost, incremental QALY, ICUR and QALY. Various methods of costing, modeling, discount rates and perspectives were used in the selected studies. Hence, there was high heterogeneity among these variables, making it difficult to rank interventions based on cost per QALY or ICUR. We therefore report the cost per QALY or ICUR for all interventions from their different perspectives and summarize all pivotal information in Additional file 1: Appendix S1 and Additional file 2: Appendix S2.
Five out of twelve studies were performed on breast cancer. Mammography in the first round was cost effective in 53% of cases from the health system perspective in Iranian women aged 40–70 years, based on modeling, but it was not cost effective in the second and third rounds; the cost per QALY ranged from 15.75 to 621 USD [16]. Adjuvant chemotherapy plus trastuzumab (cost per QALY = 4,756 USD) was not a cost-effective option for treating patients with HER2-positive early breast cancer versus adjuvant chemotherapy alone (cost per QALY = 1,115 USD) from the Iranian health system perspective [17], and an intensive follow-up model was not cost-effective versus standard follow-up for breast cancer from the payer perspective, with costs per QALY of 178,792 USD and 381,070 USD respectively [18]. Doxorubicin and cyclophosphamide (AC), with a cost per QALY of 11,554 USD, was considered a cost-effective option for the treatment of women with advanced breast cancer who were younger than 65 years versus gemcitabine and paclitaxel (PG), with a cost per QALY of 16,415 USD, from the society perspective [19]. 5-fluorouracil, doxorubicin and cyclophosphamide (FAC) was a cost effective treatment in women less than 75 years old with node-positive breast cancer versus docetaxel with doxorubicin and cyclophosphamide (TAC) from the third-party perspective [20]; the costs per QALY for FAC and TAC were 355 USD and 5,500 USD respectively. The quadrivalent HPV vaccine is not a cost-effective option for cervical cancer screening in girls at the age of 15 from the government perspective in Iran. Moreover, the cost per QALY for different strategies of cervical screening ranged from $0.5750 (no screening) to $7.866 (pap smear starting at the age of 21 and repeated every three years) from the health provider perspective; it is recommended that women in Iran start pap smears at the age of 35 and repeat them every 5 or 10 years [21]. The most cost-effective options for colorectal and colon cancer are colonoscopy screening every 10 years starting at the age of 40, and fecal immunochemical testing or colonoscopy every 10 years, respectively, in the target population from the health care system perspective; the cost per QALY ranged from 67.3 USD (no screening) to 139.1 USD (colonoscopy) [22]. PET scanning and the IEV regimen (ifosfamide, epirubicin and etoposide) were cost effective alternatives in the treatment of non-small cell lung carcinoma from the health system perspective and in patients with lymphoma from the society perspective, respectively [23]. Screening of smokers aged 55–74 for lung cancer versus no screening is a cost-effective option from the health system perspective [24]. It is suggested that oncologists use the epirubicin, oxaliplatin and capecitabine (EOX) drug regimen rather than docetaxel, cisplatin and fluorouracil (DCF) for the treatment of patients with gastric cancer; EOX is the cost-effective regimen from the society perspective [25]. Aspirin is a cost-effective option in men with a 10-year CVD risk of 15% from the payer perspective [26], and simvastatin 10 mg is a cost-effective intervention for the prevention of myocardial infarction in CVD-healthy men aged 45 with a 10-year CVD risk of 15% from the payer perspective [27]. Coronary artery bypass grafting (CABG) in patients with multi-vessel coronary artery disease [28] and tissue plasminogen activator in patients with ischemic stroke [29] are cost effective interventions from the society and third-party perspectives, respectively.
Moreover, homograft valve replacement in patients who underwent homograft or mechanical heart valve replacement surgery is a cost effective intervention [30]. Enoxaparin for inpatient venous thromboembolism prophylaxis in patients at moderate to high risk is not a cost-effective option in comparison to heparin from the payer perspective in Iran [31]. Orthopedic disease The EOS imaging technique is not cost-effective in routine practice from the ministry of health perspective [32]. Electroacupuncture is a more cost-effective intervention than nonsteroidal anti-inflammatory drugs (NSAIDs) for the treatment of patients with chronic low back pain from the society perspective [33]. For osteoporosis, teriparatide is not a cost-effective intervention compared to alendronate and risedronate from the health system perspective for the treatment of postmenopausal Iranian women aged 60 years and above [34]. Rituximab versus disease-modifying anti-rheumatic drugs (DMARDs) is not a cost-effective intervention for the treatment of patients with refractory rheumatoid arthritis from the health service perspective [35]. Teriparatide is, however, a cost-effective option versus no treatment in the treatment of women with severe postmenopausal osteoporosis (PMO) from the health system perspective [36]. Moreover, tocilizumab plus methotrexate compared with infliximab plus methotrexate is not a cost-effective option for rheumatoid arthritis patients from the payer perspective [37]. Dual energy absorptiometry (DXA) combined with the osteoporosis self-assessment tool (OST) is a more cost-effective program than DXA alone in people over 55 years for osteoporosis from the health insurance organization perspective [38]. Congenital disease Screening for PKU versus no screening is beneficial to society and patients, with an ICUR of $33,860 [39]. The ICUR of screening versus no screening for hypothyroidism among infants is $13,413 from the society perspective; thus, the screening is not only economically beneficial but also prevents mental retardation [40]. A galactosemia screening program versus no screening is both cost-effective and socially acceptable among infants, with an ICUR of $12,000 from the society perspective [41]. The ICURs of screening versus no screening for phenylketonuria, hypothyroidism, galactosemia and favism are $3386, $13,078, $19,641 and $1088 respectively from the social perspective; this neonatal screening yields long-term benefits [42]. In chronic hepatitis B, the cost per QALY of medications ranged from $3474.78 for tenofovir (TDF) to $10359.24 for entecavir (ETV) in patients with HBeAg-negative chronic hepatitis B from the society perspective; thus, TDF in patients with HBeAg-negative CHB is a highly cost-effective strategy [43]. In the treatment of patients with HCV genotype 1, the highest cost per QALY was $3826.8 for ledipasvir and sofosbuvir (LDV + SOF) and the lowest was $635.4 for pegylated interferon and ribavirin plus sofosbuvir (SOF + PR); the combination of SOF + PR was the most cost-effective from the payer perspective [44]. For patients aged 30 years diagnosed with relapsing multiple sclerosis, the ICUR varies from $3850 to $18,050 for different strategies, and all brands of interferon beta products except Avonex are cost-effective in the treatment of these patients from the societal perspective [45].
In another study, the cost per QALY ranged from $2233.78 (symptom management) to $15529.78 (Avonex) for the treatment of patients with relapsing-remitting multiple sclerosis from the Iranian health care system perspective [46]. Moreover, alemtuzumab is a dominant intervention versus natalizumab in patients with multiple sclerosis from the society perspective [47]; alemtuzumab and natalizumab resulted in 25,475 and 28,902 dollars per QALY, respectively. Fingolimod and natalizumab resulted in 27,368 and 7180 dollars per QALY from the society perspective in the treatment of patients with multiple sclerosis [48]; it is suggested that fingolimod be used as the first priority for second-line treatment. B-thalassemia: DFX (deferasirox) is cost-effective compared to deferoxamine infusion for the treatment of iron overload in patients with b-thalassemia from the perspective of Iranian society [49]. In another study, it was claimed that treating patients with thalassemia major is a cost-effective intervention versus no treatment from the social viewpoint [50]. Hemophilia A: low-dose ITI (immune tolerance induction) is more cost-effective than the other options for the treatment of hemophilia patients with inhibitors from the Iranian ministry of health perspective [51]. Human immunodeficiency virus (HIV): methadone maintenance treatment (MMT) is cost effective versus no MMT among intravenous drug users referred to public MMT centers, from the governmental perspective [52]. Depressive disorder: repetitive transcranial magnetic stimulation is a cost-effective intervention versus electroconvulsive therapy in the treatment of depressive disorders from the health system perspective [53]. Helicobacter pylori: it is recommended to avoid the carbon-13 urea breath method at large scale among the Iranian adult population with uninvestigated dyspepsia, no history of non-steroidal anti-inflammatory drug (NSAID) consumption and no symptoms of other diseases, from the provider perspective [54]. Chronic kidney disease (CKD): screening for CKD versus no screening in adult patients is a cost effective program from the health insurance organization perspective [55]. For febrile seizure in children, phenobarbital and topiramate led to 1051 and 2466 dollars per QALY; topiramate in patients with febrile seizure under five years of age is a cost-effective strategy from the society perspective [56]. Renal disease: it is recommended that kidney transplantation is the best intervention compared to hemodialysis and peritoneal dialysis in patients with end stage renal disease from the society perspective [57]. Ulcerative colitis: conventional treatment is not a cost-effective option versus infliximab in patients with moderate to severe ulcerative colitis [58]. Dental disease: varnish fluoride therapy versus no varnish fluoride therapy in students aged 7–12 years is a cost effective strategy from the health system perspective [59]. The best strategy in the management of pharyngitis is the rapid test antigen (RTA) from the society perspective; the cost per QALY ranged from $3.41 to $4.93 in the diagnosis and treatment of pharyngitis [60]. Somatropin is a cost-effective option in comparison with no somatropin in the treatment of children with short stature from the health insurance organization perspective [61]. We found 51 CUA studies conducted between 2000 and 2020 in Iran. With regard to resource scarcity, it was apparent that economic evaluations focused largely on interventions for diseases that impose a growing burden on population health.
Accordingly, the results of the current review highlighted that a large part of the cost-utility studies concentrate on cancer, which is the second major health problem in Iran. However, fewer studies (n = 6) have been undertaken on cardiovascular disease and stroke, which are responsible for roughly one-third of the mortality in Iran [62]. This finding shows that most of the cost-utility analysis studies have concentrated on high-burden illnesses in order to allocate health resources economically. Economic evaluation should be conducted and interpreted within clear and precise theoretical frameworks that guide the research and support its interpretation [63]. The scope of the costs and benefits is determined by the selection of the study perspective [64]. The predominant viewpoint in the studies analyzed was societal (n = 19), which ensures that costs and benefits attributable to patients and to society as a whole are addressed. There is a consensus among economists that the most reliable perspective in economic evaluation is the societal one. A societal viewpoint entails that all costs and benefits should be included in the evaluation as widely as possible, irrespective of who pays or receives them [63]. It was further observed that three studies did not state their perspectives. Based on the review of articles, about 63% of them adopted narrow viewpoints, impeding generalizability and excluding overall long-term implications from their analyses. Yet, some studies that did specify a perspective failed to estimate the consequences associated with the adopted viewpoint. Discounting refers to the translation of values drawn from a future time horizon to present values, and aims to make costs and benefits comparable across different years [65]. There is some controversy over the rate that should be employed to discount benefits and costs. Most countries have recommended reporting results with benefits and costs discounted at a rate in the range of 3–5 percent to ensure some consistency in the findings of economic evaluations. The current review found that the majority of studies in the Iranian setting used discount rates varying between 0 and 7.2%, with 3% being the mode, which is consistent with the WHO's guidelines on discounting [66]. Nevertheless, a few studies ignored or did not report discounting. In addition, it must be mentioned that there is no need to apply discounting in short-run studies. Since ICUR results are very sensitive to differences in discount rates, using a higher discount rate gives relatively little weight to costs and benefits in the remote future and hence can notably affect the decisions made. Sensitivity analysis is performed to discover the effect of uncertainty on findings through changes in the values of inputs and assumptions [67]. Researchers should perform a sensitivity analysis to assess the robustness of their results. One-way sensitivity analysis, which addresses the uncertainty of a single component (e.g., by changing the discount rate), was among the most commonly used techniques. However, for a more favorable validation of findings, probabilistic as well as multi-dimensional sensitivity analyses are suggested, not only to assess the robustness of results but also to facilitate the generalizability of findings to other settings. From the review of articles, we found that Markov models (n = 23) were the most common analytic technique, followed by decision tree models (n = 14).
Markov models are useful when a decision problem involves a risk that is continuous over time, when the timing of events is important, and when important events may happen more than once. Representing such clinical settings with conventional decision trees is difficult and may require unrealistic simplifying assumptions. Markov models assume that a patient is always in one of a finite number of discrete health states, called Markov states, and all events are represented as transitions from one state to another [68]. The evaluation of the studies in this review reflects a paucity of information useful for making decisions about the allocation of resources for healthcare interventions. It is concerning that over 6 percent of Iranian GDP is being spent on the health system with insufficient economic evidence. In CUA studies, multiple domain scores from Health-Related Quality of Life (HRQoL) instruments are translated into a single summary utility score; by doing so, QALY estimates, and thus cost-utility ratios such as the ICUR, can be calculated [69]. Cost utility analyses adopt the QALY, which is comparable and generalizable across various interventions, as an instrument for comparing their value for money. Thus, the ICUR is defined as the ratio of the difference in cost between two alternatives to the difference in effectiveness (QALYs) between the same two alternatives. Each of the CUAs included in this study weighed the cost and effectiveness of a competing intervention against another one to give the decision-maker a precise quantitative understanding of their likely effectiveness. Based on the findings of the present study, the league table for Iranian CUA studies begins with a ratio of $0.144 per QALY for radiography, the minimum ICUR, in patients with orthopedic problems from the ministry of health perspective, and ends with $1,675,535 per QALY for immune tolerance induction (ITI) therapy, the maximum ICUR, in the treatment of hemophilia patients with inhibitors from the same perspective. The major shortcoming of league tables for Iranian CUA studies may be the omission of much of the information that decision-makers might want to take into account when choosing between alternatives. For instance, in recent years few studies have been conducted on the economic evaluation of interventions for cardiovascular disease: those diseases represent nearly 9% of the disease burden, while 11% of the CUA studies in our review were related to them. There appears to be an imbalance between disease burden and studies. It was revealed that screening programs were cost effective interventions for all diseases, except in two studies [16, 21]. In one study [16], it was cost effective to use mammography in women aged 40–70 in 53% of trials, but it was not cost effective in the second and third rounds. In the other study [21], the quadrivalent HPV vaccine was not cost effective; this may be attributable to some possible benefits being ignored. The other screening strategies were cost effective from different perspectives in Iran due to their high effectiveness or low cost. So, decision makers should allocate resources to screening programs because they consume fewer resources and produce more QALYs. On the other hand, among treatment strategies, most options were not cost effective due to low effectiveness or high expense.
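To make the league-table mechanics described above concrete, the minimal sketch below (illustrative only; the interventions, costs, QALYs and budget are hypothetical and not taken from the reviewed studies) discounts future costs and QALYs to present values, computes each ICUR against a common comparator, and then ranks and funds options from the lowest ICUR upwards until the budget is exhausted:

```python
# Minimal sketch (illustrative only) of discounting, ICUR calculation and
# league-table ranking; all interventions, costs, QALYs and the budget are hypothetical.
def present_value(stream, rate=0.03):
    """Discount a yearly stream (year 0 undiscounted) at the given annual rate."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

def icur(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-utility ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

comparator = {"cost": [200, 100], "qaly": [0.5, 0.45]}            # e.g. no screening
candidates = {
    "screening A": {"cost": [400, 150], "qaly": [0.7, 0.65]},
    "treatment B": {"cost": [5000, 2500], "qaly": [0.75, 0.7]},
}

c0 = present_value(comparator["cost"]); q0 = present_value(comparator["qaly"])
table = sorted(
    (icur(present_value(v["cost"]), present_value(v["qaly"]), c0, q0), name)
    for name, v in candidates.items()
)
budget, funded = 6000, []
for ratio, name in table:                   # fund from the lowest ICUR downwards
    cost = present_value(candidates[name]["cost"])
    if cost <= budget:
        funded.append((name, round(ratio, 1)))
        budget -= cost
print(funded)
```

In practice, the reviewed studies differ in perspective, costing year and modeling approach, which is precisely why such a ranking was not performed in this review.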
In summary, we addressed the quality of CUA studies in Iran and suggest that adherence to technical criteria needs to improve and that methodological flaws in published work should be removed, so that economic evaluations do not mislead policy-makers and can instead serve as tools for identifying which interventions are the most cost-effective (efficient). It is suggested that decision makers develop a webpage, similar to the Iranian Registry of Clinical Trials (IRCT) [70], for the registration of economic evaluation studies, and develop agreed national and international guidelines on how economic evaluations should be conducted. In addition, obliging researchers to follow such guidelines before conducting economic evaluation studies might increase homogeneity and comparability among the studies. A league table serves as an approach which can help decision-makers distil policy recommendations when confronted with imperfect information during the process of rational resource allocation. Although the economic evaluations were highly heterogeneous and no ranking was performed, it is instructive and informative to collect all cost utility studies and summarize them in a table. Moreover, the number of economic evaluation studies on some diseases was too limited to support better decisions among the various strategies for every disease. As the findings illustrate, in general, screening programs were found to be cost effective interventions from different perspectives in Iran due to their high effectiveness or low cost. Hence, decision makers are advised to allocate resources to screening programs because they would use fewer financial resources and produce more benefits. High heterogeneity was revealed and sorting was not carried out. There were too few studies to draw tables for all diseases. All data used for this review are included in the published article. Since publication, the article has been updated to add the files of Appendix 1 and 2. Schwarzer R, Rochau U, Saverno K, Jahn B, Bornschein B, Muehlberger N, Flatscher-Thoeni M, Schnell-Inderst P, Sroczynski G, Lackner M. Systematic overview of cost–effectiveness thresholds in ten countries across four continents. J Compar Effect Res. 2015;4:485–504. Haghparast-Bidgoli H, Kiadaliri AA, Skordis-Worrall J. Do economic evaluation studies inform effective healthcare resource allocation in Iran? A critical review of the literature. Cost Effect Resour Alloc. 2014;12:15. Ong KS, Carter R, Vos T, Kelaher M, Anderson I. Cost-effectiveness of interventions to prevent cardiovascular disease in Australia's indigenous population. Heart Lung Circ. 2014;23:414–21. Drummond MF, Sculpher MJ, Claxton K, Stoddart GL, Torrance GW. Methods for the economic evaluation of health care programmes. Oxford: Oxford University Press; 2015. Horton S, Gelband H, Jamison D, Levin C, Nugent R, Watkins D. Ranking 93 health interventions for low-and middle-income countries by cost-effectiveness. PLoS ONE. 2017;12:e0182951. Marseille E, Larson B, Kazi DS, Kahn JG, Rosen S. Thresholds for the cost-effectiveness of interventions: alternative approaches. Bull World Health Organ. 2015;93:118–24. Thokala P, Ochalek J, Leech AA, Tong T. Cost-effectiveness thresholds: the past, the present and the future. Pharmacoeconomics. 2018;36:509–22. Crown W, Buyukkaramikli N, Thokala P, Morton A, Sir MY, Marshall DA, Tosh J, Padula WV, Ijzerman MJ, Wong PK.
Constrained optimization methods in health services research—an introduction: report 1 of the ISPOR optimization methods emerging good practices task force. Value Health. 2017;20:310–9. Greenberg D, Earle C, Fang C-H, Eldar-Lissai A, Neumann PJ. When is cancer care cost-effective? A systematic overview of cost–utility analyses in oncology.  J Natl Cancer Inst. 2010;102:82–8. Horton S. Cost-effectiveness analysis in disease control priorities. 2017. Tashobya CK, Dubourg D, Ssengooba F, Speybroeck N, Macq J, Criel B. A comparison of hierarchical cluster analysis and league table rankings as methods for analysis and presentation of district health system performance data in Uganda. Health Policy Plan. 2015;31:217–28. Newall AT, Jit M, Hutubessy R. Are current cost-effectiveness thresholds for low- and middle-income countries useful? Examples from the world of vaccines. Pharmacoeconomics. 2014;32:525–31. Patel HD, Roberts ET, Constenla DO. Cost-effectiveness of a new rotavirus vaccination program in Pakistan: a decision tree model. Vaccine. 2013;31:6072–8. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1. Institute JB. JBI Critical Appraisal tools for use in JBI Systematic Reviews: Checklist for Economic Evaluations. Australia; 2016. Haghighat S, Akbari ME, Yavari P, Javanbakht M, Ghaffari S. Cost-effectiveness of three rounds of mammography breast cancer screening in Iranian women. Iran J Cancer Prev. 2016;9:1. Aboutorabi A, Hadian M, Ghaderi H, Salehi M, Ghiasipour M. Cost-effectiveness analysis of trastuzumab in the adjuvant treatment for early breast cancer. Global J Health Sci. 2015;7:98. Hatam N, Ahmadloo N, Vazirzadeh M, Jafari A, Askarian M. Cost-effectiveness of intensive vs. standard follow-up models for patients with breast cancer in Shiraz, Iran. Asian Pac J Cancer Prev. 2016;17:5309. Hatam N, Askarian M, Javan-Noghabi J, Ahmadloo N, Mohammadianpanah M. Costutility of "Doxorubicin and Cyclophosphamide" versus "Gemcitabine and Paclitaxel" for treatment of patients with breast cancer in Iran. Asian Pac J Cancer Prev. 2015;16:8265–70. Bastani P, Kiadaliri AA. Cost-utility analysis of adjuvant therapies for breast cancer in Iran. Int J Technol Assess Health Care. 2012;28:110–4. Khatibi M, Rasekh HR, Shahverdi Z. Cost-effectiveness evaluation of quadrivalent human papilloma virus vaccine for HPV-related disease in Iran. Iran J Pharm Res. 2014;13:225. Barouni M, Larizadeh MH, Sabermahani A, Ghaderi H. Markovs Modeling for Screening Strategies for Colorectal Cancer. Asian Pac J Cancer Prev. 2012;13:5125–9. Sari AA, Ravaghi H, Mobinizadeh M, Sarvari S. The cost-utility analysis of PET-scan in diagnosis and treatment of non-small cell lung carcinoma in Iran. Iran J Radiol. 2013;10:61. Hajiesmaeili M, Yousefi M, Mahboub-Ahari A, Seyyednejad F. Cost effectiveness of lung cancer screening. Tabriz University of Medical Science, 2017. Khezeli MJ, Dehghani M, Keshavarz K, Kavosi Z. Cost-utility analysis of the EOX drug regimen versus the DCF drug regimen for patients with advanced gastric cancer. Middle East J Cancer. 2019;10:118–24. Amirsadri M, Sedighi MJ. Cost-effectiveness evaluation of aspirin in primary prevention of myocardial infarction amongst males with average cardiovascular risk in Iran. Res Pharm Sci. 2017;12:144. Amirsadri M, Hassani A. 
Cost-effectiveness and cost-utility analysis of OTC use of simvastatin 10 mg for the primary prevention of myocardial infarction in Iranian men. DARU Journal of Pharmaceutical Sciences. 2015;23:56. Javanbakht M, Bakhsh RY, Mashayekhi A, Ghaderi H, Sadeghi M. Coronary bypass surgery versus percutaneous coronary intervention: Cost-effectiveness in Iran: A study in patients with multivessel coronary artery disease. Int J Technol Assess Health Care. 2014;30:366–73. Amiri A, Goudarzi R, Amiresmaili M, Iranmanesh F. Cost-effectiveness analysis of tissue plasminogen activator in acute ischemic stroke in Iran. J Med Econ. 2018;21:282–7. Yaghoubi M, Aghayan HR, Arjmand B, Emami-Razavi SH. Cost-effectiveness of homograft heart valve replacement surgery: an introductory study. Cell Tissue Bank. 2011;12:153–8. Amirsadri M, Mousavi S, Karimipour A. The cost-effectiveness and cost-utility analysis of the use of enoxaparin compared with heparin for venous thromboembolism prophylaxis in medical inpatients in Iran. DARU J Pharm Sci. 2019;27:627–34. Mahboub-Ahari A, Hajebrahimi S, Yusefi M, Velayati A. EOS imaging versus current radiography: A health technology assessment study. Med J Islamic Republic of Iran. 2016;30:331. Toroski M, Nikfar S, Mojahedian MM, Ayati MH. Comparison of the cost-utility Analysis of Electroacupuncture and Nonsteroidal Antiinflammatory Drugs in the Treatment of Chronic Low Back Pain. J Acupunct Meridian Stud. 2018;11:62–6. Azar AAEF, Rezapour A, Alipour V, Sarabi-Asiabar A, Gray S, Mobinizadeh M, Yousefvand M, Arabloo J. Cost-effectiveness of teriparatide compared with alendronate and risedronate for the treatment of postmenopausal osteoporosis patients in Iran. Med J Islamic Republic of Iran. 2017;31:39. Ahmadiani S, Nikfar S, Karimi S, Jamshidi AR, Akbari-Sari A, Kebriaeezadeh A. Rituximab as first choice for patients with refractory rheumatoid arthritis: cost-effectiveness analysis in Iran based on a systematic review and meta-analysis. Rheumatol Int. 2016;36:1291–300. Taheri S, Fashami FM, Peiravian F, Yousefi P. Teriparatide in the treatment of severe postmenopausal osteoporosis: a cost-utility analysis. Iran J Pharm Res. 2019;18:1073. Hashemi-Meshkini A, Nikfar S, Glaser E, Jamshidi A, Hosseini SA. Cost-effectiveness analysis of tocilizumab in comparison with infliximab in iranian rheumatoid arthritis patients with inadequate response to tDMARDs: a multistage Markov model. Value Health Regional Issues. 2016;9:42–8. Derakhshan F, Afsharzadeh N, Barouni M, Jafari M. Cost effectiveness of Osteoporosis Screening in Kerman. School of Health Information and Management, Kerman University of Medical. Hatam N, Askarian M, Shirvani S, Purmohamadi K. Cost Utility of Neonatal Screening Program for Phenylketonuria in Shiraz University of Medical Sciences. 2014. Hatam N, Askarian M, Bastani P, Pourmohammadi K, Shirvani S. Cost-Utility of Screening Program for Neonatal Hypothyroidism in Iran. Shiraz E-Medical Journal 2016, 17. Hatam N, Askarian M, Shirvani S, Siavashi E. Neonatal screening: cost-utility analysis for Galactosemia. Iran J Public Health. 2017;46:112. Hatam N, Shirvani S, Javanbakht M, Askarian M, Rastegar M. Cost-utility analysis of neonatal screening program, shiraz university of medical sciences, shiraz, iran, 2010. Iran J Pediatrics. 2013;23:493. Keshavarz K, Kebriaeezadeh A, Alavian SM, Sari AA, Hemami MR, Lotfi F, Meshkini AH, Javanbakht M, Keshvari M, Nikfar S. 
A cost-utility and cost-effectiveness analysis of different oral antiviral medications in patients with HBeAg-negative chronic hepatitis B in Iran: an economic microsimulation decision model. Hepatitis Monthly. 2016;16:1. Alavian SM, Nikfar S, Kebriaeezadeh A, Lotfi F, Sanati E, Hemami MR, Keshavarz K. A cost-utility analysis of different antiviral medicine regimens in patients with chronic hepatitis C virus genotype 1 infection. Iran Red Cresc Med J. 2016;18:1. Nikfar S, Kebriaeezadeh A, Dinarvand R, Abdollahi M, Sahraian M-A, Henry D, Sari AA. Cost-effectiveness of different interferon beta products for relapsing-remitting and secondary progressive multiple sclerosis: Decision analysis based on long-term clinical data and switchable treatments. DARU J Pharm Sci. 2013;21:50. Imani A, Golestani M. Cost-utility analysis of disease-modifying drugs in relapsing-remitting multiple sclerosis in Iran. Iran J Neurol. 2012;11:87. Taheri S, Sahraian MA, Yousefi N. Cost-effectiveness of alemtuzumab and natalizumab for relapsing-remitting multiple sclerosis treatment in Iran: decision analysis based on an indirect comparison. J Med Econ. 2019;22:71–84. Rezaee M, Izadi S, Keshavarz K, Borhanihaghighi A, Ravangard R. Fingolimod versus natalizumab in patients with relapsing remitting multiple sclerosis: a cost-effectiveness and cost-utility study in Iran. Journal of medical economics. 2019;22:297–305. Keshtkaran A, Javanbakht M, Salavati S, Mashayekhi A, Karimi M, Nuri B. Cost–utility analysis of oral deferasirox versus infusional deferoxamine in transfusion-dependent β‐thalassemia patients. Transfusion. 2013;53:1722–9. Emamgholipour S, Ahmadi B, Rajabi AH, Azarkeivan A, Ebrahimi M, Esmaeilzadeh F. Cost-utility of treatment of the patients with Thalassemia Major in Iran. The Scientific Journal of Iranian Blood Transfusion Organization. 2018;15:257–64. Rasekh HR, Imani A, Karimi M, Golestani M. Cost-utility analysis of immune tolerance induction therapy versus on-demand treatment with recombinant factor VII for hemophilia A with high titer inhibitors in Iran. ClinicoEcon Outcomes Res. 2011;3:207. Pourkhajoei S, Barouni M, Noroozi A, Hajebi A, Amini S, Karamouzian M, Sharifi H. Cost-effectiveness of methadone maintenance treatment centers in prevention of human immunodeficiency virus infection. Addiction Health. 2017;9:81. Ghiasvand H, Moradi-Joo M, Abolhassani N, Ravaghi H, Raygani SM, Mohabbat-Bahar S. Economic evaluation of resistant major depressive disorder treatment in Iranian population: a comparison between repetitive Transcranial Magnetic Stimulation with electroconvulsive. Med J Islamic Republic of Iran. 2016;30:330. Mazdaki A, Ghiasvand H, Asiabar AS, Naghdi S, Aryankhesal A. Economic evaluation of test-and-treat and empirical treatment strategies in the eradication of Helicobacter pylori infection; A Markov model in an Iranian adult population. Med J Islamic Republic of Iran. 2016;30:327. Ravaghi H, Ebrahimnia M, Rostami-Farzaneha Z, Madani MHH. Cost-effectiveness analysis of screening chronic kidney disease in Iran. J Clin Diagn Res. 2019;13:1. Nemati H, Talebianpour H, Lotfi F, Sepehri NZ, Keshavarz K. Cost-effectiveness analysis of topiramate versus phenobarbital in the treatment of children with febrile seizure in Shiraz. Iran J Child Neurol. 2019;13:109–20. Moradpour A, Hadian M, Tavakkoli M. Economic evaluation of End Stage Renal Disease treatments in Iran. Clin Epidemiol Global Health. 2019. Moradi N, Tofighi S, Akbari Sari A. 
Economic evaluation of infliximab for treatment of refractory ulcerative colitis in Iran: cost-effectiveness analysis. Iran J Pharm Sci. 2016;12:33–42. Davoodi Lahijan J, Farrokh-Eslamlou H, Shariat Torbaghan K, Nouraei Motlagh S, Yusefzadeh H. COST EFFECTIVENESS ANALYSIS OF VARNISH FLUORIDE THERAPY OF STUDENTS IN URMIA'S PRIMARY SCHOOLS. J Nurs Midwif Urmia Univ Med Sci. 2019;17:204–12. Behnamfar Z, Shahkarami V, Sohrabi S, Aghdam AS, Afzali H. Cost and effectiveness analysis of the diagnostic and therapeutic approaches of group A Streptococcus pharyngitis management in Iran. J Family Med Primary Care. 2019;8:2942. Tasavon Gholamhoseini M, Barouni M, Afsharzadeh N, Jafar iSirizi M. Cost-effectiveness of growth hormone (Somatropin) for the treatment of children with short stature. Payavard Salamat. 2018;12:286–95. GBD Compare. | Viz Hub. https://www.vizhub.healthdata.org/gbd-compare/. Smith S, Nolan A, Normand C, McPake B. Health economics: an international perspective. London: Routledge; 2013. Walker D, Fox-Rushby JA. Economic evaluation of communicable disease interventions in developing countries: a critical review of the published literature. Health Econ. 2000;9:681–98. Tu H-AT, Woerdenbag HJ, Kane S, Rozenbaum MH, Li SC, Postma MJ. Economic evaluations of rotavirus immunization for developing countries: a review of the literature. Expert Rev Vaccines. 2011;10:1037–51. Baltussen RM, Adam T, Tan-Torres Edejer T, Hutubessy RC, Acharya A, Evans DB, Murray CJ, Organization WH. Making choices in health: WHO guide to cost-effectiveness analysis. 2003. Hoque ME, Khan JA, Hossain SS, Gazi R, Rashid H-a, Koehlmoos TP, Walker DG. A systematic review of economic evaluations of health and health-related interventions in Bangladesh. Cost Effectiveness Resource Allocation. 2011;9:1. Sonnenberg FA, Beck JR. Markov models in medical decision making: a practical guide. Med Decis Making. 1993;13:322–38. Rascati K. Essentials of pharmacoeconomics. New York: Lippincott Williams & Wilkins; 2013. Iranian Registry of Clinical Trials [https://irct.ir/]. Javadinasab H, Daroudi R, Salimzadeh H, Delavari A, Vezvaie P, Malekzadeh R. Cost-effectiveness of screening colonoscopy in Iranian High Risk Population. ArchIran Med. 2017;20:1. Hatam N, Dehghani M, Habibian M, Jafari A. Cost-utility analysis of IEV drug regimen versus ESHAP drug regimen for the patients with relapsed and refractory hodgkin and non-hodgkin's lymphoma in Iran. Iran J Cancer Prev. 2015;8:1. Nahvijou A, Daroudi R, Tahmasebi M, Hashemi FA, Hemami MR, Sari AA, Marenani AB, Zendehdel K. Cost-effectiveness of different cervical screening strategies in Islamic Republic of Iran: a middle-income country with a low incidence rate of cervical cancer. PLoS ONE. 2016;11:e0156705. Ghaderi H, Shafiee H, Ameri H, VafaeeNasab M. Cost-effectiveness of home care and hospital care for stroke patients. Health Care Management. 2013;4:1. There was no funding source. Department of Health Management and Economics, School of Public Health, Tehran University of Medical Sciences, 0000-0002-2043-8451, Tehran, Iran Reza Hashempour, Behzad Raei, Majid Safaei Lari & Ali AkbariSari Nasrin Abolhasanbeigi Gallezan Reza Hashempour Behzad Raei Majid Safaei Lari Ali AkbariSari The author contributions are explained in the method. All authors read and approved the final manuscript. Correspondence to Ali AkbariSari. Because it is a review study, Consent to participate is unnecessary. The authors declare that there are no conflicts of interest. Additional file 1: Appendix S1. 
The results of technical characteristics and cost per QALY of the studies. Additional file 2: Appendix S2. The results of technical characteristics and ICUR of the studies. Hashempour, R., Raei, B., Safaei Lari, M. et al. QALY league table of Iran: a practical method for better resource allocation. Cost Eff Resour Alloc 19, 3 (2021). https://doi.org/10.1186/s12962-020-00256-2 Revised: 24 November 2020 Cost utility analysis Cost effectiveness analysis
Journal of NeuroEngineering and Rehabilitation A comprehensive scheme for the objective upper body assessments of subjects with cerebellar ataxia Ha Tran ORCID: orcid.org/0000-0001-5832-78721, Khoa D. Nguyen1, Pubudu N. Pathirana1, Malcolm K. Horne2, Laura Power3 & David J. Szmulewicz2,3,4 Journal of NeuroEngineering and Rehabilitation volume 17, Article number: 162 (2020) Cerebellar ataxia refers to the disturbance in movement resulting from cerebellar dysfunction. It manifests as inaccurate movements with delayed onset and overshoot, especially when movements are repetitive or rhythmic. Identification of ataxia is integral to the diagnosis and assessment of severity, and is important in monitoring progression and improvement. Ataxia is identified and assessed by clinicians observing subjects perform standardised movement tasks that emphasise ataxic movements. Our aim in this paper was to use data recorded from motion sensors worn while subjects performed these tasks, in order to make an objective assessment of ataxia that accurately modelled the clinical assessment. Inertial measurement units and a Kinect© system were used to record motion data while control and ataxic subjects performed four instrumented versions of upper extremity tests, i.e. the finger chase test (FCT), finger tapping test (FTT), finger to nose test (FNT) and dysdiadochokinesia test (DDKT). Kinematic features were extracted from these data and correlated with clinical ratings of the severity of ataxia using the Scale for the Assessment and Rating of Ataxia (SARA). These features were refined using Feed Backward feature Elimination (the best performing of four methods). Using several different learning models, including Linear Discriminant Analysis, Quadratic Discriminant Analysis, Support Vector Machine and K-Nearest Neighbour, these extracted features were used to accurately discriminate between ataxics and control subjects. Leave-One-Out cross validation estimated the generalised performance of the diagnostic model as well as of the severity-predicting regression model. The selected model accurately (\(96.4\%\)) predicted the clinical scores for ataxia and correlated well with clinical scores of the severity of ataxia (\(rho = 0.8\), \(p < 0.001\)). The severity estimation was also considered on a 4-level scale to provide a rating comparable to the clinically used rating of upper limb impairments. The combination of the FCT and FTT performed as well as all four tests combined in predicting the presence and severity of ataxia. Individual bedside tests can be emulated using features derived from sensors worn while bedside tests of cerebellar ataxia are being performed. Each test emphasises different aspects of stability, timing, accuracy and rhythmicity of movements. Using the current models it is possible not only to model the clinician in identifying ataxia and assessing severity but also to identify those tests which provide the optimum set of data. Trial registration Human Research and Ethics Committee, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia (HREC Reference Number: 11/994H/16). Cerebellar ataxia (CA) describes the dysfunctional balance, gait [1, 2] and limb function [3] that result from cerebellar dysfunction. Ataxia is assessed by observing the performance of standard motor tasks described by Holmes [4, 5] and others almost a century ago.
These pioneering clinicians recognised that ataxic movements could not easily be reduced to Newtonian terms but fundamentally manifest as disturbances in accuracy, timing, rhythmicity and stability of the proximal motor platform, which they described using terms such as dysmetria, dyssynergia and dysrhythmia. The standard motor tasks used to assess upper limb ataxia, referred to here as "tests", include the finger chasing test (FCT), finger tapping test (FTT), finger to nose test (FNT) and alternating hand movements looking for dysdiadochokinesia (DDKT) [6, 7]. Scales such as the Scale for the Assessment and Rating of Ataxia (SARA) [6] have been developed to codify the assessment of these tests and require specific aspects of motor dysfunction to be considered when scoring ataxia. For example, the SARA stipulates the overshoot/undershoot distance between the subject's finger and the clinician's finger in the finger chase test. However, there will inevitably be subjectivity and variation in how severely human observers rate deficits in the performance of these tests. The SARA and conventional teaching also recommend administering several tests to characterise upper limb ataxia. However, it is unclear whether this is because each test carries unique information necessary for establishing the presence and severity of ataxia or whether it is because the performance of several different tests provides clinical security despite the redundant information. Several sensing and information extracting systems have been proposed for quantifying the assessment of upper limb ataxia and thus overcoming subjectivity. For example, a push-button system to evaluate the variation in timing of ataxic movements was considered for the FTT [8,9,10]. Inertial measurement units (IMUs) have been used to capture movement kinematics in multiple signal domains [11] to objectively assess the FNT [12, 13] and DDKT [12]. The movement of the finger performing the FCT has been tracked using optoelectronic devices ranging from video cameras [14] to VICON [15] and, recently, Kinect© in our previous study [3] to assess delay in initiating movement and accuracy in reaching the target. While this test identified deficits in accuracy and timing, neither the maintenance of rhythm nor the stability of the execution platform of the moving distal limb were assessed [4]. Thus, it has been possible to emulate individual bedside tests through objective assessments, but none of these tests appear to fully assess all aspects of upper limb ataxia (timing, accuracy, rhythmicity and proximal stability). Even in those tests that addressed similar aspects of ataxia, the extent to which they measure the same aspect similarly (i.e. are redundant) is unclear. In this study, our primary aims are to: (1) develop an Instrumented System for the objective assessment of Upper Limb Ataxia (ISULA); the system includes an IMU sensor module (BioKin™) and a Kinect camera to capture movement information from subjects while performing the four conventional tests, namely the FCT, FTT, FNT and DDKT; (2) identify the minimum combination of tests that provides sufficient information to assess the disability; and (3) quantify the heterogeneous aspects intrinsic to ataxia by grouping the extracted features from the system according to the clinical domains described by Holmes and others: stability, timing, accuracy, and rhythmicity (referred to here as STAR dimensions).
Fourteen control subjects ("controls": mean age, 55; range, 25–68 years) and 41 subjects with cerebellar ataxia (CA or "ataxics": mean age, 64; range, 28–78 years) participated in this study (Table 1). All ataxics were previously diagnosed with a progressive neurodegenerative ataxia (with genotyping or other confirmatory investigations when relevant, see Table 1—Diagnosis). This study was approved by the Human Research and Ethics Committee, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia (HREC Reference Number: 11/994H/16). Written consent was obtained from all participants. Table 1 Participant demographics Clinical assessments The severity of ataxia was assessed using the SARA [6] on the same day that the objective measurements were made. The same clinician, experienced in assessing ataxia, provided all SARA scores to avoid the inevitable scoring variation that occurs when subjects are assessed by different clinicians. The SARA assessment comprises eight sub-scores: three for the lower limbs (No. 1-2-8), three for the upper limbs (No. 5-6-7), one for sitting (No. 3) and one for speech (No. 4). The SARA scores of the upper limb function tests used in this study are: FCT (No. 5), FNT (No. 6) and DDKT (No. 7). Each sub-score can be scored from 0 to 4 points (5 levels) according to the clinician's assessment of the severity of ataxia when performing the specific test. Hence the upper limb SARA score (SARA-UL) is from 0 to 15 points. The SARA total score (SARA-Total) is calculated by the summation of the eight sub-scores, resulting in a maximum of 40 points. In this study, the SARA-Total (\(14.23 \pm 9.84\)) and the SARA-UL (\(3.58 \pm 2.62\)) were correlated with the objective assessment of the severity of ataxia. Automated assessment protocols and apparatus The ISULA requires the performance of four tests measured by instrumented devices: FCT, FTT, FNT and DDKT. The FCT used a depth sensing camera (Kinect©) to capture the movements of the subject's finger (while reaching a target on the screen), while in the other three tests kinematic information was acquired from a 3-dimensional (3D) IMU system (BioKin™ [16]). The test descriptions and protocols are summarised in Table 2. All the tests were performed under the supervision of an expert clinician (LP). During the test, the clinician wirelessly started and stopped recording and inserted markers into the data stream, denoting specific points during the performance, through a mobile application. At the end of each trial, the sensor data were uploaded to a cloud-based storage and computing platform for further analysis. Instrumented versions of the upper limb assessments and the movement waveforms of a control and a patient diagnosed with CA: a Finger chase (ballistic, FCT) using the Kinect© system. b I. An IMU sensor with tri-axial accelerometer directions (Ax, Ay, Az) and gyroscope directions (Gx, Gy, Gz); b II. Sensor placement around the wrist; b III. Sensor placement around the palm. Testing with the IMU system denoting the direction of the primary movement (movement along the direction of the effective axis in order to accomplish the task objectives): c Finger tapping (FTT), d Finger to nose (FNT), e dysdiadochokinesia (DDKT). Table 2 Experimental setup and description of tests in ISULA system Manifestations of ataxia Following Holmes [4], we describe four domains of ataxia (using the acronym STAR).
The purpose is to develop a system for assessing ataxia that reflects the following generic domains of CA manifestations:
Stability (S): lack of stability of the platform during the execution of the task (unwanted oscillations of the movement).
Timing (T): error between the goal/time objective and what is achieved in a temporal context. This is likely to be affected by the time for the subject to initiate a movement and the time to complete a movement (speed).
Accuracy (A): error between the goal/space objective and what is achieved in a spatial context.
Rhythmicity (R): the regularity of repeated movement.

Data preprocessing
Accelerations and angular velocities from the IMU sensor were sampled at 50 Hz in the three orthogonal \(X, Y\) and \(Z\) axes. These signals were filtered by a 2nd-order band-pass Butterworth filter with cut-off frequencies from 0.3 to 20 Hz, where the base-band frequencies were excluded to minimise drift effects and the high frequencies were restricted to the bandwidth of human movements [17]. In the Kinect© system, the location of each randomly generated instantaneous position change was stored as a pair of position coordinates. The target position remained constant between each change in the target location, while the marker position changed. The Kinect© captured the marker position at a sampling rate of 30 Hz. The maximum frequency of human movement is approximately 20 Hz [17], and for ataxic subjects this can be lower [3]; for peripheral limb motion in particular, the frequencies are even lower. Therefore, the sampling rate of the Kinect is sufficient to capture the motion of the subjects in this study.

Relevant objective measures extracted from each ISULA test are described and associated with the corresponding STAR classification in Table 3. For notational simplicity, the feature names are denoted as \((FeatureName)^{Axis(L/R)}_{Test}\), with L/R indicating performance by the left or right hand.

Finger chase test (FCT)
When assessing the FCT, the clinician subjectively estimates the extent of under/overshooting in the subject's movements relative to the moving target [6]. The ISULA system automates the assessment of the FCT by considering the space–time trajectories of the marker and target. The overshoot/undershoot information of the subject's movement was measured by the dynamic time warping (DTW)-based error. The DTW was used to find the shortest path between the marker \(S_m\) and target \(S_t\) trajectories via their distance matrix DS using dynamic programming
$$\begin{aligned} DS(i,j) = dist(S_m(i),S_t(j)) + \min \{DS(i-1,j),\, DS(i-1,j-1),\, DS(i,j-1)\}. \end{aligned}$$
The error DTWErr is calculated by summing the values along the shortest path P obtained by going from the last (n, n) to the first (1, 1) element of DS via adjacent elements with the smallest values. The time from establishing a new target position to the subject's initiation of movement was defined as the reaction time (ReTi). This feature was obtained by cross-correlating the two time sequences representing the marker and the target movement, taking the lag j that maximises the cross-correlation:
$$\begin{aligned} ReTi = \arg \max_{j} \left(\sum_{i=-\infty }^{\infty }S_m^*[i]\,S_t[i+j]\right). \end{aligned}$$
The kinematic delay was obtained from the index of performance measurement described by Fitts' law [18].
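The two FCT measures defined above lend themselves to a short computational sketch. The following Python code is a minimal illustration, not the authors' implementation: the trajectory arrays, the Euclidean point distance and the 30 Hz sampling rate are assumptions, and the DTW error is read here as the accumulated cost of the optimal warping path between marker and target.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def dtw_error(marker, target):
    """DTW-based error between marker and target trajectories:
    accumulated cost of the cheapest warping path through DS."""
    n, m = len(marker), len(target)
    DS = np.full((n + 1, m + 1), np.inf)
    DS[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(marker[i - 1] - target[j - 1])  # assumed point distance
            DS[i, j] = d + min(DS[i - 1, j], DS[i - 1, j - 1], DS[i, j - 1])
    return DS[n, m]

def reaction_time(marker, target, fs=30.0):
    """Reaction time (ReTi) as the lag, in seconds, that maximises the
    cross-correlation between the marker and target movement sequences."""
    marker = (marker - marker.mean()) / marker.std()
    target = (target - target.mean()) / target.std()
    xcorr = correlate(marker, target, mode="full")
    lags = correlation_lags(len(marker), len(target), mode="full")
    return lags[np.argmax(xcorr)] / fs  # positive lag: marker lags behind the target
```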
The feature is intended to capture the performance of the subject in reaching a target position and is defined by \(KiDe = ID/MT\), where \(ID=\log _2(di/ra)\) is the index of difficulty of the task, di is the distance between the current and previous position of the target, ra is the radius of the target circle and MT is the execution time of the task by the subject. The acceleration alteration (AcAlt) counts the number of times the subject changed acceleration while reaching the target. This feature measures the efficiency of the force applied in performing the task.

Finger tapping test (FTT)
There is greater temporal variability when ataxic subjects tap repetitively than when controls do [19]. This can be observed in the inter-tap interval (ITI) and in "movement variability". The ITI is defined as the duration between successive contacts with the table; its coefficient of variation (CITI) quantifies the variability of the tapping rhythm with respect to the tapping rate [19, 20]. Movement variability is quantified using fuzzy entropy (FuEn), obtained for each movement time series (accelerations and angular velocities). Given an N-sample time series \(y=\{x_t\}^N_{t=1}\), FuEn defines states of embedding dimension m, \(X^m_t=\{x_t,x_{t+1},\ldots,x_{t+m-1}\}\), in the phase space, and the distance \(d_{pq}=d[X^m_p,X^m_q]\) is measured by the Chebyshev distance. Instead of using a Heaviside function to count the number of matched pairs of states, the similarity degree \(D^m_{pq}\) between any two states (\(t=p\) and \(t=q\)) is quantified using a fuzzy function \(D^m_{pq}=\exp (-(d_{pq}/r)^2)\) of order 2 and radius r. FuEn allows variability to be quantified by calculating the reduction of information when the embedding dimension m increases by one [11, 21].
$$\begin{aligned} FuEn=\ln \phi ^m(r)-\ln \phi ^{m+1}(r) \end{aligned}$$
$$\begin{aligned} \phi ^m(r)=\dfrac{1}{N-m}\sum _{p=1}^{N-m}\Bigg [\dfrac{1}{N-m-1}\sum _{q=1,q\ne p}^{N-m}D_{pq}^m\Bigg ]. \end{aligned}$$
The reduced entropy values are in accordance with the complexity-loss theory of disease, which attributes them to the reduced adaptive capabilities of individuals owing to the effect of the disease [22]. The parameters for the entropy calculation are generally selected as \(m=3\) and \(r=0.2\times std(y)\).

Table 3 Description and STAR classification of ataxic features

Finger to nose test (FNT) and tests of dysdiadochokinesia (DDKT)
There are movement characteristics in the FNT and DDKT that can be considered together in the analysis. They are both repetitive movements that require a stable platform (the shoulder in both cases). In both tests, ataxia is characterised not only by variability in rhythm but also by prolonged task duration resulting from displacement errors when moving. Such characteristics are amenable to investigation using frequency-domain techniques. Measurements from the accelerometer and gyroscope were analysed in terms of the resonant frequency (RF) and its magnitude (MR) using the Fast Fourier Transform (FFT) with appropriate filtering parameters (6th-order band-pass Butterworth filter with a cut-off region of 2–5 Hz). In the FNT, the angular accelerations and linear accelerations can be effectively used to characterise ataxia [12]. The RF and MR of the angular accelerations in the three axes were calculated, as well as those of the linear acceleration in the \(X\) axis. Only RF was computed for the linear acceleration in the \(Y\) and \(Z\) axes.
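The FTT measures above can be sketched directly from their definitions. The code below is a hedged illustration in Python: it follows the plain FuEn formulation given in the text (Chebyshev distance, similarity of order 2, self-matches excluded), and the tap-time extraction that would feed inter_tap_stats is assumed to be available from the recorded signals.

```python
import numpy as np

def fuzzy_entropy(y, m=3, r_factor=0.2):
    """Fuzzy entropy FuEn = ln(phi_m) - ln(phi_{m+1}) for a 1-D series,
    with similarity D = exp(-(d/r)^2) and Chebyshev distance d."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    r = r_factor * np.std(y)

    def phi(mm):
        X = np.array([y[t:t + mm] for t in range(N - mm)])   # embedded state vectors
        total = 0.0
        for p in range(len(X)):
            d = np.max(np.abs(X - X[p]), axis=1)              # Chebyshev distances to X_p
            D = np.exp(-(d / r) ** 2)                         # fuzzy similarity degrees
            total += (D.sum() - 1.0) / (len(X) - 1)           # exclude the self-match (q != p)
        return total / len(X)

    return np.log(phi(m)) - np.log(phi(m + 1))

def inter_tap_stats(tap_times):
    """Inter-tap intervals (ITI) and their coefficient of variation (CITI)."""
    iti = np.diff(np.asarray(tap_times, dtype=float))
    return iti.mean(), iti.std() / iti.mean()
```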
In the case of the DDKT, the RF and MR of the angular acceleration and linear accelerations in all axes best distinguished between ataxics and controls [12].

Statistical inferences
Statistically significant differences between ataxics and controls were identified using hypothesis tests. Normality of the variables was tested using the Shapiro–Wilk test. Student's t-test was applied for normally distributed variables, and the Wilcoxon rank-sum (Mann–Whitney U) test was applied for variables that were not normally distributed. To test for validity, the Spearman correlation was used to measure the relationship between objective measurements and clinical scales. The sample size used in this study was determined to detect a minimum effect size of 0.88 with 80% statistical power and a significance level (\(\alpha\)) of 5%. Similarly, for testing correlation, an effect size (r) of 0.30 was used. The power analysis was performed using G*Power version 3.1.9.4 [23].

Feature selection
The four tests of upper limb ataxia produced many features. As the feature space was unlikely to be uniformly populated, there was a risk of overfitting a learning model. To overcome this and improve prediction power, the number of features was reduced using feature selection (FS) techniques. Feed Backward Feature Elimination (FBE) [24] was employed along with three other widely used methods: Random Forest [25], RELIEF [26] and LASSO [27]. The central idea of FBE is to find a subset of features that increases the model's performance. In each iteration of the process, 90% of the data was randomly selected and a feature elimination decision was made using a threshold \(\alpha\) (significance level) on the p-value of the feature (with the null hypothesis \(H_0\) that the feature under examination is independent of the predicted score given the set of currently selected features). Therefore, only features that significantly impact the output (p-value \(<\alpha\)) are retained in the feature subset.

Fig. 2 Feature selection and contribution. a FBE-based process of obtaining the selection frequency of features. b, c STAR distribution of the selected features and test distribution in each partition: b all four tests and c FCT and FTT. d Feature contributions of FCT and FTT. e Feature contributions of the 4 tests (first 22 features)

In our experiment, we repeated the process 100 times to obtain the selection frequency of each feature, as an estimate of its significance in the assessment/diagnosis problem. Details of the process are given in the flowchart in Fig. 2a. Since the process is time consuming, computational performance was improved by employing the Parallel Computing Toolbox of MATLAB version 2019b to execute the computations simultaneously.

Discrimination and severity analysis
Features with high selection frequency from the FBE were used to classify the control and CA groups and to predict the severity of ataxia. Classification models for diagnosis included Linear Discrimination (LD) [28], Quadratic Discrimination Analysis (QDA) [29], Support Vector Machine (SVM) [30] and K-Nearest Neighbour (KNN) [31]. Leave-One-Out (LOO) cross-validation estimated the generalised performance of the diagnostic model as well as of the severity-predicting regression model [32]. The effectiveness of the models was evaluated through a number of statistical measurements including accuracy (ACC), the F1-score, the stability of the model via the area under the Receiver Operating Characteristic curve (AUC), sensitivity (Recall) and Precision.
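The FBE selection-frequency procedure can be outlined as a short sketch. The statistical test behind the per-feature p-value is not specified in the text reproduced here, so `p_value_fn` is left as a caller-supplied placeholder; the 90% subsample, the significance threshold and the 100 repetitions follow the description above.

```python
import numpy as np

def backward_elimination(X, y, p_value_fn, alpha=0.05):
    """One FBE pass: drop the least significant feature until every
    remaining feature has p-value < alpha. `p_value_fn(X_sub, y, k)` must
    return the p-value for H0: feature k of X_sub is independent of y
    given the other currently selected features (assumed, not specified)."""
    selected = list(range(X.shape[1]))
    while selected:
        pvals = [p_value_fn(X[:, selected], y, k) for k in range(len(selected))]
        worst = int(np.argmax(pvals))
        if pvals[worst] < alpha:
            break                       # all remaining features are significant
        selected.pop(worst)
    return selected

def selection_frequency(X, y, p_value_fn, n_repeats=100, frac=0.9, seed=0):
    """Repeat FBE on random 90% subsamples and record how often each
    feature survives, mirroring the flowchart of Fig. 2a."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_repeats):
        idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
        for j in backward_elimination(X[idx], y[idx], p_value_fn):
            counts[j] += 1
    return counts / n_repeats
```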
For regression analysis, we employed the ridge regression method to correlate the proposed features with the SARA scores. This model avoids over-fitting when working with small data sets by forming a linear model to estimate the severity for a given input feature vector. In order to generate a general instrumented score, a severity scale that mirrored the SARA upper limb scores was developed. As there were no ataxic subjects in the cohort rated with a SARA score of 4 (i.e. all subjects ranged from 0 to 3), the instrumented severity scale was limited to 4 levels, defined as follows:
Level 0: normal; no dysmetria, tremor or irregularities.
Level 1: minimal dysmetria, low-amplitude tremor or slightly irregular motion.
Level 2: moderate; clear dysmetria, tremor or clearly irregular motion.
Level 3: severe; dysmetria over a large range, high-amplitude tremor or very irregular motion.

Feature significance

Table 4 Mean, standard deviation, effect size measure and correlation coefficient values with SARA scores of the extracted features from CA subjects and controls

Table 4 shows the 31 (out of 62) objective features generated during the performance of the FCT, FTT, FNT and DDKT that reached statistical significance (\(p < 0.05\)). These features represent movement characteristics of ataxic subjects that differ significantly from controls. Movements performed by subjects with CA were significantly slower (e.g. KiDe, \(p < 0.001\)) with longer reaction times (e.g. ReTi, \(p < 0.001\)), suggesting that a longer time is required to recognise the new target and react. The movements of controls were relatively more complex (entropy measures, \(p < 0.05\)) than the movements of ataxic subjects and had less functional variability (e.g. DTWErr, \(p < 0.001\)). There were measures in the non-primary axis whose values in controls differed significantly from those of ataxic subjects. These differences most likely arose from instability in the proximal or stabilising joints of individuals with CA. The clinical validity of these measures was assessed by correlating them with the SARA scores (SARA-Total and SARA-UL: see the last two columns of Table 4). Some FCT movement characteristics were moderately (\(p < 0.01\)) to strongly (\(p < 0.001\)) correlated with the SARA ratings. Entropy features of the FTT correlated moderately with SARA, while DDKT and FNT features correlated weakly with SARA (\(p < 0.05\)).

Selection frequency of features
Figure 2e shows the selection frequency following 100 iterations of the FBE process applied to the 22 features with the highest contribution (first 22 features) in the combined-test model. The frequency reflects the contribution of each feature to estimating the severity of ataxia: the higher the selection frequency, the greater the possibility of the feature's selection in the final subset. Of note, the FCT provided more important features (including ReTi, the feature with the highest selection frequency) than the other tests, and all FCT movement features appear in the chart. As discussed later, this reflects the importance of the FCT's contribution to the objective ataxia score. In comparison, the FNT contributed the least to the selected feature subset. Despite having fewer features (3), FTT features were selected with higher frequency than DDKT- or FNT-related features.

Classification performance of the 4 feature selection methods
Figure 3 plots the classification performance (Y axis) of the four feature selection methods against the number of selected features (X axis).
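A minimal sketch of the severity regression described above is given below, assuming a feature matrix X (selected movement features) and a SARA score vector per subject; the regularisation strength and the feature-scaling step are illustrative choices, not reported settings.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from scipy.stats import spearmanr

def loo_ridge_predictions(X, y, alpha=1.0):
    """Leave-one-out ridge-regression estimates of a SARA score from the
    selected features (alpha is an assumed regularisation strength)."""
    model = make_pipeline(StandardScaler(), Ridge(alpha=alpha))
    preds = np.empty(len(y), dtype=float)
    for train, test in LeaveOneOut().split(X):
        model.fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    return preds

# Validity check against the clinical scale, e.g. SARA-UL:
# rho, p = spearmanr(loo_ridge_predictions(X, sara_ul), sara_ul)
```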
Here, the aim was to find the smallest subset of features that produced high performance (accuracy) in diagnosing CA. FBE outperformed LASSO, RELIEF and Random Forest, providing 96.4% accuracy with the first 22 of the 64 (top 34%) features with the highest selection frequency. The accuracy was lower (ACC 83%) with the first 5 features. As the number of features increased, the performance of FBE fluctuated around 95.4% (std ± 1.4%), similar to the performance of the other methods. The list of selected features is shown in the bar chart of Fig. 2e.

Table 5 Experimental results of different combinations of feature selection and binary classification methods

Table 6 Performance of classification models to distinguish CA subjects from controls from features of individual tests and of combined tests

"Diagnosis" of ataxia and SARA-based severity estimation
The accuracy of the system in making a binary diagnostic classification (into ataxics and controls) can be considered through Precision and Recall values. Precision is measured by expressing the number of correctly identified ataxic subjects as a fraction of the number of identified ataxic subjects. Recall expresses the number of correctly identified ataxic subjects as a fraction of the total number of actual ataxic subjects. Therefore, the closer a model's Precision and Recall are to 1, the more effective the model is in sorting ("diagnosing") ataxic subjects from controls. The effect of the higher number of ataxic subjects on the model's accuracy was assessed using the Matthews Correlation Coefficient (MCC). The MCC ranges from −1 to 1, where 1 depicts a perfect prediction. The diagnostic performance of four learning models (QDA, LD, SVM, KNN) and four feature selection methods (FBE, LASSO, RELIEF, RF) was compared (Table 5). The QDA + FBE pair outperformed the others in diagnostic performance, with greater accuracy (ACC 96.4%, Recall 0.98, Precision 0.98) and reliability (AUC 0.97 and MCC 0.90). This classification can be visualised by plotting the first three principal components of a Principal Component Analysis (PCA) (Fig. 4a). In summary, ataxic subjects can be identified (diagnosed) from controls with a high degree of accuracy [ACC > 92%, Precision and Recall > 0.9, MCC > 0.7 (Table 5)] using several models (QDA + FBE, QDA + RF, KNN + LASSO, KNN + FBE) generated from the extracted features.

Fig. 4 Group classification in PCA. a All features. b FCT and FTT features

QDA provided a flexible decision boundary for assessing the influence of each clinical test on the capacity to accurately separate ataxic subjects from controls. Table 6 shows that the FCT performed best (ACC 92.7%, Recall 0.95, Precision 0.95), despite fewer rhythmicity features in the selection (Table 4). Notwithstanding the FCT's performance, combining all tests provided a model with greater accuracy in "diagnosing" ataxic subjects (ACC 96.4% compared to ACC 92.7%, Table 6) that was less affected by the imbalance between ataxic and control subjects (MCC 0.90 compared to MCC 0.81, Table 6). This superior performance of the combined tests implies that a contribution from all domains is required for the best "diagnostic" performance: rhythmicity features missing from the FCT were provided by other tests. However, domains can be provided by more than one test; for instance, rhythmicity is contributed by the FTT, FNT and DDKT, and stability is provided by all 4 tests. This raises the question of whether all tests are required to accurately assess the severity of ataxia.
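The diagnostic metrics quoted above (ACC, F1, AUC, Recall, Precision, MCC) under leave-one-out cross-validation can be sketched as follows; QDA is used as the classifier since it gave the best results in Table 5, but the preprocessing and class coding are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score, matthews_corrcoef)

def loo_diagnosis_metrics(X, y):
    """Leave-one-out evaluation of a QDA diagnostic model.
    y: binary labels with 1 = ataxic (CA) and 0 = control (assumed coding)."""
    clf = QuadraticDiscriminantAnalysis()
    pred = np.empty(len(y), dtype=int)
    prob = np.empty(len(y), dtype=float)
    for train, test in LeaveOneOut().split(X):
        clf.fit(X[train], y[train])
        pred[test] = clf.predict(X[test])
        prob[test] = clf.predict_proba(X[test])[:, 1]
    return {
        "ACC": accuracy_score(y, pred),
        "F1": f1_score(y, pred),
        "Precision": precision_score(y, pred),
        "Recall": recall_score(y, pred),
        "AUC": roc_auc_score(y, prob),
        "MCC": matthews_corrcoef(y, pred),
    }
```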
Thus, we mapped features and tests to the STAR dimensions, and different combinations of tests were then investigated in the last subsection.

Fig. 5 Severity estimation. a Distribution of regression scores against mean upper-limb SARA scores. b Severity agreement between the 4-level predicted scores and the mean upper-limb SARA scores

As previously shown, the values predicted by the model were highly correlated with the mean SARA score. The predicted scores, ps (all tests, Table 9), of the subjects were plotted against their corresponding mean SARA scores (the rounded average of the scores of the upper limb tests: mean \(SARA\_UL\)) in Fig. 5a. The boxplots represent the distribution of ps in each of the four severity levels (0–3; see "Methods"). For comparison, the predicted scores were classified into the 4-level scale as follows: \(ps<4\) belongs to the normal group (level 0), \(4\le ps<7\) belongs to the mild group (level 1), \(7\le ps < 10\) belongs to the moderate group (level 2) and \(ps\ge 10\) belongs to the severe group (level 3). The agreement matrix in Fig. 5b outlines the mapping of the predicted scores into each clinical severity level. In particular, subjects scored '0' by SARA can be predicted with a high degree of accuracy (90%) by the underlying system. No subject scored '0' or '1' by SARA was classified into the moderate (level 2) or severe (level 3) group by the model, and no subject scored '1'/'2'/'3' by SARA was classified as normal (level 0) by the model.

Table 7 Common selected features in each test from the 4 FS methods

Table 8 Statistical measurement of regression analysis of features from each dimension in STAR with SARA scores

Disability association to the STAR dimensions
The extracted features were assigned to one of the proposed Holmesian dimensions (STAR) of ataxia. Details of this clustering are presented in Table 3. The selected features in each STAR dimension, together with their contribution to the feature selection process, can also be related to the presence and severity of ataxia. Using this approach, it is possible to attribute the contribution of each STAR dimension to the overall diagnosis of ataxia (Fig. 2a), with the stability features contributing most (41%, compared to 25% from rhythmicity, 20% from timing and 14% from accuracy). Most of the stability features were derived from the DDKT and FNT. Considering the dimensional aspects of the extracted features, Table 8 records the correlation between the STAR features and the three SARA scores, i.e. the SARA-Total and the SARA-UL (in terms of sum and mean). Features corresponding to timing provided the highest correlation with the SARA scores (0.77, 0.87 and 0.85), whereas features corresponding to rhythmicity had the lowest correlations (0.35, 0.47 and 0.38).

Combination of tests
In order to determine whether SARA scores can be predicted with fewer clinical tests, the performance of different test combinations was investigated. As discussed in the STAR analysis, only the FCT provided features that corresponded to the accuracy dimension. Assuming that all STAR dimensions of ataxia will be required for the best prediction of SARA, the presence of FCT features will be essential.
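The mapping of the continuous predicted score ps onto the 4-level scale, and the agreement matrix of Fig. 5b, follow directly from the thresholds quoted above; a small sketch is given below (the clinical levels are assumed to be supplied as integers 0-3).

```python
import numpy as np

def to_severity_level(ps):
    """Map a predicted score to the 4-level scale: ps < 4 -> 0 (normal),
    4 <= ps < 7 -> 1 (mild), 7 <= ps < 10 -> 2 (moderate), ps >= 10 -> 3 (severe)."""
    return int(np.digitize(ps, [4, 7, 10]))

def agreement_matrix(predicted_scores, sara_levels, n_levels=4):
    """Cross-tabulate clinical (SARA) severity levels against the predicted
    levels, as in the agreement matrix of Fig. 5b (rows: SARA, columns: predicted)."""
    M = np.zeros((n_levels, n_levels), dtype=int)
    for ps, lvl in zip(predicted_scores, sara_levels):
        M[int(lvl), to_severity_level(ps)] += 1
    return M
```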
The test groupings considered are as follows:
Group 1 (G1): FCT and FTT
Group 2 (G2): FCT and FNT
Group 3 (G3): FCT and DDKT
Group 4 (G4): FCT, FNT and DDKT
Group 5 (G5): FCT, FTT and DDKT
Group 6 (G6): FCT, FTT and FNT

Table 9 Statistical measurements of binary classification and SARA score correlation from different combinations of tests

The highest accuracy in sorting ataxics from control subjects was provided by Group 1 (Table 9). This combination also performed best in terms of AUC, sensitivity and precision. Clear separation between ataxic and control subjects is evident in Fig. 4b. Additionally, the classification achieved by Group 1 was similar to that achieved by the combination of all tests, but with a greater effect size of the correlation. The contribution of each STAR domain to Group 1 is 32% timing, 31% stability, 22% accuracy and 15% rhythmicity (Fig. 2c). Sensitivity and precision are important for diagnostic accuracy and are combined in the F1-score, the harmonic mean of sensitivity and precision [33]. The F1-score of the combined tests (0.98) was marginally better than Group 1's F1-score (0.97). However, the correlation between the scores predicted by the regression model and the SARA-Total (Table 9) was higher for Group 1 than for the combined tests (a coefficient of 0.8 compared to 0.68 when using all tests). Therefore, in the instrumented system, the FCT and FTT combination provided the best agreement with the clinical assessment of ataxia in the upper limb.

Previous studies have shown that each individual bedside test can be emulated using features derived from sensors worn while the bedside tests of CA were being performed [3, 11, 12]. However, each test emphasises different STAR domains, which raises the questions of which tests are the most useful in identifying ataxia and how much redundancy there is among them. This was addressed in this study by obtaining instrumented data while four bedside tests (FCT, FTT, FNT and DDKT) were performed; features from these data were used to model the SARA-Total and SARA-UL scores. Approximately half of the features were significantly correlated with the two SARA scores, with the highest correlations of individual features being 0.68 with the SARA-Total and 0.66 with the SARA-UL (Table 4). The feature set was further refined to a smaller subset of 22 features that maintained a high performance (accuracy) in sorting ataxics from controls (Fig. 3). Using several different learning models, it was possible to identify (diagnose) ataxics accurately using these 22 extracted features (Table 5). Not all bedside tests contributed equally to the performance of these models. The FCT contributed the most features as well as the most frequently selected features (Fig. 2e). The FCT combined with the FTT provided enough features to perform as well as the combined feature set (Fig. 4). One conclusion is that the FCT was necessary because it was the only test that included the accuracy domain from STAR (Table 3). This may in part be self-fulfilling and reflect aspects of the STAR criteria, but future studies exploring different definitions of accuracy, or other tests which measure accuracy, could address this issue. Even though accuracy was only present in the FCT, features related to the accuracy dimension were not selected in the list of common features (Table 7). One possible explanation is that accuracy is highly correlated with timing features, which may in turn have contributed to the exclusion of this dimension in LASSO.
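One way to reproduce this kind of comparison is to evaluate each grouping G1-G6 on the columns of the feature matrix belonging to its tests. The sketch below assumes a mapping from test names to feature column indices (study-specific and not given here) and reuses an evaluation function such as the LOO QDA metrics sketched earlier.

```python
def evaluate_test_groups(X, y, test_features, metric_fn):
    """Evaluate each test combination on its own feature columns.
    test_features: dict mapping 'FCT', 'FTT', 'FNT', 'DDKT' to lists of
    column indices in X (assumed to be known for the given feature matrix).
    metric_fn(X_sub, y): returns the performance metrics for that subset."""
    groups = {
        "G1": ["FCT", "FTT"],
        "G2": ["FCT", "FNT"],
        "G3": ["FCT", "DDKT"],
        "G4": ["FCT", "FNT", "DDKT"],
        "G5": ["FCT", "FTT", "DDKT"],
        "G6": ["FCT", "FTT", "FNT"],
    }
    results = {}
    for name, tests in groups.items():
        cols = [c for t in tests for c in test_features[t]]
        results[name] = metric_fn(X[:, cols], y)
    return results
```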
It is also noteworthy that the kinematic delay in the FCT contributed the most to the performance of the two models, and that the number of timing features increased substantially, forming the highest proportion of features, when the FCT and FTT were combined (32% in Fig. 2c compared to 20% in Fig. 2b). Further, the predicted values from the regression model that used timing features demonstrated the highest correlation with the SARA scores (Table 8), indicating the important role of timing in the clinical assessment of ataxia. Consistently selected features were obtained by extracting the common features from the four FS methods. It should also be noted that features belonging to the timing domain were consistently selected from 3 out of the 4 tests. They were also significant features in the combined model of G1 (Fig. 2d) and in the all-tests model (Fig. 2e).

Another conclusion from this study is that there is redundancy in the bedside tests and that not all are required to identify the presence and severity of ataxia. Multiple tests generated a plethora of features, each representing aspects of ataxic movements but also likely containing redundancy. The performance analysis of subsets of the tests uncovered the optimal combination of information, which essentially led towards a reduction in the number of tests. Different groupings result in feature combinations that can improve or degrade the performance of learning models (Table 9), and decreasing the number of features without affecting the performance of the learning model implies that redundant information has been removed. The combination of FCT and FTT alone did not degrade diagnostic performance (Fig. 4) and slightly improved the correlation in severity estimation in comparison to the performance of all tests combined (Table 9). On the other hand, the FCT and DDKT combination had the lowest accuracy in identifying ataxia.

While the SARA prescribes that the examiner should evaluate (a) the accuracy in reaching the target in the FCT, (b) the speed or time required to perform the DDKT and (c) the amplitude of the kinetic tremor in the FNT, clinical assessment is blind to which features are found to best correlate with SARA scores. It is thus of interest that not only were features that clinicians are explicitly directed to assess (e.g. accuracy in the FCT) captured, but additional features, e.g. initiation delay, were captured as well. As the instrumented test depends on these features to accurately model the SARA, this extra information is presumably identified and accounted for (possibly subconsciously) by an experienced clinician even if it is not part of their explicit evaluation. Although stability, timing, accuracy and rhythmicity are dependent on each other, as discussed in [34], in our study we referred to the SARA to assess a range of different impairments related to each of the STAR dimensions. Further research is required to assess the interdependency of the STAR dimensions.

Features could be sorted into the four ataxia dimensions (STAR). This was most straightforward in the case of the FCT, whose features could be readily placed into a STAR domain according to their physical meaning. In the case of features from the FTT, FNT and DDKT, their attribution to a specific STAR domain was according to whether the feature was more related to the primary or the secondary axis of movement. The former corresponds to movements along the direction of the axis most related to accomplishing the task objectives, e.g. the upward/downward movement in tapping or the rotation of the forearm in the DDKT (Fig. 1).
Secondary-axis movements mostly occur because of instability of the execution platform, i.e. the proximal joints (shoulder or elbow), which must be stable for accuracy of the moving distal hand or wrist. Therefore, significant differences in secondary-axis motion between ataxic and control subjects were attributed to instability in this platform. Because of the repetitive nature of these tasks, the primary movement is required to adhere to a self-defined rhythm [6]. Measures pertaining to this axis can be used to infer deficits in rhythmicity or timing. In the frequency analysis, timing aspects, or "how quickly the movement is performed", were described by the RF, whereas the MR indicated the intensity of the rhythmic movement [12], which was considered a measure of rhythmicity.

Learning models will always be improved with more subjects. Nevertheless, a cohort of people with CA of this size is relatively large in comparison to earlier studies of ataxia [13, 15, 35]. Furthermore, the power analysis and a rigorous cross-validation process supported the reliability and statistical significance necessary for assertions of clinical validity. There is an assumption that "all cerebellar ataxia is the same", and it is possible, indeed likely, that the presence of somatosensory impairment, vestibular involvement or other central nervous system (CNS) lesions may affect the objective assessment of ataxia. One of the motivations for producing more precise means of assessing ataxia is to establish whether the factors associated with ataxia arising from other neural lesions differ from those of "pure" cerebellar ataxia. This would be a subject of future studies. In a similar vein, including more severe ataxia, reflected by SARA scores \(>3\), would be important in future studies. Another potential direction of research would be to explore the combination of FCT and FTT as a mechanism for capturing the progression of disease in a longitudinal study. With the rapid advancement of pervasive Internet-of-Things technologies, capturing the severity of CA subjects more regularly in their natural environment (a non-clinical setting) and monitoring their progress remotely will inevitably enable more personalized health care with effective rehabilitation programs.

The instrumented assessment scheme proposed was based on the four widely used motor tests of upper limb functionality. The system described here was able to support clinical decision making with a smaller number of features selected from the conventional execution of these tests. The features were grouped and evaluated through the proposed definition of the ataxic manifestations (STAR) in a quantitative form, which provided a plausible interpretation of ataxia. Within the scope of upper limb assessments, the characteristics belonging to timing showed the highest association with the SARA total score. A 4-level discrete severity rating scale was introduced to be in line with the conventional scale, the SARA; this further confirmed the agreement with the current practice of clinical assessment and provided severity estimates within acceptable levels of deviation. The other important finding of this study is that the FCT and FTT were identified as the most suitable combination of assessments, providing highly accurate CA diagnosis and severity estimation among the combinations of tests considered.
The reduction of tests would potentially lead to more cost-effective assessment strategies in clinical practice, where resources such as clinician time and the number of patient visits are often limited.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

CA: Cerebellar ataxia
FCT: Finger chase test
FTT: Finger tapping test
FNT: Finger to nose test
DDKT: Tests of dysdiadochokinesia
ISULA: Instrumented System for the objective assessment of Upper Limb Ataxia
SARA: Scale for the Assessment and Rating of Ataxia
SARA-UL: Total score of upper limb tests in SARA
SARA-Total: Total score of all tests in SARA
STAR: Manifestations of ataxia (stability, timing, accuracy, rhythmicity)
FBE: Feed backward feature elimination
LASSO: Least absolute shrinkage and selection operator
QDA: Quadratic discrimination analysis
Ridge regression
SVM: Support Vector Machine
KNN: K-Nearest Neighbour
LD: Linear Discrimination
LOO: Leave-One-Out cross validation
IMU: Inertial measurement unit
FFT: Fast Fourier Transform
ACC: Accuracy
AUC: Area under the Receiver Operating Characteristics curve
MCC: Matthews correlation coefficient
ES: Effect size
Spearman correlation coefficient
CNS: Central nervous system

1. Morton SM, Bastian AJ. Cerebellar control of balance and locomotion. Neuroscientist. 2004;10(3):247–59.
2. Nguyen N, Phan D, Pathirana P, Horne M, Power L, Szmulewicz D. Quantification of axial abnormality due to cerebellar ataxia with inertial measurements. Sensors. 2018;18(9):2791.
3. Tran H, Pathirana PN, Horne M, Power L, Szmulewicz DJ. Quantitative evaluation of cerebellar ataxia through automated assessment of upper limb movements. IEEE Trans Neural Syst Rehabil Eng. 2019;27(5):1081–91.
4. Holmes G. The Croonian lectures on the clinical symptoms of cerebellar disease and their interpretation. Cerebellum. 2007;6(2):148–53.
5. Holmes G. The symptoms of acute cerebellar injuries due to gunshot injuries. Brain. 1917;40(4):461–535.
6. Schmitz-Hübsch T, Du Montcel ST, Baliko L, Berciano J, Boesch S, Depondt C, Giunti P, Globas C, Infante J, Kang J-S, et al. Scale for the assessment and rating of ataxia: development of a new clinical scale. Neurology. 2006;66(11):1717–20.
7. Trouillas P, Takayanagi T, Hallett M, Currier R, Subramony S, Wessel K, Bryer A, Diener H, Massaquoi S, Gomez C, et al. International cooperative ataxia rating scale for pharmacological assessment of the cerebellar syndrome. J Neurol Sci. 1997;145(2):205–11.
8. Shimoyama I, Ninchoji T, Uemura K. The finger-tapping test: a quantitative analysis. Arch Neurol. 1990;47(6):681–4.
9. Notermans N, Van Dijk G, Van der Graaf Y, Van Gijn J, Wokke J. Measuring ataxia: quantification based on the standard neurological examination. J Neurol Neurosurg Psychiatry. 1994;57(1):22–6.
10. Austin D, McNames J, Klein K, Jimison H, Pavel M. A statistical characterization of the finger tapping test: modeling, estimation, and applications. IEEE J Biomed Health Inf. 2014;19(2):501–7.
11. Nguyen KD, Pathirana PN, Horne M, Power L, Szmulewicz DJ. Entropy-based analysis of rhythmic tapping for the quantitative assessment of cerebellar ataxia. Biomed Signal Process Control. 2020;59:101916.
12. Krishna R, Pathirana PN, Horne M, Power L, Szmulewicz DJ. Quantitative assessment of cerebellar ataxia, through automated limb functional tests. J Neuroeng Rehabil. 2019;16(1):31.
13. Martinez-Manzanera O, Lawerman T, Blok H, Lunsing R, Brandsma R, Sival D, Maurits N. Instrumented finger-to-nose test classification in children with ataxia or developmental coordination disorder and controls. Clin Biomech. 2018;60:51–9.
14. Bastian AJ, Martin T, Keating J, Thach W. Cerebellar ataxia: abnormal control of interaction torques across multiple joints. J Neurophysiol. 1996;76(1):492–509.
15. Menegoni F, Milano E, Trotti C, Galli M, Bigoni M, Baudo S, Mauro A. Quantitative evaluation of functional limitation of upper limb movements in subjects affected by ataxia. Eur J Neurol. 2009;16(2):232–9.
16. Ekanayake SW, Morris AJ, Forrester M, Pathirana PN. BioKin: an ambulatory platform for gait kinematic and feature assessment. Healthc Technol Lett. 2015;2(1):40–5.
17. Bouten CV, Koekkoek KT, Verduin M, Kodde R, Janssen JD. A triaxial accelerometer and portable data processing unit for the assessment of daily physical activity. IEEE Trans Biomed Eng. 1997;44(3):136–47.
18. Wobbrock JO, Cutrell E, Harada S, MacKenzie IS. An error model for pointing based on Fitts' law. In: Proceedings of the SIGCHI conference on human factors in computing systems, 2008; p. 1613–22.
19. Spencer RM, Zelaznik HN, Diedrichsen J, Ivry RB. Disrupted timing of discontinuous but not continuous movements by cerebellar lesions. Science. 2003;300(5624):1437–9.
20. Schlerf J, Spencer R, Zelaznik H, Ivry R. Timing of rhythmic movements in patients with cerebellar degeneration. Cerebellum. 2007;6(3):221–31.
21. Chen W, Wang Z, Xie H, Yu W. Characterization of surface EMG signal based on fuzzy entropy. IEEE Trans Neural Syst Rehabil Eng. 2007;15(2):266–72.
22. Goldberger AL, Peng C-K, Lipsitz LA. What is physiologic complexity and how does it change with aging and disease? Neurobiol Aging. 2002;23(1):23–6.
23. Faul F, Erdfelder E, Lang A-G, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39(2):175–91.
24. Karegowda AG, Manjunath A, Jayaram M. Comparative study of attribute selection using gain ratio and correlation based feature selection. Int J Inf Technol Knowl Manag. 2010;2(2):271–7.
25. Phan D, Nguyen N, Pathirana PN, Horne M, Power L, Szmulewicz D. A random forest approach for quantifying gait ataxia with truncal and peripheral measurements using multiple wearable sensors. IEEE Sens J. 2019;20(2):723–34.
26. Urbanowicz RJ, Olson RS, Schmitt P, Meeker M, Moore JH. Benchmarking relief-based feature selection methods for bioinformatics data mining. J Biomed Inform. 2018;85:168–88.
27. Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodol). 1996;58(1):267–88.
28. Mitteroecker P, Bookstein F. Linear discrimination, ordination, and the visualization of selection gradients in modern morphometrics. Evol Biol. 2011;38(1):100–14.
29. Srivastava S, Gupta MR, Frigyik BA. Bayesian quadratic discriminant analysis. J Mach Learn Res. 2007;8(Jun):1277–305.
30. Suykens JA, Vandewalle J. Least squares support vector machine classifiers. Neural Process Lett. 1999;9(3):293–300.
31. Cunningham P, Delany SJ. k-nearest neighbour classifiers. Mult Classif Syst. 2007;34(8):1–17.
32. Picard RR, Cook RD. Cross-validation of regression models. J Am Stat Assoc. 1984;79(387):575–83.
33. Okeh U, Okoro C. Evaluating measures of indicators of diagnostic test performance: fundamental meanings and formulars. J Biom Biostat. 2012;3(1):2.
34. Tanaka H, Ishikawa T, Lee J, Kakei S. The cerebro-cerebellum as a locus of forward model: a review. Front Syst Neurosci. 2020;14:19.
35. Summa S, Schirinzi T, Bernava GM, Romano A, Favetta M, Valente EM, Bertini E, Castelli E, Petrarca M, Pioggia G, et al. Development of SaraHome: a novel, well-accepted, technology-based assessment tool for patients with ataxia. Comput Methods Programs Biomed. 2020;188:105257.
We extend our sincere appreciation to all the patients and controls who participated in this study. The work was conducted with the support of the Florey Institute of Neuroscience and Mental Health, Melbourne, Australia. This work was supported by the National Health and Medical Research Council (GNT1101304 and APP1129595) and CSIRO's Data61 (50045944), while the clinical trials were undertaken at the Royal Victorian Eye and Ear Hospital (RVEEH).

School of Engineering, Deakin University, Pigdons Road, Waurn Ponds, VIC, 3220, Australia: Ha Tran, Khoa D. Nguyen & Pubudu N. Pathirana
Florey Institute of Neuroscience and Mental Health, Royal Parade, Parkville, VIC, 3052, Australia: Malcolm K. Horne & David J. Szmulewicz
Balance Disorders & Ataxia Service, Royal Victorian Eye and Ear Hospital (RVEEH), Gisborne St, East Melbourne, VIC, 3002, Australia: Laura Power & David J. Szmulewicz
Cerebellar Ataxia Clinic, Alfred Hospital, Commercial Road, Prahran, VIC, 3004, Australia: David J. Szmulewicz

DJS, MKH and PNP conceived and designed the clinical experiments; LP conducted the clinical testing; HT and KDN analysed the data; HT, KDN, PNP, MKH and DJS wrote the paper. All authors read and approved the final manuscript. Correspondence to Ha Tran.

Written informed consent for publication was obtained from all participants. Participants gave consent in accordance with the Declaration of Helsinki prior to participation, and the study was approved by the Human Research and Ethics Committee, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia (HREC Reference Number: 11/994H/16). All study participants provided consent for publication of data and images.

Pubudu N. Pathirana was involved in the initial design and development of BioKin™ as a data collection platform. A number of academic research outcomes have been published with Pubudu N. Pathirana as a co-author, solely outlining the novelties of various signal and data processing technologies rather than the data collecting platform of BioKin™. The other authors declare that they have no competing interests.

Tran, H., Nguyen, K.D., Pathirana, P.N. et al. A comprehensive scheme for the objective upper body assessments of subjects with cerebellar ataxia. J NeuroEngineering Rehabil 17, 162 (2020). https://doi.org/10.1186/s12984-020-00790-3

Keywords: Finger chase; Finger tapping; Finger to nose; Dysdiadochokinesia; Objective assessment
CommonCrawl
April 2019, 39(4): 1975-2000. doi: 10.3934/dcds.2019083

Effect of quantified irreducibility on the computability of subshift entropy

Silvère Gangloff 1 and Benjamin Hellouin de Menibus 2
1. Institut de Mathématiques de Toulouse, Université Paul Sabatier Toulouse 3, 118 route de Narbonne, Toulouse, France
2. Laboratoire de Recherche en Informatique, Université Paris-Sud - CNRS - CentraleSupelec, Université Paris-Saclay, Bâtiment 650 Ada Lovelace, rue Noetzlin, Gif-sur-Yvette, France

Received February 2018; Revised August 2018; Published January 2019

Fund Project: The second author was supported by Basal PFB-03 CMM, Universidad de Chile, and did this work in part in the Departamento de Matemáticas, Universidad Andrés Bello, Republica 220, Santiago, Chile and the Centro de Modelamiento Matemático, Beauchef 851, Santiago, Chile.

We study the algorithmic computability of the topological entropy of subshifts subjected to a quantified version of a strong condition of mixing, called irreducibility. For subshifts of finite type, it is known that this problem goes from uncomputable to computable as the rate of irreducibility decreases. Furthermore, the set of possible values for the entropy goes from all right-recursively computable numbers to some subset of the computable numbers. However, the exact nature of the transition is not understood. In this text, we characterize a computability threshold for subshifts with decidable language (in any dimension), expressed as a summability condition on the rate function. This class includes subshifts of finite type under the threshold, and offers more flexibility for the constructions involved in the proof of uncomputability above the threshold. These constructions involve bounded density subshifts that control the density of particular symbols in all subwords.

Keywords: Entropy, tilings, symbolic dynamics, subshift, computability, mixing, irreducibility.
Mathematics Subject Classification: Primary: 37B50, 37B40; Secondary: 37B10, 68Q17.

Citation: Silvère Gangloff, Benjamin Hellouin de Menibus. Effect of quantified irreducibility on the computability of subshift entropy. Discrete & Continuous Dynamical Systems - A, 2019, 39 (4) : 1975-2000. doi: 10.3934/dcds.2019083

[1] M. D'Amico, G. Manzini and L. Margara, On computing the entropy of cellular automata, Theoretical Computer Science, 290 (2003), 1629-1646. doi: 10.1016/S0304-3975(02)00071-3.
[2] J.-C. Delvenne and V. D. Blondel, Quasi-periodic configurations and undecidable dynamics for tilings, infinite words and Turing machines, Theoretical Computer Science, 319 (2004), 127-143. doi: 10.1016/j.tcs.2004.02.018.
[3] K. Engel, On the Fibonacci number of an m×n lattice, Fibonacci Quart, 28 (1990), 72-78.
[4] S. Gangloff and M. Sablik, Quantified block gluing: aperiodicity and entropy of multidimensional SFT, Preprint, https://arXiv.org/abs/1706.01627, 2017.
[5] P. Guillon and C. Zinoviadis, Densities and entropies in cellular automata, in Conference on Computability in Europe, Springer, 7318 (2012), 253–263. doi: 10.1007/978-3-642-30870-3_26.
[6] P. Hertling and C. Spandl, Shifts with decidable language and non-computable entropy, Discrete Mathematics and Theoretical Computer Science, 10 (2008), 75-93.
[7] M. Hochman, On the dynamics and recursive properties of multidimensional symbolic systems, Inventiones Mathematicae, 176 (2009), 131-167. doi: 10.1007/s00222-008-0161-7.
[8] M. Hochman and T. Meyerovitch, A characterization of the entropies of multidimensional shifts of finite type, Annals of Mathematics, 171 (2010), 2011-2038. doi: 10.4007/annals.2010.171.2011.
[9] L. P. Hurd, J. Kari and K. Culik, The topological entropy of cellular automata is uncomputable, Ergodic Theory and Dynamical Systems, 12 (1992), 255-265. doi: 10.1017/S0143385700006738.
[10] E. Jeandel, Computability of the entropy of one-tape Turing machines, STACS - Symposium on Theoretical Aspects of Computer Science, 25 (2014), 421-432. doi: 10.4230/LIPIcs.STACS.2014.421.
[11] P. Koiran, The topological entropy of iterated piecewise affine maps is uncomputable, Theoretical Computer Science, 4 (2001), 351-356.
[12] E. H. Lieb, Residual entropy of square ice, Physical Review, 162 (1967), 162. doi: 10.1103/PhysRev.162.162.
[13] D. Lind and B. Marcus, An Introduction to Symbolic Dynamics and Coding, Cambridge University Press, 1995. doi: 10.1017/CBO9780511626302.
[14] D. A. Lind, The entropies of topological Markov shifts and a related class of algebraic integers, Ergodic Theory and Dynamical Systems, 4 (1984), 283-300. doi: 10.1017/S0143385700002443.
[15] J. Milnor, Is the entropy effectively computable?, Unpublished note, http://www.math.stonybrook.edu/~jack/comp-ent.pdf.
[16] J. Milnor and C. Tresser, On entropy and monotonicity for real cubic maps, Communications in Mathematical Physics, 209 (2000), 123-178. doi: 10.1007/s002200050018.
[17] M. Misiurewicz, On non-continuity of topological entropy, Bull. Ac. Pol. Sci. Ser. Sci. Math. Astr. Phys., 19 (1971), 319-320.
[18] R. Pavlov et al., Approximating the hard square entropy constant with probabilistic methods, The Annals of Probability, 40 (2012), 2362-2399. doi: 10.1214/11-AOP681.
[19] R. Pavlov and M. Schraudner, Entropies realizable by block gluing $\mathbb Z^d$-subshifts of finite type, Journal d'Analyse Mathématique, 126 (2015), 113-174. doi: 10.1007/s11854-015-0014-4.
[20] J. G. Simonsen, On the computability of the topological entropy of subshifts, Discrete Mathematics and Theoretical Computer Science, 8 (2006), 83-95.
[21] C. Spandl, Computing the topological entropy of shifts, Mathematical Logic Quarterly, 53 (2007), 493-510. doi: 10.1002/malq.200710014.
[22] B. Stanley, Bounded density shifts, Ergodic Theory and Dynamical Systems, 33 (2013), 1891-1928. doi: 10.1017/etds.2013.38.

Figure 1. Every pattern $v$ on $D_m$ appearing in some locally admissible pattern of $\mathcal A ^{C_{N_0}}$ appears jointly with $u$ in some other locally admissible pattern
Figure 2. Illustration of Definition 3.12
Figure 3. Illustration of the definition of the function ${\delta}_N$
Figure 4. Illustration of the definition of the algorithm. The sequence is already defined up to $F^{n+1}(1)$. The number $m^{*}_n$ is the smallest one such that the mixing condition is verified
Table 1. (First line) Computational difficulty of computing the entropy; (Second line) Set of possible entropies. "Weak" and "Strong" mixing stand for irreducibility rates above or below the threshold, respectively; "Very strong" stands for constant irreducibility rates, or similar properties. "$\Pi_1$-comp." means that the problem is $\Pi_1$-computable, but not computable; "$\Pi_1$ reals" stands for the set of $\Pi_1$-computable reals; $\dagger$ symbols indicate the contribution of the present article.

Subshift class: SFT, $d\geq 2$
  Computational difficulty -- None: $\Pi_1$-comp. [8]; Weak: ?; Strong: computable $\dagger$; Very strong: computable [8]
  Possible entropies -- None: all $\Pi_1$ reals [8]; Weak: ?; Strong: ?; Very strong: partial char. [19]

Subshift class: Decidable, $d\geq 1$
  Computational difficulty -- None: $\Pi_1$-comp. [20]; Weak: $\Pi_1$-comp. $\dagger$; Strong: computable $\dagger$; Very strong: computable [21]
  Possible entropies -- None: all $\Pi_1$ reals [6]; Weak: all $\Pi_1$ reals $\dagger$; Strong: ?; Very strong: ?
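As a concrete illustration of the "computable" entries in the simplest setting (not taken from this paper): for a one-dimensional subshift of finite type presented by a 0/1 transition matrix, the topological entropy is the logarithm of the Perron eigenvalue of that matrix, a standard fact for topological Markov shifts (see Lind and Marcus [13], Lind [14]). A minimal sketch:

```python
import numpy as np

def sft_entropy(adjacency):
    """Topological entropy of a 1-D SFT given by a 0/1 transition matrix:
    log of the spectral radius (Perron eigenvalue) of the matrix."""
    eigenvalues = np.linalg.eigvals(np.asarray(adjacency, dtype=float))
    return float(np.log(np.max(np.abs(eigenvalues))))

# Golden mean shift (no two consecutive 1s): entropy = log((1 + sqrt(5)) / 2)
print(sft_entropy([[1, 1], [1, 0]]))
```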
CommonCrawl
Quality of life and psychological distress in women with recurrent miscarriage: a comparative study

Zahra Tavoli 1, Mahsa Mohammadi 2, Azadeh Tavoli 3, Ashraf Moini 1, Mohammad Effatpanah 4, Leila Khedmat 5,7 & Ali Montazeri 6 (ORCID: orcid.org/0000-0002-5198-9539)

This study aimed to evaluate quality of life and psychological distress in Iranian women with recurrent miscarriage and to compare it with that of women without miscarriage. This was a comparative study of quality of life among women with and without recurrent miscarriage. Cases were selected from patients presenting with recurrent miscarriage, and the comparison group was selected from women attending two teaching hospitals for annual screening. Quality of life (QOL) was measured using the 36-Item Short Form Survey (SF-36). In addition, the Hospital Anxiety and Depression Scale (HADS) was used to measure anxiety and depression. Comparisons were made between the two groups using the independent samples t-test and chi-square test. In all, 105 women with recurrent miscarriage and 105 healthy women were studied. The socio-demographic status of both groups was similar. Women with recurrent miscarriage showed a significantly higher degree of psychological distress [mean (SD) anxiety score: 10.6 (2.3) vs. 9.1 (2.2), P < 0.0001; mean (SD) depression score: 11.0 (2.3) vs. 9.5 (1.9), P < 0.0001]. In addition, women with recurrent miscarriage reported significantly lower levels of quality of life in all domains (role physical, general health, vitality, social functioning, role emotional, and mental health, all P values < 0.0001), except for physical functioning (P = 0.06) and bodily pain (P = 0.17). The findings demonstrated that women with recurrent miscarriage reported extensive functional disability and a lower level of well-being compared to women without recurrent miscarriage. The findings have some implications for prenatal care and suggest that appropriate treatment of recurrent miscarriage is essential.

Recurrent pregnancy loss, or recurrent miscarriage, is characterized as three or more consecutive pregnancy losses prior to 20 weeks from the last menstrual period. Spontaneous pregnancy loss has been estimated to occur in approximately 15% of clinically diagnosed pregnancies [1]. There are a number of etiological causes for recurrent miscarriage such as immunologic, genetic, and anatomic abnormalities, endocrine disorders, infections, heritable and/or acquired thrombophilias and environmental factors. However, even after thorough evaluation, recurrent pregnancy loss remains unexplained in 60% of cases [2]. Qualitative studies have indicated that a history of miscarriage can harm women and be associated with feeling anxious, the development of psychological disorders, and reduced quality of life in this population [3]. Most studies on quality of life and psychological disorders come from more developed countries. Women with a history of recurrent miscarriage experience an increase in depressive symptoms and may be at increased risk of negative psychological effects such as pregnancy-related anxiety, depression, irritability, excessive fatigue, fear, sleep disorders and lack of concentration [4, 5]. The increase in psychological morbidity immediately following miscarriage among women is well documented [6].
The importance of psychological factors and socioeconomic status directly affecting pregnancy, or mediating the effects leading to pregnancy loss, continues to be underestimated in clinical settings, especially in recurrent miscarriage, despite research demonstrating their importance. In parallel, the health behaviors of women with a history of recurrent miscarriage are of great concern. Understanding women's health behaviors subsequent to miscarriage is very important for the promotion of optimal health in women with a history of recurrent miscarriage. Therefore, the purpose of this study was to evaluate the effects of recurrent miscarriage on the quality of life and psychological distress of women with recurrent miscarriage.

Study population and data collection
This was a cross-sectional study, and participants were selected from 15- to 50-year-old women attending the gynecology outpatient clinics at two teaching hospitals affiliated with Tehran University of Medical Sciences between 2014 and 2015. A sample of women who had a history of three or more recurrent miscarriages (defined as any pregnancy involuntarily ending before 20 weeks) and a comparison group consisting of a sample of women who did not have recurrent miscarriage and did not face infertility problems were entered into the study. Exclusion criteria were a history of psychiatric disorders, a history of addiction, treatment with anti-anxiety or antidepressant drugs, being pregnant at the time of the study, and suffering from chronic diseases. All women in both groups were approached during the study period and asked to respond to the study questionnaires. They were assured that their information would remain confidential. Informed consent was obtained from all participants.

Sample size calculation
The sample size was calculated using the following formula
$$ n=\frac{\left(z_{1-\alpha/2}+z_{1-\beta}\right)^2\left(s_1^2+s_2^2\right)}{\left(\mu_2-\mu_1\right)^2} $$
As such, to have a study with 80.0% power that was able to detect a 20% difference in quality of life scores between women with and without miscarriage, a sample of 100 women in each group was required.

Data were collected using a demographic questionnaire, the Short Form Health Survey (SF-36), and the Hospital Anxiety and Depression Scale (HADS). The demographic questionnaire, containing 19 questions, was administered to collect data on age, occupational status, smoking, education, number of children, number of abortions, gestational age at the time of abortion and time of abortion. The SF-36 questionnaire is a general measure of health-related quality of life and contains 8 subscales, namely physical functioning, role physical, bodily pain, general health, vitality, social functioning, role emotional and mental health. Scores on each subscale range from 0 (the worst) to 100 (the best). Psychometric properties of the Iranian version of the questionnaire are well documented [7]. The Hospital Anxiety and Depression Scale (HADS) was used to evaluate the levels of anxiety and depression. The HADS is a fourteen-item scale with two subscales, anxiety and depression; scores on each subscale range from 0 to 21, with higher scores indicating higher levels of anxiety and depression.
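The per-group sample size follows directly from the formula above. The snippet below is only an illustration of the arithmetic: the standard deviations are assumed values (the SDs the authors used are not reported), chosen so that a 20-point difference on a 0-100 SF-36 scale yields roughly the 100 women per group used in the study.

```python
from scipy.stats import norm

def sample_size_per_group(mu1, mu2, s1, s2, alpha=0.05, power=0.80):
    """n = (z_{1-alpha/2} + z_{1-beta})^2 * (s1^2 + s2^2) / (mu2 - mu1)^2"""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return (z_alpha + z_beta) ** 2 * (s1 ** 2 + s2 ** 2) / (mu2 - mu1) ** 2

# Assumed SDs of 50 on a 0-100 quality-of-life scale give about 98 per group,
# in line with the 100 women per group recruited in each arm.
print(sample_size_per_group(mu1=50, mu2=70, s1=50, s2=50))
```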
A score of 11 or more is considered a case of disorder that usually requires treatment, scores of 8–10 represent a borderline condition for which individuals are usually referred for psychiatric assessment, and scores between 0 and 7 indicate normal status [8]. The Iranian version of the HADS exists and its psychometric properties are reported elsewhere [9]. Descriptive statistics were used to explore the data. We performed independent samples t-tests and chi-square tests for group comparisons where necessary. A P value of less than 0.05 was considered significant. The ethics committee of Tehran University of Medical Sciences approved the study. All participants gave informed consent prior to study commencement. In all, 210 women were entered into the study. The mean age of women with and without miscarriage was 32.1 (SD = 4.7) and 32.2 (5.6) years, respectively (P = 0.86). Furthermore, 72.4% of participants were housewives and 27.6% were employed. In terms of education, the majority of women had higher education (53.3%). Overall, 52.9% of the participants had a history of childbirth and 47.1% had no successful childbirth history. There were no significant differences between the study groups. However, there was, as expected, a significant difference in terms of having a child (P < 0.0001). The results are shown in Table 1. Table 1 Descriptive information regarding education, occupation and childbirth in the women under study The quality of life data are summarized in Table 2. There were significant differences between women with and without miscarriage in all quality of life subscales as measured by the SF-36 (P < 0.0001) except for physical functioning (P = 0.06) and bodily pain (P = 0.17). Table 2 Comparison of quality of life between women with and without miscarriage Table 3 presents data for anxiety and depression. There were significant differences between women with and without miscarriage, indicating that women with recurrent miscarriage experienced higher levels of anxiety and depression. Table 3 Comparison of anxiety and depression in women with and without recurrent miscarriage Furthermore, the data demonstrated that there were no significant differences in quality of life scores between women with and without children in the recurrent miscarriage group, except for general health (P = 0.001) and mental health (P = 0.03). In addition, when anxiety and depression were compared between these women, the findings showed that women with recurrent miscarriage who did not have a child suffered more than those who had at least one child (P value for anxiety = 0.005, P value for depression = 0.02). The findings are shown in Table 4. Table 4 Comparison of anxiety, depression and quality of life among women with a history of recurrent miscarriage with or without children The purpose of this study was to determine the impact of recurrent miscarriage on the quality of life and psychological distress of women with recurrent miscarriage compared to women without a history of miscarriage. The results showed that women with a history of recurrent pregnancy loss differed from women without such a history on most health-related quality of life measures. The quality of life scores based on the SF-36 indicated that general health perceptions, vitality, role physical, role emotional, social functioning and mental health in women with recurrent miscarriage were lower than in those without a history of multiple miscarriages.
However, no significant difference was found in terms of physical functioning and bodily pain between women with and without a history of recurrent miscarriage. This finding suggests that, in order to prevent the loss of quality of life in women subsequent to a miscarriage, supportive measures to promote mental health, in addition to treating physical illness, should be initiated by the treatment team. Couto et al. [10] reported that women with recurrent miscarriage had poorer results in all items, including physical functioning, social functioning, role emotional, bodily pain, general health, mental health and vitality. However, in the present study, there were no significant differences between the two groups in physical functioning and bodily pain. This difference could be due to the different status of the participants, because both groups in the mentioned study were pregnant; fear of a further miscarriage may have limited their physical functioning and produced physical symptoms such as headaches, which can be signs of anxiety. Another study showed that women with recurrent abortions had low scores in mental health and physical health [11]. In the present study, the findings indicated that individuals with recurrent miscarriage experienced more anxiety than those without a history of recurrent miscarriage. Similarly, Couto et al. [9], using the HADS, found that the anxiety level in women with a history of unsuccessful pregnancy was higher than that of the control group. Other studies showed that two or more miscarriages were correlated with higher levels of state anxiety during pregnancy [12]. It has been reported that women with a history of miscarriage show higher levels of pregnancy-related anxiety, but studies have not consistently shown whether these psychological disorders persist through the subsequent pregnancy [2, 5, 13]. Mevorach-Zussman et al. [11] found that all women with recurrent miscarriage showed mild to moderate anxiety levels; these results indicate that women who have experienced recurrent miscarriage, whether or not they undergo a new pregnancy, may suffer higher rates of anxiety in comparison with those without such experiences. High levels of anxiety can lead to a decreased quality of life and may perhaps be a factor in spontaneous abortion in subsequent pregnancies or in premature birth. In the present study, the rate of depression increased more markedly in the recurrent miscarriage group than in the control group. Couto et al. reported that the rate of depression in pregnant women with previous adverse pregnancy outcomes was higher than that of control group women, which is consistent with our study results [10]. Blackmore et al. [14] found that previous recurrent pregnancy loss might be a predictor of perinatal depression. Depression not only worsens patients' living conditions but also affects quality of life by reducing vitality, mental health, general health perceptions and social role functioning. Therefore, the quality of life of these patients may be improved by treating depression. Our findings suggested that the levels of anxiety and depression of women without children suffering from recurrent miscarriage increased more markedly than those of women with at least one child. This is probably due to their fear that they may never have children, indicating the effect of anxiety and depression on mental health and quality of life.
If a woman has recurrent abortions, her quality of life decreases, while having a child may improve mental health and general health perceptions compared with other women with recurrent miscarriage. It has been reported that the children-to-pregnancies ratio shows a significant association with mental health and sleep quality [11]. The descriptive nature of the study could be regarded as a limitation. In addition, we did not collect data on many confounding factors that might influence the results. For instance, we did not collect data on the socioeconomic backgrounds of women or on some important reproductive information, including gestational age at miscarriage and history of infertility. For future studies, a better study design and a thorough collection of data are recommended. The findings from this study indicated that women with recurrent miscarriage suffer from sub-optimal health-related quality of life and experience higher levels of anxiety and depression compared to women without a history of miscarriage. The findings have some implications for prenatal care delivery. SF-36: Short Form Health Survey Rai R, Regan L. Recurrent miscarriage. Lancet. 2006;368:601–11. Bicking Kinsey C, Baptiste-Roberts K, Zhu J, Kjerulff KH. Effect of multiple previous miscarriages on health behaviors and health care utilization during subsequent pregnancy. Womens Health Issues. 2015;25:155–61. Adolfsson A, Johansson C, Nilsson E. Swedish women's emotional experience of the first trimester in a new pregnancy after one or more miscarriages: a qualitative interview study. Adv Sex Med. 2012;2:38–45. Gong X, Hao J, Tao F, Zhang J, Wang H, Xu R. Pregnancy loss and anxiety and depression during subsequent pregnancies: data from the C-ABC study. Eur J Obstet Gynecol Reprod Biol. 2013;166:30–6. Woods-Giscombe CL, Lobel M, Crandell JL. The impact of miscarriage and parity on patterns of maternal distress in pregnancy. Res Nurs Health. 2010;33:316–28. Lok IH, Yip AS, Lee DT, Sahote D, Chung TK. A 1-year longitudinal study of psychological morbidity after miscarriage. Fertil Steril. 2010;93:1966–75. Montazeri A, Goshtasebi A, Vahdaninia M, Gandek B. The short form health survey (SF-36): translation and validation study of the Iranian version. Qual Life Res. 2005;14:875–82. Zigmond AS, Snaith PR. The hospital anxiety and depression scale. Acta Psychiatr Scand. 1983;67:361–70. Montazeri A, Vahdaninia M, Ebrahimi M, Jarvandi S. The hospital anxiety and depression scale (HADS): translation and validation study of the Iranian version. Health Qual Life Outcomes. 2003;1:14. Couto ER, Couto E, Vian B, Gregorio Z, Nomura ML, Zaccaria R, Passini Junior R. Quality of life, depression and anxiety among pregnant women with previous adverse pregnancy outcomes. Sao Paulo Med J. 2009;127:185–9. Mevorach-Zussman N, Bolotin A, Shalev H, Bilenco N, Mazor M, Bashiri A. Anxiety and deterioration of quality of life factors associated with recurrent miscarriage in an observational study. J Perinat Med. 2012;40:495–501. Fertl KI, Bergner A, Beyer R, Klapp BF, Rauchfuss M. Levels and effects of different forms of anxiety during pregnancy after a prior miscarriage. Eur J Obstet Gynecol Reprod Biol. 2009;142:23–9. Hamama L, Rauch SA, Sperlich M, Defever E, Seng JS. Previous experience of spontaneous or elective abortion and risk for posttraumatic stress and depression during subsequent pregnancy. Depress Anxiety. 2010;27:699–707.
Blackmore ER, Côté-Arsenault D, Tang W, Glover V, Evans J, Golding J, O'Connor TG. Previous prenatal loss as a predictor of perinatal depression and anxiety. Br J Psychiatry. 2011;198:373–8. The authors are grateful to the participants who made this study possible. Tehran University of Medical Sciences financially supported the project. A minimal set of data is available from the corresponding author on request. Department of Obstetrics and Gynecology, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran Zahra Tavoli & Ashraf Moini School of Medicine, Tehran University of Medical Sciences, Tehran, Iran Mahsa Mohammadi Department of Psychology, Faculty of Educational Sciences and Psychology, Alzahra University, Tehran, Iran Azadeh Tavoli Department of Psychiatry, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran Mohammad Effatpanah Department of Community Medicine, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran Leila Khedmat Population Health Research Group, Health Metrics Research Centre, Iranian Institute for Health Sciences Research, ACECR, Tehran, Iran Ali Montazeri Health Management Research Center and Department of Community Medicine, Faculty of Medicine, Baqiyatallah University of Medical Sciences, Tehran, Iran Zahra Tavoli Ashraf Moini ZT performed the research, designed the study and wrote the first draft; MM, AT, AM, ME and LKH collected clinical data, contributed to clinical interpretation, participated in the literature search, drafted the manuscript, analyzed the data, and read and revised the paper. AM was involved in drafting and reviewing the manuscript and gave the final approval for publication. All authors read and approved the final manuscript. Correspondence to Ali Montazeri. The study was approved by Tehran University of Medical Sciences. All research activities were performed in accordance with the Declaration of Helsinki. Tavoli, Z., Mohammadi, M., Tavoli, A. et al. Quality of life and psychological distress in women with recurrent miscarriage: a comparative study. Health Qual Life Outcomes 16, 150 (2018). https://doi.org/10.1186/s12955-018-0982-z
CommonCrawl
Study the continuity domain and derivability domain Let $(u_n)_{n\in\mathbb{N}}$ with the general term $u_n=\frac {1+x^{n}}{1+x+x^{2}+...+x^{n+p-1}}$, where $x\ge0$ and $p \in \mathbb{N}$. Let $f(x)= \lim_{n\to\infty}u_n$. Find the differentiability and continuity domain. First I tried to simplify $u_n$ a little using the sum of the geometric progression and I got this $$u_n=\frac{(1+x^{n})(x-1)}{x^{n+p}-1}.$$ So if $x \in (0,1)$, then $f(x)=1-x$. What should I do when $x = 1$ and $x>1$? calculus real-analysis derivatives C. Cristi First of all, I think you have a slight error in your formula for the sum of a geometric progression: the denominator must be $x^{n+p}-1$, not $x^{n+p\color{red}{-1}}-1$. If $x=1$, you can literally plug it into the original expression for $u_n$. If $x>1$, then let's divide the numerator and denominator by $x^n$: $$u_n=\frac{(1+x^n)(x-1)}{x^{n+p}-1}=\frac{\left(\frac{1}{x^n}+1\right)(x-1)}{x^p-\frac{1}{x^n}},$$ and observe that in this case $1/x^n\to0$ as $n\to\infty$. zipirovich $\begingroup$ oh, you're right $\endgroup$ – C. Cristi Apr 18 '18 at 13:46 $\begingroup$ If I plug in 1 in the original expression, doesn't it introduce some errors? $\frac 00$? $\endgroup$ – C. Cristi Apr 18 '18 at 13:47 $\begingroup$ @C.Cristi: Not into the simplified, but into the original: $u_n(x)=\frac{1+x^n}{1+x+x^2+...+x^{n+p-1}}$, so $u_n(1)=\cdots$. $\endgroup$ – zipirovich Apr 18 '18 at 13:50 $\begingroup$ Yeah, right. If I'm right $u_n(1)=0$? $\endgroup$ – C. Cristi Apr 18 '18 at 13:50 $\begingroup$ @C.Cristi: yes. $\endgroup$ – zipirovich Apr 18 '18 at 13:51
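Not part of the original thread, but a quick numerical check of the limit discussed above. The sketch below compares $u_n$ for large $n$ with the piecewise limit suggested by the answer ($1-x$ for $0\le x<1$, $0$ at $x=1$, and $(x-1)/x^p$ for $x>1$); the final line looks at one-sided difference quotients near $x=1$, which suggest a corner there (continuity but no differentiability at $x=1$). This is only a numerical illustration, not a proof.

def u(x, n, p):
    # u_n(x) = (1 + x^n) / (1 + x + ... + x^(n+p-1)), straight from the definition
    return (1 + x ** n) / sum(x ** k for k in range(n + p))

def f(x, p):
    # candidate limit derived in the answer above
    if x < 1:
        return 1 - x
    if x == 1:
        return 0.0          # u_n(1) = 2 / (n + p) -> 0
    return (x - 1) / x ** p

p, n = 3, 500
for x in [0.0, 0.5, 1.0, 2.0]:
    print(x, u(x, n, p), f(x, p))   # at x = 1 convergence is slow, of order 1/n

h = 1e-6
print((f(1 + h, p) - f(1, p)) / h, (f(1, p) - f(1 - h, p)) / h)  # ~ +1 and -1: corner at x = 1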
CommonCrawl
Why should algebraic objects have naturally associated topological spaces? (Formerly: What is a topological space?) In this question, Harry Gindi states: The fact that a commutative ring has a natural topological space associated with it is a really interesting coincidence. Moreover, in the answers, Pete L. Clark gives a list of other "really interesting coincidences" of algebraic objects having naturally associated topological spaces. Is there a deeper explanation of the occurrence of these "really interesting coincidences"? It seems to suggest that the standard definition of "topological space" (collection of subsets, unions, intersections, blah blah), which somehow always seemed kind of a weird and artificial definition to me, has some kind of deeper significance or explanation, since it pops up everywhere... The (former) title of this question is meant to be provocative ;-) What are interesting families of subsets of a given set? How can I really motivate the Zariski topology on a scheme? --- particularly Allen Knutson's answer Edit 1: I should clarify a bit. Let me be more explicit: Is there a unified explanation (mathematical ... or perhaps not) for why various algebraic (where "algebraic" is loosely defined) objects should have naturally associated topological spaces? Pete in the comments notes that he does not like the use of the word "coincidence" here --- but if these things are not coincidences, then what's the explanation? Of course I do understand the intuitive idea behind the definition of "topological space", and how it abstracts for example the notions of "neighborhood" and "near" and "far". It is not surprising that the formalism of topological spaces is useful and ubiquitous in situations involving things like R^n, subsets of R^n, manifolds, metric spaces, simplicial complexes, CW complexes, etc. However, when you start with algebraic objects and then get topological spaces out of them --- I find that surprising somehow because a priori there is not necessarily anything "geometric" or "topological" or "shape-y" or "neighborhood-y" going on. Edit 2: Somebody has voted to close, saying this is "not a real question". I apologize for my imprecision and vagueness, but I still think this is a real question, for which real (mathematical) answers can conceivably exist. For example, I'm hoping that maybe there is a theorem along the lines of something like: Given an algebraic object A satisfying blah, define Spec(A) to be the set of blah-blahs of A such that blah-blah-blah. There is a natural topology on Spec(A), defined by [something]. When A is a commutative ring, this agrees with the Zariski topology on the prime spectrum. When A is a commutative C^* algebra, this agrees with the [is there a name?] topology on the Gelfand spectrum. When A is a Boolean algebra... When A is a commutative Banach ring... etc. Of course, such a theorem, if such a theorem exists at all, would also need a definition of 'algebraic object'. gn.general-topology ag.algebraic-geometry noncommutative-geometry oa.operator-algebras Kevin H. Lin $\begingroup$ The article is Esquisse d'un Programme (Sketch of a Program), which is available on the Grothendieck Circle website: grothendieckcircle.org The relevant part is around page 22. $\endgroup$ – Steven Gubkin Feb 8 '10 at 13:32 $\begingroup$ For what it's worth, I don't regard a functor from a category of "algebraic objects" (broadly construed) to the category Top of topological spaces as a coincidence.
Is it exciting that there are highly nontrivial -- sometimes fully faithful -- functors from algebraic categories to Top? Definitely. Were the first such examples of this (by Stone) surprising to the mathematical community? Presumably (I wasn't there). But I don't like the term "coincidence" here, and I certainly did not use it myself. $\endgroup$ – Pete L. Clark Feb 8 '10 at 13:46 $\begingroup$ I think the fact that there are so many topological structures on various things has less to do with the things and more to do with our desire to put a topological structure on anything we study. This is the first step of trying to think of something in geometric terms rather than purely algebraically. (As various answers below explain, while the axioms are confusing at first, they try to encode quite intuitive notions of "near" and "far". The reason they are confusing is, in my opinion, that people worked hard to make these notions as general as possible, and apply them to very weird spaces) $\endgroup$ – Ilya Grigoriev Feb 8 '10 at 18:23 $\begingroup$ Ilya has a point; I think mathematicians like to convince themselves that what they are studying is universal but of course the priorities of human mathematicians don't remain uninfluenced by the history of their subject. $\endgroup$ – Qiaochu Yuan Feb 8 '10 at 19:53 $\begingroup$ I think you are interpreting the phrase "put a topological structure on" too strictly. What I interpret Ilya to mean is something like "study the classifying space of a group" instead of "talking about topological groups." $\endgroup$ – Qiaochu Yuan Feb 9 '10 at 6:43 I will take the question at face value, but not in the sense of justifying the definition. A topological space is a convenient way of encoding, or perhaps better, organising, certain types of information. (Vague but true! I will give some instances. The data is sometimes `spatial' but more often than not, is not.) Perhaps we should not think of spaces as 'god given', merely 'convenient', and there are variants that are more appropriate in various contexts. A related question, coming from an old Shape Theorist (myself) is: when someone starts a theorem with 'Given a space $X$...', how is the space 'given'? As an algebraic topologist I sometimes need to use CW-complexes, but face the inconvenience that if I could give the CW structure precisely I could probably write down an algebraic model for its homotopy type precisely, and vice versa, so a good model is exactly the same as the one I started with. I hoped for more insight into what the space 'was' from my modelling. Giving the space is the end of the process, not the beginning. Strange. A space is a pseudo-visual way of thinking about 'data', which encodes important features, or at least some features that we can analyse, partially. If someone gives me a compact subspace of $\mathbb{R}^n$, perhaps using some equations and inequalities, can I work out algebraic invariants of its homotopy type, rather than just its weak homotopy type? The answer will usually be no. Yet important properties of $C^*$ algebras on such a space can sometimes be related to algebraic topological invariants of the homotopy type. Spaces can arise as ways of encoding actual data as in topological data analysis, where there is a 'cloud' of data points and the practitioner is supposed to say something about the underlying space from which the data comes. There are finitely many data points, but no open sets given; they are for the data analyst to 'divine'.
Not all spatial data is conveniently modelled by spaces as such and directed spaces of various types have been proposed as models for changing data. Models for space-time are like this, but also models for concurrent systems. Looking at finite topological spaces is again useful for encoding finite data (and I have rarely seen infinite amounts of data). For instance, relations between finite sets of data can be and are modelled in this way. Finite spaces give all homotopy types realisable by finite simplicial complexes. Finite spaces can be given precisely (provided they are not too big!) How do invariants of finite spaces appear in their structure? (Note the problem of infinite intersections does not arise here!!!) At the other extreme, do we need points? Are locales not cleaner beasties and they can arise in lots of algebraic situations, again encoding algebraic information. Is a locale a space? I repeat topological spaces are convenient, and in the examples you cite from algebraic geometry they happen to fit for good algebraic reasons. In other contexts they don't. Any Grothendieck topos looks like sheaves on a space, but the space involved will not usually be at all `nice' in the algebraic topological sense, so we use the topos and pretend it is a space, more or less. Tim PorterTim Porter $\begingroup$ Just as a data point: I was scared of topology until I learned about locales. At that point the light went on, and I said, "Oh, topologies are representations of Heyting algebras! Intuitionistic logic isn't scary, so I shouldn't be scared of topology, either!" $\endgroup$ – Neel Krishnaswami Feb 9 '10 at 9:04 $\begingroup$ People may know of the book 'Topology via logic' by Steve Vickers. That is very relevant to this question and this comment. $\endgroup$ – Tim Porter Feb 9 '10 at 11:39 $\begingroup$ That's just the one I read! It's a charming little book. $\endgroup$ – Neel Krishnaswami Feb 9 '10 at 13:27 $\begingroup$ Tim, you seem to take an anthropocentric view at first, but in the end you suggest that there might be external reasons too. This is sort of a chicken & egg question, but I would appreciate a clarification of your point of view. Do you think that topology is a nice way that we came up with to organize things in our environment, or that natural topological organization is something we grew to like enough that we feel a need to emphasize it whenever possible? $\endgroup$ – François G. Dorais♦ Feb 10 '10 at 5:46 $\begingroup$ My thoughts are, sort of, pragmatic, i.e. what works! When looking at something, we tend to use `models'. I think there is something 'out there' to model, but the models are not the same as the 'reality' we seek to understand. A lot of mathematics involves 'relations' in the non-technical as well as the technical sense. You can model relations in various ways including 'spatial' ones. Look at Chu spaces (Vaughan Pratt), or the old theorem of Dowker on simplicial complexes associated to relations. Those ideas help us organise things in a useful way, and are inherent in the idea of relation. $\endgroup$ – Tim Porter Feb 10 '10 at 7:55 This is an excellent question, I think. Topological spaces could be crudely --- and I mean crudely --- divided into two kinds: The geometric. These are the spaces that come up all over geometry and algebraic topology — manifolds, CW complexes, configuration spaces, CW complexes, etc. 
These are almost invariably Hausdorff, though there are plenty of compact Hausdorff spaces that are often thought of as "pathological", such as the Cantor set. I don't much like terms such as "nice space" and "pathological", because although they might be intended harmlessly, they sound to me a bit dismissive towards the second kind of space. The spectra. I'm using this term loosely (and not in the sense of homotopy theory), but I mean things like the spectrum of a ring, the spectrum of a Boolean algebra (= a compact Hausdorff totally disconnected space), the maximal spectrum of a C*-algebra, and then things such as Julia sets, dynamical attractors, and solutions to iterated function systems, all of which have a spectrummy feel to me. The spectra seem a bit unloved. No one denies the importance of, say, Spec of a commutative ring, but still, I reckon that most mathematicians subconsciously regard geometric spaces as the primary kind, and sometimes the spectra simply get swept away as "pathological". Edit: Don't read this without also reading Ilya Grigoriev's comment below! Tom LeinsterTom Leinster $\begingroup$ +1+ε (The little extra is for the delightful term spectrummy.) $\endgroup$ – François G. Dorais♦ Feb 9 '10 at 0:30 $\begingroup$ There is at least one additional kind: functional analysis topologies, especially all the various weak convergence topologies. $\endgroup$ – Ilya Grigoriev Feb 9 '10 at 3:42 $\begingroup$ Ah, good point. For a start, Banach spaces and Hilbert spaces are all around. $\endgroup$ – Tom Leinster Feb 9 '10 at 4:11 $\begingroup$ Tom, this is probably hopelessly muddle-headed of me, but it struck me that some of your geometric examples have a "built up from simpler pieces feel", which could be "direct-limit-ish", and some of your spectrummy example have an "inverse-limit-ish" feel. Perhaps the latter, which tend to arise via contravariant functors, are indicating that it's the algebraic objects which are built out of simpler pieces; with colimits being taken to limits? $\endgroup$ – Yemon Choi Feb 9 '10 at 4:55 $\begingroup$ There's also the topology on the integers in Fürstenberg's proof of the infinitude of primes. $\endgroup$ – Anonymous Feb 9 '10 at 7:15 I think a reasonable partial explanation comes from universal algebra. The lattice Con(A) of congruences of an algebra is always a complete algebraic lattice. Therefore, it is meet continuous in the sense that $\bigvee_i a \wedge b_i = a \wedge \bigvee_i b_i$ whenever the $b_i$ form a directed family of congruences. When Con(A) happens to be finitely distributive, then one can drop the 'directed' requirement. In this case, Con(A) becomes a frame and it can thus be viewed as an abstract topological space (i.e. a locale). In fact, since Con(A) is algebraic the corresponding locale is always spatial and it always corresponds to a concrete spectral space. In the case of a commutative ring A, the lattice Con(A) is isomorphic with the lattice Id(A) of ideals of A. The lattice Id(A) is not always distributive. (Though it is when A is a Prüfer domain and hence when A is a Dedekind domain, for example.) To remedy this, one looks at the radical ideals of A, which are always better behaved, to define the Zariski spectrum. In my humble opinion, the existence of radicals makes commutative rings very special among algebras. François G. Dorais♦François G. Dorais $\begingroup$ how much better behaved are the radical ideals? 
$\endgroup$ – Mozibur Ullah Aug 2 '12 at 0:18 (I have deleted my original answer to this question because the question was changed in such a way that it made my answer irrelevant. I still think that my basic point was valid, and so am posting a new answer to make that point again. As my original post gained quite a few votes, I judge it not ethical to completely reword my answer but keep those votes. I am also making this answer "community wiki" not because I think anyone else should edit it but to remove it from the reputation/vote game.) I think that the basic answer to this question is that there are connections between algebraic and topological things because we look for them. And we look for them because we have, in the past, found them useful. Something I continually (and I mean "continually", just ask one) tell my students is that mathematicians are fundamentally lazy. If we have a good theorem, we don't just use it for what it was first proved for, we look for other ways to use it, ways to extend it, ways to push it further than it was ever intended to be pushed. So if, as a topologist, I see the algebraists doing wonderful things with classifying and studying rings, then I'll do my best to make a ring out of my topological space so that I can steal (sorry, "use") their ideas and save myself a lot of bother. Thus: cohomology theory and the whole area of homotopy theory. That the reverse is true is no surprise. Again, mathematicians are lazy so if we see a bridge with lots of useful stuff going in one direction, we ignore the "one way" signs and go the other way. You could then ask "Why do the bridges exist at all?". Well, they don't always exist. Sometimes we can construct them and sometimes not. It feels a bit like you are looking at a bridge and say "Wow! Who would ever have thought of putting a bridge there?!" but ignoring all the stumps and collapsed half-bridges that litter the riverbank. Of course, one can ask about a specific bridge and ask why that one didn't collapse, but the question feels much more general than that. So, in conclusion, that bridges exist is, I feel, more down to the downright mulishness of mathematicians determined to build a bridge wherever they can, regardless of how many collapses and "Pont d'Avignon"s they create in the process. The above, clearly, works for any two areas of mathematics. Thinking particular of topological spaces, then I think that the questioner is missing the point of "near" and "far" a little when he says: (I should point out that the "near" and "far" bit added in the question is in response to my original answer.) Consider this scenario: I have something I don't know anything about. Can I find something out about something like it but simpler? Yes! Great! But how do I measure which things are better approximations of my unknown thing than others? Isn't that just what is going on in studying these algebraic objects? The language is fundamentally topological so there's no surprise at all that topological spaces result. So as soon as an area of mathematics becomes interesting in that there are things that you can't figure out easily and simply then the question of finding enough approximations comes in and thus topology. In conclusion of this second part, "interesting = topological" so making the (bizarre) assumption that algebra is interesting, it must thus be topological. 
Andrew Stacey $\begingroup$ Perhaps one other reason we like to stick topologies on things is to make our problem/framework "more finitary" (the liking for compactness-type properties on various spectra-of-algebras, for instance). $\endgroup$ – Yemon Choi Feb 9 '10 at 11:19 $\begingroup$ +1 for the bridges metaphor; I get that feeling a lot. $\endgroup$ – Qiaochu Yuan Feb 9 '10 at 16:09 $\begingroup$ I agree with everything except for the "interesting = topological". I think it's more like "geometric implies topological". I'm a geometric person myself, but I think there are also perfectly fine - and interesting - non-geometric ways to look at problems; elementary Galois theory comes to mind (I don't just mean the Fundamental Theorem, but the whole philosophy of factoring polynomials by thinking of field extensions). $\endgroup$ – Ilya Grigoriev Feb 9 '10 at 21:15 $\begingroup$ I might admit to going just a little over the top at the end ... $\endgroup$ – Andrew Stacey Feb 9 '10 at 21:31 $\begingroup$ Yeah, but functors of points pull a lot of the geometry back out of geometry. =p. $\endgroup$ – Harry Gindi Feb 10 '10 at 2:01 Well, this isn't a full answer, but I think it's worth posting. We can identify a Hausdorff space with its poset of open sets because every convergent (thanks JDH) ultrafilter converges uniquely to a point, so in fact, all of the "set data" of the space is contained in the poset of opens itself. We can define a lattice structure on this poset that takes the place of our algebra defined by intersections and unions. This is why the category of Hausdorff spaces is actually a category monadic over sets. Hausdorff spaces are actually totally described by their algebras, which seems pretty cool to me. Some speculation on the question: We have a natural poset structure on subobjects of algebraic objects, which at least gets us partway to having a topology. Further, we can classify maps out of the space by looking at kernels of maps. There is also a canonical action for normal subgroups and ideals (product of normal subgroups and sum of ideals). These have the nice property that they are closed under this operation. They are also closed under intersections. This gives us a complete modular lattice on kernels. The interesting case about rings is that we have a third operation, the product of ideals. The interesting thing about the product of ideals is that it is only defined for finite products. Then we have a somewhat natural structure to start working in (at least for rings). I believe what Qiaochu was talking about with the Galois connections is that for any algebraic structure, you can associate this poset structure on subobjects, and even better, sharpen the characerization by looking at kernels (at least in the case of groups and rings). However, my point was that the additional operation of multiplication, which is restricted to finite products (I guess I would say it has finite arity, but that's not exactly right), gives us an operation on the poset that looks like "finite unions" or "finite intersections". Harry Gindi $\begingroup$ More generally, for any sober space (and thus for $\mathrm{Spec}(R)$ for any ring $R$), the homeomorphism type is specified by the lattice of opens. The points of a sober space are in bijective correspondence with the completely prime filters on the lattice of open sets. $\endgroup$ – Clark Barwick Feb 8 '10 at 12:42 $\begingroup$ @Harry : are by any chance you working your way towards a quantale? 
$\endgroup$ – Tim Porter Feb 8 '10 at 17:25 $\begingroup$ @Tim, I didn't know that was a thing. I just looked it up, and it seems to be what I was talking about. $\endgroup$ – Harry Gindi Feb 8 '10 at 17:37 $\begingroup$ There are even quantaloids and quantaloid enriched categories! Fun and relevant to discussions on the café perhaps. $\endgroup$ – Tim Porter Feb 8 '10 at 18:41 $\begingroup$ Neat! But I mean, can't you just describe all quantale-looking things as monoidal categories? $\endgroup$ – Harry Gindi Feb 8 '10 at 21:00 From the perspective of locale theory, a topological space is nothing more than a model of the theory of finitary conjunctions and infinitary disjunctions (corresponding, perhaps, to the intuition that semidecidable propositions are closed under precisely these operations, and the idea that "open" is somehow just a geometric casting of "semidecidable"); that is, a topological space is little more than a lattice with finite meets and infinitary joins, the former distributing over the latter (i.e., a frame). It is perhaps not all that surprising that many algebraic structures should give rise to (complete) lattices satisfying this distributive property, is it? Well, that's a subjective judgement. Perhaps it isn't the best answer. But it's what I'm going with for now. [That is, it's not so surprising that many algebraic objects should give rise to frames; the more surprising thing is the realization that frames could be understood (contravariantly) in geometric terms in the first place.] Sridhar Ramesh $\begingroup$ I somehow missed that François G. Dorais made a related point, much more concretely, above. Alas, I am so new here as to lack even the reputation to upvote it... $\endgroup$ – Sridhar Ramesh Feb 9 '10 at 6:59 I know this is very late, I just happened to run into this question. I think Von Neumann might answer this question like this (as he once has): "One does not understand anything in mathematics; one simply gets used to it." Then he might add: we have a natural functor from the category of topological spaces to the category of rings, and it is an interesting natural phenomenon that this functor is strictly invertible if we restrict to suitable subcategories of the category of topological spaces and of the category of rings. This is Gelfand-Naimark. One does not "understand" phenomena any more than one understands why the universe exists. By the way, a topological space is also naturally an algebraic object: we can take its partially ordered set of open subsets. This often completely determines the space: http://math.nie.edu.sg/dszhao/Research%20papers/conference%20proceeding/posetmodels.pdf (Although I didn't really read this.) Yasha
CommonCrawl
Cost-effectiveness of comprehensive geriatric assessment at an ambulatory geriatric unit based on the AGe-FIT trial Martina Lundqvist1, Jenny Alwin1, Martin Henriksson1, Magnus Husberg1, Per Carlsson1 & Anne W. Ekdahl2,3 Older people with multi-morbidity are increasingly challenging for today's healthcare, and novel, cost-effective healthcare solutions are needed. The aim of this study was to assess the cost-effectiveness of comprehensive geriatric assessment (CGA) at an ambulatory geriatric unit for people ≥75 years with multi-morbidity. The primary outcome was the incremental cost-effectiveness ratio (ICER) comparing costs and quality-adjusted life years (QALYs) of a CGA strategy with usual care in a Swedish setting. Outcomes were estimated over a lifelong time horizon using decision-analytic modelling based on data from the randomized AGe-FIT trial. The analysis employed a public health care sector perspective. Costs and QALYs were discounted by 3% per annum and are reported in 2016 euros. Compared with usual care, CGA was associated with a per patient mean incremental cost of approximately 25,000 EUR and a gain of 0.54 QALYs, resulting in an ICER of 46,000 EUR. The incremental costs were primarily caused by intervention costs and costs associated with increased survival, whereas the gain in QALYs was primarily a consequence of the fact that patients in the CGA group lived longer. CGA in an ambulatory setting for older people with multi-morbidity results in a cost per QALY of 46,000 EUR compared with usual care, a figure generally considered reasonable in a Swedish healthcare context. A rather simple reorganisation of care for older people with multi-morbidity may therefore cost-effectively contribute to meeting the needs of this complex patient population. The trial was retrospectively registered in clinicaltrials.gov, NCT01446757. September, 2011. Older people with multi-morbidity are increasingly challenging for today's healthcare, and traditional healthcare organization into organ-specific care often fails to meet the complex needs of these patients [1]. With a rapid increase in the elderly population, healthcare solutions that better meet the needs of these patients by improving longevity and quality of life (QoL) at a reasonable cost are required. One such solution is comprehensive geriatric assessment (CGA), a systematic and holistic approach to the care of older people with multi-morbidities. Although the definition and execution of CGA may differ across applications, general components common to most programs include multidisciplinary teams, regular team-meetings and the use of standardized instruments for medical, functional, psychological and social assessments [1, 2]. CGA in hospital settings has been shown to reduce mortality and institutionalization at 12 months follow-up [3]. Furthermore, comprehensive geriatric care in a hospital setting for elderly patients with hip fractures improved mobility and was cost-effective compared to usual orthopedic care [4]. However, few studies have evaluated CGA in the context of outpatient care, and the lack of cost-effectiveness evidence of CGA in general has been recognized [1, 3, 5, 6, 7]. In the Ambulatory Geriatric Assessment – a Frailty Intervention Trial (AGe-FIT), CGA in outpatient care reduced the number of inpatient days and increased the sense of security for the patients at a 2-year follow-up when compared to usual care [8].
At 3 years, CGA patients lived on average 69 days longer with no differences in short-term healthcare costs [9]. To fully assess the overall value of CGA, the long-term (beyond trial follow-up) impact on ultimate health outcomes such as quality-adjusted life years (QALYs) and costs of CGA has to be assessed [10]. The aim of this study is to assess the cost-effectiveness of CGA at an ambulatory geriatric unit, compared to usual care for elderly people with multi-morbidity and high healthcare consumption. Analytical approach and cost-effectiveness The patient population evaluated comprised elderly people with multi-morbidity and high healthcare consumption according to the AGe-FIT trial [8, 9]. The evaluated treatment strategies were CGA at an ambulatory geriatric unit, as in the AGe-FIT trial described below, and usual care. Costs and QALYs, weighting each year lived by its quality of life (where 1 is full health and 0 is dead), were evaluated over a lifelong time horizon for each treatment strategy using standard methods of decision-analytic modelling [11], synthesizing two years of trial data from the AGe-FIT trial with other relevant data sources. The primary outcome was the incremental cost-effectiveness ratio (ICER) of CGA compared with usual care, relating the incremental costs of CGA to the incremental health outcome (QALYs): $$ \text{ICER}=\frac{\text{Costs}_{\text{CGA}}-\text{Costs}_{\text{Usual care}}}{\text{QALYs}_{\text{CGA}}-\text{QALYs}_{\text{Usual care}}} $$ In addition, the cost per life year gained (without quality adjustment) was assessed. Costs and health outcomes were discounted by 3% per annum. All costs were calculated in Swedish kronor (SEK) and converted to 2016 euros (€) using the exchange rate of 1 EUR = 9.47 SEK (year 2016 mean exchange rate). The analysis was performed from the perspective of the public healthcare sector, including costs for services in both the Municipality and the County Council. The AGe-FIT trial The main data source for the present study, the AGe-FIT trial, has been reported in detail elsewhere [8, 9, 12]. In brief, the AGe-FIT trial was randomized, controlled, assessor-blinded, and carried out in the Municipality of Norrköping, Sweden during 2011 and 2013. Community-dwelling patients, 75 years or older, having three or more concomitant diagnoses and admitted to the hospital for inpatient care three times or more during the past 12 months, were included. The participants gave written informed consent to participate in the study; in cases of cognitive decline, a proxy gave informed consent according to the protocol [12]. Participants who gave written consent to participate were randomized to either the intervention group receiving care at the ambulatory geriatric unit (AGU) in addition to usual care, or the control group receiving usual care only. The intervention consisted of CGA provided at the AGU and included an interdisciplinary approach in order to make a person-centered plan for future care [12]. Most often it started with a home visit from a nurse and a social worker to focus on "caring problems" such as nutrition, skin problems, elimination and the social environment. A pharmacist reviewed medication before a visit to a physician at the AGU and, depending on needs, a functional investigation by the occupational and physical therapists. In the usual care strategy, standard healthcare services were used as needed.
The AGe-FIT trial was approved by the regional ethical vetting board at Linköping University (No: 2011/41–31 and No: 2015/6–32), and is registered on http://clinicaltrials.gov (NCT01446757). The study adheres to the CONSORT guidelines [13]. Decision analytic model In order to estimate the long-term costs and QALYs and assess cost-effectiveness of the CGA intervention, a simple two-state (Alive and Dead) Markov model was employed [14]. In the model, all patients start in the Alive state. During each year (annual Markov cycle) patients face a risk of dying, and thus transition to the absorbing Dead state. The annual risk of dying was conditional on the assigned treatment strategy and based on the 24-month follow-up data from the AGe-FIT trial. The model was run for 30 cycles to ensure that in effect all patients had reached the absorbing Dead state at the termination of analysis. For each Markov cycle that patients resided in the Alive state, they incurred an annual cost and QALY-estimate. There were no costs or QALYs associated with the Dead state. At the termination of analysis, discounted costs and QALYs were summed over all cycles to estimate per patient mean costs and QALYs for CGA and usual care, respectively. An overview of data inputs is provided below. Further information is provided in Additional files 1 and 2. Data on healthcare resource use was collected for primary healthcare, ambulatory care (geriatric and other), inpatient care and municipal services (including use of home help services and nursing home utilization) in the AGe-FIT trial by linking trial participant ID to relevant patient registers [8]. For the first two years (Markov cycles) of the analysis, these costs corresponded to the per patient mean cost per treatment strategy observed in the AGe-FIT trial (Table 1). From year three onwards, the costs observed in year two of the AGe-FIT trial were applied to the Alive state. The impact on the final results of extrapolating the observed two-year data over a longer time horizon was explored in sensitivity scenarios. See Additional file 2 for details. Table 1 Costs and quality of life data input Quality adjusted life years A QALY-estimate for the Alive state, per treatment strategy, was based on HRQoL data measured with the EQ-5D-3L instrument in the AGe-FIT trial at baseline, 12 months and 24 months [15]. Patient answers on the EQ-5D-3L instrument were converted to QALY-weights using the widely used UK value set [15]. QALY-estimates for years one and two were estimated by calculating the area under the curve for patients alive at 12 and 24 months (Table 1). The QALY-estimate for year two was applied to the Alive state from year three onwards. Furthermore, an annual reduction (for both treatment strategies) in HRQoL was applied to account for the fact that patients get older with an expected health deterioration. Based on published EQ-5D data on the Swedish general population, the QALY-estimates were adjusted by a decrement of 0.0025 per year [16]. The impact on the final results of adjusting the long-term QALY-estimates was explored in sensitivity scenarios. For the first two years of the analysis, the mortality risks associated with each treatment were estimated from the AGe-FIT trial, and thus corresponded to the observed mortality in the trial. For the first year of analysis this risk was 11.5% and 13.8% for CGA and usual care, respectively. Corresponding figures for year two were 8.2% and 15.3%.
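To make the model structure concrete, here is a minimal sketch of the two-state Markov cohort model described above. It is not the authors' implementation (which was built in Excel): the year-1 and year-2 mortality risks are taken from the text, but the long-term mortality risk, the annual costs and the QALY weights below are placeholders, since the actual inputs (Table 1 and the Swedish life tables) are not reproduced here. Using the same QALY weight in both arms means that, as in the article, the QALY gain arises only from the survival difference.

# Sketch of the two-state (Alive/Dead) Markov cohort model, 30 annual cycles, 3% discounting.
# Mortality for years 1-2 comes from the article; everything marked "placeholder" does not.

def run_model(annual_mortality, annual_cost, annual_qaly, cycles=30, discount=0.03):
    alive = 1.0                                  # proportion of the cohort in the Alive state
    total_cost = total_qalys = 0.0
    for year in range(1, cycles + 1):
        d = 1.0 / (1.0 + discount) ** year       # discount factor for this cycle
        total_cost += alive * annual_cost * d    # costs/QALYs accrue only in the Alive state
        total_qalys += alive * annual_qaly * d
        p_die = annual_mortality[min(year, len(annual_mortality)) - 1]
        alive *= 1.0 - p_die                     # transition Alive -> Dead
    return total_cost, total_qalys

# Years 1-2 from the text; 0.20 is a placeholder for the age-specific life-table risks.
cost_cga, qaly_cga = run_model([0.115, 0.082, 0.20], annual_cost=20000.0, annual_qaly=0.60)
cost_uc,  qaly_uc  = run_model([0.138, 0.153, 0.20], annual_cost=15000.0, annual_qaly=0.60)

icer = (cost_cga - cost_uc) / (qaly_cga - qaly_uc)   # ICER as defined in the Methods
print(round(cost_cga - cost_uc), round(qaly_cga - qaly_uc, 2), round(icer))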
The mortality risks for year three and onwards were estimated by using age-specific mortality rates for the general population in Sweden [17]. See Additional file 1 for details. The analyzed patient population corresponded to the characteristics of patients in the AGe-FIT trial with the mean age of the population in the trial (83 years). Uncertainty in the estimated cost-effectiveness results associated with sampling uncertainty in the estimated input parameter values was evaluated by employing probabilistic sensitivity analysis [18]. In this analysis, the uncertainty in single-model inputs is propagated through the model, using simulation techniques, so that the uncertainty in the cost-effectiveness results indicates the uncertainty in the decision to implement a treatment strategy, rather than the uncertainty surrounding single model inputs [18]. The probability of CGA being cost-effective at different threshold values for cost-effectiveness was assessed and reported in cost-effectiveness acceptability curves [19]. The importance of parameters not associated with statistical uncertainty for the final results was investigated in sensitivity analyses. See Additional file 2 for details. All statistical analyses of AGe-FIT trial data were performed in SPSS version 22.0 [20]. The decision-analytic model was programmed and analyzed in Microsoft Excel (Microsoft Corporation, Redmond, WA, USA). The CGA strategy was associated with an incremental cost of approximately 25,000 EUR compared with usual care (Table 2), mainly due to additional costs for the CGA care. The CGA strategy was associated with a life year gain of 1.05; the gain in QALYs was 0.54, primarily an effect of the mortality reduction, yielding a cost per QALY of approximately 46,000 EUR for CGA compared with usual care. Without the quality of life adjustment, the cost per life-year gained for the CGA strategy was approximately 23,000 EUR. Table 2 Costs, outcomes and cost-effectiveness results
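A reader's arithmetic check, not part of the original article: dividing the rounded incremental cost by the rounded incremental QALYs reproduces the reported ratio, $$ \text{ICER} \approx \frac{25{,}000\ \text{EUR}}{0.54\ \text{QALYs}} \approx 46{,}000\ \text{EUR per QALY}, $$ while dividing instead by the 1.05 life years gained gives roughly 24,000 EUR per life year, close to the reported 23,000 EUR; the small discrepancy simply reflects rounding of the underlying estimates.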
The increased health care costs are primarily a consequence of the intervention costs and costs of increased survival, where patients are assumed to continue to consume health care resources. This contrasts the previous report based on a follow-up of the duration of the Age-FIT trial where it was concluded that the patients lived longer with no difference in short-term health care costs. The full economic evaluation taking a life-long perspective presented here indicates that the health gain is larger than previously reported, and is achieved at an incremental cost, leading to a cost per QALY of 46,000 EUR with CGA compared with usual care. Previous studies have shown contradicting results when it comes to cost-effectiveness of CGA-based care. When comparing CGA-based care in an orthopedic geriatric unit with usual orthopedic care CGA-based care were found to be both less costly and more effective than usual orthopedic care [4]. Similarly a study concluded that the Dutch Geriatric Intervention program (DGIP) was effective at a reasonable cost for frail older people when compared with usual care [5]. On the contrary, a geriatric medical intervention in an acute setting showed no effect on QALYs and increased costs during a three-month trial follow-up compared with standard care [6]. It should be noted that the setting and study population differs between studies, as does the exact content of the interventions, and the results should therefore be compared with great caution. A strength of the present study is that it is based on a randomized clinical trial where patients were followed for more than two years with a well-defined and representative study population. Another strength is that registries were used to collect resource use, cost and mortality data reflecting a clinical practice setting. A third strength is that long-term extrapolation of health outcomes and costs beyond the duration of the trial was employed appropriately to estimate cost-effectiveness. Regarding limitations, there was considerable missing HRQoL data in the AGe-FIT study. In a previous analysis of this data, a number of sensitivity analyses tested different replacement methods and showed no major differences in the results [8]. Although the extrapolation of study data to a lifetime time horizon increases the relevance of the results for healthcare decision making, it also introduces inevitable uncertainty. There is always a tradeoff between the relevance and the precision in the estimated cost-effectiveness. In the present study, data from the AGe-fit trial was used in combination with assumptions and data external to the trial for the long-term extrapolation. We believe that the assumptions for the long-term extrapolation were conservative, and sensitivity analyses indicated that the results are unlikely to be altered substantially due to these assumptions. The results of the sensitivity scenarios indicate that the estimated cost-effectiveness is likely to be around 46,000 EUR per QALY for the CGA strategy. In Sweden there is no explicit threshold value for cost-effectiveness but approximately 50,000 EUR per QALY is often mentioned when considering reimbursement of pharmaceuticals. In the UK for example, an explicit threshold of 20,000–30,000 GBP is employed. Often, aspects other than cost effectiveness are considered when decisions regarding the allocation of healthcare resources are taken. These aspects include severity of the condition, uncertainty of results and implications for the overall healthcare budget. 
Taken together, the estimated cost per QALY of 46,000 EUR may thus be considered value for money in some jurisdictions, whereas in others it may be borderline or even above generally accepted thresholds. The AGe-FIT trial was performed in a clinical practice setting, and the intervention itself may therefore be subject to substantial improvement with more experience and knowledge. With such improvements, it is likely that the cost-effectiveness will also become more favorable. As the interventions themselves are a moving target, it is also important to have flexible and relevant evaluation methods available for continuous updates when new data become available. This study provides such a framework; the methods are commonly used when assessing the cost-effectiveness of new pharmaceuticals but have hitherto been applied only rarely when assessing the value of new interventions for older patients with multi-morbidity. The present study shows that a reorganisation and more structured management of the care for older people with multi-morbidity can improve health outcomes at an acceptable cost in order to meet the needs of this complex patient population.
AGe-FIT: Ambulatory Geriatric Assessment – a Frailty Intervention Trial
AGU: Ambulatory Geriatric Unit
CGA: Comprehensive Geriatric Assessment
DGIP: Dutch Geriatric Intervention Program
ICER: Incremental Cost-Effectiveness Ratio
QALY: Quality-adjusted life year
1. SBU. Comprehensive geriatric assessment and care for frail elderly. Stockholm: Swedish Council on Health Technology Assessment (SBU); 2014. SBU report no 221 (in Swedish).
2. Wieland D, Hirth V. Comprehensive geriatric assessment. Cancer Control. 2003;10:454–62.
3. Ellis G, Whitehead MA, O'Neill D, Langhorne P, Robinson D. Comprehensive geriatric assessment for older adults admitted to hospital. Cochrane Database Syst Rev. 2011;(7):CD006211.
4. Prestmo A, Hagen G, Sletvold O, Helbostad JL, Thingstad P, Taraldsen K, et al. Comprehensive geriatric care for patients with hip fractures: a prospective, randomised, controlled trial. Lancet. 2015;385:1623–33.
5. Melis R, Adang E, Teerenstra S, van Eijken M, Wimo A, Tv A, et al. Multidimensional geriatric assessment: back to the future. Cost-effectiveness of a multidisciplinary intervention model for community-dwelling frail older people. J Gerontol A Biol Sci Med Sci. 2008;63:275–82.
6. Tanajewski L, Franklin M, Gkountouras G, Berdunov V, Edmans J, Conroy S, et al. Cost-effectiveness of a specialist geriatric medical intervention for frail older people discharged from acute medical units: economic evaluation in a two-centre randomised controlled trial (AMIGOS). PLoS One. 2015;10:e0121340.
7. Boult C, Boult L, Morishita L, Dowd B, Kane R, Urdangarin C. A randomized clinical trial of outpatient geriatric evaluation and management. J Am Geriatr Soc. 2001;49:351–9.
8. Ekdahl AW, Wirehn AB, Alwin J, Jaarsma T, Unosson M, Husberg M, et al. Costs and effects of an ambulatory geriatric unit (the AGe-FIT study): a randomized controlled trial. J Am Med Dir Assoc. 2015;16:497–503.
9. Ekdahl AW, Alwin J, Eckerblad J, Husberg M, Jaarsma T, Mazya AL, et al. Long-term evaluation of the ambulatory geriatric assessment: a frailty intervention trial (AGe-FIT): clinical outcomes and total costs after 36 months. J Am Med Dir Assoc. 2016;17:263–8.
10. Claxton K, Sculpher M, Drummond M. A rational framework for decision making by the National Institute for Clinical Excellence (NICE). Lancet. 2002;360:711–5.
11. Briggs A, Sculpher M, Claxton K. Decision modelling for health economic evaluation. Oxford: Oxford University Press; 2006.
12. Mazya AL, Eckerblad J, Jaarsma T, Hellström I, Krevers B, Milberg A, et al. The ambulatory geriatric assessment - a frailty intervention trial (AGe-FIT) - a randomized controlled trial aimed at preventing hospital readmissions and functional deterioration in high risk older adults: a study protocol. Eur Geriatr Med. 2013;4:242–7.
13. Schulz KF, Altman DG, Moher D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332.
14. Sonnenberg F, Beck J. Markov models in medical decision making: a practical guide. Med Decis Mak. 1993;13:322–38.
15. Dolan P. Modeling valuations for EuroQol health states. Med Care. 1997;35:1095–108.
16. Burström K, Johannesson M, Diderichsen F. Swedish population health-related quality of life results using the EQ-5D. Qual Life Res. 2001;10:621–35.
17. www.scb.se. Accessed 29 Mar 2017.
18. Claxton K, Sculpher M, McCabe C, Briggs A, Akehurst R, Buxton M, et al. Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra. Health Econ. 2005;14:339–47.
19. Fenwick E, O'Brien BJ, Briggs A. Cost-effectiveness acceptability curves - facts, fallacies and frequently asked questions. Health Econ. 2004;13:405–15.
20. IBM Corp. IBM SPSS for Windows, version 22.0. Armonk: IBM Corp; released 2013.
The authors would like to thank Rolf Wiklund for help with cost data extraction, and the research group of the AGe-FIT trial: Jeanette Eckerblad, Tiny Jaarsma, Barbro Krevers, Amelie Lindh Mazya, Anna Milberg, Mitra Unosson and Ann-Britt Wiréhn. We would also like to extend our gratitude to Mattias Aronsson for assistance with the model, and finally to all clinicians working at the ambulatory geriatric unit. This study was funded by Region Östergötland and Linköping University, Sweden. The funding body had no role in the design of the study, data collection, analysis, interpretation of data, or in writing the manuscript. All data generated or analysed during this study are included in this published article (and its Additional files 1 and 2).
Department of Medical and Health Sciences, Linköping University, Linköping, Sweden: Martina Lundqvist, Jenny Alwin, Martin Henriksson, Magnus Husberg & Per Carlsson. Department of Neurobiology, Care Sciences and Society (NVS), Division of Clinical Geriatrics, Karolinska Institute (KI), Stockholm, Sweden: Anne W. Ekdahl. Institution of Clinical Sciences, Lund University, Helsingborg, Sweden.
ML, JA, AE, PC and MHe designed the study. MHu, ML, and JA conducted the data preparation. MHe, JA, ML, MHu, AE and PC conducted the analysis and interpretation. ML, JA, MHe, MHu, PC, and AE contributed to the drafting of the manuscript. All authors critically revised and approved the final manuscript. Correspondence to Martina Lundqvist.
The AGe-FIT trial was approved by the regional ethical vetting board at Linköping University (No: 2011/41–31 and No: 2015/6–32) and is registered on clinicaltrials.gov (NCT01446757). All participants gave written informed consent to participate in the study.
Additional file 1: Mortality. Annual mortality probabilities applied in the model (DOCX 10284 kb). Additional file 2: Uncertainty and sensitivity analysis. Detailed parameter estimates used in the model and results of sensitivity analysis (DOCX 23 kb).
Lundqvist, M., Alwin, J., Henriksson, M. et al. Cost-effectiveness of comprehensive geriatric assessment at an ambulatory geriatric unit based on the AGe-FIT trial. BMC Geriatr 18, 32 (2018).
https://doi.org/10.1186/s12877-017-0703-1
instaGRAAL: chromosome-level quality scaffolding of genomes using a proximity ligation-based scaffolder
Lyam Baudry1,2, Nadège Guiglielmoni1,3, Hervé Marie-Nelly1,2, Alexandre Cormier4, Martial Marbouty1, Komlan Avia4,5, Yann Loe Mie6, Olivier Godfroy4, Lieven Sterck7,8, J. Mark Cock4, Christophe Zimmer9, Susana M. Coelho4 & Romain Koszul1 (ORCID: orcid.org/0000-0002-3086-1173)
Genome Biology volume 21, Article number: 148 (2020)
Hi-C exploits contact frequencies between pairs of loci to bridge and order contigs during genome assembly, resulting in chromosome-level assemblies. Because few robust programs are available for this type of data, we developed instaGRAAL, a complete overhaul of the GRAAL program that adapts it to the efficient assembly of large genomes. instaGRAAL features a number of improvements over GRAAL, including a modular correction approach that optionally integrates independent data. We validate the program using data for two brown algae and for human, generating near-complete assemblies with minimal human intervention.
Continuous developments in DNA sequencing technologies aim at alleviating the technical challenges that limit the ability to assemble sequence data into full-length chromosomes [1,2,3]. Conventional assembly programs and pipelines often encounter difficulties closing the gaps introduced in draft genome assemblies by regions enriched in repeated elements. These assemblers efficiently generate overlapping sets of reads (i.e., contiguous sequences or contigs) but encounter difficulties linking these contigs together into scaffolds. At the chromosome level, these programs often incorrectly orient DNA sequences or predict incorrect numbers of chromosomes [4]. The development of long-read sequencing technology and accompanying assembly programs has considerably alleviated these difficulties, but some gaps nevertheless remain in genome scaffolds, notably at the level of long repeated/low-complexity DNA sequences. In addition, long-read-based assemblies are affected by the higher error rate of long reads, which can result in misassemblies [3]. Consequently, many currently available genomes still contain structural errors, as well as gaps that need to be bridged to reach a chromosome-level structure. These limitations have been partially addressed thanks to active support from the community and competitions such as GAGE [5] or the Assemblathon [6]. However, there is as yet no systematic, reliable workflow for producing near-perfect genome assemblies of guaranteed quality without a considerable amount of empirical parameter adjustment and manual post-processing evaluation and correction [7]. Recent sequencing projects have typically relied on a combination of independently obtained data such as optical mapping, long-read sequencing, and chromosomal conformation capture (3C, Hi-C) to obtain large genome assemblies of high accuracy. The latter procedure derives from techniques aiming at recovering snapshots of the higher-order organization of a genome [8, 9]. When applied to genomics, Hi-C-based methods are sometimes referred to as proximity ligation approaches, as they quantify and exploit physical contacts between pairs of DNA segments in a genome to assess their collinearity along a chromosome and the distance between the segments [10]. Early studies using control datasets demonstrated that Hi-C can be used to scaffold and/or correct a wide range of eukaryotic DNA regions [11,12,13,14], i.e.
stretches of bp, whether they be small-scale contigs or full chromosomes. The Hi-C scaffolder GRAAL (Genome Re-Assembly Assessing Likelihood from 3D) is a probabilistic program that uses a Markov Chain Monte Carlo (MCMC) approach. This tool was able to generate the first chromosome-level assembly of an incomplete eukaryote genome [13] by permuting DNA segments according to their contact frequencies until the most likely scaffold was reached (see also [15]). Since these proof-of-concept studies, the assemblies of many genomes of various sizes from eukaryotes [16,17,18] and prokaryotes [19] have been significantly improved using scaffolding approaches exploiting Hi-C data. Although GRAAL was effective on medium-sized or small (< 100 Mb) eukaryotic genomes such as that of the fungus Trichoderma reesei [20], scalability limitations were encountered when tackling genomes whose complexity and size required significant computer calculation capacity. Furthermore, as was also observed with other Hi-C-based scaffolders, the raw output of GRAAL includes a number of caveats that need to be corrected manually to obtain a finished genome assembly. To overcome these limitations, we developed instaGRAAL, an enhanced, open-source program optimized to reduce the computational load of chromosome scaffolding and that includes a misassembly "correction" module installed alongside the scaffolder. Moreover, instaGRAAL can optionally exploit available genetic linkage data. We applied instaGRAAL to three genomes of increasing size. In the first two runs, in order to demonstrate its added value, we applied the program to the 214-Mb and 500-Mb haploid genomes of the brown algae Ectocarpus sp. [21, 22] and Desmarestia herbacea (unpublished), respectively. Brown algae are a group of complex multicellular eukaryotes that have been evolving independently from animals and land plants for more than a billion years. Ectocarpus sp. was the first species within the brown algal group to be sequenced (reference v1 assembly [22]), as a model organism to investigate multiple aspects of brown algal biology including the acquisition of multicellularity, sex determination, life cycle regulation, and adaptation to the intertidal [22,23,24,25]. A range of genetic and genomic resources have also been established for Ectocarpus sp., including a dense genetic map generated with 3588 SNP markers (v2 assembly) [26], which was used to comprehensively validate both the GRAAL (v3) and the instaGRAAL (v4) assemblies. In a third run, we benchmarked instaGRAAL using the human genome, to confirm that our software readily scales to larger (Gb-sized) and more complex assemblies, an important requirement to tackle the next era of assembly projects.
From GRAAL to instaGRAAL
The core principles of GRAAL and instaGRAAL are similar: both exploit an MCMC approach to perform a series of permutations (insertions, deletions, inversions, swapping, etc.) of genome fragments (referred to here as "bins," see the "Material and methods" section) based on an expected contact distribution [13]. The parameters (A, α, and δ) that describe this contact distribution are first initialized using a model inspired by polymer physics [27]. This model describes the expected contact frequency P(s) between two loci separated by a genomic distance s (when applicable):
$$ P(s)=\begin{cases}\max \left(A\cdot {s}^{-\alpha },\ \delta \right), & \text{intra-chromosomal contacts}\\ \delta , & \text{inter-chromosomal contacts}\end{cases} $$
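A deliberately simplified sketch of this contact model, together with the likelihood-based accept/reject test applied to each candidate permutation (described further below), is given here for illustration; the function names and the Poisson form of the likelihood are our own simplifications, not instaGRAAL's actual GPU implementation over sparse matrices:

```python
import numpy as np

def expected_contacts(s, A, alpha, delta, intra=True):
    """Expected contact count between two bins under the three-parameter model:
    power-law decay with genomic distance s for intra-chromosomal pairs,
    a flat background delta otherwise."""
    return max(A * s ** (-alpha), delta) if intra else delta

def log_likelihood(observed, expected):
    """Poisson log-likelihood of the observed counts given the expected counts
    (constant log(k!) terms dropped) - a simplification of the GRAAL-type score."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return float(np.sum(observed * np.log(expected) - expected))

def accept_move(loglik_new, loglik_old, rng):
    """Metropolis criterion: always accept a permutation that improves the
    likelihood, otherwise accept it with probability exp(loglik_new - loglik_old)."""
    return np.log(rng.random()) < (loglik_new - loglik_old)

# One cycle tests every bin in several candidate positions and keeps or rejects
# each move with accept_move(); e.g., rng = np.random.default_rng().
```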
The parameters are then iteratively updated directly from the real scaffolds once their sizes increase sufficiently [13]. Each bin is tested in several positions relative to putative neighboring fragments. The likelihood of each arrangement is assessed from the simulated or computed contact distribution, and the arrangement is either accepted or rejected [13]. This analysis is carried out in cycles, with a cycle being completed when all the bins of the genome have been processed in this way. Any number of cycles can be run iteratively, and the process is usually continued until the genome structure ceases to evolve, as measured by the evolution of the parameters of the model. The core functions of the program use Python libraries, as well as the CUDA programming language, and therefore necessitate an NVIDIA graphics card with at least 1 Gb of memory. The technical limitations of GRAAL were (1) high memory usage when handling Hi-C data for large genomes (i.e., over 100 Mb), (2) difficulties when installing the software, and (3) the need to adjust multiple ad hoc parameters to adapt to differences in genome size, read coverage, Hi-C contact distribution, specific contact features, etc. instaGRAAL (https://github.com/koszullab/instaGRAAL) addresses all these shortcomings. First, we rewrote the memory-critical parts of the program, such as permutation sampling and likelihood calculation, so that they are computed using sparse contact maps. We reduced the software's dependency footprint and added detailed documentation, deployment scripts, and containers to ease its installation. Finally, we opened up multiple hard-coded parameters to give more control to end users while improving the documentation on each of them and selecting relevant default parameters that can be used for a wide range of applications (see options online and the "Discussion" section). Overall, these upgrades result in a program that is lighter in resources, more flexible, and more user-friendly. Other problems encountered with the original GRAAL program included (1) the presence of potential artifacts introduced by the permutation sampler, such as spurious permutations (e.g., local inversions) or incorrect junctions between bins; (2) difficulties with the correct integration of other types of data such as long reads; and (3) difficulties with handling sequences that were either too short, highly repeated, or poorly covered. We addressed these points by identifying and putting aside these problematic sequences during a filtering step. These sequences are subsequently reinserted into the final scaffolds, whenever possible (see the "Material and methods" section), with the help of linkage data when available. Overall, when compared to the raw GRAAL output, the resulting "corrected" instaGRAAL assemblies were significantly more complete and more faithful to the actual chromosome structure.
Scaffolding of the Ectocarpus sp. chromosomes with instaGRAAL
To test and validate instaGRAAL, we generated an improved assembly of the genome of the model brown alga Ectocarpus sp. A v1 genome consisting of 1561 scaffolds generated from Sanger sequence data is available [22]. A Hi-C library was generated from a clonal culture of a haploid partheno-sporophyte carrying the male sex chromosome using a GC-neutral restriction enzyme (DpnII). The library was paired-end sequenced (2 × 75 bp—the first ten bases were used as a tag and to remove PCR duplicates) on a NextSeq apparatus (Illumina).
Of the resulting 80,521,968 paired-end reads, 41,288,678 read pairs were aligned unambiguously along the v1 genome using bowtie2 (quality scores below 30 were discarded), resulting in 2,554,639 links bridging 1,806,386 restriction fragments (Fig. 1a) (see the "Material and methods" section for details on the experimental and computational steps). The resulting contact map in sparse matrix format was then used to initialize instaGRAAL along with the restriction fragments (RFs) of the reference genome (Fig. 1a, b) (see Additional file 1: Table S1 for an example of sparse file matrix). Matrix generation and binning process. a From left to right: (i) the input data to be processed, and paired-end reads to be mapped onto the Ectocarpus. sp. reference v1 genome assembly; (ii) raw contact map before binning—each pixel is a contact count between two restriction fragments (RF); and (iii) raw contact map after binning—each pixel is a contact between a determined number of RFs (see b). b Schematic description of one iteration of the binning process over 10 restriction fragments (arrows). From left to right: (i) initial contact map, each pixel is a contact count between two RFs; (ii) filtering step—RFs either too short or presenting a read coverage below one standard deviation below the mean are discarded; (iii) binning step (1 bin = 3RFs)—adjacent RFs are pooled by three, with sum-pooling along all pixels in a 3 × 3 square; and (iv) binning step (1 bin = 9 RFs)—adjacent RFs are pooled by nine Given the probabilistic nature of the algorithm, we evaluated the program's consistency by running it three times with different resolutions. Briefly, we filtered out RFs that were shorter than 50 bp and/or whose coverage was one standard deviation below the mean coverage. Then, we sum-pooled (or binned) the sparse matrix by groups (or bins) of three RFs five times, recursively (Fig. 1a, b). Each recursive instance of the sum-pooling is subsequently referred to as a level of the contact map. A level determines the resolution at which permutations are being tested: the higher the level, the lower the resolution, the longer the sequences being permuted and, consequently, the faster the computation. The binning process is shown in Fig. 1b. Regarding Ectocarpus sp., we found that level 4 (bins of 81 RFs) was an acceptable balance between high resolution and fast computation on a desktop computer with a GeForce GTX TITAN Z graphics card. Moreover, whether instaGRAAL was run at level 4, 5, or 6 (equivalent to bins of 81, 243, and 729 RFs, respectively), all assemblies quickly (~ 6 h) converged towards similar genome structures (Fig. 2a). Evolution of the Ectocarpus sp. contact map, the parameters of the polymer model, and the log-likelihood of the contact map. a The raw contact map before (upper part) and after (bottom part) scaffolding using instaGRAAL. Scaffolds are ordered by size. b Evolution of three parameters of the polymer model (exponent, pre-factor, mean trans-contacts) and the log-likelihood as a function of iterations We plotted the evolution of the log-likelihood and of model parameters as a function of the number of arrangements performed (iterations) (Fig. 2b). The interquartile ranges (IQR, used to indicate stability in Marie-Nelly et al. [13]) of all parameters decreased to near-zero values at the end of each scaffolding run, indicating that they all stably converged and that the final structures oscillated near the final values in negligible ways. 
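The filtering and recursive sum-pooling just described can be expressed compactly with SciPy sparse matrices. The snippet below is an illustrative re-implementation rather than instaGRAAL's own code; the length and coverage thresholds and the pooling factor of 3 follow the text, while the function names are ours:

```python
import numpy as np
import scipy.sparse as sp

def filter_fragments(matrix, lengths, min_len=50):
    """Discard restriction fragments shorter than min_len bp or whose coverage
    falls more than one standard deviation below the mean coverage."""
    coverage = np.asarray(matrix.sum(axis=0)).ravel() + np.asarray(matrix.sum(axis=1)).ravel()
    keep = (np.asarray(lengths) >= min_len) & (coverage >= coverage.mean() - coverage.std())
    idx = np.flatnonzero(keep)
    filtered = matrix.tocsr()[idx][:, idx]
    return filtered, idx

def bin_matrix(matrix, factor=3):
    """Sum-pool a sparse contact map by grouping `factor` adjacent fragments into
    one bin (one additional 'level' in instaGRAAL's terminology)."""
    coo = matrix.tocoo()
    n_bins = int(np.ceil(coo.shape[0] / factor))
    binned = sp.coo_matrix((coo.data, (coo.row // factor, coo.col // factor)),
                           shape=(n_bins, n_bins))
    binned.sum_duplicates()  # duplicate (row, col) entries are summed
    return binned.tocsr()

# Level 4 (bins of 3**4 = 81 fragments) corresponds to four successive poolings:
# for _ in range(4): matrix = bin_matrix(matrix)
```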
More qualitatively, each run led to the formation of 27 main scaffolds (Fig. 2a) with the 27th largest scaffold being more than a hundred times longer than the 28th largest one (Fig. 3, Additional file 1: movie S1). Each of the 27 scaffolds was between four and ten times longer than the combined length of the remaining sequences (Fig. 3). This strongly suggests that the 27 scaffolds correspond to chromosomes, a number consistent with karyotype analyses [28]. Taken together, these results indicate that instaGRAAL successfully assembled the Ectocarpus sp. genome into chromosome-level scaffolds. As the supplementary movie suggests, scaffold-level convergence is visible after only a few cycles, indicating that instaGRAAL is able to quickly determine the global genome structure most likely to fit the contact data. The remainder of the cycles is devoted to intra-chromosomal refinement. Size distribution (log scale) of the final Ectocarpus sp. scaffolds after 250 instaGRAAL iterations. After filtering, and prior to correction, 27 main scaffolds (red bars) or putative chromosomes were obtained. The dotted green horizontal line represents the proportion of the filtered genome that was not integrated into the main 27 scaffolds and represents less than 0.6% of the initial assembly. Each scaffold presents, after normalization, a high-quality Hi-C profile with features that are typical of eukaryotic genomes (Additional file 1 Fig. S1) Correcting the chromosome-level instaGRAAL assembly of the Ectocarpus sp. genome instaGRAAL also includes a number of procedures that aim to correct some of the modifications introduced into the input contigs from the original assembly by the Hi-C scaffolding (Fig. 4). We implemented it as a separate "correction" module that is automatically installed alongside the scaffolder. Step-by-step correction procedure. Correction procedure (top to bottom): (i) in silico restriction of the genome and binning, yielding a set of bins; (ii) reordering of all bins into scaffolds without taking into account their input contig of origin; typically, groups of bins from the same input contig naturally aggregate, but some bins get scattered to other scaffolds (e.g., bin 13, pink arrow), while others will be "flipped" with respect to the original assembly (e.g., bin 4, red arrows); (iii) reconstruction of the original input contigs by relocating scattered bins next to the biggest bin group; and (iv) bins in the original input contigs are oriented according to their original consensus orientation These modifications principally involve discrete inversions or insertions of DNA segments (typically corresponding to single bins or RFs) (see also [13]). Such alterations are inherent to the statistical nature of instaGRAAL, which will occasionally improperly permute neighboring bins because of the high density of contacts between them. However, we reasoned that input contigs from the original assembly, especially those generated for Ectocarpus sp. with Sanger sequencing, were unlikely to contain misassemblies. Therefore, we decided to favor input contigs' structure whenever local conflicts arose. These are part of a broader set of assembly errors that we detected by aligning the v1 assembly on the instaGRAAL scaffolds and analyzing the mapping results using QUAST. The v1 assembly was used as a reference by QUAST to identify potential errors introduced by instaGRAAL when scaffolding the v1 assembly. 
We corrected these errors as follows: first, all bins processed by instaGRAAL that belonged to the same input contig were constrained to their original orientation (Fig. 4). If an input contig was split across multiple scaffolds, the smaller parts of this contig were relocated to the largest one, respecting the original order and orientation of the bins. Then, we reinserted whenever possible sequences that had been filtered out prior to instaGRAAL processing (e.g., contig extremities with poor read coverage; see the "Material and methods" section and Marie-Nelly et al. [13]) into the chromosome-level scaffold at their original position in the original input contig. 3,832,980 bp were reinserted into the assembly this way. These simple steps alleviated artificial truncations of input contigs observed with the original GRAAL program. Some filtered bins had no reliable region to be associated with post-scaffolding, because their initial input contig had been completely filtered before scaffolding. These sequences, which were left as-is and appended at the end of the genome, were included into 543 scaffolds spanning 3,141,370 bp, i.e., < 2% of the total DNA. Together, these steps removed all the misassemblies detected by QUAST. To further validate the assembly, we exploited an assembly generated by combining genetic recombination data and the Sanger assembly [21, 26] ("linkage group [LG] v2 assembly") as well as an assembly generated by running the original GRAAL program on the original reference v1 genome assembly ("GRAAL v3 assembly"). We searched for potential translocations between scaffold extremities between the linkage group v2 assembly and the v3 or v4 assemblies. This comparison, which was implemented as a separate module installed alongside the scaffolder, detected such events in the uncorrected v3 GRAAL assembly but none in the corrected v4 instaGRAAL assembly. The corrected instaGRAAL v4 assembly is therefore fully consistent with the genetic recombination map data, confirming the efficiency of the approach. Comparisons with previous Ectocarpus sp. assemblies and validation of the instaGRAAL assembly We compared the corrected instaGRAAL v4 assembly with the three earlier assemblies of the Ectocarpus sp. genome mentioned above (Table 1 and Additional file 1: Table S2): (1) the original v1 genome assembly generated using Sanger sequencing data [22], which was assumed to be highly accurate but fragmented (1561 scaffolds); (2) the linkage group [LG] v2 assembly; and (3) the original GRAAL program v3 assembly. Table 1 Comparison of Nx, NGx (i.e., Nx with respect to the original reference v1 genome assembly; in bp), and BUSCO completeness for the different assemblies (linkage group v2, GRAAL v3, and corrected instaGRAAL v4) of the Ectocarpus sp. genome We aligned the corrected instaGRAAL (v4), LG (v2), and GRAAL (v3) assemblies onto the original v1 assembly to detect misassemblies and determine whether the genome annotations (362,919 features) were conserved. We then validated each assembly using genetic linkage data (see the "Material and methods" section). For each assembly, we computed the following metrics: the number of misassemblies, ortholog completeness, and cumulative length/Nx distributions (Table 1). These assessments were carried out using BUSCO [29] for ortholog completeness (Additional file 1: Fig. S1) and QUAST-LG's validation pipeline [30] to search for misassemblies introduced in the scaffolds. 
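For reference, the Nx and NGx values reported in Table 1 follow the standard definitions; the small helper below (ours, not QUAST's) makes them explicit:

```python
def nx(scaffold_lengths, x=50, reference_size=None):
    """Nx: length of the scaffold at which the cumulative sum of the sorted
    scaffold lengths reaches x% of the assembly size. NGx uses a reference
    size (e.g., the length of the v1 assembly) instead of the assembly size."""
    total = reference_size if reference_size is not None else sum(scaffold_lengths)
    threshold = total * x / 100
    cumulative = 0
    for length in sorted(scaffold_lengths, reverse=True):
        cumulative += length
        if cumulative >= threshold:
            return length
    return 0  # the assembly never reaches x% of the reference (NGx undefined)

# nx([10, 8, 5, 3, 1]) -> 8 (the N50); nx(lengths, x=90) -> the N90
```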
QUAST-LG is an updated version of the traditional QUAST pipeline specifically designed for large genomes and is a state-of-the-art software for assembly evaluation and comparison. We used QUAST to verify that annotations transferred successfully from the reference v1 assembly to the instaGRAAL v4 assembly and that no structural discrepancy (a.k.a. misassemblies) was found in the instaGRAAL v4 assembly with respect to the reference v1 assembly. We followed the terminology used by both programs, such as the BUSCO definition of ortholog and completeness, as well as QUAST's classification system of contig and scaffold misassemblies. The corrected instaGRAAL assembly was of better quality than both the LG v2 and GRAAL v3 assemblies (Table 1 and Additional file 1: Fig. S2). The corrected assembly incorporated 795 of the v1 genome scaffolds (96.8% of the sequence data) into the 27 chromosomes based on the high-density genetic map [21], compared to 531 for the LG v2 assembly (90.5% of the sequence data). Moreover, this assembly contained fewer misassemblies and was more complete in terms of BUSCO ortholog content. For some metrics, the differences were marginal, but always in favor of the corrected instaGRAAL v4 assembly. BUSCO completeness was similar (76.2%, 76.9%, and 77.6% for the GRAAL v3 assembly, LG v2, and corrected instaGRAAL v4 assemblies, respectively) (Additional file 1: Fig. S2) and an improvement over the 75.9% of the v1 assembly. These absolute numbers remain quite low, presumably because of the lack of a set of orthologs well adapted to brown algae. All quantitative metrics, such as N50, L50, and cumulative length distribution, increased dramatically when compared with the reference genome v1 assembly (Table 1). N50 increased more than tenfold, from 496,777 bp to 6,867,074 bp after the initial scaffolding and to 6,942,903 bp after the correction steps. 99.4% of the sequences in the 1018 contigs were integrated into the 27 largest scaffolds after instaGRAAL processing. Overall, the analysis indicated that many of the rearrangements found in the LG v2 assembly were potentially errors and that both GRAAL and instaGRAAL were efficient at placing large regions where they belong in the genome, albeit less accurately for GRAAL and in the absence of correction. These statistics underline the importance of the post-scaffolding correction steps and the usefulness of a program that automates these steps. Comparison between the Ectocarpus sp. instaGRAAL and linkage group assemblies Compared to the LG v2 assembly, the corrected instaGRAAL v4 assembly lost 23 scaffolds but gained 287 that the genetic map had been unable to anchor to chromosomes (Additional file 1: Table S2). We observed few conflicts between the two assemblies, and the linkage markers are globally consistent with the instaGRAAL scaffolds (Additional file 1: Fig. S3). One major difference is that instaGRAAL was able to link the 4th and 28th linkage groups (LG) that were considered to be separate by the genetic map [26] because of the limited number of recombination events observed. The fusion in the instaGRAAL v4 assembly is consistent with the fact that the 28th LG is the smallest, with only 54 markers over 41.8 cM and covering 3.8 Mb. The 28th LG has a very large gap which might reflect uncertainty in the ordering of the markers. Interestingly, this gap is located at one end of the group, precisely where instaGRAAL now detects a fusion with the 4th LG. 
In addition, the fact that there is no mix between the 4th and 28th LGs on the merged instaGRAAL (pseudo) chromosome but rather a simple concatenation suggests that the genetic map was unsuccessful in joining those two LGs, but that instaGRAAL correctly assembled the two LGs (see Additional file 1: Table S3 for correspondences between LGs and instaGRAAL super scaffolds). instaGRAAL was also more accurate than the genetic map in orienting scaffolds (Additional file 1: Table S2). Among the scaffolds that were oriented in the LG v2 assembly, about half of the "plus" orientated were actually "minus" and vice versa. The limited number of markers detected in the scaffolds anchored to the genetic map was likely the reason for this high level of incorrect orientations. Scaffolding of the Desmarestia herbacea genome To test and validate instaGRAAL on a second, larger genome, we generated an assembly of the haploid genome of D. herbacea, a brown alga that had not been sequenced before. We set up the assembly pipeline and subsequent scaffolding from raw sequencing reads to assess the robustness of instaGRAAL with de novo, non-curated data. The pipeline proceeded as follows: first, we acquired 259,556,174 short paired-end shotgun reads (Illumina HiSeq2500 and 4000) as well as 1,353,202 long reads generated using PacBio and Nanopore (about 150× short reads and 15× long reads). Sequencing reads were processed using the hybrid MaSuRCA assembler (v3.2.9) [31], yielding 7743 contigs representing 496 Mb (Table S4). We generated Hi-C data following a protocol similar to that used for Ectocarpus sp. (see the "Material and methods" section). Briefly, 101,879,083 reads were mapped onto the hybrid assembly, yielding 7,649,550 contacts linking 1,359,057 fragments. We then ran instaGRAAL using similar default parameters to that used for Ectocarpus sp., for the same number of cycles. We corrected the resulting scaffolds. The scaffolding process resulted in 40 scaffolds larger than 1 Mb (Additional file 1: Fig. S4, S5, S6), representing 98.1% of the initial, filtered scaffolding and 89.3% of the total initial genome after correction and reintegration. The exact number of chromosomes in D. herbacea is unknown but was estimated to be ~ 23, and possibly up to 29, based on cytological observations [32]. Most (35) of the scaffolds generated by instaGRAAL were syntenic with the 27 Ectocarpus sp. scaffolds. Among the remaining five scaffolds, one corresponded to the genome of an associated bacterium, and two to large regions with highly divergent GC content (37 and 40% vs. 48% for the rest of the genome) and no predicted D. herbacea genes. Overall, instaGRAAL successfully scaffolded the D. herbacea genome, although the final number of scaffolds remained slightly higher than the estimated number of chromosomes in this species. Comparisons with existing methods To date, only a limited number of Hi-C-based scaffolding programs are publicly available, and as far as we can tell, no detailed comparison has been performed between the existing programs to assess their respective qualities and drawbacks. In an attempt to benchmark instaGRAAL, we ran SALSA2 [33] and 3D-DNA on the same Ectocarpus sp. v1 and Desmarestia herbacea reference genome and Hi-C reads. 3D-DNA is a scaffolder that was hallmarked with the assembly of Aedes aegypti, and SALSA2 is a recent program with a promising approach that directly integrates Hi-C weights into the assembly graph. 
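Screening scaffolds by GC content, as done above to single out the bacterial and GC-divergent scaffolds, only requires a few lines; the sketch below is illustrative, and the genome-wide value and tolerance are ours:

```python
def gc_content(sequence):
    """Fraction of G/C bases among the unambiguous bases of a scaffold."""
    seq = sequence.upper()
    gc = seq.count("G") + seq.count("C")
    acgt = gc + seq.count("A") + seq.count("T")
    return gc / acgt if acgt else 0.0

def flag_gc_outliers(scaffolds, genome_gc=0.48, tolerance=0.05):
    """Return the identifiers of scaffolds whose GC content deviates from the
    genome-wide value by more than `tolerance` (putative contaminants)."""
    return [name for name, seq in scaffolds.items()
            if abs(gc_content(seq) - genome_gc) > tolerance]
```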
For Ectocarpus sp., SALSA2 ran for nine iterations and yielded 1042 scaffolds, with an N50 of 6,552,506 (L50 = 11). Its BUSCO completeness was 77.6%, a level identical to that obtained with instaGRAAL. Overall, the metrics were satisfactory but SALSA2 was outperformed by instaGRAAL post-correction. The contact map of the resulting SALSA2 assembly displayed noticeably unfinished scaffolds (Additional file 1 Fig. S7 and S8). This, coupled with a lower N50 value, suggests that instaGRAAL is more successful at merging scaffolds when appropriate. We computed similar size and completeness statistics for the final instaGRAAL D. herbacea assembly and compared these to the values obtained with SALSA2 and 3D-DNA. We also mapped the Hi-C reads onto all three final assemblies in order to qualitatively assess the chromosome structure. The results are summarized in Table S4. Briefly, statistics across assemblies were similar; the corrected instaGRAAL assembly had 73% BUSCO completeness, consistent with the values of 73.6% and 70.3% obtained for SALSA2 and 3D-DNA, respectively. However, the Lx/Nx metrics diverged significantly; the instaGRAAL assembly N50 was 12.4 Mb, similar to SALSA2 (12.8) and much larger than 3D-DNA (0.2 Mb). However, visual inspection of the contact maps indicated that neither SALSA2 nor 3D-DNA succeeded in fully scaffolding the genome of Desmarestia herbacea (Additional file 1: Fig. S7). Notably, SALSA2 created a number of poorly supported junctions to generate chromosomes, whereas 3D-DNA failed to converge towards any kind of structure. In contrast, although the instaGRAAL final assembly still contains input contigs that are incorrectly positioned, a coherent structure corresponding to 40 scaffolds (including contaminants) emerged (Additional file 1: Fig. S4). One possibility is that the de novo MaSuRCA assembly was low quality, likely due to the low coverage of long reads, which would have resulted in alignment errors that disrupted the contact distribution and subsequent Hi-C scaffolding. Another possible explanation for these differences is that it remains difficult to dissect all the options and tunable parameters of these scaffolders, and therefore that we did not find the optimal combination with respect to the D. herbacea draft assembly. Nevertheless, these results highlight the robustness of instaGRAAL which was able to scaffold the D. herbacea genome using default parameters. Scaffolding the human genome To confirm that instaGRAAL scaffolds larger (Gb scale) genomes in a reasonable time, we ran it on the GRCh38 human genome sliced into 300-kb segments (artificial assembly), using a Hi-C dataset generated with an Arima Genomics Hi-C kit (see the "Material and methods" section). instaGRAAL was run for 15 cycles, with the parameter --levels sets to 5, and the scaffolds were subsequently corrected with instaGRAAL-polish. We obtained a total of 1302 scaffolds, out of which 24 have a length ranging from 18 to 239 Mb. These 24 chromosome-level scaffolds are represented in the contact map in Additional file 1: Fig. S9. These scaffolds have an N50 and an NGA50 of 143 Mb, close to the 145 Mb obtained for the reference genome (Table 2; the results from [33] using SALSA2 are included). The dot plot similarity map between the instaGRAAL scaffolds and reference genome assembly (Additional file 1: Fig. S10) shows that the 22 autosomes and the X chromosome were recovered by instaGRAAL (although a few relocations and inversions remain visible). 
In addition, a 24th scaffold is visible composed of sequences also in contacts with the other scaffolds, corresponding to repeated sequences clustering together. instaGRAAL produced scaffolds with a lower contiguity than those of SALSA2: while their N50 are comparable, the N75 of instaGRAAL is significantly lower. However, the number of complete genomic features in the instaGRAAL scaffolds is largely improved compared to the input fragments, while SALSA2 only slightly increased this score. These results suggest that although the scaffolds of instaGRAAL are less contiguous, they are of better quality. Since these scaffolds were obtained after only 15 cycles, increasing the number of cycles is very likely to improve the N75. All in all, and though additional work is needed to polish such an output as with all assembly projects, these results confirm that instaGRAAL can efficiently scaffold large genomes. Table 2 Comparison of Nx, NGx (i.e., Nx with respect to the original human reference genome assembly; in bp), and other QUAST statistics for the different assemblies (artificial assembly, corrected instaGRAAL, and SALSA2) of the Homo sapiens genome Benchmarking of the system requirements To quantify the improvements made over the original GRAAL program, we ran both GRAAL and instaGRAAL over the Ectocarpus sp. v1 genome separately and measured the peak memory load, the graphics card memory load taken by the contact maps, and the per-cycle runtime as averaged from 20 cycles. The results are summarized in Table S5. As expected, the memory load on the graphics card is an order of magnitude smaller for instaGRAAL, while the peak RAM and runtime are several times smaller. The shrinkage of memory requirements is predicted by the use of sparse data structures and the fact that our original dataset for Ectocarpus sp. is relatively lean when compared to the size of the genome. The origin of the accelerated runtime is less clear and could be due to multiple contributions to the program, including the use of sparse data structures but also external contributions (e.g., porting to Python 3, upgraded libraries, or more recent CUDA versions). It is important to note, however, that these results are highly specific to the hardware and data used here, and due to the many different factors involved, any comparison should stick to orders of magnitude. Nevertheless, this confirms that instaGRAAL's improvements over GRAAL are very substantial and make it suitable for modern, large genome assembly projects. instaGRAAL is a Hi-C scaffolding program that can process large eukaryotic genomes. Below, we discuss the improvements made to the program, its remaining limitations, and the steps that will be needed to tackle them. Refinement/correction step An important improvement of instaGRAAL compared to GRAAL relates to post-scaffolding corrections. Local misassemblies, e.g., local bin inversions or disruptive insertions of small scaffolds within larger ones, are an inevitable consequence of the algorithm's most erratic random walks. These small misassemblies are retained because flipping a bin does not markedly change the relative distance of an RFs relative to its neighbors, and because small scaffolds typically carry less signal and therefore exhibit a greater variance in terms of acceptable positions. Depending on the trust put in the initial set of contigs, one may be unwilling to tolerate these changes as well as "partial translocations," i.e., the splitting of an original contig into two scaffolds. 
The prevalence of such mistakes can be estimated by comparing the orientation of bins relative to their neighbors in the instaGRAAL v4 assembly vs. the original assembly (v1 assembly). Our assumption is that if a single bin was flipped or split by instaGRAAL, this was likely a mistake that needed to be corrected. Consequently, we chose to remain faithful to the input contigs of the original v1 assembly, given that the initial Ectocarpus sp. v1 (reference) genome sequence was based on Sanger reads. Our correction therefore aims at reinstalling the initial contig structure and orientation while preserving to a maximum extent the overall instaGRAAL scaffold structure. In addition, our correction reintegrates into the assembly the bins removed during the initial filtering process according to their position along the original assembly contigs. Most filtered bins corresponded to the extremities of the original contigs, because their size depended on the position of the restriction sites within the contig, or because they consisted of repeated sequences with little or no read coverage. The tail filtering correction step inserts these bins back at the extremities of these contigs in the instaGRAAL assembly. The combination of a probabilistic algorithm with a deterministic correction step provides robustness to instaGRAAL. First, the MCMC step identifies, with few prior assumptions, a high-likelihood family of genome structures, almost always very close to the correct global scaffolding. The correction step combines this result with prior assumptions made about the initial contig structures generated through robust, established assembly programs, refining the genomic structure within each scaffold. To give the user a fine-grained degree of control over our correction procedures, the implementation into instaGRAAL is split into independent modules that each assume about the initial contig structure necessary to perform the correction: the "reorient" module assumes that the initial contigs do not display inversions, and the "rearrange" module assumes that there are no relocations within contigs. We underline that despite the improvements brought about by these new procedures, instaGRAAL assemblies remain perfectible, notably because of the reliance on the quality of the input contigs used for correction. For instance, the D. herbacea genome heavily relies on contigs generated from a de novo hybrid assembly, and the contact maps in Additional file 1: Fig. S4, S5, and S6 show some extraneous signal that may point at misassemblies. Analogous observations may be made with respect to Ectocarpus sp. in Additional file 1: Fig. S11. In addition, inherent limits to Hi-C technology such as the restriction fragment size mean that there are going to be false junctions between fragments or bins. This is only a problem if one chooses not to reconstruct every input contig within a newly formed scaffold with our correction procedure, i.e., one is distrustful of the initial input contigs. This was not the case for Ectocarpus sp. but could be argued for D. herbacea, where the de novo contigs generated from 15× coverage may be of poor quality. Sparse data handling The implementation of a sparse data storage method in instaGRAAL allows much more intense computation than with GRAAL. 
Because the majority of map regions are devoid of contacts, instaGRAAL essentially halves the order of magnitude of both algorithm complexity and memory load, i.e., they increase roughly linearly with the size of the genome instead of geometrically. This improvement potentially allows the assembly of Gb-sized genomes in 4 to 5 days using a laptop (i.e., much faster with more computational resources). Variations in GC% along the genome, and/or other genomic features, can lead to variation in Hi-C read coverage and impair interpretation of the Hi-C data. Correction and attenuation procedures that alleviate these biases are therefore commonly used in Hi-C studies [34,35,36]. However, these procedures are not compatible with instaGRAAL's estimation of the contact distribution (for more details, see [37]). A subset of bins will therefore diverge strongly from the others, displaying little if no coverage. A filtering step is needed to remove these bins as they would otherwise impact the contact distribution and the model parameter estimation. These disruptive bins represent a negligible fraction of the total genome (< 3% of the total genome size of Ectocarpus sp., for instance) and are reincorporated into the assembly during correction. On the other hand, a subset of bins representing small, individual scaffolds are not reinserted during correction and are added to the final assembly as extra-scaffolds (as in all sequencing projects). Additional analyses and new techniques such as long or linked reads are needed to improve the integration of these scaffolds into the genome. The binning procedure will influence the structure of the final assembly as well as its quality. For example, low-level binning (e.g., one bin = three RFs) will lead to an increased number of bins and a large, sparse contact map with a low signal-to-noise ratio, where many of the bins display poor read coverage as on average they will have fewer contacts with their immediate neighbors. Because of the resulting low signal-to-noise ratio, an invalid prior model will be generated, and when referring to this model, the algorithm will fail to scaffold the bins properly, if at all. Moreover, due to its probabilistic nature, the algorithm will generate a number of false positive structural modifications such as erroneous local inversions or permutations of bins. The numerous bins will create more genome structures to explore to handle all the potential combinations, and exploring this space until convergence will take longer and be computationally demanding. On the other hand, one of the advantages of instaGRAAL is its ability to scaffold fragments or bins instead of contigs themselves. This has two main effects: First, it dodges the size bias issue whereby larger contigs will feature more contacts and will need to be normalized. Second, it allows for greater flexibility when exploring genome space, potentially uncovering misassemblies within input contigs. This is more relevant in the case of large contigs generated with long reads. And even if we assume that the initial contigs are completely devoid of misassemblies, this flexibility is useful when the contact distribution is disrupted by extraneous signals and the scaffolder needs to decide between two regions of similar affinity. The correction tool subsequently reconstructs the initial contigs from these rough arrangements, as discussed above (reference-based correction). 
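To give a sense of the memory figures behind the sparse-storage argument made at the beginning of this subsection, a back-of-the-envelope comparison follows; the bin and contact counts are illustrative, of the order of those of the Ectocarpus sp. map, and the helper functions are ours:

```python
def dense_bytes(n_bins, item_size=8):
    """Memory of a dense n x n contact map of 64-bit counts."""
    return n_bins ** 2 * item_size

def sparse_coo_bytes(n_nonzero, value_size=8, index_size=4):
    """Memory of a COO sparse map storing (row, col, value) triplets
    only for the non-zero contacts."""
    return n_nonzero * (2 * index_size + value_size)

# Roughly 1.8 million fragments binned into ~22,000 bins, with a few million
# non-zero contacts:
print(dense_bytes(22_000) / 1e9)          # ~3.9 GB if stored densely
print(sparse_coo_bytes(2_500_000) / 1e6)  # ~40 MB as COO triplets
```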
An optimal resolution is therefore a compromise between the bin size, the coverage, and the quality of the input contigs from the original assembly. Although a machine powerful enough operating on an extremely contact-rich matrix would be successful at any level, it is unclear whether such resources are necessary. Our present assemblies (e.g., 1 bin = 81 RFs for both; see the "Material and methods" section) had good quality metrics after a day's worth of calculation on a standard desktop computer for Ectocarpus sp. and D. herbacea. Moreover, convergence was qualitatively obvious after a few cycles. This suggests that more computational power yields diminishing returns and therefore that appropriate correction procedures are a more efficient approach for remaining misassemblies. Binning The fragmentation of the original assembly used to generate the initial contact map has a substantial effect on the quality of the final scaffolding. Because binning cannot be performed beyond the resolution of individual input contigs, however small they may be, there is a fixed upper limit to the scale at which a given matrix can be binned. A highly fragmented genome with many small input contigs will necessarily generate a high-noise, high-resolution matrix. Attempts to reassemble a genome based on such a matrix will run into the problems discussed above (resolution). This limitation can be alleviated, to some extent, by discarding the smallest contigs, with the hope that the remaining contigs will cover enough of the genome. The input contigs that are removed can be reintegrated into the final scaffold during the correction steps. This ensures an improved Nx metric while retaining genome completeness. It should be noted, however, that the size of the input contigs is important as they need to contain sufficient restriction sites, and each of the restriction fragments must have sufficient coverage. The choice of enzyme and the frequency of its corresponding site are thus crucial. For instance, with an average of one restriction site every 600 to 1000 bp for DpnII, input contigs as short as 10 kb may contain enough information to be correctly reassembled. The restriction map therefore strongly influences both the minimum limit on N50 and genome fragmentation. In order to test our tool against existing programs, we ran two scaffolders available online (SALSA2 and 3D-DNA) on our two genomic datasets. In all instances, instaGRAAL proved more successful at scaffolding both genomes. However, we have not extensively tested all the combinations of parameters of both programs, and acknowledge the difficulty in designing and implementing Hi-C scaffolding pipelines with extensive dependencies that compound the initial complexity of the task and add yet more configurable options to know in advance. Finding the correct combination of CUDA and Python dependencies to install instaGRAAL on a given machine can be challenging as well. Therefore, our benchmarking attempt should be rather seen as a way to stress the importance of implementing sensible default parameters that readily cover as many use cases as possible for the end user. There is almost no doubt that both 3D-DNA and SALSA2, with the appropriate parameters and correction steps, would produce satisfying scaffolding; on the other hand, knowing which input parameters has to be specified in advance is a non-trivial task, especially given the computational resources needed for a single scaffolding run. 
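Following the reasoning above about restriction-site density, a quick way to check whether an input contig carries enough DpnII fragments to be informative is simply to count occurrences of the recognition site; the fragment threshold below is illustrative and is not an instaGRAAL parameter:

```python
def fragment_count(sequence, site="GATC"):
    """Number of restriction fragments a contig yields with a given recognition
    site (DpnII cuts at GATC): number of sites + 1."""
    return sequence.upper().count(site) + 1

def informative(sequence, site="GATC", min_fragments=3):
    """Heuristic check that a contig carries enough restriction fragments
    to produce a usable Hi-C signal (threshold for illustration only)."""
    return fragment_count(sequence, site) >= min_fragments

# With one GATC site every ~600-1000 bp, a 10-kb contig yields on the order of
# 10-15 fragments and passes comfortably.
```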
With instaGRAAL, we wish to combine the simplicity of a default configuration that works in most instances, with the flexibility offered by the power of MCMC methods. Choosing your parameters In the benchmarking, we have discussed why some parameters are crucial and why we took care, through trial-and-error, to implement sensible defaults for future similar assembly projects. On the other hand, it is crucial that such defaults be not the result of overfitting for the assemblies we tested. However, none of what we outlined previously assumes anything specific about the genomes at hand beyond very broad metrics such as their total size or N50. The parameters of the program scale intuitively with such metrics. For larger genomes, one may simply increase the size of the bins so that the contact map does not grow too large, which is what we did for the human genome. The N50 sets the resolution limit in that it is often desirable to be able to break down contigs into many bins of roughly equal size so as not to run into the aforementioned size bias and also to be able to give more flexibility to the program. For instance, an N50 close to 100 kb should not feature bins larger than 50–60 kb. Oftentimes, however, such minutiae is not necessary, and for most genome projects ranging across 107–109 bp, instaGRAAL will typically work out of the box with default parameters. For instance, we kept the same parameters for both algae and only switched to a lower resolution (higher bin size) for the human genome to scale with its size. When needed, through these simple rules of thumb, one may adapt the defaults to other genomes with more extreme metrics. Handling diploid genomes As assembly projects have grown more complex and exhaustive, expectations have increased as well. Assembling diploid, if not polyploid, genomes with well-characterized haplotypes is a stumbling block in the field. Moreover, such problems are more likely to be encountered as the low-hanging fruit gets picked. Typical projects involve assembling many individual complete human genomes with haplotypes, or the sequencing and scaffolding of even larger and more complex genomes such as that of plants. In this context, instaGRAAL in particular (and Hi-C in general) is relatively agnostic, as its success or failure will hinge on the reference genome being properly haplotyped in the first place. While it may prove intractable to phase haplotypes directly from only Hi-C data, instaGRAAL will conserve such information when provided in the first place. This is because the scaffolder is robust to local disruptions like haplotype-induced mapping artifacts. It has been shown that GRAAL and by extension instaGRAAL will eventually resolve such disruptions even when the distribution is noisy, as long as the general three-parameter model (and power law) still holds globally [13, 19, 20]. In other words, even though instaGRAAL cannot "guess" whether a given reference sequence is homologous or heterozygous without considerable difficulty, it can still cleanly scaffold chromosome pairs from clear contig pairs because the global 3D intra-signature from a given contig is too strong to be confused with mapping artifacts in a pair. Should such information be missing, the scaffolder will likely interlace all regions into a giant linkage group. In that respect, instaGRAAL could interface well with diploid classical assemblers and is suitable for any pipeline integration involving diploid genomes. 
More work is needed in that direction so that the scaffolder does not rely that strongly on the quality of the input contigs to work out haplotypes. Integrating information from the Hi-C analysis with other types of data Aggregating data from multiple sources to construct a high-quality genome sequence remains a challenging problem with no systematic solution. As long-read technologies become more affordable, there is an increasing demand to reconcile the scaffolding capabilities of Hi-C-based methods with the ability of long reads to span regions that are difficult to assemble, such as repeated sequences. The most intuitive approach would be to perform Hi-C scaffolding on an assembly derived from high-coverage and corrected long reads, as was done for several previous assembly projects [16, 38]. Alternative approaches also exist, such as generating Hi-C- and long-read-based assemblies separately and merging them using programs such as CAMSA [39] or Metassembler [40]. Pipelines such as PBJelly [41] have proven successful at filling existing gaps in draft genomes, regardless of their origin, with the help of long reads. Lastly, with assembly projects involving both long and short reads, hybrid assemblies and hybrid polishing have become an important focus. Polishers such as Racon [42] or Pilon [43] are widely used, and new tools such as HyPo are also emerging [44]. Yet the question of which kind of pipeline to use (e.g., Racon to Hi-C scaffolding to Pilon, or Racon to Pilon to Hi-C scaffolding, etc.) along which hybrid assembler (Masurca, Alpaca, hybridSPAdes, etc.) [31, 45, 46] can prove cumbersome, and often finding the process yielding the most satisfying output in terms of metrics involves much trial-and-error with different configurations. InstaGRAAL shows that high-quality metrics can still be attained without the help of long reads, but long-read polishing may still be necessary in order to get rid of the lingering errors we mentioned. Long reads are not the only type of data that can be used to improve assemblies. Linkage maps, RNA-seq, optical mapping, and 10X technology all provide independent data sources that can help improve genome structure and polish specific regions. The success of future assembly projects will hinge on the ability to process these various types of data in a seamless and efficient manner. Preparation of the Hi-C libraries The Hi-C library construction protocol was adapted from [8, 47]. Briefly, partheno-sporophyte material was chemically cross-linked for 1 h at RT using formaldehyde (final concentration, 3% in 1× PBS; final volume, 30 ml; Sigma-Aldrich, St. Louis, MO). The formaldehyde was then quenched for 20 min at RT by adding 10 ml of 2.5 M glycine. The cells were recovered by centrifugation and stored at − 80 °C until use. The Hi-C library was then prepared as follows. Cells were resuspended in 1.2 ml of 1× DpnII buffer (NEB, Ipswich, MA), transferred to a VK05 tubes (Precellys, Bertin Technologies, Rockville, MD), and disrupted using the Precellys apparatus and the following program ([20 s—6000 rpm, 30 s—pause] 9× cycles). The lysate was recovered (around 1.2 ml) and transferred to two 1.5-ml tubes. SDS was added to a final concentration of 0.3%, and the 2 reactions were incubated at 65 °C for 20 min followed by an incubation of 30 min at 37 °C. A volume of 50 μl of 20% Triton-X100 was added to each tube, and incubation was continued for 30 min. 
DpnII restriction enzyme (150 units) was added to each tube, and the reactions were incubated overnight at 37 °C. The next morning, the reactions were centrifuged at 16,000×g for 20 min. The supernatants were discarded, and the pellets were resuspended in 200 μl of NE2 1× buffer and pooled (final volume = 400 μl). DNA extremities were labeled with biotin using the following mix (50 μl NE2 10× buffer, 37.5 μl 0.4 mM dCTP-14-biotin, 4.5 μl 10 mM dATP-dGTP-dTTP mix, 10 μl Klenow 5 U/μl) and an incubation of 45 min at 37 °C. The labeling reaction was then split in two for the ligation reaction (ligation buffer, 1.6 ml; ATP 100 mM, 160 μl; BSA 10 mg/ml, 160 μl; ligase 5 U/μl, 50 μl; H2O, 13.8 ml). The ligation reactions were incubated for 4 h at 16 °C. After addition of 200 μl of 10% SDS, 200 μl of 500 mM EDTA, and 200 μl of proteinase K 20 mg/ml, the tubes were incubated overnight at 65 °C. DNA was then extracted, purified, and processed for sequencing as previously described (Lazar-Stefanita et al. [47]). Hi-C libraries were sequenced on a NextSeq 550 apparatus (2 × 75 bp, paired-end Illumina NextSeq, with the first ten bases acting as barcodes; Marbouty et al. [15]).

Contact map generation

Contact maps were generated from reads using the hicstuff pipeline for processing generic 3C data, available at https://github.com/koszullab/hicstuff. The backend uses the bowtie2 (version 2.2.5) aligner run in paired-end mode (with the options --maxins 5 --very-sensitive-local). Alignments with a mapping quality lower than 30 were discarded. The output is a sparse matrix in which each fragment of every chromosome is given a unique identifier and every pair of fragments is given a contact count if it is non-zero. Fragments were then filtered based on their size and total coverage. First, fragments shorter than 50 bp were discarded. Then, fragments whose coverage was more than one standard deviation below the mean of the global coverage distribution were removed from the initial contact map. A total of 6,974,350 bp of sequence was removed this way. An initial contact distribution based on a simplified polymer model [27] with three parameters was first computed for this matrix. Finally, the instaGRAAL algorithm was run using the resulting matrix and distribution. For the Ectocarpus sp. genome, instaGRAAL was run at levels 4 (n = 81 RFs), 5 (n = 243 RFs), and 6 (n = 729 RFs). Levels 5 and 6 were only used to check for genome stability and consistency in the final chromosome count; level 4 was used for all subsequent analyses. All runs were performed for 250 cycles. The starting fragments for the analysis were the reference genome scaffolds split into restriction fragments. The same parameters were used for the D. herbacea genome, and for the human genome, except that level 6 was used instead of level 4 for the latter.
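The size- and coverage-based fragment filtering described above can be sketched in a few lines of Python. This is a minimal re-implementation for illustration only; it is not the actual hicstuff or instaGRAAL code, and the way fragment lengths and coverages are stored is assumed.

import numpy as np

def fragment_filter(lengths_bp, coverages, min_length=50):
    """Return a boolean mask of restriction fragments to keep:
    drop fragments shorter than `min_length` bp, and drop fragments
    whose coverage falls more than one standard deviation below the
    mean of the global coverage distribution."""
    lengths_bp = np.asarray(lengths_bp, dtype=float)
    coverages = np.asarray(coverages, dtype=float)
    coverage_floor = coverages.mean() - coverages.std()
    return (lengths_bp >= min_length) & (coverages >= coverage_floor)

# Hypothetical usage: rows and columns of the sparse contact matrix whose
# mask value is False would be dropped before computing the initial
# three-parameter contact distribution.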
Correcting genome assemblies

The assembled genome generated by instaGRAAL was corrected for misassemblies using a number of simple procedures that aim to reinstate the local structure of the input contigs of the original assembly where possible. Briefly, bins belonging to the same input contig were juxtaposed in the same relative positions as in the original assembly; when several such groups of bins were present in the assembly, small groups were preferentially moved to the location of larger ones. The orientations of the sets of bins regrouped in this manner were then modified so that orientation was consistent and matched that of the majority of the group, reorienting minority bins when necessary. Both steps are illustrated in Fig. 4. Finally, fragments that had been removed during the filtering steps were reincorporated if they had been adjacent to an already integrated bin in the original assembly. The remaining sequences that could not be reintegrated this way were appended as non-integrated scaffolds.

Validation metrics

Original and other assembly metrics (Nx, GC distribution) were obtained using QUAST-LG [30]. Misassemblies were quantified using QUAST-LG with the minimap2 aligner in the backend. Ortholog completeness was computed with BUSCO (v3) [29], and assembly completeness was also assessed with BUSCO. The evolution of genome metrics between cycles was obtained using instaGRAAL's own implementation.

Validation with the genetic map

The validation procedure with respect to linkage data was implemented as part of instaGRAAL. Briefly, the script considers a set of linkage groups, whose regions are separated by SNP markers, and a set of Hi-C scaffolds, whose regions are bins separated by restriction sites. It then finds best-matching pairs of linkage groups and scaffolds by counting how many of these regions overlap from one set to the other. Then, for each pair, the bins in the Hi-C scaffold are rearranged so that their order is consistent with that of the corresponding linkage group. Such rearrangements are parsimonious and try to alter as little as possible. Since there is not a one-to-one mapping from restriction sites to SNP markers, some regions in the Hi-C scaffolds are not present in the linkage groups, in which case they are left unchanged. When the Hi-C scaffolds are altered this way, as was the case for the raw GRAAL v3 assembly, the script acts as a correction; when the scaffolds are unchanged, as was the case for the corrected instaGRAAL v4 assembly, it acts as a validation.

Benchmarking with other assemblers

For each genome, the 3D-DNA program was run using the run-asm-pipeline.sh entry-point script with the following options: -i 1000 --polisher-input-size 10000 --splitter-input-size 10000. The Hi-C data were prepared with the Juicer pipeline, as recommended by 3D-DNA's documentation. The SALSA2 program was run with the --cutoff=0 option, and misassembly correction was enabled with the --clean=yes option. No expected genome size was provided. The program halted after 9 iterations for Ectocarpus sp. and 18 iterations for D. herbacea. Hi-C data were prepared with the Arima pipeline, as recommended by SALSA2's documentation. The similarity dot plot between the corrected instaGRAAL and SALSA scaffolds was generated with minimap2.

Benchmarking with the human genome

We followed a procedure similar to the benchmark analysis detailed in [33]. Briefly, the GRCh38 reference genome was cut into 300-kb fragments. The Hi-C library generated using an Arima Genomics kit was aligned against the genome (SRA: SRR6675327). instaGRAAL was run on the resulting contact map, using the same default parameters as for the algae genomes, except that the resolution level was increased to 6 (from 4). The similarity dot plot between the instaGRAAL and SALSA scaffolds was generated with minimap2, with the options -DP -k19 -w19 -m200.
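The first step of the linkage-map validation described above, pairing each linkage group with the Hi-C scaffold it shares the most regions with, can be sketched as follows. The data structures are hypothetical and this is not instaGRAAL's own implementation, which additionally reorders bins within each matched scaffold.

def match_linkage_groups(linkage_groups, scaffolds):
    """linkage_groups and scaffolds both map a name to the set of
    genomic regions (e.g. contig or bin identifiers) they contain.
    For each linkage group, return the scaffold sharing the largest
    number of regions with it, together with that overlap count."""
    matches = {}
    for lg_name, lg_regions in linkage_groups.items():
        best = max(scaffolds, key=lambda name: len(lg_regions & scaffolds[name]))
        matches[lg_name] = (best, len(lg_regions & scaffolds[best]))
    return matches

# Hypothetical usage:
# match_linkage_groups({"LG1": {"c1", "c2"}}, {"scaffold_0": {"c1", "c2", "c9"}})
# -> {"LG1": ("scaffold_0", 2)}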
Software tool requirements

The instaGRAAL software is written in Python 3 and uses CUDA for the computationally intensive parts. It requires a working installation of CUDA with the pycuda library; CUDA is a proprietary parallel computing framework developed by NVIDIA and requires an NVIDIA graphics card. The scaffolder also requires a number of common scientific Python libraries specified in its documentation. The instaGRAAL website lists computer systems on which the program was successfully installed and run.

The datasets generated and analyzed in the present work are available in the SRA repository, SRR8550777 [48]. The instaGRAAL software and its documentation are freely available under the GPL-3.0 license at https://github.com/koszullab/instaGRAAL [49]. Assemblies, contact maps, and relevant materials for the reproduction of the main results and figures are available at https://github.com/koszullab/ectocarpus_scripts [50].

References

1. Khan AR, Pervez MT, Babar ME, Naveed N, Shoaib M. A comprehensive study of de novo genome assemblers: current challenges and future prospective. Evol Bioinform Online. 2018;14. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5826002/. Accessed 12 Dec 2019.
2. Sedlazeck FJ, Lee H, Darby CA, Schatz MC. Piercing the dark matter: bioinformatics of long-range sequencing and mapping. Nat Rev Genet. 2018;19:329.
3. Rice ES, Green RE. New approaches for genome assembly and scaffolding. Annu Rev Anim Biosci. 2019;7:17–40.
4. Alkan C, Coe BP, Eichler EE. Genome structural variation discovery and genotyping. Nat Rev Genet. 2011;12:363–76.
5. Salzberg SL, Phillippy AM, Zimin A, Puiu D, Magoc T, Koren S, et al. GAGE: a critical evaluation of genome assemblies and assembly algorithms. Genome Res. 2012;22:557–67.
6. Bradnam KR, Fass JN, Alexandrov A, Baranay P, Bechner M, Birol I, et al. Assemblathon 2: evaluating de novo methods of genome assembly in three vertebrate species. GigaScience. 2013;2. Available from: https://academic.oup.com/gigascience/article/2/1/2047-217X-2-10/2656129.
7. Alhakami H, Mirebrahim H, Lonardi S. A comparative evaluation of genome assembly reconciliation tools. Genome Biol. 2017;18:93.
8. Lieberman-Aiden E, van Berkum NL, Williams L, Imakaev M, Ragoczy T, Telling A, et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science. 2009;326:289–93.
9. Dekker J, Rippe K, Dekker M, Kleckner N. Capturing chromosome conformation. Science. 2002;295:1306–11.
10. Flot J-F, Marie-Nelly H, Koszul R. Contact genomics: scaffolding and phasing (meta)genomes using chromosome 3D physical signatures. FEBS Lett. 2015;589:2966–74.
11. Burton JN, Adey A, Patwardhan RP, Qiu R, Kitzman JO, Shendure J. Chromosome-scale scaffolding of de novo genome assemblies based on chromatin interactions. Nat Biotechnol. 2013;31:1119–25.
12. Kaplan N, Dekker J. High-throughput genome scaffolding from in vivo DNA interaction frequency. Nat Biotechnol. 2013;31:1143–7.
13. Marie-Nelly H, Marbouty M, Cournac A, Flot J-F, Liti G, Parodi DP, et al. High-quality genome (re)assembly using chromosomal contact data. Nat Commun. 2014;5:5695.
14. Marie-Nelly H. A probabilistic approach for genome assembly from high-throughput chromosome conformation capture data. Doctoral dissertation, Université Pierre et Marie Curie – Paris 6; 2013.
15. Marbouty M, Cournac A, Flot J-F, Marie-Nelly H, Mozziconacci J, Koszul R. Metagenomic chromosome conformation capture (meta3C) unveils the diversity of chromosome organization in microorganisms. eLife. 2014;3:e03318.
16. Bickhart DM, Rosen BD, Koren S, Sayre BL, Hastie AR, Chan S, et al. Single-molecule sequencing and chromatin conformation capture enable de novo reference assembly of the domestic goat genome. Nat Genet. 2017;49:643–50.
17. Dudchenko O, Batra SS, Omer AD, Nyquist SK, Hoeger M, Durand NC, et al. De novo assembly of the Aedes aegypti genome using Hi-C yields chromosome-length scaffolds. Science. 2017;356:92–5.
18. Putnam NH, O'Connell BL, Stites JC, Rice BJ, Blanchette M, Calef R, et al. Chromosome-scale shotgun assembly using an in vitro method for long-range linkage. Genome Res. 2016;26:342–50.
19. Marbouty M, Baudry L, Cournac A, Koszul R. Scaffolding bacterial genomes and probing host-virus interactions in gut microbiome by proximity ligation (chromosome capture) assay. Sci Adv. 2017;3:e1602105.
20. Jourdier E, Baudry L, Poggi-Parodi D, Vicq Y, Koszul R, Margeot A, et al. Proximity ligation scaffolding and comparison of two Trichoderma reesei strains genomes. Biotechnol Biofuels. 2017;10:151.
21. Cormier A, Avia K, Sterck L, Derrien T, Wucher V, Andres G, et al. Re-annotation, improved large-scale assembly and establishment of a catalogue of noncoding loci for the genome of the model brown alga Ectocarpus. New Phytol. 2017;214:219–32.
22. Cock JM, Sterck L, Rouzé P, Scornet D, Allen AE, Amoutzias G, et al. The Ectocarpus genome and the independent evolution of multicellularity in brown algae. Nature. 2010;465:617–21.
23. Coelho SM, Godfroy O, Arun A, Corguillé GL, Peters AF, Cock JM. OUROBOROS is a master regulator of the gametophyte to sporophyte life cycle transition in the brown alga Ectocarpus. Proc Natl Acad Sci. 2011;108:11518–23.
24. Ahmed S, Cock JM, Pessia E, Luthringer R, Cormier A, Robuchon M, et al. A haploid system of sex determination in the brown alga Ectocarpus sp. Curr Biol. 2014;24:1945–57.
25. Arun A, Coelho SM, Peters AF, Bourdareau S, Pérès L, Scornet D, et al. Convergent recruitment of TALE homeodomain life cycle regulators to direct sporophyte development in land plants and brown algae. eLife. 2019;8:e43101.
26. Avia K, Coelho SM, Montecinos GJ, Cormier A, Lerck F, Mauger S, et al. High-density genetic map and identification of QTLs for responses to temperature and salinity stresses in the model brown alga Ectocarpus. Sci Rep. 2017;7:43241.
27. Rippe K. Making contacts on a nucleic acid polymer. Trends Biochem Sci. 2001;26:733–40.
28. Müller DG. Untersuchungen zur Entwicklungsgeschichte der Braunalge Ectocarpus siliculosus aus Neapel. Planta. 1966;68:57–68.
29. Simão FA, Waterhouse RM, Ioannidis P, Kriventseva EV, Zdobnov EM. BUSCO: assessing genome assembly and annotation completeness with single-copy orthologs. Bioinformatics. 2015;31:3210–2.
30. Mikheenko A, Prjibelski A, Saveliev V, Antipov D, Gurevich A. Versatile genome assembly evaluation with QUAST-LG. Bioinformatics. 2018;34:i142–50.
31. Zimin AV, Marçais G, Puiu D, Roberts M, Salzberg SL, Yorke JA. The MaSuRCA genome assembler. Bioinformatics. 2013;29:2669–77.
32. Ramirez ME, Müller DG, Peters AF. Life history and taxonomy of two populations of ligulate Desmarestia (Phaeophyceae) from Chile. Can J Bot. 1986;64:2948–54.
33. Ghurye J, Rhie A, Walenz BP, Schmitt A, Selvaraj S, Pop M, et al. Integrating Hi-C links with assembly graphs for chromosome-scale assembly. PLoS Comput Biol. 2019;15:e1007273.
34. Cournac A, Marie-Nelly H, Marbouty M, Koszul R, Mozziconacci J. Normalization of a chromosomal contact map. BMC Genomics. 2012;13:436.
35. Imakaev M, Fudenberg G, McCord RP, Naumova N, Goloborodko A, Lajoie BR, et al. Iterative correction of Hi-C data reveals hallmarks of chromosome organization. Nat Methods. 2012;9:999–1003.
36. Yaffe E, Tanay A. Probabilistic modeling of Hi-C contact maps eliminates systematic biases to characterize global chromosomal architecture. Nat Genet. 2011;43:1059–65.
37. Muller H, Scolari VF, Agier N, Piazza A, Thierry A, Mercy G, et al. Characterizing meiotic chromosomes' structure and pairing using a designer sequence optimized for Hi-C. Mol Syst Biol. 2018;14:e8293.
38. The International Wheat Genome Sequencing Consortium (IWGSC), Appels R, Eversole K, Feuillet C, Keller B, et al. Shifting the limits in wheat research and breeding using a fully annotated reference genome. Science. 2018;361:eaar7191.
39. Aganezov SS, Alekseyev MA. CAMSA: a tool for comparative analysis and merging of scaffold assemblies. BMC Bioinformatics. 2017;18:496.
40. Wences AH, Schatz MC. Metassembler: merging and optimizing de novo genome assemblies. Genome Biol. 2015;16:207.
41. English AC, Richards S, Han Y, Wang M, Vee V, Qu J, et al. Mind the gap: upgrading genomes with Pacific Biosciences RS long-read sequencing technology. PLoS One. 2012;7:e47768.
42. Vaser R, Sović I, Nagarajan N, Šikić M. Fast and accurate de novo genome assembly from long uncorrected reads. Genome Res. 2017;27:737–46.
43. Walker BJ, Abeel T, Shea T, Priest M, Abouelliel A, Sakthikumar S, et al. Pilon: an integrated tool for comprehensive microbial variant detection and genome assembly improvement. PLoS One. 2014;9:e112963.
44. Kundu R, Casey J, Sung W-K. HyPo: super fast accurate polisher for long read genome assemblies. bioRxiv. 2019;2019.12.19.882506.
45. Antipov D, Korobeynikov A, McLean JS, Pevzner PA. hybridSPAdes: an algorithm for hybrid assembly of short and long reads. Bioinformatics. 2016;32:1009–15.
46. Miller JR, Zhou P, Mudge J, Gurtowski J, Lee H, Ramaraj T, et al. Hybrid assembly with long and short reads improves discovery of gene family expansions. BMC Genomics. 2017;18:541.
47. Lazar-Stefanita L, Scolari VF, Mercy G, Muller H, Guérin TM, Thierry A, et al. Cohesins and condensins orchestrate the 4D dynamics of yeast chromosomes during the cell cycle. EMBO J. 2017;36(18):2684–97.
48. Baudry L, Guiglielmoni N, Marie-Nelly H, Cormier A, Marbouty M, Avia K, Mie YL, Godfroy O, Sterck L, Cock JM, Zimmer C, Coelho SM, Koszul R. Large genome reassembly based on Hi-C data, continuation of GRAAL. Sequence Read Archive. Datasets. 2020. https://www.ncbi.nlm.nih.gov/sra/?term=SRR8550777.
49. Baudry L, Guiglielmoni N, Marie-Nelly H, Koszul R. Large genome reassembly based on Hi-C data, continuation of GRAAL. 2019. https://github.com/koszullab/instagraal. https://doi.org/10.5281/zenodo.3753965. Accessed 16 Apr 2020.
50. Baudry L, Guiglielmoni N, Cormier A, Avia K, Cock JM, Coelho SM, Koszul R. Large genome reassembly based on Hi-C data, continuation of GRAAL. 2019. https://github.com/koszullab/ectocarpus_scripts. https://doi.org/10.5281/zenodo.3753973. Accessed 16 Apr 2020.

Acknowledgements

We thank our colleagues from the team, especially Cyril Matthey-Doret, as well as Hugo Darras, Heather Marlow, Francois Spitz, Jitendra Narayan, Jean-François Flot, Jérémy Gauthier, Jean-Michel Drezen, and all GitHub users and contributors for valuable feedback and comments.

Review history

The review history is available as Additional file 2. Andrew Cosgrove was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
Funding

This research was supported by funding to R.K. and S.M.C. from the European Research Council under the Horizon 2020 Program (ERC grant agreements 260822 and 638240, respectively). This project has also received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 764840.

Author affiliations

Institut Pasteur, Unité Régulation Spatiale des Génomes, CNRS, UMR 3525, C3BI USR 3756, F-75015, Paris, France: Lyam Baudry, Nadège Guiglielmoni, Hervé Marie-Nelly, Martial Marbouty & Romain Koszul
Sorbonne Université, Collège Doctoral, F-75005, Paris, France: Lyam Baudry & Hervé Marie-Nelly
Evolutionary Biology & Ecology, Université Libre de Bruxelles, 1050, Brussels, Belgium: Nadège Guiglielmoni
Sorbonne Université, Laboratory of Integrative Biology of Marine Models, Algal Genetics, UMR 8227, Roscoff, France: Alexandre Cormier, Komlan Avia, Olivier Godfroy, J. Mark Cock & Susana M. Coelho
Present address: Université de Strasbourg, INRA, SVQV UMR-A 1131, Colmar, France: Komlan Avia
Institut Pasteur, Center of Bioinformatics, Biostatistics and Integrative Biology (C3BI), USR 3756, CNRS, Paris, France: Yann Loe Mie
Department of Plant Biotechnology and Bioinformatics, Ghent University, B-9052, Ghent, Belgium, and VIB Center for Plant Systems Biology, Technologiepark 927, B-9052, Ghent, Belgium: Lieven Sterck
Institut Pasteur, Imaging and Modeling Unit, CNRS, UMR 3691, C3BI USR 3756, F-75015, Paris, France: Christophe Zimmer

Contributions

LB rewrote and updated the GRAAL program originally designed by HMN, CZ, and RK. MM and AC performed the experiments. LB and NG performed and ran the scaffoldings. LB, NG, and RK analyzed the assemblies, with contributions from AC, KA, LS, JMC, and SMC. LM and RK wrote the manuscript, with contributions from NG, MM, JMC, MC, and SMC. LB, MM, SMC, and RK conceived the study. The authors read and approved the final manuscript. Twitter handle: @rkoszul (Romain Koszul).

Correspondence to Susana M. Coelho or Romain Koszul. No ethical approval was required. instaGRAAL is owned by the Institut Pasteur; the entire program and its source code are freely available under a free software license.

Additional file 1: Supplementary tables and figures. Additional file 2: Review history.

Baudry, L., Guiglielmoni, N., Marie-Nelly, H. et al. instaGRAAL: chromosome-level quality scaffolding of genomes using a proximity ligation-based scaffolder. Genome Biol 21, 148 (2020). https://doi.org/10.1186/s13059-020-02041-z

Keywords: Ectocarpus; Hi-C scaffolding; Hi-C; genome assembly; Desmarestia herbacea
$f(A \cap B)\subset f(A)\cap f(B)$, and what about the converse?

I have a serious doubt about the following question:

Let $f:X\longrightarrow Y$ be a function. If $A,B\subset X$, show that $f(A \cap B)\subset f(A)\cap f(B)$.

I did it as follows: $$\forall\;y\in f(A\cap B)\Longrightarrow \exists x\in A\cap B, \text{ such that } f(x)=y\\ \Longrightarrow x \in A\text{ and }x\in B\Longrightarrow f(x)\in f(A)\text{ and }f(x)\in f(B)\\ \Longrightarrow f(x)\in f(A)\cap f(B)\Longrightarrow y\in f(A)\cap f(B)$$ This ensures that for all $y \in f(A\cap B)$ we have $y\in f(A)\cap f(B)$, therefore $f(A\cap B)\subset f(A)\cap f(B)$. Okay, that completes the demonstration.

We know that for the equality to hold, $f$ must be injective. But my question is where I should see that the equality fails in general — not through a counterexample, but by finding the error in the following demonstration: $$\forall\;y\in f(A)\cap f(B)\Longrightarrow y\in f(A)\text{ and }y\in f(B) \Longrightarrow \\ \exists x\in A \text{ and } B, \text{ such that } f(x)=y\\ \Longrightarrow x \in A\cap B\ \Longrightarrow f(x)\in f(A\cap B)\Longrightarrow y\in f(A\cap B)$$ Where is the error in this argument? Which of these steps is not allowed, and why?

functions, elementary-set-theory — asked by marcelolpjunior

Comments:
– @rank No, I was actually looking for the point where it is needed. But from the answers below I could understand it. (marcelolpjunior, Mar 25 '15 at 11:50)
– General principle: direct images are worse behaved than inverse images. (Martín-Blas Pérez Pinilla, Mar 25 '15 at 11:51)
– A few related posts: math.stackexchange.com/q/228613, math.stackexchange.com/q/225333, math.stackexchange.com/q/231145, math.stackexchange.com/q/170725, math.stackexchange.com/q/144870 (Martin Sleziak, Apr 16 '17 at 16:40)

Answer (Clément Guérin):

You wrote: $$\forall\;y\in f(A)\cap f(B)\Longrightarrow y\in f(A)\text{ and }y\in f(B) \Longrightarrow \\ \exists x\in A \text{ and } B, \text{ such that } f(x)=y\\ $$ The problem is in the last implication: from $y\in f(A)\text{ and }y\in f(B)$ you get that there exist $x_A\in A$ and $x_B\in B$ such that $f(x_A)=y=f(x_B)$; you cannot assume that $x_A=x=x_B$.

Comments:
– Being quite honest with you, I had already researched this issue a lot, and in a few seconds you answered so clearly that I now understand the resolution. Many thanks. (marcelolpjunior, Mar 25 '15 at 11:48)
– You're welcome; let me add that the only reason I can answer your question so quickly is that I made the same kind of mistake myself a few years ago... (Clément Guérin, Mar 25 '15 at 12:47)

Answer:

Suppose $y\in f(A)\cap f(B)$. Then $y\in f(A)$, so there is $x_1\in A$ with $f(x_1)=y$. Moreover $y\in f(B)$, so there is $x_2\in B$ such that $f(x_2)=y$. There is no reason why we should have $x_1=x_2$, except in the case when $f$ is injective. Counterexample: $X=\{1,2\}$, $Y=\{0\}$, $f(1)=f(2)=0$. With $A=\{1\}$ and $B=\{2\}$ we have $$ f(A\cap B)=f(\emptyset)=\emptyset\\ f(A)\cap f(B)=\{0\}\cap\{0\}=\{0\} $$ If you had written the proof in words, instead of piling up symbols, you would probably have discovered the issue. Of course we might take $x_1=x_2$ in special cases, even when $f$ is not injective. For particular subsets $A$ and $B$ we could have $f(A\cap B)=f(A)\cap f(B)$ (for example, when $B=X$), but not in general, unless $f$ is injective. Actually, it is easy to prove that $f$ is injective if and only if, for all $A,B\subset X$, $f(A\cap B)=f(A)\cap f(B)$.
Answer (Krish):

$$\forall\;y\in f(A)\cap f(B)\Longrightarrow y\in f(A)\text{ and }y\in f(B) \Longrightarrow \exists x\in A \text{ and } B, \text{ such that } f(x)=y$$ You have a problem here. $y\in f(A)\text{ and }y\in f(B) \Longrightarrow \exists x_1\in A \text{ and } x_2 \in B, \text{ such that } f(x_1)=y=f(x_2).$ These $x_1$ and $x_2$ need not be equal. This is where you need the injectivity of $f$ to conclude that $x_1=x_2.$

Answer (Cameron Buie):

True: if $y\in f(A)\cap f(B),$ then (1) there exists $x\in A$ such that $f(x)=y,$ and (2) there exists $x\in B$ such that $f(x)=y.$ The error is in assuming that these refer to a particular $x$ and that the $x$ is the same in both cases! Rather, we should interpret (1) as telling us that $\{x\in A:f(x)=y\}\neq\emptyset,$ and interpret (2) similarly. However, without injectivity, we can't necessarily conclude that $$\{x\in A:f(x)=y\}\cap\{x\in B:f(x)=y\}\neq\emptyset,$$ which is equivalent to saying that there is some $x\in A\cap B$ such that $f(x)=y.$
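For readers who like to see the counterexample from the answers above checked concretely, here is a small Python snippet. It is purely illustrative; the function and the sets are the ones from the counterexample.

def image(f, s):
    """Image of the set s under the function f."""
    return {f(x) for x in s}

f = {1: 0, 2: 0}.get              # the non-injective map with f(1) = f(2) = 0
A, B = {1}, {2}

lhs = image(f, A & B)             # f(A ∩ B) = f(∅) = ∅
rhs = image(f, A) & image(f, B)   # f(A) ∩ f(B) = {0}

assert lhs == set() and rhs == {0}
assert lhs <= rhs and lhs != rhs  # the inclusion holds, but it is strict here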
Results for 'Algorithmic information theory'

Algorithmic information theory and undecidability. Panu Raatikainen - 2000 - Synthese 123 (2):217-225.
Chaitin's incompleteness result related to random reals and the halting probability has been advertised as the ultimate and the strongest possible version of the incompleteness and undecidability theorems. It is argued that such claims are exaggerations.

Toward an algorithmic metaphysics. Steve Petersen - 2013 - In David Dowe (ed.), Algorithmic Probability and Friends: Bayesian Prediction and Artificial Intelligence. Springer. pp. 306-317.
There are writers in both metaphysics and algorithmic information theory (AIT) who seem to think that the latter could provide a formal theory of the former. This paper is intended as a step in that direction. It demonstrates how AIT might be used to define basic metaphysical notions such as *object* and *property* for a simple, idealized world. The extent to which these definitions capture intuitions about the metaphysics of the simple world, times the extent to which we think the simple world is analogous to our own, will determine a lower bound for basing a metaphysics for *our* world on AIT.

Semantic Information G Theory and Logical Bayesian Inference for Machine Learning. Chenguang Lu - 2019 - Information 10 (8):261.
An important problem with machine learning is that when label number n>2, it is very difficult to construct and optimize a group of learning functions, and we wish that optimized learning functions are still useful when the prior distribution P(x) (where x is an instance) is changed. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory consists of a group of truth functions or membership functions. In comparison with likelihood functions, Bayesian posteriors, and Logistic functions used by popular methods, membership functions can be more conveniently used as learning functions without the above problem. In Logical Bayesian Inference (LBI), every label's learning is independent. For multilabel learning, we can directly obtain a group of optimized membership functions from a big enough sample with labels, without preparing different samples for different labels. A group of Channel Matching (CM) algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions on a two-dimensional feature space, 2-3 iterations can make mutual information between three classes and three labels surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved and becomes the CM-EM algorithm, which can outperform the EM algorithm when mixture ratios are imbalanced, or local convergence exists. The CM iteration algorithm needs to combine neural networks for MMI classifications on high-dimensional feature spaces. LBI needs further studies for the unification of statistics and logic.

Algorithms and Arguments: The Foundational Role of the ATAI-question. Paola Cantu' & Italo Testa - 2011 - In Frans H. van Eemeren, Bart Garssen, David Godden & Gordon Mitchell (eds.), Proceedings of the Seventh International Conference of the International Society for the Study of Argumentation (pp. 192-203).
Rozenberg / Sic Sat.details Argumentation theory underwent a significant development in the Fifties and Sixties: its revival is usually connected to Perelman's criticism of formal logic and the development of informal logic. Interestingly enough it was during this period that Artificial Intelligence was developed, which defended the following thesis (from now on referred to as the AI-thesis): human reasoning can be emulated by machines. The paper suggests a reconstruction of the opposition between formal and informal logic as a move against a premise of (...) an argument for the AI-thesis, and suggests making a distinction between a broad and a narrow notion of algorithm that might be used to reformulate the question as a foundational problem for argumentation theory. (shrink) Is Evolution Algorithmic?Marcin Miłkowski - 2009 - Minds and Machines 19 (4):465-475.details In Darwin's Dangerous Idea, Daniel Dennett claims that evolution is algorithmic. On Dennett's analysis, evolutionary processes are trivially algorithmic because he assumes that all natural processes are algorithmic. I will argue that there are more robust ways to understand algorithmic processes that make the claim that evolution is algorithmic empirical and not conceptual. While laws of nature can be seen as compression algorithms of information about the world, it does not follow logically that they (...) are implemented as algorithms by physical processes. For that to be true, the processes have to be part of computational systems. The basic difference between mere simulation and real computing is having proper causal structure. I will show what kind of requirements this poses for natural evolutionary processes if they are to be computational. (shrink) Information recovery problems.John Corcoran - 1995 - Theoria 10 (3):55-78.details An information recovery problem is the problem of constructing a proposition containing the information dropped in going from a given premise to a given conclusion that folIows. The proposition(s) to beconstructed can be required to satisfy other conditions as well, e.g. being independent of the conclusion, or being "informationally unconnected" with the conclusion, or some other condition dictated by the context. This paper discusses various types of such problems, it presents techniques and principles useful in solving them, and (...) it develops algorithmic methods for certain classes of such problems. The results are then applied to classical number theory, in particular, to questions concerning possible refinements of the 1931 Gödel Axiom Set, e.g. whether any of its axioms can be analyzed into "informational atoms". Two propositions are "informationally unconnected" [with each other] if no informative (nontautological) consequence of one also follows from the other. A proposition is an "informational atom" if it is informative but no information can be dropped from it without rendering it uninformative (tautological). Presentation, employment, and investigation of these two new concepts are prominent features of this paper. (shrink) Information, learning and falsification.David Balduzzi - 2011details There are (at least) three approaches to quantifying information. The first, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it [1]. 
The second, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number of alternate events that have been ruled (...) out [2]. The third, statistical learning theory, has introduced measures of capacity that control (in part) the expected risk of classifiers [3]. These capacities quantify the expectations regarding future data that learning algorithms embed into classifiers. Solomonoff and Hutter have applied algorithmic information to prove remarkable results on universal induction. Shannon information provides the mathematical foundation for communication and coding theory. However, both approaches have shortcomings. Algorithmic information is not computable, severely limiting its practical usefulness. Shannon information refers to ensembles rather than actual events: it makes no sense to compute the Shannon information of a single string – or rather, there are many answers to this question depending on how a related ensemble is constructed. Although there are asymptotic results linking algorithmic and Shannon information, it is unsatisfying that there is such a large gap – a difference in kind – between the two measures. This note describes a new method of quantifying information, effective information, that links algorithmic information to Shannon information, and also links both to capacities arising in statistical learning theory [4, 5]. After introducing the measure, we show that it provides a non-universal analog of Kolmogorov complexity. We then apply it to derive basic capacities in statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. A nice byproduct of our approach is an interpretation of the explanatory power of a learning algorithm in terms of the number of hypotheses it falsifies [6], counted in two different ways for the two capacities. We also discuss how effective information relates to information gain, Shannon and mutual information. (shrink) Complexity and information.Panu Raatikainen - 1998 - ", Reports From the Department of Philosophy, University of Helsinki, 2.details \Complexity" is a catchword of certain extremely popular and rapidly developing interdisciplinary new sciences, often called accordingly the sciences of complexity1. It is often closely associated with another notably popular but ambiguous word, \information" information, in turn, may be justly called the central new concept in the whole 20th century science. Moreover, the notion of information is regularly coupled with a key concept of thermodynamics, viz. entropy. And like this was not enough, it is quite usual to (...) add one more, at present extraordinarily popular notion, namely chaos, and wed it with the above-mentioned concepts. (shrink) Preface to Forenames of God: Enumerations of Ernesto Laclau toward a Political Theology of Algorithms.Virgil W. Brower - 2021 - Internationales Jahrbuch Für Medienphilosophie 7 (1):243-251.details Perhaps nowhere better than, "On the Names of God," can readers discern Laclau's appreciation of theology, specifically, negative theology, and the radical potencies of political theology. // It is Laclau's close attention to Eckhart and Dionysius in this essay that reveals a core theological strategy to be learned by populist reasons or social logics and applied in politics or democracies to come. // This mode of algorithmically informed negative political theology is not mathematically inert. It aspires to relate a fraction (...) 
or ratio to a series ... It strains to reduce the decided determinateness of such seriality ever condemned to the naive metaphysics of bad infinity. // It is worth considering that it is the specific 'number' of Dionysius in differential identification with an ineffable god (and, as such, a singular becoming between theology and numbers) that is floating in at least two dimensions [of signification] (be it political Demand on the horizontal dimension or theological Desire on [a] floating dimension) that cannot but *perform the link that relinks* names of god with any political life, populist reason, social justice, or radical democracy straining toward peace. (shrink) Genealogy of Algorithms: Datafication as Transvaluation.Virgil W. Brower - 2020 - le Foucaldien 6 (1):1-43.details This article investigates religious ideals persistent in the datafication of information society. Its nodal point is Thomas Bayes, after whom Laplace names the primal probability algorithm. It reconsiders their mathematical innovations with Laplace's providential deism and Bayes' singular theological treatise. Conceptions of divine justice one finds among probability theorists play no small part in the algorithmic data-mining and microtargeting of Cambridge Analytica. Theological traces within mathematical computation are emphasized as the vantage over large numbers shifts to weights beyond (...) enumeration in probability theory. Collateral secularizations of predestination and theodicy emerge as probability optimizes into Bayesian prediction and machine learning. The paper revisits the semiotics and theism of Peirce and a given beyond the probable in Whitehead to recontextualize the critiques of providence by Agamben and Foucault. It reconsiders datafication problems alongside Nietzschean valuations. Religiosity likely remains encoded within the very algorithms presumed purified by technoscientific secularity or mathematical dispassion. (shrink) Information of the chassis and information of the program in synthetic cells.Antoine Danchin - 2009 - Systems and Synthetic Biology 3:125-134.details Synthetic biology aims at reconstructing life to put to the test the limits of our understanding. It is based on premises similar to those which permitted invention of computers, where a machine, which reproduces over time, runs a program, which replicates. The underlying heuristics explored here is that an authentic category of reality, information, must be coupled with the standard categories, matter, energy, space and time to account for what life is. The use of this still elusive category permits (...) us to interact with reality via construction of self-consistent models producing predictions which can be instantiated into experiments. While the present theory of information has much to say about the program, with the creative properties of recursivity at its heart, we almost entirely lack a theory of the information supporting the machine. We suggest that the program of life codes for processes meant to trap information which comes from the context provided by the environment of the machine. (shrink) Theory Choice and Social Choice: Okasha versus Sen.Jacob Stegenga - 2015 - Mind 124 (493):263-277.details A platitude that took hold with Kuhn is that there can be several equally good ways of balancing theoretical virtues for theory choice. 
Okasha recently modelled theory choice using technical apparatus from the domain of social choice: famously, Arrow showed that no method of social choice can jointly satisfy four desiderata, and each of the desiderata in social choice has an analogue in theory choice. Okasha suggested that one can avoid the Arrow analogue for theory choice (...) by employing a strategy used by Sen in social choice, namely, to enhance the information made available to the choice algorithms. I argue here that, despite Okasha's claims to the contrary, the information-enhancing strategy is not compelling in the domain of theory choice. (shrink) Agency Laundering and Information Technologies.Alan Rubel, Clinton Castro & Adam Pham - 2019 - Ethical Theory and Moral Practice 22 (4):1017-1041.details When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call "agency laundering." At root, agency laundering involves obfuscating one's moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number (...) of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook's automated advertising suggestions, Uber's driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called "responsibility gap," "bias laundering," and masking. (shrink) The purpose of qualia: What if human thinking is not (only) information processing?Martin Korth - manuscriptdetails Despite recent breakthroughs in the field of artificial intelligence (AI) – or more specifically machine learning (ML) algorithms for object recognition and natural language processing – it seems to be the majority view that current AI approaches are still no real match for natural intelligence (NI). More importantly, philosophers have collected a long catalogue of features which imply that NI works differently from current AI not only in a gradual sense, but in a more substantial way: NI is closely related (...) to consciousness, intentionality and experiential features like qualia (the subjective contents of mental states)1 and allows for understanding (e.g., taking insight into causal relationships instead of 'blindly' relying on correlations), as well as aesthetical and ethical judgement beyond what we can put into (explicit or data-induced implicit) rules to program machines with. Additionally, Psychologists find NI to range from unconscious psychological processes to focused information processing, and from embodied and implicit cognition to 'true' agency and creativity. NI thus seems to transcend any neurobiological functionalism by operating on 'bits of meaning' instead of information in the sense of data, quite unlike both the 'good old fashioned', symbolic AI of the past, as well as the current wave of deep neural network based, 'sub-symbolic' AI, which both share the idea of thinking as (only) information processing: In symbolic AI, the name explicitly references to its formal system based, i.e. 
essentially rule-based, nature, but also sub-symbolic AI is (implicitly) rule-based, only now via globally parametrized, nested functions. In the following I propose an alternative view of NI as information processing plus 'bundle pushing', discuss an example which illustrates how bundle pushing can cut information processing short,and suggest first ideas for scientific experiments in neuro-biology and information theory as further investigations. (shrink) Historical and Conceptual Foundations of Information Physics.Anta Javier - 2021 - Dissertation, Universitat de Barcelonadetails The main objective of this dissertation is to philosophically assess how the use of informational concepts in the field of classical thermostatistical physics has historically evolved from the late 1940s to the present day. I will first analyze in depth the main notions that form the conceptual basis on which 'informational physics' historically unfolded, encompassing (i) different entropy, probability and information notions, (ii) their multiple interpretative variations, and (iii) the formal, numerical and semantic-interpretative relationships among them. In the following, (...) I will assess the history of informational thermophysics during the second half of the twentieth century. Firstly, I analyse the intellectual factors that gave rise to this current in the late forties (i.e., popularization of Shannon's theory, interest in a naturalized epistemology of science, etc.), then study its consolidation in the Brillouinian and Jaynesian programs, and finally claim how Carnap (1977) and his disciples tried to criticize this tendency within the scientific community. Then, I evaluate how informational physics became a predominant intellectual current in the scientific community in the nineties, made possible by the convergence of Jaynesianism and Brillouinism in proposals such as that of Tribus and McIrvine (1971) or Bekenstein (1973) and the application of algorithmic information theory into the thermophysical domain. As a sign of its radicality at this historical stage, I explore the main proposals to include information as part of our physical reality, such as Wheeler's (1990), Stonier's (1990) or Landauer's (1991), detailing the main philosophical arguments (e.g., Timpson, 2013; Lombardi et al. 2016a) against those inflationary attitudes towards information. Following this historical assessment, I systematically analyze whether the descriptive exploitation of informational concepts has historically contributed to providing us with knowledge of thermophysical reality via (i) explaining thermal processes such as equilibrium approximation, (ii) advantageously predicting thermal phenomena, or (iii) enabling understanding of thermal property such as thermodynamic entropy. I argue that these epistemic shortcomings would make it impossible to draw ontological conclusions in a justified way about the physical nature of information. In conclusion, I will argue that the historical exploitation of informational concepts has not contributed significantly to the epistemic progress of thermophysics. This would lead to characterize informational proposals as 'degenerate science' (à la Lakatos 1978a) regarding classical thermostatistical physics or as theoretically underdeveloped regarding the study of the cognitive dynamics of scientists in this physical domain. 
(shrink) The unsolvability of the mind-body problem liberates the will.Scheffel Jan - manuscriptdetails The mind-body problem is analyzed in a physicalist perspective. By combining the concepts of emergence and algorithmic information theory in a thought experiment employing a basic nonlinear process, it is argued that epistemically strongly emergent properties may develop in a physical system. A comparison with the significantly more complex neural network of the brain shows that also consciousness is epistemically emergent in a strong sense. Thus reductionist understanding of consciousness appears not possible; the mind-body problem does not (...) have a reductionist solution. The ontologically emergent character of consciousness is then identified from a combinatorial analysis relating to system limits set by quantum mechanics, implying that consciousness is fundamentally irreducible to low-level phenomena. In the perspective of a modified definition of free will, the character of the physical interactions of the brain's neural system is subsequently studied. As an ontologically open system, it is asserted that its future states are undeterminable in principle. We argue that this leads to freedom of the will. (shrink) Type-2 Fuzzy Sets and Newton's Fuzzy Potential in an Algorithm of Classification Objects of a Conceptual Space.Adrianna Jagiełło, Piotr Lisowski & Roman Urban - 2022 - Journal of Logic, Language and Information 31 (3):389-408.details This paper deals with Gärdenfors' theory of conceptual spaces. Let \({\mathcal {S}}\) be a conceptual space consisting of 2-type fuzzy sets equipped with several kinds of metrics. Let a finite set of prototypes \(\tilde{P}_1,\ldots,\tilde{P}_n\in \mathcal {S}\) be given. Our main result is the construction of a classification algorithm. That is, given an element \({\tilde{A}}\in \mathcal {S},\) our algorithm classifies it into the conceptual field determined by one of the given prototypes \(\tilde{P}_i.\) The construction of our algorithm uses some physical (...) analogies and the Newton potential plays a significant role here. Importantly, the resulting conceptual fields are not convex in the Euclidean sense, which we believe is a reasonable departure from the assumptions of Gardenfors' original definition of the conceptual space. A partitioning algorithm of the space \(\mathcal {S}\) is also considered in the paper. In the application section, we test our classification algorithm on real data and obtain very satisfactory results. Moreover, the example we consider is another argument against requiring convexity of conceptual fields. (shrink) A compromise between reductionism and non-reductionism.Eray Özkural - 2007 - In Carlos Gershenson, Diederik Aerts & Bruce Edmonds (eds.), Worldviews, Science, and Us: Philosophy and Complexity. World Scientific. pp. 285.details This paper investigates the seeming incompatibility of reductionism and non-reductionism in the context of complexity sciences. I review algorithmic information theory for this purpose. I offer two physical metaphors to form a better understanding of algorithmic complexity, and I briefly discuss its advantages, shortcomings and applications. Then, I revisit the non-reductionist approaches in philosophy of mind which are often arguments from ignorance to counter physicalism. A new approach called mild non-reductionism is proposed which reconciliates the necessities (...) of acknowledging irreducibility found in complex systems, and maintaining physicalism. 
(shrink) Cognition according to Quantum Information: Three Epistemological Puzzles Solved.Vasil Penchev - 2020 - Epistemology eJournal (Elsevier: SSRN) 13 (20):1-15.details The cognition of quantum processes raises a series of questions about ordering and information connecting the states of one and the same system before and after measurement: Quantum measurement, quantum in-variance and the non-locality of quantum information are considered in the paper from an epistemological viewpoint. The adequate generalization of 'measurement' is discussed to involve the discrepancy, due to the fundamental Planck constant, between any quantum coherent state and its statistical representation as a statistical ensemble after measurement. Quantum (...) in-variance designates the relation of any quantum coherent state to the corresponding statistical ensemble of measured results. A set-theory corollary is the curious in-variance to the axiom of choice: Any coherent state excludes any well-ordering and thus excludes also the axiom of choice. However the above equivalence requires it to be equated to a well-ordered set after measurement and thus requires the axiom of choice for it to be able to be obtained. Quantum in-variance underlies quantum information and reveals it as the relation of an unordered quantum "much" (i.e. a coherent state) and a well-ordered "many" of the measured results (i.e. a statistical ensemble). It opens up to a new horizon, in which all physical processes and phenomena can be interpreted as quantum computations realizing relevant operations and algorithms on quantum information. All phenomena of entanglement can be described in terms of the so defined quantum information. Quantum in-variance elucidates the link between general relativity and quantum mechanics and thus, the problem of quantum gravity. The non-locality of quantum information unifies the exact position of any space-time point of a smooth trajectory and the common possibility of all space-time points due to a quantum leap. This is deduced from quantum in-variance. Epistemology involves the relation of ordering and thus a generalized kind of information, quantum one, to explain the special features of the cognition in quantum mechanics. (shrink) Statements and open problems on decidable sets X⊆N that contain informal notions and refer to the current knowledge on X.Apoloniusz Tyszka - 2022 - Journal of Applied Computer Science and Mathematics 16 (2):31-35.details Let f(1)=2, f(2)=4, and let f(n+1)=f(n)! for every integer n≥2. Edmund Landau's conjecture states that the set P(n^2+1) of primes of the form n^2+1 is infinite. Landau's conjecture implies the following unproven statement Φ: card(P(n^2+1))<ω ⇒ P(n^2+1)⊆[2,f(7)]. Let B denote the system of equations: {x_j!=x_k: i,k∈{1,...,9}}∪{x_i⋅x_j=x_k: i,j,k∈{1,...,9}}. The system of equations {x_1!=x_1, x_1 \cdot x_1=x_2, x_2!=x_3, x_3!=x_4, x_4!=x_5, x_5!=x_6, x_6!=x_7, x_7!=x_8, x_8!=x_9} has exactly two solutions in positive integers x_1,...,x_9, namely (1,...,1) and (f(1),...,f(9)). No known system S⊆B with a finite (...) number of solutions in positive integers x_1,...,x_9 has a solution (x_1,...,x_9)∈(N\{0})^9 satisfying max(x_1,...,x_9)>f(9). For every known system S⊆B, if the finiteness/infiniteness of the set {(x_1,...,x_9)∈(N\{0})^9: (x_1,...,x_9) solves S} is unknown, then the statement ∃ x_1,...,x_9∈N\{0} ((x_1,...,x_9) solves S)∧(max(x_1,...,x_9)>f(9)) remains unproven. 
Let Λ denote the statement: if the system of equations {x_2!=x_3, x_3!=x_4, x_5!=x_6, x_8!=x_9, x_1 \cdot x_1=x_2, x_3 \cdot x_5=x_6, x_4 \cdot x_8=x_9, x_5 \cdot x_7=x_8} has at most finitely many solutions in positive integers x_1,...,x_9, then each such solution (x_1,...,x_9) satisfies x_1,...,x_9≤f(9). The statement Λ is equivalent to the statement Φ. It heuristically justifies the statement Φ . This justification does not yield the finiteness/infiniteness of P(n^2+1). We present a new heuristic argument for the infiniteness of P(n^2+1), which is not based on the statement Φ. Algorithms always terminate. We explain the distinction between existing algorithms (i.e. algorithms whose existence is provable in ZFC) and known algorithms (i.e. algorithms whose definition is constructive and currently known). Assuming that the infiniteness of a set X⊆N is false or unproven, we define which elements of X are classified as known. No known set X⊆N satisfies Conditions (1)-(4) and is widely known in number theory or naturally defined, where this term has only informal meaning. *** (1) A known algorithm with no input returns an integer n satisfying card(X)<ω ⇒ X⊆(-∞,n]. (2) A known algorithm for every k∈N decides whether or not k∈X. (3) No known algorithm with no input returns the logical value of the statement card(X)=ω. (4) There are many elements of X and it is conjectured, though so far unproven, that X is infinite. (5) X is naturally defined. The infiniteness of X is false or unproven. X has the simplest definition among known sets Y⊆N with the same set of known elements. *** Conditions (2)-(5) hold for X=P(n^2+1). The statement Φ implies Condition (1) for X=P(n^2+1). The set X={n∈N: the interval [-1,n] contains more than 29.5+\frac{11!}{3n+1}⋅sin(n) primes of the form k!+1} satisfies Conditions (1)-(5) except the requirement that X is naturally defined. 501893∈X. Condition (1) holds with n=501893. card(X∩[0,501893])=159827. X∩[501894,∞)= {n∈N: the interval [-1,n] contains at least 30 primes of the form k!+1}. We present a table that shows satisfiable conjunctions of the form #(Condition 1) ∧ (Condition 2) ∧ #(Condition 3) ∧ (Condition 4) ∧ #(Condition 5), where # denotes the negation ¬ or the absence of any symbol. No set X⊆N will satisfy Conditions (1)-(4) forever, if for every algorithm with no input, at some future day, a computer will be able to execute this algorithm in 1 second or less. The physical limits of computation disprove this assumption. (shrink) On interpreting Chaitin's incompleteness theorem.Panu Raatikainen - 1998 - Journal of Philosophical Logic 27 (6):569-586.details The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be (...) good measure of the strength of the theory. I exhibit certain strong counterexamples and establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental. 
(shrink) Informational Theories of Content and Mental Representation.Marc Artiga & Miguel Ángel Sebastián - 2020 - Review of Philosophy and Psychology 11 (3):613-627.details Informational theories of semantic content have been recently gaining prominence in the debate on the notion of mental representation. In this paper we examine new-wave informational theories which have a special focus on cognitive science. In particular, we argue that these theories face four important difficulties: they do not fully solve the problem of error, fall prey to the wrong distality attribution problem, have serious difficulties accounting for ambiguous and redundant representations and fail to deliver a metasemantic theory of (...) representation. Furthermore, we argue that these difficulties derive from their exclusive reliance on the notion of information, so we suggest that pure informational accounts should be complemented with functional approaches. (shrink) Integrated Information Theory, Intrinsicality, and Overlapping Conscious Systems.James C. Blackmon - 2021 - Journal of Consciousness Studies 28 (11-12):31-53.details Integrated Information Theory (IIT) identifies consciousness with having a maximum amount of integrated information. But a thing's having the maximum amount of anything cannot be intrinsic to it, for that depends on how that thing compares to certain other things. IIT's consciousness, then, is not intrinsic. A mereological argument elaborates this consequence: IIT implies that one physical system can be conscious while a physical duplicate of it is not conscious. Thus, by a common and reasonable conception of (...) intrinsicality, IIT's consciousness is not intrinsic. It is then argued that to avoid the implication that consciousness is not intrinsic, IIT must abandon its Exclusion Postulate, which prohibits overlapping conscious systems. Indeed, theories of consciousness that attribute consciousness to physical systems, should embrace the view that some conscious systems overlap. A discussion of the admittedly counterintuitive nature of this solution, along with some medical and neuroscientific realities that would seem to support it, is included. (shrink) Is the Integrated Information Theory of Consciousness Compatible with Russellian Panpsychism?Hedda Hassel Mørch - 2019 - Erkenntnis 84 (5):1065-1085.details The Integrated Information Theory is a leading scientific theory of consciousness, which implies a kind of panpsychism. In this paper, I consider whether IIT is compatible with a particular kind of panpsychism, known as Russellian panpsychism, which purports to avoid the main problems of both physicalism and dualism. I will first show that if IIT were compatible with Russellian panpsychism, it would contribute to solving Russellian panpsychism's combination problem, which threatens to show that the view does not (...) avoid the main problems of physicalism and dualism after all. I then show that the theories are not compatible as they currently stand, in view of what I call the coarse-graining problem. After I explain the coarse-graining problem, I will offer two possible solutions, each involving a small modification of IIT. Given either of these modifications, IIT and Russellian panpsychism may be fully compatible after all, and jointly enable significant progress on the mind–body problem. (shrink) Exploring Randomness.Panu Raatikainen - 2001 - Notices of the AMS 48 (9):992-6.details Review of "Exploring Randomness" (200) and "The Unknowable" (1999) by Gregory Chaitin. 
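A side note on the Chaitin material above: the Kolmogorov complexity of a string can always be bounded from above by handing the string to any concrete compressor, and Chaitin's theorem concerns the opposite direction, provable lower bounds. The fragment below only illustrates the upper-bound side; zlib as the compressor, the test strings and the byte lengths are my own arbitrary choices.

import random
import zlib

def complexity_upper_bound(data):
    # Length of a zlib-compressed description: a crude, compressor-relative
    # upper bound on the Kolmogorov complexity of `data`.
    return len(zlib.compress(data, 9))

patterned = b"ab" * 500                                       # highly regular 1000-byte string
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))     # typical-looking bytes
print(complexity_upper_bound(patterned))   # small: the pattern admits a short description
print(complexity_upper_bound(noisy))       # close to 1000: no short description was found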
Two Informational Theories of Memory: a case from Memory-Conjunction Errors.Danilo Fraga Dantas - 2020 - Disputatio 12 (59):395-431.details The causal and simulation theories are often presented as very distinct views about declarative memory, their major difference lying on the causal condition. The causal theory states that remembering involves an accurate representation causally connected to an earlier experience. In the simulation theory, remembering involves an accurate representation generated by a reliable memory process. I investigate how to construe detailed versions of these theories that correctly classify memory errors as misremembering or confabulation. Neither causalists nor simulationists have paid (...) attention to memory-conjunction errors, which is unfortunate because both theories have problems with these cases. The source of the difficulty is the background assumption that an act of remembering has one target. I fix these theories for those cases. The resulting versions are closely related when implemented using tools of information theory, differing only on how memory transmits information about the past. The implementation provides us with insights about the distinction between confabulatory and non-confabulatory memory, where memory-conjunction errors have a privileged position. (shrink) On the Solvability of the Mind-Body Problem.Jan Scheffel - manuscriptdetails The mind-body problem is analyzed in a physicalist perspective. By combining the concepts of emergence and algorithmic information theory in a thought experiment employing a basic nonlinear process, it is shown that epistemically strongly emergent properties may develop in a physical system. Turning to the significantly more complex neural network of the brain it is subsequently argued that consciousness is epistemically emergent. Thus reductionist understanding of consciousness appears not possible; the mind-body problem does not have a reductionist (...) solution. The ontologically emergent character of consciousness is then identified from a combinatorial analysis relating to universal limits set by quantum mechanics, implying that consciousness is fundamentally irreducible to low-level phenomena. (shrink) Can Informational Theories Account for Metarepresentation?Miguel Ángel Sebastián & Marc Artiga - 2020 - Topoi 39 (1):81-94.details In this essay we discuss recent attempts to analyse the notion of representation, as it is employed in cognitive science, in purely informational terms. In particular, we argue that recent informational theories cannot accommodate the existence of metarepresentations. Since metarepresentations play a central role in the explanation of many cognitive abilities, this is a serious shortcoming of these proposals. An Informational Theory of Counterfactuals.Danilo Dantas - 2018 - Acta Analytica 33 (4):525-538.details Backtracking counterfactuals are problem cases for the standard, similarity based, theories of counterfactuals e.g., Lewis. These theories usually need to employ extra-assumptions to deal with those cases. Hiddleston, 632–657, 2005) proposes a causal theory of counterfactuals that, supposedly, deals well with backtracking. The main advantage of the causal theory is that it provides a unified account for backtracking and non-backtracking counterfactuals. In this paper, I present a backtracking counterfactual that is a problem case for Hiddleston's account. Then I (...) 
propose an informational theory of counterfactuals, which deals well with this problem case while maintaining the main advantage of Hiddleston's account. In addition, the informational theory offers a general theory of backtracking that provides clues for the semantics and epistemology of counterfactuals. I propose that backtracking is reasonable when the state of affairs expressed in the antecedent of a counterfactual transmits less information about an event in the past than the actual state of affairs. (shrink) Information Theory is abused in neuroscience.Lance Nizami - 2019 - Cybernetics and Human Knowing 26 (4):47-97.details In 1948, Claude Shannon introduced his version of a concept that was core to Norbert Wiener's cybernetics, namely, information theory. Shannon's formalisms include a physical framework, namely a general communication system having six unique elements. Under this framework, Shannon information theory offers two particularly useful statistics, channel capacity and information transmitted. Remarkably, hundreds of neuroscience laboratories subsequently reported such numbers. But how (and why) did neuroscientists adapt a communications-engineering framework? Surprisingly, the literature offers no clear (...) answers. To therefore first answer "how", 115 authoritative peer-reviewed papers, proceedings, books and book chapters were scrutinized for neuroscientists' characterizations of the elements of Shannon's general communication system. Evidently, many neuroscientists attempted no identification of the system's elements. Others identified only a few of Shannon's system's elements. Indeed, the available neuroscience interpretations show a stunning incoherence, both within and across studies. The interpretational gamut implies hundreds, perhaps thousands, of different possible neuronal versions of Shannon's general communication system. The obvious lack of a definitive, credible interpretation makes neuroscience calculations of channel capacity and information transmitted meaningless. To now answer why Shannon's system was ever adapted for neuroscience, three common features of the neuroscience literature were examined: ignorance of the role of the observer, the presumption of "decoding" of neuronal voltage-spike trains, and the pursuit of ingrained analogies such as information, computation, and machine. Each of these factors facilitated a plethora of interpretations of Shannon's system elements. Finally, let us not ignore the impact of these "informational misadventures" on society at large. It is the same impact as scientific fraud. (shrink) On Action Theory Change.Ivan José Varzinczak - 2010 - Journal of Artificial Intelligence Research 37 (1):189-246.details As historically acknowledged in the Reasoning about Actions and Change community, intuitiveness of a logical domain description cannot be fully automated. Moreover, like any other logical theory, action theories may also evolve, and thus knowledge engineers need revision methods to help in accommodating new incoming information about the behavior of actions in an adequate manner. The present work is about changing action domain descriptions in multimodal logic. Its contribution is threefold: first we revisit the semantics of action (...) class='Hi'>theory contraction proposed in previous work, giving more robust operators that express minimal change based on a notion of distance between Kripke-models. 
Second we give algorithms for syntactical action theory contraction and establish their correctness with respect to our semantics for those action theories that satisfy a principle of modularity investigated in previous work. Since modularity can be ensured for every action theory and, as we show here, needs to be computed at most once during the evolution of a domain description, it does not represent a limitation at all to the method here studied. Finally we state AGM-like postulates for action theory contraction and assess the behavior of our operators with respect to them. Moreover, we also address the revision counterpart of action theory change, showing that it benefits from our semantics for contraction. (shrink) Kuznetsov V. From studying theoretical physics to philosophical modeling scientific theories: Under influence of Pavel Kopnin and his school.Volodymyr Kuznetsov - 2017 - ФІЛОСОФСЬКІ ДІАЛОГИ'2016 ІСТОРІЯ ТА СУЧАСНІСТЬ У НАУКОВИХ РОЗМИСЛАХ ІНСТИТУТУ ФІЛОСОФІЇ 11:62-92.details The paper explicates the stages of the author's philosophical evolution in the light of Kopnin's ideas and heritage. Starting from Kopnin's understanding of dialectical materialism, the author has stated that category transformations of physics has opened from conceptualization of immutability to mutability and then to interaction, evolvement and emergence. He has connected the problem of physical cognition universals with an elaboration of the specific system of tools and methods of identifying, individuating and distinguishing objects from a scientific theory domain. (...) The role of vacuum conception and the idea of existence (actual and potential, observable and nonobservable, virtual and hidden) types were analyzed. In collaboration with S.Crymski heuristic and regulative functions of categories of substance, world as a whole as well as postulates of relativity and absoluteness, and anthropic and self-development principles were singled out. Elaborating Kopnin's view of scientific theories as a practically effective and relatively true mapping of their domains, the author in collaboration with M. Burgin have originated the unified structure-nominative reconstruction (model) of scientific theory as a knowledge system. According to it, every scientific knowledge system includes hierarchically organized and complex subsystems that partially and separately have been studied by standard, structuralist, operationalist, problem-solving, axiological and other directions of the current philosophy of science. 1) The logico-linguistic subsystem represents and normalizes by means of different, including mathematical, languages and normalizes and logical calculi the knowledge available on objects under study. 2) The model-representing subsystem comprises peculiar to the knowledge system ways of their modeling and understanding. 3) The pragmatic-procedural subsystem contains general and unique to the knowledge system operations, methods, procedures, algorithms and programs. 4) From the viewpoint of the problem-heuristic subsystem, the knowledge system is a unique way of setting and resolving questions, problems, puzzles and tasks of cognition of objects into question. It also includes various heuristics and estimations (truth, consistency, beauty, efficacy, adequacy, heuristicity etc) of components and structures of the knowledge system. 5) The subsystem of links fixes interrelations between above-mentioned components, structures and subsystems of the knowledge system. 
The structure-nominative reconstruction has been used in the philosophical and comparative case-studies of mathematical, physical, economic, legal, political, pedagogical, social, and sociological theories. It has enlarged the collection of knowledge structures, connected, for instance, with a multitude of theoreticity levels and with an application of numerous mathematical languages. It has deepened the comprehension of relations between the main directions of current philosophy of science. They are interpreted as dealing mainly with isolated subsystems of scientific theory. This reconstruction has disclosed a variety of undetected knowledge structures, associated also, for instance, with principles of symmetry and supersymmetry and with laws of various levels and degrees. In cooperation with the physicist Olexander Gabovich the modified structure-nominative reconstruction is in the processes of development and justification. Ideas and concepts were also in the center of Kopnin's cognitive activity. The author has suggested and elaborated the triplet model of concepts. According to it, any scientific concept is a dependent on cognitive situation, dynamical, multifunctional state of scientist's thinking, and available knowledge system. A concept is modeled as being consisted from three interrelated structures. 1) The concept base characterizes objects falling under a concept as well as their properties and relations. In terms of volume and content the logical modeling reveals partially only the concept base. 2) The concept representing part includes structures and means (names, statements, abstract properties, quantitative values of object properties and relations, mathematical equations and their systems, theoretical models etc.) of object representation in the appropriate knowledge system. 3) The linkage unites a structures and procedures that connect components from the abovementioned structures. The partial cases of the triplet model are logical, information, two-tired, standard, exemplar, prototype, knowledge-dependent and other concept models. It has introduced the triplet classification that comprises several hundreds of concept types. Different kinds of fuzziness are distinguished. Even the most precise and exact concepts are fuzzy in some triplet aspect. The notions of relations between real scientific concepts are essentially extended. For example, the definition and strict analysis of such relations between concepts as formalization, quantification, mathematization, generalization, fuzzification, and various kinds of identity are proposed. The concepts «PLANET» and «ELEMENTARY PARTICLE» and some of their metamorphoses were analyzed in triplet terms. The Kopnin's methodology and epistemology of cognition was being used for creating conception of the philosophy of law as elaborating of understanding, justification, estimating and criticizing legal system. The basic information on the major directions in current Western philosophy of law (legal realism, feminism, criticism, postmodernism, economical analysis of law etc.) is firstly introduced to the Ukrainian audience. The classification of more than fifty directions in modern legal philosophy is suggested. Some results of historical, linguistic, scientometric and philosophic-legal studies of the present state of Ukrainian academic science are given. 
(shrink) Simulating Grice: Emergent Pragmatics in Spatialized Game Theory.Patrick Grim - 2011 - In Anton Benz, Christian Ebert & Robert van Rooij (eds.), Language, Games, and Evolution. Springer-Verlag.details How do conventions of communication emerge? How do sounds or gestures take on a semantic meaning, and how do pragmatic conventions emerge regarding the passing of adequate, reliable, and relevant information? My colleagues and I have attempted in earlier work to extend spatialized game theory to questions of semantics. Agent-based simulations indicate that simple signaling systems emerge fairly naturally on the basis of individual information maximization in environments of wandering food sources and predators. Simple signaling emerges by (...) means of any of various forms of updating on the behavior of immediate neighbors: imitation, localized genetic algorithms, and partial training in neural nets. Here the goal is to apply similar techniques to questions of pragmatics. The motivating idea is the same: the idea that important aspects of pragmatics, like important aspects of semantics, may fall out as a natural results of information maximization in informational networks. The attempt below is to simulate fundamental elements of the Gricean picture: in particular, to show within networks of very simple agents the emergence of behavior in accord with the Gricean maxims. What these simulations suggest is that important features of pragmatics, like important aspects of semantics, don't have to be added in a theory of informational networks. They come for free. (shrink) An Improbable God Between Simplicity and Complexity: Thinking about Dawkins's Challenge.Philippe Gagnon - 2013 - International Philosophical Quarterly 53 (4):409-433.details Richard Dawkins has popularized an argument that he thinks sound for showing that there is almost certainly no God. It rests on the assumptions (1) that complex and statistically improbable things are more difficult to explain than those that are not and (2) that an explanatory mechanism must show how this complexity can be built up from simpler means. But what justifies claims about the designer's own complexity? One comes to a different understanding of order and of simplicity when one (...) considers the psychological counterpart of information. In assessing his treatment of biological organisms as either self-programmed machines or algorithms, I show how self-generated organized complexity does not fit well with our knowledge of abduction and of information theory as applied to genetics. I also review some philosophical proposals for explaining how the complexity of the world could be externally controlled if one wanted to uphold a traditional understanding of divine simplicity. (shrink) What is Integrated Information Theory a Theory Of?Adam Pautz - manuscriptdetails In the first instance, IIT is formulated as a theory of the physical basis of the 'degree' or 'level' or 'amount' of consciousness in a system. I raise a series of questions about the central explanatory target, the 'degree' or 'level' or 'amount' of consciousness. I suggest it is not at all clear what scientists and philosophers are talking about when they talk about consciousness as gradable. This point is developed in more detail in my paper "What Is the (...) Integrated Information Theory of Consciousness?"Journal of Consciousness Studies 26 (1-2):1-2 (2019) . 
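The Grim entry above reports that signaling conventions emerge in spatialized games through imitation, localized genetic algorithms, or neural-net updating. The sketch below is not Grim's spatialized model; it is the simplest non-spatial Lewis signaling game with urn-style (Roth-Erev) reinforcement, every parameter choice being mine, included only to make the basic emergence mechanism concrete.

import random

def weighted_choice(weights, rng):
    # Draw a key with probability proportional to its weight.
    total = sum(weights.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for option, w in weights.items():
        acc += w
        if r <= acc:
            return option
    return option

def simulate_signaling(rounds=20000, seed=1):
    rng = random.Random(seed)
    sender = {s: {0: 1.0, 1: 1.0} for s in (0, 1)}     # state -> signal weights
    receiver = {m: {0: 1.0, 1: 1.0} for m in (0, 1)}   # signal -> act weights
    successes = 0
    for _ in range(rounds):
        state = rng.randrange(2)
        signal = weighted_choice(sender[state], rng)
        act = weighted_choice(receiver[signal], rng)
        if act == state:                               # reinforce on success only
            sender[state][signal] += 1.0
            receiver[signal][act] += 1.0
            successes += 1
    return successes / rounds

print(f"coordination rate: {simulate_signaling():.3f}")   # typically far above the 0.5 chance level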
(shrink) Information Theory's failure in neuroscience: on the limitations of cybernetics.Lance Nizami - 2014 - In Proceedings of the IEEE 2014 Conference on Norbert Wiener in the 21st Century.details In Cybernetics (1961 Edition), Professor Norbert Wiener noted that "The role of information and the technique of measuring and transmitting information constitute a whole discipline for the engineer, for the neuroscientist, for the psychologist, and for the sociologist". Sociology aside, the neuroscientists and the psychologists inferred "information transmitted" using the discrete summations from Shannon Information Theory. The present author has since scrutinized the psychologists' approach in depth, and found it wrong. The neuroscientists' approach is highly (...) related, but remains unexamined. Neuroscientists quantified "the ability of [physiological sensory] receptors (or other signal-processing elements) to transmit information about stimulus parameters". Such parameters could vary along a single continuum (e.g., intensity), or along multiple dimensions that altogether provide a Gestalt – such as a face. Here, unprecedented scrutiny is given to how 23 neuroscience papers computed "information transmitted" in terms of stimulus parameters and the evoked neuronal spikes. The computations relied upon Shannon's "confusion matrix", which quantifies the fidelity of a "general communication system". Shannon's matrix is square, with the same labels for columns and for rows. Nonetheless, neuroscientists labelled the columns by "stimulus category" and the rows by "spike-count category". The resulting "information transmitted" is spurious, unless the evoked spike-counts are worked backwards to infer the hypothetical evoking stimuli. The latter task is probabilistic and, regardless, requires that the confusion matrix be square. Was it? For these 23 significant papers, the answer is No. (shrink) Conceptual atomism and the computational theory of mind: a defense of content-internalism and semantic externalism.John-Michael Kuczynski - 2007 - John Benjamins & Co.details Contemporary philosophy and theoretical psychology are dominated by an acceptance of content-externalism: the view that the contents of one's mental states are constitutively, as opposed to causally, dependent on facts about the external world. In the present work, it is shown that content-externalism involves a failure to distinguish between semantics and pre-semantics---between, on the one hand, the literal meanings of expressions and, on the other hand, the information that one must exploit in order to ascertain their literal meanings. It (...) is further shown that, given the falsity of content-externalism, the falsity of the Computational Theory of Mind (CTM) follows. It is also shown that CTM involves a misunderstanding of terms such as "computation," "syntax," "algorithm," and "formal truth." Novel analyses of the concepts expressed by these terms are put forth. These analyses yield clear, intuition-friendly, and extensionally correct answers to the questions "what are propositions?, "what is it for a proposition to be true?", and "what are the logical and psychological differences between conceptual (propositional) and non-conceptual (non-propositional) content?" Naively taking literal meaning to be in lockstep with cognitive content, Burge, Salmon, Falvey, and other semantic externalists have wrongly taken Kripke's correct semantic views to justify drastic and otherwise contraindicated revisions of commonsense. 
(Salmon: What is non-existent exists; at a given time, one can rationally accept a proposition and its negation. Burge: Somebody who is having a thought may be psychologically indistinguishable from somebody who is thinking nothing. Falvey: somebody who rightly believes himself to be thinking about water is psychologically indistinguishable from somebody who wrongly thinks himself to be doing so and who, indeed, isn't thinking about anything.) Given a few truisms concerning the differences between thought-borne and sentence-borne information, the data is easily modeled without conceding any legitimacy to any one of these rationality-dismantling atrocities. (It thus turns out, ironically, that no one has done more to undermine Kripke's correct semantic points than Kripke's own followers!). (shrink) Contextuality in the Integrated Information Theory.J. Acacio de Barros, Carlos Montemayor & Leonardo De Assis - forthcoming - In J. A. de Barros, B. Coecke & E. Pothos (eds.), Lecture Notes on Computer Science.details Integrated Information Theory (IIT) is one of the most influential theories of consciousness, mainly due to its claim of mathematically formalizing consciousness in a measurable way. However, the theory, as it is formulated, does not account for contextual observations that are crucial for understanding consciousness. Here we put forth three possible difficulties for its current version, which could be interpreted as a trilemma. Either consciousness is contextual or not. If contextual, either IIT needs revisions to its axioms (...) to include contextuality, or it is inconsistent. If consciousness is not contextual, then IIT faces an empirical challenge. Therefore, we argue that IIT in its current version is inadequate. (shrink) Book Review – Alien Information Theory: Psychedelic Drug Technologies and the Cosmic Game.Peter Sjöstedt-H. - 2019 - Psychedelic Press UK: Psychedelic Book Reviews.details Dr Peter Sjöstedt-H reviews Dr Andrew R. Gallimore's book, Alien Information Theory. -/- This was published on PsyPressUK on 13 June 2019. One decade of universal artificial intelligence.Marcus Hutter - 2012 - In Pei Wang & Ben Goertzel (eds.), Theoretical Foundations of Artificial General Intelligence. Springer. pp. 67--88.details The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in book (Hutter, 2005), an exciting sound and complete mathematical model for a super intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (...) (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in JAIR paper (Veness et al. 2011). This practical breakthrough has resulted in some impressive applications, finally muting earlier critique that UAI is only a theory. For the first time, without providing any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. 
For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even providing the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI. (shrink) Composition as pattern.Steve Petersen - 2019 - Philosophical Studies 176 (5):1119-1139.details I argue for patternism, a new answer to the question of when some objects compose a whole. None of the standard principles of composition comfortably capture our natural judgments, such as that my cat exists and my table exists, but there is nothing wholly composed of them. Patternism holds, very roughly, that some things compose a whole whenever together they form a "real pattern". Plausibly we are inclined to acknowledge the existence of my cat and my table but not of (...) their fusion, because the first two have a kind of internal organizational coherence that their putative fusion lacks. Kolmogorov complexity theory supplies the needed rigorous sense of "internal organizational coherence". (shrink) The integrated information theory of agency.Hugh Desmond & Philippe Huneman - 2022 - Brain and Behavioral Sciences 45:e45.details We propose that measures of information integration can be more straightforwardly interpreted as measures of agency rather than of consciousness. This may be useful to the goals of consciousness research, given how agency and consciousness are "duals" in many (although not all) respects. A Contingency Interpretation of Information Theory as a Bridge between God's Immanence and Transcendence.Philippe Gagnon - 2020 - In Michael Fuller, Dirk Evers, Anne L. C. Runehov, Knut-Willy Sæther & Bernard Michollet (eds.), Issues in Science and Theology: Nature – and Beyond. Cham: Springer Nature. pp. 169-185.details This paper investigates the degree to which information theory, and the derived uses that make it work as a metaphor of our age, can be helpful in thinking about God's immanence and transcendance. We ask when it is possible to say that a consciousness has to be behind the information we encounter. If God is to be thought about as a communicator of information, we need to ask whether a communication system has to pre-exist to the (...) divine and impose itself to God. If we want God to be Creator, and not someone who would work like a human being, 'creating' will mean sustaining in being as much the channel, the material system, as the message. Is information control? It seems that God's actions are not going to be informational control of everything. To clarify the issue, we attempt to distinguish two kinds of 'genialities' in nature, as a way to evaluate the likelihood of God from nature. We investigate concepts and images of God, in terms of the history of ideas but also in terms of philosophical theology, metaphysics, and religious ontology. (shrink) What Is the Integrated Information Theory of Consciousness?Adam Pautz - 2019 - Journal of Consciousness Studies 26 (1-2):1-2.details In the first instance, IIT is formulated as a theory of the physical basis of the 'degree' or 'level' or 'amount' of consciousness in a system. 
In addition, integrated information theorists have tried to provide a systematic theory of how physical states determine the specific qualitative contents of episodes of consciousness: for instance, an experience as of a red and round thing rather than a green and square thing. I raise a series of questions about the central (...) explanatory target, the 'degree' or 'level' or 'amount' of consciousness. I suggest it is not at all clear what scientists and philosophers are talking about when they talk about consciousness as gradable. I also raise some questions about the explanation of qualitative content. (shrink) A Review of:"Information Theory, Evolution and the Origin of Life as a Digital Message How Life Resembles a Computer" Second Edition. Hubert P. Yockey, 2005, Cambridge University Press, Cambridge: 400 pages, index; hardcover, US $60.00; ISBN: 0-521-80293-8. [REVIEW]Attila Grandpierre - 2006 - World Futures 62 (5):401-403.details Information Theory, Evolution and The Origin ofLife: The Origin and Evolution of Life as a Digital Message: How Life Resembles a Computer, Second Edition. Hu- bert P. Yockey, 2005, Cambridge University Press, Cambridge: 400 pages, index; hardcover, US $60.00; ISBN: 0-521-80293-8. The reason that there are principles of biology that cannot be derived from the laws of physics and chemistry lies simply in the fact that the genetic information content of the genome for constructing even the simplest (...) organisms is much larger than the information content of these laws. Yockey in his previous book (1992, 335) In this new book, Information Theory, Evolution and The Origin ofLife, Hubert Yockey points out that the digital, segregated, and linear character of the genetic information system has a fundamental significance. If inheritance would blend and not segregate, Darwinian evolution would not occur. If inheritance would be analog, instead of digital, evolution would be also impossible, because it would be impossible to remove the effect of noise. In this way, life is guided by information, and so information is a central concept in molecular biology. The author presents a picture of how the main concepts of the genetic code were developed. He was able to show that despite Francis Crick's belief that the Central Dogma is only a hypothesis, the Central Dogma of Francis Crick is a mathematical consequence of the redundant nature of the genetic code. The redundancy arises from the fact that the DNA and mRNA alphabet is formed by triplets of 4 nucleotides, and so the number of letters (triplets) is 64, whereas the proteome alphabet has only 20 letters (20 amino acids), and so the translation from the larger alphabet to the smaller one is necessarily redundant. Except for Tryptohan and Methionine, all amino acids are coded by more than one triplet, therefore, it is undecidable which source code letter was actually sent from mRNA. This proof has a corollary telling that there are no such mathematical constraints for protein-protein communication. With this clarification, Yockey contributes to diminishing the widespread confusion related to such a central concept like the Central Dogma. Thus the Central Dogma prohibits the origin of life "proteins first." Proteins can not be generated by "self-organization." Understanding this property of the Central Dogma will have a serious impact on research on the origin of life. 
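The Yockey review above rests on the arithmetic of the genetic code: 64 codons map onto 20 amino acids, so translation is many-to-one and cannot be inverted uniquely. The toy fragment below is not from Yockey's book; it hard-codes a handful of standard codon assignments (methionine, tryptophan, alanine, leucine) purely to make the redundancy argument concrete.

# A small subset of the standard genetic code; the full table has 64 entries.
CODON_TABLE = {
    "AUG": "Met",                                    # the only codon for methionine
    "UGG": "Trp",                                    # the only codon for tryptophan
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "UUA": "Leu", "UUG": "Leu", "CUU": "Leu",
    "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
}

def translate(codons):
    # Codon -> amino acid is a function: translation always goes forward.
    return [CODON_TABLE[c] for c in codons]

def preimages(amino_acid):
    # Amino acid -> codons is one-to-many: the inverse is undecidable
    # whenever more than one codon codes for the residue.
    return sorted(c for c, aa in CODON_TABLE.items() if aa == amino_acid)

print(translate(["AUG", "GCC", "UUG"]))   # ['Met', 'Ala', 'Leu']
print(preimages("Ala"))                   # four codons; which one was sent is lost
print(preimages("Trp"))                   # a rare unambiguous case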
(shrink) Simplicity, Language-Dependency and the Best System Account of Laws.Billy Wheeler - 2014 - Theoria : An International Journal for Theory, History and Fundations of Science 31 (2):189-206.details It is often said that the best system account of laws needs supplementing with a theory of perfectly natural properties. The 'strength' and 'simplicity' of a system is language-relative and without a fixed vocabulary it is impossible to compare rival systems. Recently a number of philosophers have attempted to reformulate the BSA in an effort to avoid commitment to natural properties. I assess these proposals and argue that they are problematic as they stand. Nonetheless, I agree with their aim, (...) and show that if simplicity is interpreted as 'compression', algorithmic information theory provides a framework for system comparison without the need for natural properties. (shrink) Bridging Conceptual Gaps: The Kolmogorov-Sinai Entropy.Massimiliano Badino - forthcoming - Isonomía. Revista de Teoría y Filosofía Del Derecho.details The Kolmogorov-Sinai entropy is a fairly exotic mathematical concept which has recently aroused some interest on the philosophers' part. The most salient trait of this concept is its working as a junction between such diverse ambits as statistical mechanics, information theory and algorithm theory. In this paper I argue that, in order to understand this very special feature of the Kolmogorov-Sinai entropy, is essential to reconstruct its genealogy. Somewhat surprisingly, this story takes us as far back as (...) the beginning of celestial mechanics and through some of the most exciting developments of mathematical physics of the 19th century. (shrink) Is Mass at Rest One and the Same? A Philosophical Comment: on the Quantum Information Theory of Mass in General Relativity and the Standard Model.Vasil Penchev - 2014 - Journal of SibFU. Humanities and Social Sciences 7 (4):704-720.details The way, in which quantum information can unify quantum mechanics (and therefore the standard model) and general relativity, is investigated. Quantum information is defined as the generalization of the concept of information as to the choice among infinite sets of alternatives. Relevantly, the axiom of choice is necessary in general. The unit of quantum information, a qubit is interpreted as a relevant elementary choice among an infinite set of alternatives generalizing that of a bit. The invariance (...) to the axiom of choice shared by quantum mechanics is introduced: It constitutes quantum information as the relation of any state unorderable in principle (e.g. any coherent quantum state before measurement) and the same state already well-ordered (e.g. the well-ordered statistical ensemble of the measurement of the quantum system at issue). This allows of equating the classical and quantum time correspondingly as the well-ordering of any physical quantity or quantities and their coherent superposition. That equating is interpretable as the isomorphism of Minkowski space and Hilbert space. Quantum information is the structure interpretable in both ways and thus underlying their unification. Its deformation is representable correspondingly as gravitation in the deformed pseudo-Riemannian space of general relativity and the entanglement of two or more quantum systems. The standard model studies a single quantum system and thus privileges a single reference frame turning out to be inertial for the generalized symmetry [U(1)]X[SU(2)]X[SU(3)] "gauging" the standard model. 
As the standard model refers to a single quantum system, it is necessarily linear and thus the corresponding privileged reference frame is necessary inertial. The Higgs mechanism U(1) → [U(1)]X[SU(2)] confirmed enough already experimentally describes exactly the choice of the initial position of a privileged reference frame as the corresponding breaking of the symmetry. The standard model defines 'mass at rest' linearly and absolutely, but general relativity non-linearly and relatively. The "Big Bang" hypothesis is additional interpreting that position as that of the "Big Bang". It serves also in order to reconcile the linear standard model in the singularity of the "Big Bang" with the observed nonlinearity of the further expansion of the universe described very well by general relativity. Quantum information links the standard model and general relativity in another way by mediation of entanglement. The linearity and absoluteness of the former and the nonlinearity and relativeness of the latter can be considered as the relation of a whole and the same whole divided into parts entangled in general. (shrink) Consciousness and Complexity: Neurobiological Naturalism and Integrated Information Theory.Francesco Ellia & Robert Chis-Ciure - 2022 - Consciousness and Cognition 100:103281.details In this paper, we take a meta-theoretical stance and aim to compare and assess two conceptual frameworks that endeavor to explain phenomenal experience. In particular, we compare Feinberg & Mallatt's Neurobiological Naturalism (NN) and Tononi's and colleagues' Integrated Information Theory (IIT), given that the former pointed out some similarities between the two theories (Feinberg & Mallatt 2016c-d). To probe their similarity, we first give a general introduction to both frameworks. Next, we expound a ground plan for carrying out (...) our analysis. We move on to articulate a philosophical profile of NN and IIT, addressing their ontological commitments and epistemological foundations. Finally, we compare the two point-by-point, also discussing how they stand on the issue of artificial consciousness. (shrink)
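Two of the entries above (the Nizami papers) concern the statistic that neuroscience papers report as "information transmitted", namely the mutual information estimated from a stimulus-by-response count table. For readers who want the bare formula in front of them, here is a minimal plug-in estimator; the count table is hypothetical and nothing here reproduces or defends the surveyed papers' calculations.

import math

def information_transmitted(counts):
    # Plug-in mutual information (in bits) between stimulus categories (rows)
    # and response categories (columns) of a table of joint counts.
    total = sum(sum(row) for row in counts)
    row_p = [sum(row) / total for row in counts]
    col_p = [sum(counts[i][j] for i in range(len(counts))) / total
             for j in range(len(counts[0]))]
    mi = 0.0
    for i, row in enumerate(counts):
        for j, c in enumerate(row):
            if c:
                p = c / total
                mi += p * math.log2(p / (row_p[i] * col_p[j]))
    return mi

# Hypothetical 3-stimulus, 3-response confusion table.
table = [[40, 5, 5],
         [6, 38, 6],
         [4, 7, 39]]
print(round(information_transmitted(table), 3))   # about 0.61 bits for this table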
CommonCrawl
Finding small solutions of the equation $ \mathit{{Bx-Ay = z}} $ and its applications to cryptanalysis of the RSA cryptosystem AMC Home New quantum codes from constacyclic codes over the ring $ R_{k,m} $ doi: 10.3934/amc.2020124 Complete weight enumerator of torsion codes Xiangrui Meng and Jian Gao , School of Mathematics and Statistics, Shandong University of Technology, Zibo, Shandong 255000, China * Corresponding author: Jian Gao Received July 2020 Revised September 2020 Published December 2020 Fund Project: This research is supported by the National Natural Science Foundation of China (Nos. 11701336, 11626144, 11671235, 12071264) In this paper, we introduce two classes of MacDonald codes over the finite non-chain ring $ \mathbb{F}_p+v\mathbb{F}_p+v^2\mathbb{F}_p $ and their torsion codes which are linear codes over $ \mathbb{F}_p $, where $ p $ is an odd prime and $ v^3 = v $. We give the complete weight enumerator of two classes of torsion codes. As an application, systematic authentication codes are obtained by these torsion codes. Keywords: Complete weight enumerator, Torsion codes, authentication codes. Mathematics Subject Classification: Primary: 94B05, 94B15; Secondary: 11T71. Citation: Xiangrui Meng, Jian Gao. Complete weight enumerator of torsion codes. Advances in Mathematics of Communications, doi: 10.3934/amc.2020124 S. Bae, C. Li and Q. Yue, On the complete weight enumerators of some reducible cyclic codes, Discrete Mathematics, 60 (2015), 2275-2287. doi: 10.1016/j.disc.2015.05.016. Google Scholar I. F. Blake and K. Kith, On the complete weight enumerator of Reed-Solomon codes, SIAM Journal on Discrete Mathematics, 4 (1991), 164-171. doi: 10.1137/0404016. Google Scholar Y. Cengellenmis and M. Department, MacDonald codes over the ring $\mathbb{F}_3+ v\mathbb{F}_3$, IUG Journal of Natural and Engineering Studues, 20 (2012), 109-112. Google Scholar C. Colbourn and M. Gupta, On quaternary MacDonald codes, Proceedings ITCC 2003, International Conference on Information Technology: Coding and Computing, 5 (2003), 212-215. Google Scholar A. Dertli and Y. Cengellenmis, Macdonald codes over the ring $\mathbb{F}_2+v\mathbb{F}_2$, International Journal of Algebra, 5 (2011), 985-991. Google Scholar L. Diao, J. Gao and J. Lu, On $\mathbb{Z}_{p}\mathbb{Z}_{p}[v]$-additive cyclic codes, Advances in Mathematics of Communications, 14, (2020), 555–572. doi: 10.3934/amc.2018038. Google Scholar C. Ding and J. Yin, Algebraic constructions of constant composition codes, International Conference on Information Technology, 51 (2005), 1585-1589. doi: 10.1109/TIT.2005.844087. Google Scholar C. Ding and X. Wang, A coding theory construction of new systematic authentication codes, Theoretical Computer Science, 330 (2005), 81-99. doi: 10.1016/j.tcs.2004.09.011. Google Scholar C. Ding, T. Helleseth, T. Kløve and X. Wang, A generic construction of Cartesian authentication codes, IEEE Transactions on Information Theory, 53 (2007), 2229-2235. doi: 10.1109/TIT.2007.896872. Google Scholar T. Helleseth and A. Kholosha, Monomial and quadratic bent functions over the finite fields of odd characteristic, IEEE Transactions on Information Theory, 52 (2006), 2018-2032. doi: 10.1109/TIT.2006.872854. Google Scholar X. Hou and J. Gao, $\mathbb{Z}_{p}\mathbb{Z}_{p}[v]$-additive cyclic codes are asymptotically good, Journal of Applied Mathematics and Computing, (2020), https://doi.org/10.1007/s12190-020-01466-w. Google Scholar A. Kuzmin and A. 
Nechaev, Complete weight enumerators of generalized Kerdock code and related linear codes over Galois ring, Discrete Applied Mathematics, 111 (2001), 117-137. doi: 10.1016/S0166-218X(00)00348-6. Google Scholar C. Li, Q. Yue and F.-W. Fu, Complete weight enumerators of some cyclic codes, Designs, Codes and Crytography, 80 (2016), 295-315. doi: 10.1007/s10623-015-0091-5. Google Scholar C. Li, S. Bae, J. Ahn, S. Yang and Z. Yao, Complete weight enumerators of some linear codes and their applications, Designs, Codes and Cryptography, 81 (2016), 153-168. doi: 10.1007/s10623-015-0136-9. Google Scholar J. Luo and T. Helleset, Constant composition codes as subcodes of cyclic codes, IEEE Transactions on Information Theory, 57 (2011), 7482-7488. doi: 10.1109/TIT.2011.2161631. Google Scholar J. E. MacDonald, Design methods for maximum minimum-distance error-correcting codes, IBM Journal of Research and Development, 4 (1960), 43-57. doi: 10.1147/rd.41.0043. Google Scholar F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-correcting Codes, North-Holland, Amsterdam, 1977. Google Scholar A. M. Patel, Maximal $q$-ary linear codes with large minimum distance, IEEE Transactions on Information Theory, 21 (1975), 106-110. doi: 10.1109/tit.1975.1055315. Google Scholar R. S. Rees and D. R. Stinson, Combinatorial characterizations of authentication codes Ⅱ, Designs, Codes and Cryptography, 2 (1992), 175-187. doi: 10.1007/BF00124896. Google Scholar G. J. Simmons, Authentication theory coding theory, International Cryptology Conference, (1985), 411–4317. Google Scholar X. Wang, J. Gao and F.-W. Fu, Secret sharing schemes from linear codes over $\mathbb{F}_p+ v\mathbb{F}_p$, International Journal of Foundations of Computer Science, 27 (2016), 595-605. doi: 10.1142/S0129054116500180. Google Scholar X. Wang, J. Gao and F.-W. Fu, Complete weight enumerators of two classes of linear codes, Cryptography and Communications, 9 (2017), 545-562. doi: 10.1007/s12095-016-0198-1. Google Scholar Y. Wang and J. Gao, MacDonald codes over the ring $\mathbb{F}_p+ v\mathbb{F}_p+v^2\mathbb{F}_p$, Computational and Applied Mathematics, 38 (2019), 169. doi: 10.1007/s40314-019-0937-y. Google Scholar S. Yang and Z. Yao, Complete weight enumerators of a family of three-weight linear codes, Designs, Codes and Cryptography, 82 (2017), 663-674. doi: 10.1007/s10623-016-0191-x. Google Scholar
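The paper abstracted above reports complete weight enumerators for specific torsion codes obtained from MacDonald codes over F_p + vF_p + v^2F_p. As a generic illustration of the object itself rather than of the paper's constructions, the brute-force sketch below tallies, for every codeword of a small linear code over F_p, how many times each field element appears; the generator matrix in the example is an arbitrary choice of mine, not one of the paper's codes.

from collections import Counter
from itertools import product

def complete_weight_enumerator(generator, p):
    # Enumerate all codewords m*G over F_p and record their symbol compositions
    # (w_0, ..., w_{p-1}), i.e. how many coordinates equal each field element.
    k, n = len(generator), len(generator[0])
    tally = Counter()
    for message in product(range(p), repeat=k):
        codeword = [sum(message[i] * generator[i][j] for i in range(k)) % p
                    for j in range(n)]
        composition = tuple(codeword.count(a) for a in range(p))
        tally[composition] += 1
    return tally

# Arbitrary [4, 2] code over F_3, used only to show the output format.
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
for composition, count in sorted(complete_weight_enumerator(G, 3).items()):
    print(composition, count)   # each line: (w_0, w_1, w_2) and its multiplicity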
CommonCrawl
Gross National Product (GNP) Deflator By Will Kenton. Reviewed by Robert C. Kelly.
What Is the Gross National Product (GNP) Deflator? The gross national product deflator is an economic metric that accounts for the effects of inflation in the current year's gross national product (GNP) by converting its output to a level relative to a base period. The GNP deflator can be confused with the more commonly used gross domestic product (GDP) deflator. The GDP deflator uses the same equation as the GNP deflator, but with nominal and real GDP rather than GNP. The GNP deflator provides an alternative to the Consumer Price Index (CPI) and can be used in conjunction with it to analyze some changes in trade flows and the effects on the welfare of people within a relatively open market country.
Understanding the Gross National Product (GNP) Deflator The GNP deflator is simply the adjustment for inflation that is made to nominal GNP to produce real GNP. The CPI is based upon a basket of goods and services, while the GNP deflator incorporates all of the final goods produced by an economy. This allows the GNP deflator to more accurately capture the effects of inflation since it's not limited to a smaller subset of goods.
Calculating the Gross National Product (GNP) Deflator The GNP deflator is calculated with the following formula:
$$\text{GNP Deflator} = \left(\frac{\text{Nominal GNP}}{\text{Real GNP}}\right)\times 100$$
The result is expressed as a percentage, usually with three decimal places. The first step to calculating the GNP deflator is to determine the base period for analysis. In theory, you can work with GDP and foreign earnings data for the base period and current periods, and then extract the figures needed for the deflator calculation. However, nominal GNP and real GNP figures, as well as the deflator charted over time, can usually be accessed through releases from central banks or other economic entities. In the United States, the Bureau of Economic Analysis (BEA), the St. Louis Federal Reserve Bank, and others provide this data, as well as other indicators that track similar economic statistics that measure essentially the same thing but through different formulations. So actually calculating the GNP deflator is usually unnecessary. The more important task is how to interpret the data that the GNP deflator is applied to.
Interpreting GNP Figures The GNP deflator, as mentioned, is just the inflation adjustment.
The higher the GNP deflator, the higher the rate of inflation for the period. The relevant question is what having an inflation-adjusted gross national product—the real GNP—actually tells you. The real GNP is simply the actual national income of the country being measured. It doesn't care where the production is located in the world as long as the earnings come back home. In terms of differences between real GNP and real GDP, real GDP is the preferred measure of U.S. economic health. Real GNP shows how the U.S. is doing in terms of its foreign investments in addition to domestic production.
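The article notes that published figures make hand computation largely unnecessary, but the arithmetic behind the formula above is a one-liner. The numbers below are made up for illustration and are not BEA or Federal Reserve data.

def gnp_deflator(nominal_gnp, real_gnp):
    # Deflator as defined above: (nominal / real) * 100.
    return nominal_gnp / real_gnp * 100.0

def deflate(nominal_gnp, deflator):
    # Invert the definition to express a nominal figure in base-period prices.
    return nominal_gnp / deflator * 100.0

nominal, real = 22.5, 20.0            # hypothetical GNP figures, in trillions
d = gnp_deflator(nominal, real)
print(f"{d:.3f}")                     # 112.500: prices are 12.5% above the base period
print(f"{deflate(nominal, d):.3f}")   # recovers the 20.0 trillion real figure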
CommonCrawl
Alternating sum of binomial coefficients: given $n \in \mathbb N$, prove $\sum^n_{k=0}(-1)^k {n \choose k} = 0$ Let $n$ be a positive integer. Prove that \begin{align} \sum_{k=0}^n \left(-1\right)^k \binom{n}{k} = 0 . \end{align} I tried to solve it using induction, but that got me nowhere. I think the easiest way to prove it is to think of a finite set of $n$ elements, but I can't find the solution. combinatorics summation induction binomial-coefficients darij grinberg FranckNFranckN $\begingroup$ Note that both proofs below fail for $n=0$. $\endgroup$ – Carsten S Dec 18 '13 at 15:22 $\begingroup$ @CarstenSchultz Because $$\sum_{k=0}^0 (-1)^k\binom{0}{k} = 1,$$ the result doesn't hold for $n = 0$. $\endgroup$ – Daniel Fischer♦ Dec 18 '13 at 15:48 $\begingroup$ @DanielFischer, I am aware of that ;) But I do not know, if $0$ is in Franck's $\mathbb N$. $\endgroup$ – Carsten S Dec 18 '13 at 16:58 $\begingroup$ doesn't $\mathbb N$ start at 1? $\endgroup$ – FranckN Dec 18 '13 at 17:00 $\begingroup$ @FranckN There are different conventions. Some let $\mathbb{N}$ start with $0$, some with $1$. Given the problem statement, it is overwhelmingly likely that the problem author belongs to the latter group. $\endgroup$ – Daniel Fischer♦ Dec 18 '13 at 17:02 Using Binomial Theorem for positive integer exponent $n$ $$(a+b)^n=\sum_{0\le r\le n}\binom nr a^{n-r}b^r$$ Set $\displaystyle a=1,b=-1$ in the above identity lab bhattacharjeelab bhattacharjee I think the easiest way to prove it is to think of a finite set of $n$ elements, If you think of it that way, it's the number of even sized ($(-1)^k = 1$) subsets of $\{1,\,\dotsc,\,n\}$ minus the number of odd-sized ($(-1)^k = -1$) subsets. $$\varphi \colon S \mapsto \begin{cases} S\cup \{1\} &, 1 \notin S\\ S \setminus \{1\} &, 1 \in S \end{cases}$$ that "flips $1$", i.e. adds $1$ to $S$ if $1\notin S$ and removes it if $1\in S$, is a bijection between the set of even-sized and the set of odd-sized subsets. Thus $\{1,\, \dotsc,\,n\}$ has as many even-sized subsets as odd-sized, i.e. $$\sum_{k=0}^n (-1)^k\binom{n}{k} = 0$$ for all $n \geqslant 1$. Daniel Fischer♦Daniel Fischer $\begingroup$ but what happen if n its an odd number? I think it doesn't apply $\endgroup$ – FranckN Dec 18 '13 at 14:55 $\begingroup$ Take a subset $S$ that doesn't contain $1$. If $S$ has an odd number of elements, then $S\cup \{1\}$ has an even number of elements, and vice versa. $\endgroup$ – Daniel Fischer♦ Dec 18 '13 at 14:58 Please allow me to give a less direct proof. Let $p$ be the product of $n$ different primes $q_1,\ldots,q_n$. We know $$\sum_{d \mid p}\mu(d)=0,$$ where $\mu$ is the Möbius function. Each divisor $d$ of $p$ is the product of primes from the set $\{q_1,\ldots,q_n\}$, and will satisfy $\mu(d)=1$ or $\mu(d)=-1$, depending on the parity of the number of primes dividing $d$. It follows that there as many ways to choose an odd number of primes as ways to choose an even number of primes. Equivalently, $$\sum_{0\leq 2k \leq n}\binom{n}{2k}=\sum_{0\leq 2k+1 \leq n}\binom{n}{2k+1},$$ it follows that $$\sum_{k=0}^n\binom{n}{k}(-1)^k=0.$$ Ahaan S. Rungta punctured duskpunctured dusk As this question just got revived, I thought I'd add another bijective proof. Namely, we are trying to show that the number of subsets of $\{1,2,\dots,n\}$ with an odd number of elements is equal to the number of subsets with an even number of elements. To that end, given any subset $S$, just take the symmetric difference with $\{1\}$, i.e. $S\to S\triangle \{1\}$. 
Answer (Dan Ramras): Here is a different tack: If you drop the term for $k=0$, this sum is the negation of the Euler characteristic of the $n$-dimensional simplex, whose faces of dimension $k$ correspond to subsets of $\{0,...,n\}$ with cardinality $k+1$. The simplex is a contractible space, so its Euler characteristic is the same as that of a point, namely 1. Putting back in the term for $k=0$, we see that the original sum is $1-1=0$. This is way more work than necessary (appealing to homotopical invariance of the Euler characteristic), but it's fun, and it's suggestive of the idea that alternating sums can sometimes be dealt with topologically.

Answer (Vincenzo Oliva): Even though this question is pretty old, and the OP probably will not see the answer, I think it's worthwhile to provide a proof by induction, which the OP (and maybe others) had problems with and which surprisingly no one has posted yet. Since the statement is true for $n=1$, suppose it holds for $n=m$. Then the statement for $n=m+1$ follows from $$\require{cancel} \sum_{k=0}^{m+1} (-1)^k {m+1 \choose k} =\sum_{k=0}^{m} (-1)^k {m \choose k} \\ \cancel{{m \choose 0}-{m+1 \choose 0}}+\sum_{k=1}^{m} (-1)^k {m \choose k}-(-1)^k{m+1 \choose k}-(-1)^{m+1}{m+1 \choose m+1}=0 \\ \sum_{k=1}^m (-1)^{k+1}\left({m+1 \choose k}-{m \choose k}\right)+(-1)^{m+2}{m+1 \choose m+1}=0,$$ which, recalling the property $\displaystyle {a \choose b}+{a \choose b+1}={a+1 \choose b+1},$ is equivalent to $$\sum_{k=1}^m (-1)^{k+1} {m \choose k-1}+(-1)^{m+2}{m+1 \choose m+1}=0 \\ \sum_{k=0}^{m-1} (-1)^{k} {m \choose k}+(-1)^{m}{m \choose m}=0 \\ \sum_{k=0}^{m} (-1)^k {m \choose k}=0,$$ and this is precisely our inductive hypothesis.

Answer (Batominovski): Alternatively, prove the more general identity $$\sum_{r=0}^k\,(-1)^r\binom{n}{r}=(-1)^k\binom{n-1}{k}\,,$$ for all integers $n,k\geq 0$. When $k=n$, we have $$\sum_{r=0}^{n}\,(-1)^r\binom{n}{r}=(-1)^n\binom{n-1}{n}=0\,.$$
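For readers who like to check identities numerically before proving them, here is a quick brute-force verification of the identity and of the even/odd subset count that several of the answers rely on. It is plain Python, purely illustrative, and starts at $n=1$ because (as the comments note) the sum equals 1 when $n=0$.

```python
from math import comb
from itertools import combinations

for n in range(1, 9):
    alt_sum = sum((-1) ** k * comb(n, k) for k in range(n + 1))

    # Enumerate all subsets of {1, ..., n} and count them by parity of size.
    subsets = [s for k in range(n + 1) for s in combinations(range(1, n + 1), k)]
    even = sum(1 for s in subsets if len(s) % 2 == 0)
    odd = len(subsets) - even

    # The alternating sum equals (#even-sized subsets) - (#odd-sized subsets);
    # for n >= 1 both counts are 2**(n-1), so the difference is zero.
    assert alt_sum == even - odd == 0
    print(n, alt_sum, even, odd)
```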
A symmetric Random Walk defined by the time-one map of a geodesic flow

Pablo D. Carrasco and Túlio Vales
Av. Presidente Antônio Carlos 6627, Belo Horizonte-MG, BR31270-901
* Corresponding author: Pablo D. Carrasco
Received August 2020; Revised October 2020; Published June 2021; Early access December 2020
June 2021, 41(6): 2891-2905. doi: 10.3934/dcds.2020390

Abstract: In this note we consider a symmetric Random Walk defined by a $ (f, f^{-1}) $ Kalikow-type system, where $ f $ is the time-one map of the geodesic flow corresponding to a hyperbolic manifold. We provide necessary and sufficient conditions for the existence of a stationary measure for the walk that is equivalent to the volume in the corresponding unit tangent bundle. Some dynamical consequences for the Random Walk are deduced in these cases.

Keywords: Random walk in a random environment, stationary measures, geodesic flow.
Mathematics Subject Classification: Primary: 37C40, 37D30; Secondary: 60K37, 37H99.
Citation: Pablo D. Carrasco, Túlio Vales. A symmetric Random Walk defined by the time-one map of a geodesic flow. Discrete & Continuous Dynamical Systems, 2021, 41 (6): 2891-2905. doi: 10.3934/dcds.2020390
The impact of human health co-benefits on evaluations of global climate policy

Noah Scovronick, Mark Budolfson (ORCID: orcid.org/0000-0003-0414-0433), Francis Dennig, Frank Errickson, Marc Fleurbaey, Wei Peng, Robert H. Socolow, Dean Spears & Fabian Wagner

Subjects: Climate-change mitigation; Climate-change policy; Energy and society

The health co-benefits of CO2 mitigation can provide a strong incentive for climate policy through reductions in air pollutant emissions that occur when targeting shared sources. However, reducing air pollutant emissions may also have an important co-harm, as the aerosols they form produce net cooling overall. Nevertheless, aerosol impacts have not been fully incorporated into cost-benefit modeling that estimates how much the world should optimally mitigate. Here we find that when both co-benefits and co-harms are taken fully into account, optimal climate policy results in immediate net benefits globally, overturning previous findings from cost-benefit models that omit these effects. The global health benefits from climate policy could reach trillions of dollars annually, but will importantly depend on the air quality policies that nations adopt independently of climate change. Depending on how society values better health, economically optimal levels of mitigation may be consistent with a target of 2 °C or lower.

Climate policies targeting CO2 may also reduce air pollutant emissions—and the aerosols they produce—as the two share emission sources. Prior studies on the topic have quantified the associated health co-benefits of pre-defined greenhouse gas reduction scenarios1,2,3,4,5,6, or estimated the economic impacts from reducing specific pollutants7,8,9, but these types of impacts have not been fully incorporated into cost-benefit modeling that estimates how much the world should optimally mitigate. We move this literature forward by developing a comprehensive cost-benefit integrated assessment model based on William Nordhaus' Regionalized Integrated Climate Economy (RICE) model, where the new developments allow the model to weigh both the health co-benefits and the climate co-harms of aerosol co-reductions (co-harms exist because aerosols produce net cooling overall10); the latter in particular has been a largely neglected aspect of the co-benefits discussion. (We use the term co-harm to refer to the net climate harm of aerosol reductions, recognizing that reducing some species of emissions individually may produce different effects than the net of all species together; for example, although reducing black carbon produces a climate benefit, that effect may be outweighed by the climate harm of reductions in other species.)

These modeling developments, which account for the key air pollutant emissions and their individual properties, provide new capability to investigate fundamental policy questions that have not been answered by existing studies11. This includes determining: (1) the optimal climate policy across time and how it is affected by independent air quality control, (2) whether climate policy produces immediate net benefits, or if there are intergenerational tradeoffs, and (3) if specified climate targets are justifiable on cost-benefit grounds. To answer these questions, we modify the RICE optimization model12 to include an empirically calibrated, regionally differentiated feedback mechanism whereby reducing CO2 also reduces regional air pollutant emissions from co-emitting sources.
We then quantify and monetize the impact on both health and radiative forcing throughout the world and compute the resulting optimal climate policy. We thus modify the standard tradeoff between CO2 mitigation costs and climate damages with a more complete analysis that simultaneously weighs mitigation costs, climate damages from CO2, and the health and climate consequences of changes in air pollutant co-emissions. The resulting model estimates optimal climate policy after jointly considering all these factors. As a robustness check, we also modify the widely used FUND (Climate Framework for Uncertainty, Negotiation and Distribution) model13 to include the same mechanisms.

We find that when both co-benefits and co-harms are taken fully into account, optimal climate policy results in immediate net benefits globally, which overturns previous findings from cost-benefit models that omit these effects. The global health benefits from climate policy could reach trillions of dollars annually, but their magnitude will importantly depend on the air quality policies that nations adopt independently of climate change. Depending on how society values better health, we show that economically optimal levels of mitigation may be consistent with a target of 2 °C or lower.

We summarize the results of the new model (hereafter referred to as RICE + AIR, for RICE + Aerosol Impacts and Responses) in terms of the optimal fraction of business-as-usual CO2 emissions that should be reduced over time. We refer to this quantitative reduction in CO2 as the optimal decarbonization fraction and express it as a percentage. (What we call the decarbonization fraction is often referred to as the control rate.)

The blue line in Fig. 1a shows the optimal decarbonization pathway if the cost-benefit analysis only considers the climate impacts of CO2 and associated aerosol co-reductions. Health impacts are not included. In this reference case, the optimal decarbonization fraction is 24% of business-as-usual emissions in 2030, rising to 35% in 2050 and ultimately reaching full decarbonization by 2130. It is similar to the optimal trajectory of the standard RICE model, which features exogenous aerosol forcing and excludes health co-benefits (Supplementary Fig. 1).

Fig. 1: Optimal decarbonization and temperature. a Decarbonization over time for the reference case optimal policy that excludes health co-benefits (blue line) and in the full RICE + AIR optimal policy that includes health co-benefits (red line). b Estimated global temperature rise above preindustrial levels that would occur given the decarbonization in a. Decarbonization is relative to a business-as-usual scenario without any climate action, and 100% decarbonization signifies zero net carbon emissions.

In contrast to the blue line, which considers only the climate impacts of CO2 and aerosols, the red line also includes the health co-benefits of the aerosol reductions that result from decarbonization. This leads to increased optimal CO2 mitigation. The difference between the red line and the blue line demonstrates the importance of the health benefits; roughly 45–60% more decarbonization is optimal over the next five decades (and 10–40% thereafter) compared to the reference case that only considers climate consequences. Full mitigation also occurs earlier in time. The additional emission reductions that result from the inclusion of the health co-benefits cumulatively amount to ~270 GtC (Fig. 1a).
Importantly, all of these results account for the damages from lost cooling attributable to the aerosol co-reductions. The additional decarbonization justified by the health gains leads to a peak temperature 0.4 °C lower than the reference case (Fig. 1b). The carbon price pathways associated with Fig. 1 can be found in Supplementary Fig. 2. Figure 2a shows that the optimal climate policy has immediate and continual monetized global net benefits when accounting for health co-benefits. This overturns the findings from standard cost-benefit optimization models, which ignore health co-benefits and thus imply that optimal climate policy has net costs for much of this century (Fig. 2b). The result is consistent with prior co-benefits studies that have analyzed specific emission reduction scenarios and reported high benefits relative to mitigation costs1,14, but which generally do not also account for (monetized) climate-related impacts. Costs and benefits of mitigation. Decomposition of the change in global consumption relative to the business-as-usual (BAU) scenario under a the full RICE + AIR optimal policy, and b the reference case optimal policy. Health co-benefits and benefits from avoided CO2 damages are positive, while mitigation costs and aerosol co-harms (climate damage from the co-reduction of cooling aerosols) are negative. The black solid line displays the global net effect. a shows that the net effect on global consumption is immediately positive when health co-benefits are taken into account, in contrast to the reference case (b), which is representative of standard cost-benefit models that do not include health co-benefits and thus imply that optimal climate policy has net costs for much of this century. If health co-benefits are added to the reference policy in b—by adding the light-red bars displayed in a—the global net effect becomes immediately positive, and if health co-benefits were removed from a, the net effect would be negative for most of this century The distribution of health co-benefits by region for the full RICE + AIR optimum is displayed in Fig. 3a. In line with recent scenario-based studies1,2,15, many of the co-benefits accrue in India and China in early periods, attributable to their large populations and high capacity for mitigation-related reductions in PM2.5. China's benefits decline by mid-century due to relatively rapid economic development and a stabilizing population—which both act to constrain emissions—whereas those in India persist and are the major driver of increased decarbonization relative to the reference case (see below for a sensitivity test which corroborates India's importance). Towards the end of the century, sub-Saharan Africa replaces China as the second-largest beneficiary, as air pollution remains problematic due to lagging economic development accompanied by the world's largest population. Other regions also stand to benefit, including less populous regions that show important benefits per capita and/or per Gross Domestic Product (GDP) (Fig. 3b). Health benefits of carbon mitigation. Life-years gained a overall and b per 100,000 population by region from the air quality improvements associated with the optimal decarbonization in RICE + AIR. c, d show the resulting monetized benefits in total, and as a percent of GDP, respectively. 
Note that if a region's PM2.5 exposure (concentration) drops below 5.8 µg/m3, health benefits no longer accrue—this threshold assumption is common in other global air quality assessments16, and is tested in the sensitivity analyses In the results presented above, the averted premature mortality from aerosol co-reductions produces annual monetized benefits in the hundreds of billions of dollars over the next few decades, rising to several trillion annually at the end of the century (Fig. 3c). To derive these numbers, we multiply, for each region-time pair, the total life-years gained by 2 years of per capita consumption. This approach produces different life-year monetizations for each region, which leads to the slight change in composition of monetized benefits (compare Fig. 3a, c). However, this does not imply that we assign life-years in poorer regions less value in the objective function, because the optimization accounts for the diminishing marginal utility of consumption through a concave relationship between wealth and well-being, as described below and in the Methods section (Eq. (1)). Below we also show the sensitivity of our results to alternative valuations of health benefits. Independent air quality control All results presented above assume the level of air quality control that occurs independently of climate policy will proceed approximately as projected in the coming decades, based on current and planned policies. We systematically test the importance of this assumption by implementing an environmental Kuznets-type approach where emission intensities decrease with increasing per capita income (described in detail in the Methods section, and in particular Eqs. (2)–(4) and associated discussion). Figure 4 reports results when we slow down and speed up this Kuznets-relevant income, allowing it to range between roughly 50% (χ = 0.5) and 150% (χ = 1.5) of the true (modeled) income; lower values imply less stringent air quality control and vice versa. In these model runs the true income is used in all other parts of the model. Impact of independent air quality control. a Optimal decarbonization and b associated global temperature rise given different levels of autonomous air quality control. The reference and RICE + AIR cases from Fig. 1, where χ = 1, are displayed as the blue and red solid lines, respectively The results demonstrate that although assumptions about air quality control do influence optimal mitigation levels, more decarbonization remains optimal compared to the reference case under all of these scenarios. However, the mechanisms driving the additional decarbonization vary. If the (pre-mitigation) air is dirtier than projected (χ < 1), extra mitigation occurs primarily to reap the health co-benefits. If stringent air quality control occurs in the future (χ > 1), more decarbonization is also optimal compared to the reference case. In part this is to capture remaining health co-benefits, but also for two other reasons. First, the potential damages associated with the substantial loss of the aerosol cooling effect requires additional CO2 reductions as a counterbalance. And second, when the air is cleaner, each marginal reduction in CO2 has a greater net benefit because it is not coupled to as much cooling aerosol. Another way of understanding the role of independent air quality measures on optimal climate policy is by comparing not only the different RICE + AIR cases, as in Fig. 
4, but also the difference between the RICE + AIR cases and their corresponding reference cases, as reported in Table 1. The results confirm that as autonomous emission controls get stronger (represented by higher χ values), the less the health co-benefits matter for climate policy.

Table 1 Impact of independent air quality control

One of the scenarios displayed in Fig. 4 is the fully co-optimal case that selects the ideal combination of both air quality and climate policies by introducing an additional policy lever that can act directly on individual air pollutant emissions through end-of-pipe measures, rather than via CO2 (described in Supplementary Note 1). When this lever is added, large reductions in air pollutant emissions occur when and where associated abatement costs are low, the reductions lead to relatively large decreases in exposure, and/or they produce few climate damages. This first-best policy where climate and air pollution policies are co-optimized leads to rapid air quality improvements but still increased decarbonization relative to the reference case, albeit by a lower margin than in the standard χ = 1 case that underlies our main results. Note that we selected χ = 1 as the basis for the main result in order to explore the effect health co-benefits have when regions act consistently with their current and planned air pollution policies (which are suboptimal). As this section demonstrates, the magnitude of the co-benefit effect is importantly influenced by the assumed level of independent air quality control.

The RICE model has a discounted utilitarian objective, meaning that for optimal policy calculations, the objective of the model is to maximize the sum of discounted well-being (see Eq. (1) of the Methods and associated discussion). The discount rate for consumption is determined via the Ramsey rule, which adds the rate of pure time preference to the product of the elasticity of marginal utility and the economic growth rate. The rate of pure time preference is the rate at which the weight given to future well-being declines over time. The elasticity of marginal utility represents the lesser importance of each additional dollar to well-being as one gets richer. Unless otherwise noted, we assume an elasticity of marginal utility of 1.5, the default in RICE.

In all the results presented so far we have used a rate of pure time preference of 1.5% per year, which is the default value in RICE, but close to the upper end of the range highlighted by the IPCC (Intergovernmental Panel on Climate Change)16,17. This choice puts more weight on near-term impacts compared to those occurring further in the future. As a result, aerosol impacts, which occur more immediately than the climate impacts from CO2, have a relatively outsized importance. Nevertheless, when we implement a much lower rate that corresponds to a near-zero (0.1%) preference for the present over the future, optimal decarbonization remains substantially higher in RICE + AIR compared to the analogous reference case (Fig. 5). (Near-zero time preference is often used in the climate economics literature17, including in the Stern Review18.)

Fig. 5: The influence of pure time preference. Optimal decarbonization with low (0.1%), medium (1.5%), and high (3.5%) rates of pure time preference for the reference case (that excludes health co-benefits) and in the full RICE + AIR case (that includes health co-benefits).
All runs have a consumption elasticity of marginal utility of 1.5.

Conversely, if we increase time preference to 3.5%, which implies a discount rate for consumption of 7%, the difference between RICE and RICE + AIR becomes even more stark. (To calculate the 7% discount rate via the Ramsey rule, we used the economic growth rate in the initial time period. We note that the exact discount rate may change over time due to the effect of climate damages on economic growth; the impact, however, is negligible.) We chose the 7% value because it is the highest of the primary discount rates generally used in US federal cost-benefit analyses. (Although most experts consider lower rates to be more appropriate in an intergenerational context such as in the case of climate change-related analyses16,17, the current US administration has signaled a preference for including the higher 7% rate19.) Such high time preference places so much emphasis on the near-term health benefits associated with reducing CO2 that optimal decarbonization is substantially higher than both the analogous reference case and the reference case with more moderate (1.5%) time preference. These results indicate that even after strongly discounting the future, a robust climate policy is still warranted. In Supplementary Fig. 3 we present further analyses showing results with a 3% discount rate, which is the lower value generally used for US federal cost-benefit analyses.

Monetizing and valuing health benefits

The results above assume that one life-year gained equals 2 years of per capita consumption in dollar terms. Two years of per capita consumption corresponds to the approach to monetizing health impacts used in key studies that form the basis of the RICE climate damage function12,20, and is similar to survey-based life-year assessments21. However, this approach yields much lower monetary benefits than using the value of a statistical life (VSL), which is also widely adopted in the literature. If we assume that each adult death attributable to PM2.5 results in approximately 10 years of life lost22, we would use roughly 8–16 years of per capita consumption per life-year gained, instead of two; thus, in what follows we use 8–16 years of per capita consumption as one possible approximation of a VSL, in addition to a more direct VSL-based approach described below.

The sensitivity of our results to alternative life-year monetizations is reported in Table 2. The monetized global health benefits in our main results discussed above would be roughly 4–11 times higher if we used VSL-like monetizations in the optimization, and the associated optimal level of decarbonization would likely be consistent with keeping the maximum global temperature rise to 2 °C. Two degrees is a target specified in the Paris Agreement and widely considered as necessary to avoid dangerous climate change, but one that has generally not been warranted according to previous cost-benefit assessments (using similar discounting parameters) that omit health co-benefits12. Combining the low (0.1%) rate of time preference with a VSL-based approach may justify a target as low as 1.8 °C.

Table 2 Monetizing life-years/lives

Table 2 reports the sensitivity of the results to differences in how life-years are monetized in dollar terms. In RICE's cost-benefit approach, an important second step occurs when the monetized health benefits are valued in well-being terms via the objective function (see Eq.
(1) in the Methods section), which gives the model the aim of maximizing the (discounted) sum of global well-being through time, as is standard in optimal policy modeling. A key feature of the objective function is diminishing marginal utility of consumption, which captures the core concept that an additional dollar generates more well-being when given to a poorer person than to a richer person. The result is that while life-years are assigned a lower dollar amount in poorer countries in absolute terms, they are actually assigned greater value in well-being/utility terms. We discuss this issue in more detail in Supplementary Note 2 and show in Supplementary Table 1 that the strong effect of adding health co-benefits persists whether life years are valued more highly in poorer regions, less highly, or exactly the same as in wealthier regions; however, the magnitude of the effect changes. Additional sensitivities in RICE + AIR Table 3 reports results for several other sensitivities, presenting the percent increase in optimal decarbonization rates in the RICE + AIR case compared to the corresponding reference case. The table is organized as follows, with variables in parentheses referring to the relevant parameter in the model equations reported in the Methods. Table 3 Additional sensitivity analyses First (Test 1), we explore different air pollutant co-reduction levels (κ), with the range representing the high and low values after applying alternative estimates from the other four Shared Socioeconomic Pathways (SSPs). Note that the SSPs are not ordered in terms of their co-reduction potentials, but instead reflect different possible futures across multiple socioeconomic dimensions. In the second test (Test 2), we substitute the TM5-FAst Scenario Screening Tool (TM5-FASST) source-receptor matrix (SRM) for the SRM based on simulations of the European Monitoring and Evaluation Program atmospheric chemistry and transport model. We confined this sensitivity to Asia because it was the only region analyzed as part of a recent project at the International Institute of Applied Systems Analysis. However, the four Asian regions account for ~85% of all life-years gained globally over the next century in the main analyses (Fig. 3), and therefore largely drive the findings. In Test 3 and Test 4 we assume that the relative risk for all-cause mortality (β) was the lower and upper bound, respectively, of the confidence interval in Forestiere et al.24 rather than the central estimate. Test 5 lowers the PM2.5 threshold below which there are no adverse health effects (τ) to 1 μg/m3 instead of 5.8 μg/m3. In Test 6 we assume there is no benefit from reducing air pollution at levels above 50 μg/m3. We include this counterfactual boundary case for two reasons. The first is to investigate the maximum possible concavity in risk functions at high levels of exposure; in reality however, recent empirical studies25,26 indicate that the marginal effect of air pollution remains positive at levels well above 50 μg/m3. The second reason is to provide additional emphasis on the importance of India and China, as this test effectively eliminates the impact of both of those nations (as well as the Middle East/North Africa region) over the next several decades and thus explores the model's sensitivity to their exclusion. 
Test 7 assumes that climate damages are twice the standard values in RICE, while the final test (Test 8) uses the Finite Amplitude Impulse Response (FAIR) climate model (version 1.0)27 as an alternative to RICE's native climate model (Supplementary Note 3 describes how we integrate the FAIR model). Exploring model uncertainty by linking AIR to the FUND model Like RICE/DICE (the Dynamic Integrated Climate Economy model (DICE) is RICE's global counterpart), the FUND model is another one of the three leading climate economy models used by the US Interagency Working Group to estimate the social cost of carbon28. FUND has different world regions, a different economic framework, a different climate model, and a different specification of damages when compared to RICE, and thus provides an important opportunity to explore model uncertainty. Comparing Fig. 1 with Fig. 6 demonstrates that results for FUND + AIR and RICE + AIR are qualitatively similar, despite the well-known differences in the structure and policy recommendations of the two models28,29; adding aerosol impacts leads to dramatically increased optimal levels of decarbonization. (Supplementary Note 4 describes how we link FUND to AIR.) FUND + AIR results. Optimal decarbonization rates over time for the FUND reference case (that excludes health benefits) and the FUND + AIR case (that includes health benefits) We have developed a new modeling framework for analyzing the costs and benefits of the co-reductions in air pollutant emissions that result from CO2 policy. We find that these impacts have a critical role in determining optimal decarbonization rates, as the potential health co-benefits that result from improved air quality are large, occur quickly enough to be economically important, outweigh the near-term co-harms from lost cooling, and are concentrated in developing regions. This remains true even with high discount rates and relatively conservative valuations of improved health. However, as the size of the health co-benefits will be partially determined by future air quality policies, decision makers should jointly plan both types of interventions. Depending on how society values health, it may be economically optimal to limit temperature rise to 2 °C or lower, thus corroborating the climate targets from the Paris Agreement. Overall, optimal mitigation results in immediate net benefits globally. Our findings should be interpreted in light of several factors. First, our sensitivity analyses identified key variables that meaningfully affect the optimal level of decarbonization in RICE + AIR. These include the shape and magnitude of the exposure-response functions that relate PM2.5 exposure to mortality, the assumed relationship between CO2 emissions and air pollutant emissions, the discount rate, and the valuation of the health benefits. The first two factors are largely amenable to empirical inquiry, whereas the latter two depend partially on ethical judgements; all have a strong bearing on how much society should mitigate, and when. Second, we follow standard convention in assuming a uniform global carbon price in our optimal policy calculations. Economists prefer this assumption because a uniform price minimizes the cost to the global economy of any particular level of emissions reductions, and is thus a necessary part of the first-best approach to climate policy. 
However, another necessary part of first-best policy is a non-climate equity policy involving redistribution to address the large economic inequalities that exist throughout the world; in the absence of such an equity-focused policy, a uniform global carbon price would ignore equity considerations, and arguably impose an unjustifiably heavy burden on developing countries. An important feature of our results relevant to this issue is that health co-benefits provide the most incentive for additional decarbonization in lower- and middle-income regions. An interesting extension of our research would explore how co-benefits affect mitigation policies in a range of different burden-sharing regimes and in second-best climate policy calculations30,31. Third, we follow common practice in the co-benefits literature in assuming that there is a background appetite for air pollution policy that is independent of climate policy, such that the same background levels of investment will be made in air quality regardless of CO2 reductions (and so policies will ratchet up insofar as greater CO2 reductions make the antecedent level of air quality mitigation less expensive); this is the basic assumption behind the co-reductions we estimate in our main results. As reported above in Fig. 4, we also co-optimize air quality and climate policies, which is another scenario (with different properties) in which there would be co-benefits from CO2 reductions. A limitation is that other scenarios are also possible, such as CO2 reductions in a context involving a fixed cap and trade regime for air pollutants where the cap remains fixed at the same level in conjunction with new CO2 reductions; in this scenario, it is possible that there would not be many co-benefits from CO2 reductions32. Fourth, we were not able to fully assess uncertainties around key aerosol processes. For example, the aerosol indirect effect may be the single largest source of uncertainty in radiative forcing assessments10,33. New evidence continues to accrue about key factors that contribute to the formation of new particles from precursor gases, and there are likely to be complex feedback mechanisms that occur in response to future temperature changes34,35. Some of these processes are not yet adequately represented in even the most sophisticated Earth system models, let alone reduced-form versions. A related concern is that we have used separate models to estimate the health effects (TM5-FASST) and climate effects (Model for the Assessment of Greenhouse gas Induced Climate Change 6 (MAGICC6)) of air pollutant emissions, a decoupling that may also introduce uncertainty. Nevertheless, MAGICC and TM5 have distinct strengths that we harness accordingly; MAGICC takes account of the full aerosol load of the whole atmosphere, top to bottom, and at the level of hemispheres, while TM5 tells us about the concentration at ground level, where people live and breathe, in principle at much higher resolution. Both models work with the same global emission inventories. Fifth, we have only explored the effect of co-reducing air pollutant emissions on PM2.5-related mortality. Other co-benefits may occur, which would likely push further in the direction of increased decarbonization; these include morbidity impacts from PM2.5—which are generally minor compared to those from mortality—as well as other more indirect impacts such as potential increases in crop yields, improvements in visibility, and health impacts from changing exposure to tropospheric ozone2,9,35,36. 
In addition, cost-benefit models also miss some other impacts of reduced climate change, including the effects of methane on ozone, the effects of ocean acidification, and others37. Due to the uncertainties inherent in our modeling framework, we expect the accuracy of quantitative estimates of the co-harms and co-benefits of optimal climate policy to sharpen over time, particularly as our understanding of atmospheric science progresses. Nevertheless, the novel modeling approach described here offers important new insights into how much we should mitigate and over what time period, and the sensitivity tests above indicate that we should not expect the qualitative story told by our results to change in light of improved empirical estimates. Our methods also enable investigation of other key questions beyond the scope of this study, including how the inclusion of health co-benefits influences optimal climate policy under different burden-sharing regimes and different worldviews about how much to prioritize the poor, future people, and citizens of other countries. The RICE model The RICE model was first developed in 1996 to analyze the tradeoffs between investing in climate mitigation, which incurs a cost relatively soon, and climate damages, which incur costs in the more distant future38,39. RICE is the regionalized counterpart to the DICE model, which is one of three leading cost-benefit climate economy models used by researchers and governments for regulatory analysis, including to estimate the social cost of carbon28. Here we describe the key aspects of the standard RICE2010 model; for a more extensive description of this open-access model, see ref. 12. (Also see ref. 39 for a more extensively documented, but earlier version of RICE.) Briefly, RICE is a regionalized global optimization model that includes an economic component and a geophysical component that are linked. RICE divides the world into 12 regions, some of which are single countries, while others are groups of countries. Each region has a distinct endowment of economic inputs, including capital, labor, and technology, which together produce that region's gross output via a Cobb–Douglas production function. Pre-mitigation carbon emissions are a function of gross output and an exogenously determined, region-specific carbon intensity pathway. These carbon emissions can be reduced (mitigated) at a cost to gross output through control policies, set to equalize the marginal abatement cost in all regions. Local mitigation cost is borne by each region, and there are no inter-regional transfers. Any remaining (post-mitigation) carbon emissions are incorporated into the climate module where they influence global temperature and, ultimately, the future economy through climate-related damages. Future climate change affects regions differently, with poorer regions generally more vulnerable to climate damages. Damage estimates increase quadratically with a change in the global surface temperature and, like mitigation costs, are incurred directly as the loss of a proportion of gross output. Gross output minus the loss of mitigation costs and climate damages is what we refer to hereafter as GDP. The model's optimization balances mitigation costs, which lower consumption at the time of mitigation, against climate damages which lower consumption in the future. (Regional consumption is the fraction of GDP that is not saved; mitigation cost and climate damage affect consumption only via their effect on GDP). 
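For readers who want the accounting laid out concretely, the following is a minimal single-region, single-period sketch of the structure described above. The functional forms follow the verbal description (Cobb-Douglas gross output; abatement cost and quadratic climate damages subtracted as shares of gross output; consumption as unsaved GDP), but the function names and all parameter values are placeholders for illustration, not the calibrated RICE2010 values.

```python
def region_period(A, K, L, gamma, mu, theta1, theta2, T, a2, s):
    """Single-region, single-period sketch of the accounting described above.

    A, K, L : total factor productivity, capital stock, labor/population
    gamma   : capital share in the Cobb-Douglas production function
    mu      : decarbonization fraction (control rate), between 0 and 1
    theta1, theta2 : abatement-cost parameters (cost share = theta1 * mu**theta2)
    T       : global mean temperature rise above preindustrial (deg C)
    a2      : damage coefficient (damage share = a2 * T**2)
    s       : savings rate
    All numbers used below are placeholders, not RICE2010 calibration.
    """
    gross_output = A * K**gamma * L**(1 - gamma)                 # gross output before losses
    abatement_share = theta1 * mu**theta2                        # mitigation cost as share of gross output
    damage_share = a2 * T**2                                     # quadratic climate damages
    gdp = gross_output * (1 - abatement_share - damage_share)    # "GDP" as defined in the text
    consumption_per_capita = (1 - s) * gdp / L                   # unsaved GDP, per person
    return gdp, consumption_per_capita

# Illustrative placeholder inputs for one region and one decade:
print(region_period(A=5.0, K=200.0, L=1.0, gamma=0.3, mu=0.25,
                    theta1=0.05, theta2=2.8, T=1.5, a2=0.0025, s=0.25))
```

In the full model this accounting is of course regional and dynamic, with capital accumulating out of savings and carbon intensity linking output to emissions, but the flow from gross output to consumption is the quantity on which the health co-benefits and aerosol co-harms ultimately act.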
The optimal tradeoff maximizes the sum of discounted well-being, W, which is a concave function of consumption as follows: $$W(c_{it}) = \mathop {\sum}\limits_{it} {\frac{{L_{it}}}{{(1 + \rho )^t}}} \frac{{c_{it}^{(1 - \eta )}}}{{1 - \eta }},$$ where L is population, c per capita consumption, ρ the rate of pure time preference, and η the consumption elasticity of marginal utility (inequality aversion). The subscripts i and t are the region and time indices, respectively. The model is solved by maximizing this global objective function. As a result, any factor affecting consumption, such as health impacts or climate damages, can be included in the model's optimization framework. Unless otherwise specified, we maintain RICE's default parameter values for time preference and inequality aversion of 1.5% and 1.5, respectively. For this study, all simulations were run using the Mimi model development package in the Julia programming language. This Mimi/Julia version of RICE is fully faithful to the standard Excel version, but has more flexibility40. We make four changes to this standard version of RICE, in addition to those directly related to the AIR module, which are described below. First, we update the population projections to those of the UN2017 medium variant41. This is a newer source of projections and also enables us to use internally consistent projections of deaths and life-years, as described further below. The impact of changing population has been comprehensively explored elsewhere42,43,44. Second, we update the exogenous radiative forcing terms to the values used in RCP6.045, which is in line with the latest versions of DICE46,47, which is the global (single-region) variant of RICE. These estimates are newer than those found in RICE2010 and are available in disaggregated form, thus allowing us to remove and endogenize the individual aerosol term for use in the AIR module, while maintaining the other non-CO2 forcings (also described in more detail below). Third, we allow CO2 concentrations and the global temperature to be endogenous in the second time-step (2015–2025)—it is fixed in standard RICE2010—as the now-endogenized aerosols will produce effects immediately after mitigation. And fourth, we use a modified objective function that avoids Negishi weights, which distort time preferences48 and the inter-regional tradeoff in ways that are opaque and difficult to justify descriptively and normatively49. We have explained this latter change in more detail in a previous publication50 and also in a sensitivity presented in Supplementary Table 2. In the standard RICE2010 model, anthropogenic CO2 is the only endogenous climate forcer. All other sources of radiative forcing, including land-use change, non-CO2 gases, and aerosols are represented through a single exogenous forcing term that aggregates the individual trajectories of each source. This simplifying assumption is problematic when it comes to aerosols. Mitigation actions affecting CO2 have the potential to strongly influence emissions of the air pollutants that produce aerosols, as the two types of emissions share many sources4,5. Therefore, if carbon emissions are reduced, aerosols will tend to decrease simultaneously. A change in aerosols implies a change in radiative forcing as well as a change in ambient particulate air pollution5. Climate change and air pollution both affect well-being, and capturing the impact of these pathways was the motivation for developing the AIR module, which we now explain. 
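Before turning to the AIR module, here is a minimal numerical sketch of the objective in Eq. (1) together with the Ramsey-rule discount rate discussed earlier. The regions, consumption paths, population figures, and growth rate below are invented for illustration, and the handling of the 10-year time step is simplified.

```python
def welfare(consumption, population, rho=0.015, eta=1.5, years_per_step=10):
    """Discounted utilitarian objective in the spirit of Eq. (1).

    consumption[i][t] : per capita consumption of region i at time step t
    population[i][t]  : population of region i at time step t
    rho : rate of pure time preference (per year)
    eta : consumption elasticity of marginal utility (inequality aversion), eta != 1
    """
    W = 0.0
    for i in range(len(consumption)):
        for t in range(len(consumption[i])):
            discount = (1.0 + rho) ** (-t * years_per_step)
            utility = consumption[i][t] ** (1.0 - eta) / (1.0 - eta)
            W += population[i][t] * discount * utility
    return W

def ramsey_rate(rho, eta, growth):
    """Consumption discount rate from the Ramsey rule: rho + eta * growth."""
    return rho + eta * growth

# Two hypothetical regions, three decadal steps (consumption in $1000s per capita,
# population in millions); the numbers are purely illustrative.
c = [[10.0, 12.0, 14.0], [2.0, 3.0, 4.5]]
L = [[300.0, 310.0, 320.0], [1200.0, 1300.0, 1400.0]]
print(welfare(c, L))                                   # utility is negative for eta > 1, but increasing in c
print(ramsey_rate(rho=0.015, eta=1.5, growth=0.02))    # 0.045, i.e. a 4.5%/yr consumption discount rate
```

Because eta > 1 makes marginal utility fall quickly with income, a dollar of health benefit added to consumption in a poor region raises W by more than the same dollar in a rich region, which is the well-being valuation step described in the text.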
Overview of the AIR module In this section, we provide a general overview of how we developed the AIR module, with a technical description—including all equations—in the sections that follow. Broadly speaking, our approach consists of five steps. First, we estimate the baseline (before carbon mitigation) emissions of five air pollutant species (primary PM2.5, oxides of nitrogen, sulfur dioxide, organic carbon, and black carbon). Emissions are estimated for each region-time pair with income-dependent emission intensity projections (emissions per unit GDP) based on the Greenhouse gas and Air pollution Interactions and Synergies (GAINS) model and specifically the ECLIPSE emission scenarios51. Our central case assumes air pollutant emissions in the coming decades follow the ECLIPSEV5a baseline scenario, which includes current and planned air quality legislation but no climate policy. In sensitivity analyses we alter this assumption, allowing for faster or slower independent air quality cleanup, including a case where we simultaneously co-optimize both air quality and climate policy. This co-optimization introduces policy levers for end-of-pipe technologies that act on individual air pollutants. These levers require associated cost curves that are also drawn from the GAINS model. In the second step, we determine the change in air pollutant emissions that would result from a change in CO2 emissions using information from the SSP project52. This provides an estimate of co-reductions based on empirically realistic projections about the regionally differentiated interaction between future climate and air pollution policies, and is consistent with theoretical results from economics that show that co-benefit effects could be different in scenarios with different properties32. Emission information in the SSPs is estimated from bottom-up integrated assessment modeling that includes regionally differentiated, spatially explicit representations of energy production and structure and, like in RICE, assumes that CO2 policy occurs through a single global carbon price. Third, we link changes in air pollutant emissions to changes in estimated average human exposure to PM2.5 by applying the source-receptor matrix (SRM) from the TM5-FASST air quality model53. The TM5-FASST SRM was computed from simulations of the full TM5 chemical transport model for 56 source regions, which were aggregated to approximate the RICE regions53. Once exposure is estimated, it is possible to calculate the number of life-years gained attributable to (reduced) air pollution by combining an exposure--response function with projections of future mortality and life expectancy54, which we took from the UN World Population Prospects. We applied a (log)linear exposure-response function for mortality from all causes in adults based on a meta-analysis published in a recent World Health Organization report24. We selected this approach for consistency with the UN projections—which only estimate mortality from all causes—and because recent epidemiological analyses indicate that the strong effects of air pollution occur at exposure (concentration) levels up to and including those most relevant to our study and that they likely affect a wide range of outcomes25,26. Fourth, we allow the change in air pollutant emissions to influence the global temperature using aerosol forcing coefficients derived from the MAGICC climate model55. 
The coefficients incorporate both the direct and indirect effects represented in MAGICC, with the latter including those related to albedo and cloud responses. Aerosol forcing is added to the forcing from the other greenhouse gases in RICE's climate module to produce estimates of future climate change.

And fifth, for each region we monetize and then value the aerosol impacts. The average per capita health benefits are added to per capita consumption, whereas the climate effects—monetized using RICE's standard climate damage function—subtract from consumption12. Consumption is transformed into well-being by a concave function in the optimization via RICE's objective/social welfare function (Eq. (1)). Once the impacts have been valued in this way, they then enter the optimization. Figure 7 illustrates the different model components and their linkages. We now present a technical description of each of these steps.

Fig. 7: Diagram of the AIR module. Flow chart illustrating how the AIR module (rectangles) links with the RICE model (gray circle) to estimate emissions of air pollutants and their impacts. The model/method underlying each step in AIR is shown within parentheses, along with the relevant equations.

Estimating baseline emissions

Air pollutant emissions from natural sources and open burning remain exogenous in our framework, following the trajectory based on RCP6.045. All other anthropogenic air pollutant emissions are endogenous, as follows. The level of baseline (pre-mitigation) anthropogenic emissions of each aerosol precursor, $E^0$, is a function of an emission intensity factor (emissions per unit GDP), $e$, and the GDP, $Y$:

$$E_{itp}^0(Y_{it},L_{it},e_{itp}(Y_{it}/L_{it})) = e_{itp}(Y_{it}/L_{it})\cdot Y_{it}.$$

The emission intensity is region (i), time (t), and pollutant (p) specific and characteristically depends on the per capita income level of the region, defined above as the GDP, Y, divided by the population, L. GDP is endogenously determined in RICE while population is exogenous, taking the medium variant estimate of the 2017 version of the UN Population Prospects through 210041, and remaining constant thereafter. We estimated emission intensities (and emissions) for five air pollutants: sulfur dioxide (SO2), primary fine particulate matter (PM2.5), oxides of nitrogen (NOx), organic carbon (OC), and black carbon (BC). Emission intensities up to the year 2050 were derived from region-specific projections used in the GAINS model and specifically the ECLIPSEV5a scenario that reflects existing national and regional air pollution policies, but excludes decarbonization from climate mitigation51. We fit the following functional form to the scenario data to extrapolate beyond 2050:

$$e_{itp}(Y_{it}/L_{it},\chi) = \varphi_{1,ip}\cdot\left[\exp\left(-\Omega_{1,ip}\cdot I_{it}(Y_{it}/L_{it},\chi)\right) + \varphi_{2,ip}\cdot\exp\left(-\Omega_{2,ip}\cdot I_{it}(Y_{it}/L_{it},\chi)\right)\right],$$

where the Ωs and φs are fitting parameters and χ is the Kuznets-relevant income, as described below. Resulting emission intensities are displayed graphically in Supplementary Fig. 4. This functional form implies that emission intensities decrease with rising per capita income I, as observed in the projected emission scenario. This can be interpreted as a particular version of the environmental Kuznets curve, estimated with the ECLIPSE data.
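As a concrete illustration of Eqs. (2) and (3), the sketch below evaluates the fitted intensity curve and turns it into baseline emissions for one region-pollutant pair. The fitting parameters, units, and income levels are arbitrary placeholders, not the values actually estimated from the ECLIPSE scenarios.

```python
import math

def emission_intensity(I, phi1, phi2, omega1, omega2):
    """Eq. (3): emissions per unit GDP as a declining double exponential in per capita income I."""
    return phi1 * (math.exp(-omega1 * I) + phi2 * math.exp(-omega2 * I))

def baseline_emissions(gdp, population, phi1, phi2, omega1, omega2):
    """Eq. (2): pre-mitigation emissions E0 = e(Y/L) * Y."""
    income_per_capita = gdp / population
    return emission_intensity(income_per_capita, phi1, phi2, omega1, omega2) * gdp

# Placeholder parameters for a single region and pollutant (illustrative only):
for gdp in (500.0, 2000.0, 8000.0):                       # rising GDP, fixed population
    E0 = baseline_emissions(gdp, population=100.0,
                            phi1=0.04, phi2=0.5, omega1=0.10, omega2=0.01)
    print(gdp, round(E0, 1))                               # intensity falls as income rises
```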
We write our Kuznets-relevant income I(χ) as: $$I_{it}(Y_{it}/L_{it},\chi ) = \frac{{Y_{i(t = 2005)}}}{{L_{i(t = 2005)}}} + \chi \cdot \left( {\frac{{Y_{it}}}{{L_{it}}} - \frac{{Y_{i(t = 2005)}}}{{L_{i(t = 2005)}}}} \right).$$ For χ = 1, the Kuznets-relevant income equals the true (modeled) per capita income in the region-time pair, producing our default best-guess emission intensity factors. Changing I(χ) allows us to explore assumptions of more or less stringent autonomous air quality policies: we can speed up (χ > 1) or slow down (χ < 1) the decrease in emission intensities over time accordingly. Supplementary Figure 5 shows the baseline (pre-mitigation) level of emissions by region and time under the default of χ = 1, where the Kuznets-relevant income equals the true (modeled) income. Relationship of CO2 mitigation to air pollutant emissions The above method projects the level of air pollutant emissions assuming no CO2 mitigation. Reducing emissions of CO2 will typically also reduce the emissions of air pollutants, as they often stem from the same sources. In our framework, the percentage reduction in CO2 relative to the business-as-usual (without mitigation) scenario is called the decarbonization fraction (control rate), μit, and it is associated through parameter κip with a reduction in pollutant p relative to its baseline (pre-mitigation) level: $$\frac{{\Delta E_{itp}(\mu _{it})}}{{E_{itp}^0}} = \kappa _{itp}\frac{{\Delta E_{it,{\mathrm{CO}}_{\mathrm{2}}}(\mu _{it})}}{{E_{it,{\mathrm{CO}}_{\mathrm{2}}}^0}} = \kappa _{ip}\cdot \mu _{it}$$ $$\Delta E_{itp}(\mu _{it},Y_{it},L_{it}) = \kappa _{ip}\cdot \mu _{it}\cdot E_{itp}^0 = \kappa _{ip}\cdot \mu _{it}\cdot e_{itp}(Y_{it}/L_{it})\cdot Y_{it}.$$ The parameter κip describes the effectiveness of CO2 mitigation in co-reducing emissions of pollutant p. The κip parameter was estimated from the SSPs, which are a set of five storylines designed to analyze tradeoffs between climate change and socioeconomic factors52. Each of the five SSPs contains multiple sub-scenarios that differ only by their level of decarbonization; all socioeconomic factors remain constant. Therefore, each pairwise comparison of these sub-scenarios includes an implicit estimate of κitp at each time period for each region and pollutant. The emission information in the SSPs is estimated from bottom-up integrated assessment modeling that includes regionally differentiated, spatially explicit representations of energy production and structure, and like RICE, assumes that CO2 policy occurs through a single global carbon price. We fit a simple linear regression line through the multiple estimates of κ for each SSP, constrained to begin at the origin, and take the slope of that line as our estimate (Supplementary Fig. 6). We derive five estimates of κ (one for each SSP) for each region-pollutant pair, using the middle-of-the-road SSP2 as our standard case, with the alternative SSPs tested in sensitivity analyses. The total post-mitigation level of emissions is therefore: $$E_{itp}(\mu _{it}) = E_{itp}^0 - \Delta E_{itp} = E_{itp}^0\cdot (1 - \kappa _{ip}\cdot \mu _{it}).$$ We acknowledge that there are cases where the data derived from the SSP database appear to exhibit a non-linear rather than linear relationship between CO2 reduction and air pollutant reduction (Supplementary Fig. 6). Our goal was to find a relationship that on average, and in particular at full mitigation, provides a reasonable approximation to the implied air pollution reduction. 
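To make the co-reduction bookkeeping concrete, here is a small Python sketch, using assumed toy values, of the three quantities defined above: the Kuznets-relevant income I(χ), a through-the-origin slope estimate of κ from paired CO2/pollutant reduction fractions (mimicking the SSP-based regression), and the post-mitigation emission level E = E⁰(1 − κμ). None of the numbers are values from the paper.

```python
import numpy as np

def kuznets_income(y_t, y_2005, chi=1.0):
    """I(chi) = y_2005 + chi * (y_t - y_2005); chi > 1 speeds up the intensity decline."""
    return y_2005 + chi * (y_t - y_2005)

def kappa_through_origin(co2_fraction, pollutant_fraction):
    """Slope of a regression constrained through the origin: kappa = sum(x*y) / sum(x*x)."""
    x = np.asarray(co2_fraction)
    y = np.asarray(pollutant_fraction)
    return float(np.sum(x * y) / np.sum(x * x))

def post_mitigation_emissions(E0, kappa, mu):
    """E = E0 * (1 - kappa * mu): baseline emissions scaled down by the co-reduction."""
    return E0 * (1.0 - kappa * mu)

# Hypothetical paired reduction fractions from scenario comparisons (one region, one pollutant)
co2_red = [0.2, 0.4, 0.6, 0.8, 1.0]
so2_red = [0.15, 0.33, 0.47, 0.66, 0.81]
kappa = kappa_through_origin(co2_red, so2_red)

E0, mu = 120.0, 0.5                        # baseline emissions (kt) and CO2 control rate
print(kuznets_income(14.0, 6.0, chi=1.5))  # income used when assuming faster autonomous cleanup
print(kappa, post_mitigation_emissions(E0, kappa, mu))
```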
Having said this, we are aware that our assumption of linearity in the relation may affect our overall results: a higher co-reduction at low CO2 mitigation offers incentives for further reduction than in the linear case, and vice versa. Thus, the shape of the mitigation profile could be affected, though the effect is likely to be small if, as we assume, the carbon prices in the RICE regions are coupled to each other. From air pollutant emissions to health impacts The health co-benefits in RICE + AIR are calculated directly from the change in ambient population-weighted concentrations of PM2.5 attributable to CO2 mitigation. In the next paragraph, we describe how we estimate this change in concentration. Further below we describe how we keep track of the absolute level of pre- and post-mitigation PM2.5 concentrations, which are used in the health impact calculations only to ensure that no health benefits accrue at exposures below given threshold values. The change in ambient concentrations of PM2.5 attributable to CO2 mitigation is a function of the change in aerosol precursor emissions—in this case primary PM2.5, NOx, and SO2—as well as other factors such as meteorological conditions. To estimate this relationship, we extracted the SRM from the freely available TM5-FASST global atmospheric SRM53. For each pollutant, the SRM provides an estimate of the change in population-weighted PM2.5 concentrations (hereafter referred to as exposure) given a unit change in emissions. The SRM from TM5-FASST was computed for 56 source regions from simulations with the full TM5 chemical transport model53. Using an SRM is a practical alternative to running full atmospheric chemistry transport model simulations, which is infeasible in our optimization context. The TM5-FASST model is described in detail elsewhere, and has been used similarly in other projects3,53. The TM5-FASST interface allows the 56 source regions to be aggregated into larger regions that approximate the RICE regions. Due to the size of the RICE regions, we assume that the change in exposure to PM2.5, ΔC, in region i depends only on the change in emissions within that same region (i') and that the change is estimated via the SRM (SR) that encodes atmospheric transport. We also assume that the SRM does not change over time and that changes in the three precursor pollutants are additive: $$\begin{array}{c}\Delta C_{it}(\mu _{it},Y_{it},L_{it}) = \mathop {\sum}\limits_{i{\prime}p} {{\mathrm{SR}}_{ii{\prime}p}} \cdot \Delta E_{i{\prime}tp}(\mu _{i{\prime}t},Y_{i{\prime}t},L_{i{\prime}t})\\ = \mathop {\sum}\limits_p {{\mathrm{SR}}_{iip}} \cdot \Delta E_{itp}(\mu _{it},Y_{it},L_{it}).\end{array}$$ As mentioned above, we keep track of the absolute level of exposure, a variable that is used only to ensure that no health benefits accrue at exposures below a given threshold level (described further below). The absolute exposure levels in the pre-mitigation case (where μ = 0) are calculated by translating the change in emissions relative to 2005 into a change in exposure using the SRM, and then subtracting it from the 2005 exposure $$C_{it}^0(\mu _{it} = 0) = \max \left( {C_{i,t = 2005} - \mathop {\sum}\limits_{i{\prime}p} {{\mathrm{SR}}_{ii{\prime}p}} \cdot (E_{i{\prime},t = 2005,p} - E_{i{\prime}tp}^0),0} \right).$$ Here the max function ensures PM2.5 concentrations do not drop below zero. 
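The exposure calculation itself reduces to a few lines of arithmetic. Below is a hedged Python sketch of the bookkeeping for one receptor region, assuming hypothetical source-receptor coefficients and emission levels; the module proper uses the TM5-FASST coefficients aggregated to the RICE regions.

```python
# Hypothetical source-receptor coefficients for one receptor region:
# population-weighted PM2.5 change (ug/m3) per kt change of each precursor emitted in-region.
SR = {"PM25": 0.020, "SO2": 0.008, "NOx": 0.004}

def exposure_change(delta_E):
    """Delta C = sum over precursors of SR * Delta E, assuming within-region transport only."""
    return sum(SR[p] * delta_E[p] for p in SR)

def baseline_exposure(C_2005, E_2005, E0):
    """Pre-mitigation exposure, clipped at zero as in the max(...) term above."""
    drop = sum(SR[p] * (E_2005[p] - E0[p]) for p in SR)
    return max(C_2005 - drop, 0.0)

delta_E = {"PM25": 40.0, "SO2": 300.0, "NOx": 150.0}     # co-reduced emissions (kt)
E_2005  = {"PM25": 900.0, "SO2": 5000.0, "NOx": 3000.0}  # 2005 emissions (kt)
E0      = {"PM25": 700.0, "SO2": 3500.0, "NOx": 2800.0}  # baseline emissions at time t

dC = exposure_change(delta_E)           # exposure reduction attributable to CO2 mitigation
C0 = baseline_exposure(25.0, E_2005, E0)
print(f"Delta C = {dC:.2f} ug/m3, pre-mitigation exposure = {C0:.2f} ug/m3")
```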
Emissions and exposure in 2005 were taken from the EDGAR (Emission Database for Global Atmospheric Research) emission database56 and Brauer et al.57, respectively, and then aggregated into the RICE regions. Mitigation of CO2 (μ > 0) reduces exposure further: $$C_{it}(\mu _{it}) = \max \left( {C_{i,t = 2005} - \mathop {\sum}\limits_{i{\prime}p} {{\mathrm{SR}}_{ii{\prime}p}} \cdot (E_{i{\prime},t = 2005,p} - (E_{i{\prime}tp}^0 - \Delta E_{i{\prime}tp})),0} \right)$$ $$= \max \left( {C_{i,t = 2005} - \mathop {\sum}\limits_{i{\prime}p} {{\mathrm{SR}}_{ii{\prime}p}} \cdot (E_{i{\prime},t = 2005,p} - E_{i{\prime}tp}(\mu _{it})),0} \right)$$ We define the health co-benefit as the avoided premature mortality resulting from reductions in PM2.5 exposure attributable to CO2 mitigation (ΔC from Eq. (8)). This benefit can be quantified through the attributable fraction (AF) (54): $${\mathrm{AF}}_{it} = \frac{{{\mathrm{RR}}_{it} - 1}}{{{\mathrm{RR}}_{it}}},$$ where the relative risk, RR, for each region and each time period is a function of the change in exposure and a health impact function (β) that links a unit change in exposure to a change in the risk of adult (≥30) mortality from all causes: $${\mathrm{RR}}_{it} = \exp (\beta \cdot \Delta C_{it}(\mu _{it},Y_{it},L_{it})).$$ We assume a log-linear relationship between PM2.5 exposure and all-cause mortality with a relative risk of 1.066 (95% confidence interval (CI) = 1.040, 1.093) for each 10 μg/m3 change in exposure, based on a meta-analysis published by the World Health Organization24. However, we note that many recent assessments of ambient air pollution have used the cause-specific integrated exposure-response (IER) functions to estimate mortality impacts58. Here we focus on all-cause mortality for two reasons. First, we have recently shown that population size/growth strongly affects estimates of optimal climate policy, including for reasons unrelated to human health42,43. Therefore, we use the most recent long-term (to 2100) population projections provided by the UN Population Division, which does not publish corresponding estimates of cause-specific mortality. Second, important recent studies indicate that the IER functions may underestimate excess mortality25,26, and suggest that mortality risks at the exposure levels seen in our regional analyses may fall within the 95% CI of the WHO estimate presented above and tested in sensitivity analyses25. In all analyses, we assume that components of PM2.5 are equally toxic24, that health benefits accrue in the same 10-year time-steps as the improvement in air quality, and that population and mortality remains constant after 2100. We also confine the health impacts to mortality from PM2.5 exposure, though note that there is also concern about the health effects of exposure to smaller particles59. 
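Before the attributable fraction is applied to life-years in the next step, it may help to see how the relative risk and attributable fraction defined above are computed. The sketch below derives the log-linear slope β from the relative risk of 1.066 per 10 μg/m3 cited in the text and applies it to a hypothetical exposure reduction; only the 1.066 figure comes from the paper, the rest is illustrative.

```python
import math

RR_PER_10 = 1.066                      # relative risk per 10 ug/m3 from the WHO meta-analysis
beta = math.log(RR_PER_10) / 10.0      # log-linear slope per 1 ug/m3

def relative_risk(delta_C):
    """RR = exp(beta * Delta C) for the exposure reduction attributable to mitigation."""
    return math.exp(beta * delta_C)

def attributable_fraction(delta_C):
    """AF = (RR - 1) / RR."""
    rr = relative_risk(delta_C)
    return (rr - 1.0) / rr

delta_C = 4.0                          # hypothetical exposure reduction (ug/m3)
print(f"beta = {beta:.5f} per ug/m3, AF = {attributable_fraction(delta_C):.4f}")
```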
Multiplying the AF by the total number of life-years lost from all causes, Θ, yields the life years gained from the reduction in air pollution: $${\mathrm{LY}}_{it}(\mu _{it},Y_{it},L_{it}) = {\mathrm{AF}}_{it}\cdot \Theta _{it},$$ $$= \frac{{{\mathrm{RR}}_{it} - 1}}{{{\mathrm{RR}}_{it}}}\cdot \Theta _{it},$$ $$= (1 - \exp ( - \beta \cdot \Delta C_{it}(\mu _{it},Y_{it},L_{it})))\cdot \Theta _{it}.$$ Since β ⋅ ΔCit(μit, Yit, Lit) ≪ 1, we can write as an approximation: $${\mathrm{LY}}_{it}(\mu _{it},Y_{it},L_{it}) = \beta \cdot \Delta C_{it}(\mu _{it},Y_{it},L_{it})\cdot \Theta _{it}.$$ Theta (Θ) values are estimated from the UN data by multiplying the total deaths by the remaining life expectancy at the age of death. As UN life expectancy data is by exact age, reported at 5-year intervals, whereas mortality data is for 5-year age groups, remaining life expectancy for each 5-year age group was taken as the average of the group's bounding ages. For example, remaining life expectancy for all (averted) deaths in the 30–34 age group would be the average of the remaining life expectancy for a 30 year old and a 35 year old. Health benefits in a given region can accrue until absolute exposure to PM2.5 converges to some minimum level, representing either a point below which no additional health impacts occur (a threshold) or a theoretical minimum level where residual PM2.5 consists only of natural sources. We followed recent global studies (e.g., ref. 60) and chose a lower threshold/theoretical minimum of 5.8 μg/m3. However, we acknowledge that there is a growing consensus that there may not be a safe level of PM2.5 below which no adverse health effects occur59,61 and therefore ran sensitivities down to 1 μg/m3. If a reduction in a given time period brings a region's exposure below the threshold, the health co-benefit is calculated from the increment between the unmitigated level and the threshold: $$\begin{array}{c}{\mathrm{LY}}_{it}^ \ast (\mu _{it},Y_{it},L_{it}) = \beta \cdot \Theta _i\cdot\max \left[ {\min \left( {\Delta C_{it}(\mu _{it},Y_{it},L_{it}),} \right.} \right.\\ \left. {\left. {(C_{it}^0(\mu _{it},Y_{it},L_{it}) - \tau )} \right),0} \right],\end{array}$$ where τ is the value of the threshold level. Radiative forcing and temperature effects from aerosols Some air pollutants are climate forcers: BC is a warming agent while SO2, NOx, and OC act to cool the atmosphere10. The net global forcing in each time period attributable to aerosols is taken as the sum of the individual contributions: $${\mathrm{RF}}_t^{{\mathrm{aer}}} = \mathop {\sum}\limits_p {\mathop {\sum}\limits_i {r_{ipt}} } \cdot {\mathrm{E}}_{itp}(\mu _{it},Y_{it},L_{it}),$$ where ript is a region-specific coefficient that relates the regional change in emissions of pollutant p to the change in average global forcing. We used the MAGICC6 climate model55 to derive the coefficients by determining the impact on forcing from a pulse change in emissions in each time period in each region: $$r_{ipt} = \left[ {\frac{{\partial {\mathrm{RF}}_t^{{\mathrm{aer}}}}}{{\partial {\mathrm{E}}_{itp}}}} \right]_{{\mathrm{MAGICC6}}}.$$ For this we ran MAGICC6 with a pre-defined representative concentration pathway (RCP) scenario and then again after having reduced the emissions of one pollutant in one region by a marginal amount. We repeated this procedure for each region, pollutant, time-step, and, finally, each RCP. We thus derived a reduced-form surface-response representation of aerosol forcing in MAGICC6. 
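The pulse-perturbation procedure amounts to a finite-difference estimate of the partial derivative in the equation above. A minimal sketch, assuming two hypothetical MAGICC-style forcing outputs (`rf_base` and `rf_perturbed`) and a made-up pulse size, is:

```python
def forcing_coefficient(rf_base, rf_perturbed, pulse_size):
    """Finite-difference estimate of r = dRF/dE from a baseline run and a run in which
    emissions of one pollutant in one region were changed by pulse_size (kt)."""
    return (rf_perturbed - rf_base) / pulse_size

def aerosol_forcing(coefficients, emissions):
    """RF_aer = sum over regions and pollutants of r * E (W/m2)."""
    return sum(coefficients[key] * emissions[key] for key in coefficients)

# Hypothetical values: global aerosol forcing with and without a 100 kt SO2 pulse in one region.
r_so2 = forcing_coefficient(rf_base=-0.90, rf_perturbed=-0.91, pulse_size=100.0)

coefficients = {("region_1", "SO2"): r_so2, ("region_1", "BC"): 6.0e-4}
emissions    = {("region_1", "SO2"): 3500.0, ("region_1", "BC"): 250.0}
print(r_so2, aerosol_forcing(coefficients, emissions))
```

The signs in the toy numbers follow the convention in the text: SO2 contributes a negative (cooling) term and BC a positive (warming) term.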
We separated the effects of the different pollutants to allow them to be controlled independently, as they are in reality (also see Supplementary Note 1). In this experiment, we observed a time dependence that accounts for changes in atmospheric dynamics as emissions accumulate: $$r_{ipt} = u_{1,ip}\cdot t + u_{0,ip}.$$ The resulting coefficients incorporate both the direct and indirect forcing effects represented in MAGICC6, with the latter including those related to albedo and cloud responses. The time- dependence is independent of the initial conditions. We noted that r depends on the magnitude of the pulse change, but the effect is relatively small. Supplementary Table 3 reports values of ript and Supplementary Fig. 7 shows the radiative forcing over time for three scenarios. Here the reader will note that we have used two separate models for estimating the health effects (TM5-FASST) and climate effects (MAGICC6) of air pollutant emissions, a decoupling that may introduce uncertainty. Nevertheless, MAGICC and TM5 have distinct strengths that we harness accordingly; MAGICC takes account of the full aerosol load of the whole atmosphere, top to bottom, and at the level of hemispheres, while TM5 tells us about the concentration at ground level, where people live and breathe, in principle at much higher resolution. Both models work with the same global emission inventories. Aerosol forcing affects global mean atmospheric temperature just as CO2 forcing does, so that the added atmospheric temperature flow is equal to: $$\Delta T_t^{{\mathrm{atm}}} = \xi ({\mathrm{RF}}_t^{{\mathrm{CO}}_2} + {\mathrm{RF}}_t^{{\mathrm{aer}}}),$$ where \({\mathrm{RF}}_t^{{\mathrm{CO}}_2}\) is CO2 forcing and ξ is the decadal speed of adjustment for atmospheric temperature (equal to 0.208). Aerosol feedbacks on the economy As described, aerosol impacts occur from a change in radiative forcing and from a change in air quality. Changes in radiative forcing are transferred to RICE's climate module where they influence the global average surface temperature, which is the basis of the monetized climate damage estimates39. Changes in air quality are monetized as the health co-benefit, B, by multiplying the number of life years gained by the value of a life-year (VOLY). We follow the same approach taken in early versions of RICE's climate damage function where a VOLY is assumed to equal 2 years of regional per capita consumption, c39: $$B_{it}(\mu _{it},c_{it}) = {\mathrm{VOLY}}_{it}(c_{it}^{{\mathrm{pre - health}}})\cdot {\mathrm{LY}}_{it}^ \ast (\mu _{it},Y_{it},L_{it}).$$ A VOLY of 2 years of per capita consumption is generally the same order of magnitude as empirical estimates based on willingness-to-pay surveys21, but we also test several alternative values in sensitivity analyses. Final (post-health) per capita consumption, cit is calculated as: $$c_{it} = c_{it}^{{\mathrm{pre - health}}} + B_{it}/L_{it}.$$ With monetized aerosol impacts now included in the economic framework, RICE can follow its normal optimization procedure to find the decarbonization pathway that maximizes the objective (Eq. (1)). Additional information on select sensitivity analyses The Supplementary Information contains additional information on the co-optimization of air quality and climate policies (Supplementary Note 1 and Supplementary Fig. 8), life-year monetization and valuation (Supplementary Note 2), integrating the FAIR climate module into RICE + AIR (Supplementary Note 3), and the development of FUND + AIR (Supplementary Note 4). 
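As a closing illustration of the valuation step described above, the sketch below combines the thresholded life-year calculation with the monetization and consumption update. Every numeric input is a placeholder rather than a value from the paper; only the functional forms (the max/min clipping, the VOLY of two years of per capita consumption, and c = c_pre-health + B/L) follow the equations in the text.

```python
def life_years_gained(beta, theta, delta_C, C0, tau=5.8):
    """LY* = beta * Theta * max(min(Delta C, C0 - tau), 0); benefits stop at the threshold tau."""
    return beta * theta * max(min(delta_C, C0 - tau), 0.0)

def health_benefit(ly_star, c_pre_health):
    """B = VOLY * LY*, with VOLY assumed equal to 2 years of per capita consumption."""
    return 2.0 * c_pre_health * ly_star

def post_health_consumption(c_pre_health, ly_star, population):
    """c = c_pre_health + B / L."""
    return c_pre_health + health_benefit(ly_star, c_pre_health) / population

# Hypothetical region-decade: beta from RR = 1.066 per 10 ug/m3, 2e8 life-years lost overall,
# a 4 ug/m3 exposure reduction against an 18 ug/m3 unmitigated level, and 3e8 people.
ly = life_years_gained(beta=0.0064, theta=2.0e8, delta_C=4.0, C0=18.0)
print(ly, post_health_consumption(c_pre_health=8000.0, ly_star=ly, population=3.0e8))
```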
The authors declare that all data supporting the findings of this study are available within the article and its Supplementary Information files. The AIR model is licensed under the open source MIT license. The code is freely available at https://github.com/Environment-Research/AIR. Markandya, A. et al. Health co-benefits from air pollution and mitigation costs of the paris agreement: a modelling study. Lancet Planet. Health 2, e126–e133 (2018). Shindell, D., Faluvegi, G., Seltzer, K. & Shindell, C. Quantified, localized health benefits of accelerated carbon dioxide emissions reductions. Nat. Clim. Change 8, 291 (2018). Rao, S. et al. A multi-model assessment of the co-benefits of climate mitigation for global air quality. Environ. Res. Lett. 11, 124013 (2016). West, J. J. et al. Co-benefits of mitigating global greenhouse gas emissions for future air quality and human health. Nat. Clim. Change 3, 885–889 (2013). Shindell, D. et al. Simultaneously mitigating near-term climate change and improving human health and food security. Science 335, 183–189 (2012). Shindell, D. et al. Climate, health, agricultural and economic impacts of tighter vehicle-emission standards. Nat. Clim. Change 1, 59 (2011). Ikefuji, M., Magnus, J. R. & Sakamoto, H. The effect of health benefits on climate change mitigation policies. Clim. Change 126, 229–243 (2014). Bollen, J., van der Zwaan, B., Brink, C. & Eerens, H. Local air pollution and global climate change: a combined cost-benefit analysis. Resour. Energy Econ. 31, 161–181 (2009). Shindell, D. The social cost of atmospheric release. Clim. Change 130, 313–326 (2015). Myhre, G. et al. in Climate change 2013: The Physical Science Basis (eds Stocker, T. et al.) Ch. 8 (Cambridge University Press, Cambridge and New York, 2013). Clarke, L. et al. in Climate Change 2014. Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds Edenhofer, O. et al.) (Cambridge University Press, Cambridge, 2014). Nordhaus, W. D. Economic aspects of global warming in a post-copenhagen environment. Proc. Natl Acad. Sci. USA 107, 11721–11726 (2010). Waldhoff, S. T., Anthoff, D., Rose, S. & Tol, R. S. The marginal damage costs of different greenhouse gases: an application of fund. Economics 8, 1–33 (2014). Saari, R. K., Selin, N. E., Rausch, S. & Thompson, T. M. A self-consistent method to assess air quality co-benefits from us climate policies. J. Air Waste Manag. Assoc. 65, 74–89 (2015). Li, M. et al. Air quality co-benefits of carbon pricing in china. Nat. Clim. Change 8, 398–403 (2018). Kolstad, C. et al. in Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds Edenhofer, O. et al.) (Cambridge University Press, Cambridge, 2014). National Academies of Sciences, Engineering, and Medicine. Valuing Climate Damages: Updating Estimation of the Social Cost of Carbon Dioxide (National Academies Press, Washington, 2017). Stern, N. H. The Economics of Climate Change: The Stern Review (Cambridge University Press, Cambridge, 2007). Executive Office of the President. Executive Order 13783: Promoting Energy Independence and Economic Growth (2017). Tol, R. S. The economic effects of climate change. J. Econ. Perspect. 23, 29–51 (2009). Desaigues, B. et al. Economic valuation of air pollution mortality: a 9-country contingent valuation survey of value of a life year (voly). Ecol. Indic. 
11, 902–910 (2011). Committee on the Medical Effects of Air Pollutants. The mortality effects of long-term exposure to particulate air pollution in the United Kingdom. Health Protection Agency Report (Health Protection Agency, London, 2010). Robinson, L. A., Hammitt, J. K. & Okeeffe, L. Valuing Mortality Risk Reductions in Global Benefit-cost Analysis. Working Paper No. 7: Guidelines for Benefit-Cost Analysis Project (2018). Forestiere, F., Kan, H. & Cohen, A. Background Paper 4: Updated Exposure-Response Functions Available for Estimating Mortality Impacts (World Health Organization Europe, Copenhagen, 2014). Yin, P. et al. Long-term fine particulate matter exposure and nonaccidental and cause-specific mortality in a large national cohort of chinese men. Environ. Health Perspect. 125, 117002 (2017). Burnett, R. et al. Global estimates of mortality associated with long-term exposure to outdoor fine particulate matter. Proc. Natl Acad. Sci. USA 115, 9592–9597 (2018). Millar, R. J., Nicholls, Z. R., Friedlingstein, P. & Allen, M. R. A modified impulse-response representation of the global near-surface air temperature and atmospheric concentration response to carbon dioxide emissions. Atmos. Chem. Phys. 17, 7213–7228 (2017). Interagency Working Group on Social Cost of Greenhouse Gases. Technical Update of the Social Cost of Carbon for Regulatory Impact Analysis Under Executive Order 12866 (United States Government, 2016). Adler, M. Priority for the worse-off and the social cost of carbon. Nat. Clim. Change 7, 433–449 (2017). Chichilnisky, G. & Heal, G. Who should abate carbon emissions?: An international viewpoint. Econ. Lett. 44, 443–449 (1994). Anthoff, D. Working Paper: Optimal Global Dynamic Carbon Abatement (University of California, Berkeley, 2011). Fullerton, D. & Karney, D. H. Multiple pollutants, co-benefits, and suboptimal environmental policies. J. Environ. Econ. Manag. 87, 52–71 (2018). Boucher, O. et al. in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds Edenhofer, O. et al.) (Cambridge University Press, Cambridge, 2013). Fuzzi, S. et al. Particulate matter, air quality and climate: lessons learned and future needs. Atmos. Chem. Phys. 15, 8217–8299 (2015). Silva, R. A. et al. The effect of future ambient air pollution on human premature mortality to 2100 using output from the accmip model ensemble. Atmos. Chem. Phys. 16, 9847–9862 (2016). Chang, K. M. et al. Ancillary health effects of climate mitigation scenarios as drivers of policy uptake: a review of air quality, transportation and diet co-benefits modeling studies. Environ. Res. Lett. 12, 113001 (2017). Scovronick, N. et al. Human health and the social cost of carbon: a primer and call to action. Epidemiology (2019). In press. Nordhaus, W. D. & Yang, Z. A regional dynamic general-equilibrium model of alternative climate-change strategies. Am. Econ. Rev. 86, 741–765 (1996). Nordhaus, W. D. & Boyer, J. Warming The World: Economic Models of Global Warming (MIT Press, Cambridge, 2000). Anthoff, D., Errickson, F. & Rennels, L. Mimi-rice-2010.jl v1.1.0. United Nations Population Division. World Population Prospects: 2017 Revision (United Nations Department of Economic and Social Affairs, 2017). Scovronick, N. et al. Impact of population growth and population ethics on climate change mitigation policy. Proc. Natl Acad. Sci. USA 114, 12338–12343 (2017). Budolfson, M. et al. 
Optimal climate policy and the future of world economic development. World Bank Econ. Rev. 33, 21–40 (2018). Gillingham, K. et al. Modeling uncertainty in integrated assessment of climate change: a multi-model comparison. J. Assoc. Environ. Resour. Econ. 5, 791–826 (2018). Meinshausen, M. et al. The RCP greenhouse gas concentrations and their extensions from 1765 to 2300. Clim. Change 109, 213–241 (2011). Nordhaus, W. & Sztorc, P. Dice 2013R: Introduction and User's Manual 2nd edn (2013). http://www.econ.yale.edu/%7Enordhaus/homepage/documents/DICE_Manual_100413r1.pdf. Nordhaus, W. D. Revisiting the social cost of carbon. Proc. Natl Acad. Sci. USA 114, 1518–1523 (2017). Dennig, F. & Emmerling, J. A Note on Optima with Negishi Weights. Working Paper (2017). Stanton, E. A. Negishi welfare weights in integrated assessment models: the mathematics of global inequality. Clim. Change 107, 417–432 (2011). Dennig, F., Budolfson, M. B., Fleurbaey, M., Siebert, A. & Socolow, R. H. Inequality, climate impacts on the future poor, and carbon prices. Proc. Natl Acad. Sci. USA 112, 15827–15832 (2015). Stohl, A. et al. Evaluating the climate and air quality impacts of short-lived pollutants. Atmos. Chem. Phys. 15, 10529–10566 (2015). Rao, S. et al. Future air pollution in the shared socio-economic pathways. Glob. Environ. Change 42, 346–358 (2017). Van Dingenen, R. et al. Tm5-fasst: a global atmospheric source-receptor model for rapid impact analysis of emission changes on air quality and short-lived climate pollutants. Atmos. Chem. Phys. 18, 16173–16211 (2018). Pruss-Ustun, A., Mathers, C. C., Corvalan, C. & Woodward, A. Assessing the Environmental Burden of Disease at National and Local Levels. World Health Organization Report (World Health Organization, Geneva, 2003). Meinshausen, M., Raper, S. & Wigley, T. Emulating coupled atmosphere-ocean and carbon cycle models with a simpler model, magicc6–part 1: model description and calibration. Atmos. Chem. Phys. 11, 1417–1456 (2011). European Commission Joint Research Centre. Emission Database for Global Atmospheric Research (EDGAR) (2016). Brauer, M. et al. Ambient air pollution exposure estimation for the global burden of disease 2013. Environ. Sci. Technol. 50, 79–88 (2016). Cohen, A. J. et al. Estimates and 25-year trends of the global burden of disease attributable to ambient air pollution: an analysis of data from the global burden of diseases study 2015. Lancet 389, 1907–1918 (2017). World Health Organization Regional Office for Europe. Review of evidence on health aspects of air pollution: Revihaap project. Technical Report (2013). Apte, J. S., Marshall, J. D., Cohen, A. J. & Brauer, M. Addressing global mortality from ambient pm2. 5. Environ. Sci. Technol. 49, 8057–8066 (2015). Di, Q. et al. Air pollution and mortality in the medicare population. N. Engl. J. Med. 376, 2513–2522 (2017). We thank Ciara Burnham and the Climate Future's Initiative at Princeton University for support. M.B. was also supported by a Catalyst Award from the Gund Institute for Environment at the University of Vermont. These authors contributed equally: Noah Scovronick, Mark Budolfson, Francis Dennig, Frank Errickson, Fabian Wagner. Department of Environmental Health, Rollins School of Public Health, Emory University, 1518 Clifton Rd. 
NE, Atlanta, GA, 30322, USA Noah Scovronick Woodrow Wilson School of Public and International Affairs, Princeton University, Robertson Hall, Princeton, NJ, 08544, USA Noah Scovronick, Marc Fleurbaey & Fabian Wagner Gund Institute for Environment and Department of Philosophy, University of Vermont, Burlington, VT, 05310, USA Mark Budolfson Edmond J. Safra Center for Ethics, Harvard University, 124 Mount Auburn Street, Cambridge, MA, 02138, USA Social Sciences (Economics), Yale-NUS College, Singapore, 138610, Singapore Francis Dennig NUS fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, CA, 94305, USA Energy and Resources Group, University of California Berkeley, 310 Barrows Hall, Berkeley, CA, 94720, USA Frank Errickson University Center for Human Values, Princeton University, 304 Louis Marx Hall, Princeton, NJ, 08544, USA Marc Fleurbaey School of International Affairs and Department of Civil and Environmental Engineering, Pennsylvania State University, University Park, PA, 16801, USA Wei Peng Department of Mechanical and Aerospace Engineering, Princeton University, Olden Street, Princeton, NJ, 08544, USA Robert H. Socolow Department of Economics, University of Texas at Austin, 2225 Speedway, Austin, TX, 78712, USA Dean Spears Economics and Planning Unit, Indian Statistical Institute – Delhi Centre, 7, S.J.S Sansawal Marg, New Delhi, 110016, India IZA Institute of Labor Economics, Schaumburg-Lippe-Strasse 5-9, 53113, Bonn, Germany Institute for Futures Studies, Holländargatan 13, Stockholm, Sweden International Institute for Applied Systems Analysis, Laxenburg, 2361, Austria Fabian Wagner Andlinger Center for Energy and the Environment, Princeton University, 86 Olden Street, Princeton, NJ, 08544, USA M.B., F.D., F.E., M.F., W.P., N.S., R.H.S., D.S., and F.W. helped design the research, interpret results, and edit the manuscript. M.B. and F.D. led the economic modeling. F.D., F.E., and F.W. led model development. N.S. led health impact modeling. F.W. led emission and climate modeling. N.S. and M.B. prepared the manuscript. Correspondence to Noah Scovronick. Journal peer review information: Nature Communications thanks Marcus Sarofim, Anil Markandya and the other anonymous reviewer(s) for their contribution to the peer review of this work. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Scovronick, N., Budolfson, M., Dennig, F. et al. The impact of human health co-benefits on evaluations of global climate policy. Nat Commun 10, 2095 (2019). https://doi.org/10.1038/s41467-019-09499-x
Elevated temperatures cause loss of seed set in common bean (Phaseolus vulgaris L.) potentially through the disruption of source-sink relationships Ali Soltani ORCID: orcid.org/0000-0003-0951-561X1,2, Sarathi M. Weraduwage3, Thomas D. Sharkey2,3,4 & David B. Lowry1,2 Climate change models predict more frequent incidents of heat stress worldwide. This trend will contribute to food insecurity, particularly for some of the most vulnerable regions, by limiting the productivity of crops. Despite its great importance, there is a limited understanding of the underlying mechanisms of variation in heat tolerance within plant species. Common bean, Phaseolus vulgaris, is relatively susceptible to heat stress, which is of concern given its critical role in global food security. Here, we evaluated three genotypes of P. vulgaris belonging to kidney market class under heat and control conditions. The Sacramento and NY-105 genotypes were previously reported to be heat tolerant, while Redhawk is heat susceptible. We quantified several morpho-physiological traits for leaves and found that photosynthetic rate, stomatal conductance, and leaf area all increased under elevated temperatures. Leaf area expansion under heat stress was greatest for the most susceptible genotype, Redhawk. To understand gene regulatory responses among the genotypes, total RNA was extracted from the fourth trifoliate leaves for RNA-sequencing. Several genes involved in the protection of PSII (HSP21, ABA4, and LHCB4.3) exhibited increased expression under heat stress, indicating the importance of photoprotection of PSII. Furthermore, expression of the gene SUT2 was reduced in heat. SUT2 is involved in the phloem loading of sucrose and its distal translocation to sinks. We also detected an almost four-fold reduction in the concentration of free hexoses in heat-treated beans. This reduction was more drastic in the susceptible genotype. Overall, our data suggests that while moderate heat stress does not negatively affect photosynthesis, it likely interrupts intricate source-sink relationships. These results collectively suggest a physiological mechanism for why pollen fertility and seed set are negatively impacted by elevated temperatures. Identifying the physiological and transcriptome dynamics of bean genotypes in response to heat stress will likely facilitate the development of varieties that can better tolerate a future of elevated temperatures. Heat is among the most devastating abiotic stresses negatively impacting crop production and food security worldwide [1]. Every 1 °C increase in seasonal temperature results in a 10–17% reduction in crop yields [2, 3]. With the global mean temperatures set to increase at a pace of 0.2 °C per decade, the impact of heat on worldwide food production will only become more acute [4]. For some regions of the globe, temperatures are increasing even more rapidly and heat waves are expected to increase in intensity and duration [5]. Elevated temperatures negatively affect several crucial physiological processes in plants. One of the effects of elevated temperature is an increased accumulation of reactive oxygen species (ROS), [6] which are harmful to plant membranes, proteins, and other macromolecules. Photosynthesis is also affected by elevated temperatures, with carbon assimilation reduced as a result of a reduction in rubisco activation [7, 8]. Rubisco activase is inhibited at high temperature, leading to heat-induced deactivation of rubisco under moderate heat stress [9, 10]. 
Further, heat modifies cell membrane characteristics and consequently affects membrane-binding proteins [11,12,13]. Photosystem II (PSII) is thought to be particularly sensitive to heat stress. However, negative effects of heat stress on net photosynthesis have not been reported in all crops [14]. Besides carbon assimilation, carbon translocation from source to sink tissues has been shown to be interrupted by heat stress [15]. Continuous sucrose transport from source leaves to developing reproductive tissues is crucial for male fertility, seed set, and seed filling [16, 17]. Pressman et al. [18] found that the disruption of sucrose supply and/or its breakdown to hexoses under heat stress results in male sterility and aborted seed set in tomato, a drastic negative impact of this abiotic stress on the carbon balance of a plant. Several mechanisms to cope with heat have evolved in plants. Elevated temperatures trigger a suite of physiological responses that can ameliorate the deleterious effects of heat. For example, the ROS scavenging machinery is overexpressed, which will convert ROS to less harmful molecules [19]. ROS also serves as a signaling molecule that can activate heat shock factors (HSFs) [20, 21]. HSFs bind to palindromic motifs in the promoter of heat responsive genes, including heat shock proteins (HSPs). HSPs are molecular chaperones that refold and stabilize protein structures under heat stress. In addition to these thermo-tolerance strategies, plants have evolved preventative mechanisms that improve the cooling capacity of the leaves by increasing transpiration rates [22]. In this study, we focused on understanding how heat-tolerant and heat-sensitive varieties of the common bean, Phaseolus vulgaris, respond to elevated temperatures. Common bean is an economically important crop that originated and was domesticated in the New World. P. vulgaris is separated into two major genepools: Middle American and Andean [23, 24]. Humans domesticated beans from both of these major gene pools independently [25]. Further, three major races have been characterized within the Middle American genepool: Mesoamerica, Durango, and Jalisco. Similarly, there are three races in the Andean genepool: Nueva Granada, Chile, and Peru. After the colonization of the New World, Andean beans were introduced to European and African countries [26]. Today, beans are cultivated in many regions of the world. However, being adapted to moderate climates [27], common bean has limited heat tolerance. Seed yield reductions have been reported for temperatures higher than 30 °C during the day or higher than 20 °C at night [28]. Heat stress-induced yield reductions were described as a result of flower abscissions, development of parthenocarpic pods (pin pods), lower seed set per pod or decreased seed size [28, 29]. The level of heat sensitivity in beans depends strongly on the developmental stage in which plants are exposed to elevated temperatures. Several studies reported that heat stress at pre-fertilization stages is more detrimental to pollen development and/or anther dehiscence [29, 30]. Furthermore, Shonnard and Gepts [31] reported sensitivity of beans to heat stress at pod filling stages in addition to flower bud formation. Elevated temperatures are predicted to be more frequent, particularly in some Central American and African countries, where Andean genotypes are the main beans in cultivation [32]. Heat and drought stress often impact crops simultaneously in these regions.
While irrigation can be used to eliminate water stress, the direct impacts of heat stress cannot be alleviated by any management practice. Development of heat tolerant varieties remains the only practical solution for minimizing the negative effects of elevated temperatures. Developing bean varieties that can better tolerate elevated temperatures is now considered crucial in both Latin America [33] and Africa [34]. Several heat tolerant varieties have been identified and used in breeding pipelines [31, 33, 35, 36]. Understanding the differential physiological responses of heat-sensitive and heat-tolerant Andean genotypes to elevated temperatures is a crucial step toward improving the thermo-tolerance of this economically important crop. Here, we investigate the transcriptome and physiological responses of three varieties of Andean beans to elevated temperatures. These varieties were selected based on their tolerance (Sacramento and NY-105) or susceptibility (Redhawk) to heat stress. The main objectives of this study were: i) to elucidate the effect of heat stress on the leaf morpho-physiology of the bean plants, ii) understand the transcriptomic responses of leaves under elevated temperatures, and iii) to identify the potential genes/physiological pathways that are involved in heat stress tolerance. Understanding the physiological mechanism(s) of tolerance to elevated temperatures will help the researchers to develop heat tolerant varieties more efficiently. Effect of elevated temperature on leaf morpho-physiology In both vegetative and flowering stages, the leaf temperature at night was similar to the ambient temperatures in both control and elevated temperature conditions (Fig. 1). However, during the day, leaf temperatures were 2-3 °C cooler than the ambient temperatures in both treatment levels. Leaf surface temperatures and photosynthesis characteristics measured for three bean genotypes (Sacramento, NY-105, and Redhawk) grown under control and heat stress treatments. In each figure, the left graph represents the means for each genotype grown under control (blue) or heat (red) conditions. For each pane, the upper right graph represents the main effect of treatments across genotypes while the graph below represents the main effect of genotypes across treatment. The letters on each bar represents the results of post-hoc analysis. The same letter indicates the means are not significantly different at 0.05 probability level. The bars in all figures represent the 95% confidence intervals. In the leaf temperature figures, ambient temperatures in control (blue) and heat (red) stress conditions are highlighted by dashed lines. S = Sacramento, N = NY-105, R = Redhawk. C = control treatment, H = heat treatment At the early developmental stage, we did not detect a significant effect of treatment on photosynthetic rate, stomatal conductance, intercellular [CO2] (Ci), or ΦPSII. However, plants under elevated temperature conditions had significantly higher dark respiration rates (Fig. 1). At the flowering stage, significant increases were detected in ΦPSII under the elevated temperature (Fig. 1). Similarly, plants grown under elevated temperatures had higher stomatal conductance and photosynthetic rates. The Ci and respiration rate were similar between the control and elevated temperature treatments for fourth trifoliate at the flowering stage. Among genotypes, Redhawk had the lowest stomatal conductance and the lowest Ci at the early stage of development. 
However, Redhawk had the highest ΦPSII and photosynthesis rate across treatments at the later flowering stage. NY-105 had the highest gs and Ci at the flowering stage (Fig. 1). Leaf area was also significantly greater in the elevated temperature treatment at flowering stage (Fig. 2). This increase was most drastic in Redhawk (Fig. 2). In contrast, a significant reduction in leaf mass per unit leaf area was detected for plants in the elevated temperature treatment. Although the main effect of genotype was not significant for leaf area, NY-105 had the lowest leaf mass per unit leaf area (thinnest leaves). Neither treatment nor genotype had a significant effect on relative water content (RWC %), indicating that plants in the elevated temperature treatment were not under water stress (Fig. 2). Leaf morphometric traits measured from three bean genotypes (Sacramento, NY-105, and Redhawk) at flowering stage grown under control and heat stress condition. In each figure, the left graph represents the means for each genotype grown under control (blue) or heat (red) condition. The upper right graph represents the main effect of treatments across genotypes and the graph below represents the main effect of genotypes across treatment. The letters on each bar represents the results of post-hoc analysis. The same letter indicates the means are not significantly different at 0.05 probability level. The bars in all figures represent 95% confidence intervals. S = Sacramento, N = NY-105, R = Redhawk. C = control treatment, H = heat treatment Stomatal density was measured for both the abaxial and adaxial sides of leaves at flowering stage (Additional file 7: Figure S2). A significant decrease in stomata density on the abaxial side of leaves was found for plants grown in the elevated temperature treatment. Among genotypes, Sacramento had the highest stomatal density for both adaxial and abaxial sides across treatments. Effect of elevated temperature on seed set A drastic reduction in the number of filled pods, seeds per pod, and total number of plump seeds were detected in plants grown under elevated temperatures (Additional file 8: Figure S3). The heat susceptible genotype Redhawk did not produce any normal pods. In contrast, Sacramento, on average, produced ~ 25 plump seeds per plant. Plants produced fewer normal pods under elevated temperatures but far more parthenocarpic pin pods. Elevated temperatures increased leaf pigment concentrations Total chlorophyll content, chlorophyll a, chlorophyll b, and carotenoids significantly increased in the elevated temperature treatment (Fig. 3). The increase in total chlorophyll content was greatest in Sacramento (58%) and NY-105 (59%). Redhawk increased chlorophyll content by only 36%. Leaf chlorophyll and carotenoid content in three bean genotypes (Sacramento, NY-105, and Redhawk) at flowering stage grown under control and heat stress condition. The amounts of chlorophyll and carotenoid were reported based on micro gram of pigments per 100 mg fresh weight of tissue. In each figure, the left graph represents the means for each genotype grown under control (blue) or heat (red) condition. The letters on each bar represents the results of post-hoc analysis. The same letter indicates the means are not significantly different at 0.05 probability level. The bars in all figures represent 95% confidence intervals. S = Sacramento, N = NY-105, R = Redhawk. 
C = control treatment, H = heat treatment Elevated temperatures results in the accumulation of macro- and micro- nutrients in leaves Among the 12 macro and micro- elements that were measured from the dried leaf tissues, 11 increased significantly under elevated temperature (Additional file 9: Figure S4). Potassium, sulfur, copper, and zinc were among the elements that were two-fold higher under elevated temperature. Aluminum was the only element that did not show any difference between the control and elevated temperature conditions. Sequencing, read alignment, and clustering of the samples On average, about 1% of reads were discarded from libraries because they were low-quality (Additional file 10: Figure S5). About 34% of the reads did not align to the reference genome. The average number of aligned reads used for DEG analysis was ~ 18.5 million reads, with a range of 14.3–25.0 million reads per sample. Unsupervised clustering of samples was performed based on expression profiles (Fig. 4). The samples belonging to the different treatments (control and elevated temperature) separated along the first dimension (BCV 1), while genotypes separated along the second dimension (BCV 2). Along the second dimension, Redhawk and NY-105 were clustered together while Sacramento individuals clustered distantly from those two genotypes. Unsupervised classification of samples based on their expression profiles. The samples from different levels of treatment (heat vs. control) were separated along the first dimension, indicating that the treatment was the main source of variation. The three genotypes were separated along the second dimension. Sacramento (square shape samples) was distantly separated from two other genotypes (NY-105 and Redhawk) along the second dimension Identifying differentially expressed genes In total, 646, 1247, and 801 genes were upregulated under the elevated temperature treatment for Redhawk, NY-105, and Sacramento, respectively (Fig. 5a, Additional file 1). 990, 1533, and 1360 genes were downregulated in the elevated temperature treatment for Redhawk, NY-105, and Sacramento, respectively (Fig. 5a, Additional file 1). Among these, 283 genes were upregulated and 696 genes were downregulated under the elevated temperature among all three genotypes (Fig. 5b, Additional file 2). These genes, which responded in a similar way to the elevated temperature treatment across genotypes, are considered "core heat response genes". Among some of the core genes were genes involved in carbohydrate and nitrogen metabolism, and genes contributing to thermotolerance and oxidative stress protection. Clustering analysis based on log-fold change of expression revealed three and six clusters of up- and down-regulated core genes, respectively (Fig. 6 and Additional file 3). The U-1 cluster contains the genes with the highest level of upregulation under heat stress for the three genotypes. In contrast, the D-1 cluster comprises the genes with the greatest level of down-regulation. Based on the expression profiles of all of the core genes, Sacramento and NY-105 were the most similar genotypes (Fig. 6). Further, 259 genes were upregulated and 394 genes were downregulated under elevated temperature in both of the tolerant genotypes (Sacramento and NY-105, Fig. 5b). We did not detect this set of genes among DEGs in Redhawk and consequently considered them as "tolerance-related heat response genes" (Additional file 4). Differentially expressed genes under elevated temperatures in three varieties of common bean. 
a The overexpressed (red) and downregulated (green) genes under heat stress for three genotypes including Redhawk, NY-105, and Sacramento. b Venn diagram depicting the number of common up- or down- regulated genes under heat stress among genotypes Heatmap indicating the log fold change of core genes expression under heat stress. Three and six clusters were identified within up- and down-regulated genes, respectively. Detailed information about clusters is provided in front of each cluster. GO terms uniquely detected in each cluster were bolded. GO terms related with protein disulfide regulations were highlighted by red in the U-1 cluster and green in the D-1 cluster We further identified genes with significant genotype × treatment interactions. All three genotypes were fit in a single model and pairwise contrasts were conducted between all three genotypes. A total of 225 genes (Additional file 5) had significant genotype × treatment interactions for pairwise contrast between Sacramento and Redhawk. For the pairwise contrast between NY-105 and Redhawk, there were 484 genes with significant genotype × treatment interactions. The lowest number of genes (n = 99) with genotype × treatment interactions was detected for the Sacramento and NY-105 contrast. Gene ontology enrichment and pathway analyses Gene Ontology (GO) enrichment was conducted separately for core genes (Table 1) and tolerance-related genes (Table 2). Results indicate that genes involved in oxidoreductase activity as well as heme and tetrapyrrole binding were significantly up-regulated under elevated temperature for all three genotypes (Table 1). The majority of down-regulated genes under elevated temperature had functions related to protein kinase activity, carbohydrate binding, and phosphotransferase activity. Three GO terms (GO:0015035, GO:0015036, and GO:0016667) were in the highest up-regulated cluster for core genes (U-1, Fig. 6). These GO terms are related to protein disulfide oxidoreductase activity. Interestingly, two GO terms (GO:0016668 and GO:0047134) were detected among the D-1 cluster, which is comprised of genes with the greatest degree of down-regulation. These terms are associated with protein-disulfide reductase activity. Table 1 GO enrichment results for up-regulated and down-regulated genes under the elevated temperature treatment for core heat response genes Table 2 GO enrichment results for up-regulated and down-regulated genes under the elevated temperature treatment for tolerant-related heat response genes Several pathways were screened for the presence of differentially expressed genes. Overall, protein kinases and genes involved in responses to cold and biotic stresses were down-regulated in the heat treatment (Additional file 11: Figure S6). In contrast, genes involved in heat stress, cell cycle regulation, and glutaredoxin activities were up-regulated. Sucrose, free hexoses, and starch measurement Elevated temperature significantly reduced the concentration of leaf sucrose (Fig. 7). Sucrose reduction was greatest (~ 2 fold) in the heat-susceptible genotype Redhawk. Furthermore, we detected a drastic reduction (~ 4 fold) in the concentration of free hexoses for plants under elevated temperature (Fig. 7). Among the genotypes, Redhawk had the lowest concentration of hexoses in both control and heat treatments. We did not detect any significant differences in starch content between the treatments. 
Leaf metabolite analysis of three bean genotypes (Sacramento, NY-105, and Redhawk) at flowering stage grown under control and heat stress condition. In each figure, the left graph represents the means for each genotype grown under control (blue) or heat (red) conditions. The upper right graph represents the main effect of treatments across genotypes and the graph below represents the main effect of genotypes across treatments. The letters on each bar represents the results of post-hoc analysis. The same letter indicates the means are not significantly different at 0.05 probability level. The bars in all figures represent the 95% confidence intervals. S = Sacramento, N = NY-105, R = Redhawk. C = control treatment, H = heat treatment Correlations among morpho-physiological traits To further investigate the relationships among morpho-physiological parameters, a correlation heatmap was constructed (Additional file 12: Figure S7). Based on this analysis, strong positive correlations were detected between photosynthetic rate and ΦPSII (r = 0.92, P < 0.001). A positive but weaker correlation (r = 0.49, P = 0.0004) was detected between the rate of photosynthesis and gs. Interestingly, no significant correlation was detected between the rate of photosynthesis and Ci. Further, a strong correlation was detected between leaf area and stomatal density. This strong negative correlation (r = − 0.69, P < 0.001) between leaf area and stomatal density may indicate that leaf expansion results in decreasing the stomatal density. In this study, we combined physiological and gene expression analyses to identify the mechanisms important for heat stress responses in common bean. Physiological measurements revealed that photosynthesis was not negatively impacted by the heat stress, which suggests that carbon assimilation is not the limiting factor in seed set under elevated temperatures. Instead, our transcriptome and physiological analyses indicate that crucial source-sink relationships are disrupted by the elevated temperatures. Based on the expression data, the control and elevated temperature samples clustered separately, indicating that the treatment was the main source of variation in this experiment. Below, we discuss the following three major topics regarding the effects of heat stress on physiology and gene regulation: i) effects of elevated temperature on leaf gas exchange and morpho-physiological characteristics, ii) potential effects of elevated temperature on source-sink relationships, and iii) detection of heat stress responsive transcription factors and other genes involved in heat tolerance. Elevated temperatures modified gas exchange and leaf morpho-physiology In general, photosynthetic rate did not appear to be negatively affected by the heat stress imposed by our experiment. However, enhanced respiration rates during the early developmental stages indicate lower ratios of photosynthesis to respiration under elevated temperatures. Leaf area expansion can compensate for a lower ratio of photosynthesis to respiration and contribute to higher whole plant photosynthetic rates and daily carbon gains. We found that elevated temperature resulted in a drastic increase in leaf area, particularly for the most susceptible line, Redhawk. Redhawk also had the highest respiration rate at early vegetative stages. A higher ratio of dark respiration to photosynthesis at the onset of heat stress might result from growth respiration associated with the leaf area expansion. 
At the flowering stage, a higher photosynthetic rate and ΦPSII was detected for plants under heat stress. ΦPSII is the proportion of absorbed light used in PSII photochemistry and reflects the rate of electron transport through PSII, which is equal to the combined rate of photosynthesis and photorespiration [13]. An increase in ΦPSII under moderate heat stress has been observed in previous studies and is attributed to high rates of photorespiration under heat stress [13]. We found higher stomatal conductance (gs) for plants under heat stress. By raising gs, a plant can decrease leaf temperatures via latent heat loss through evaporative cooling. Increased evaporative cooling resulting from increased gs under heat stress was reported recently in Pinus taeda and Populus deltoides × nigra [37] as well as Arabidopsis [22]. We found that plants grown under elevated temperatures had a lower stomatal density. This finding is in accordance with a reduced stomatal density reported in Arabidopsis plants under heat stress [22]. The fact that stomatal conductance was higher, even though stomatal density was lower in plants grown under heat stress, indicates that stomata were more open in heat stressed plants. Crawford et al. [22] suggested that a reduction in stomatal density results in an increase in the inter-stomatal space and improves vapor diffusion. Another strategy that could potentially improve the cooling capacity of the plants is a decrease in leaf thickness, which has been observed in Arabidopsis [22]. Crawford et al. [22] argued that plants adopting these strategies could have enhanced cooling capacity under elevated temperature conditions. Potential effect of heat on source-sink relationships Reproductive tissue functionality largely depends on carbon assimilation in the leaves and its subsequent delivery through the phloem to reproductive organs [38]. From a physiological standpoint, carbon allocation to reproductive tissues can be reduced by i) limitations in photosynthesis, ii) limitations in phloem loading at the source or unloading at the sinks, or iii) competition among other sinks for carbon. Regardless of the cause, limited carbon allocation to reproductive tissues results in aborted seed and/or reduced fruit set. Suboptimal environmental conditions, particularly drought [39], cold [40], and heat [18], are among the most devastating abiotic stresses that limit carbon allocations to flower buds, which results in drastic yield losses. Although drought and heat both hinder carbon allocation to reproductive tissues, these stresses differ in their mechanisms of action. Drought hinders carbon allocation to reproductive tissues mainly by suppressing photosynthesis. Under water deficit, the net photosynthetic rate is typically significantly reduced while the respiration rate is affected less [41]. The lower rate of photosynthesis under drought seems to be a direct result of the ABA-dependent effect of stomatal closure [42]. However, moderate heat stress seems to limit the delivery of sucrose to sinks, rather than hindering photosynthesis [14]. In our experiment, there was a significant effect of heat stress on seed set. This is despite the fact that we could not detect negative effects of heat stress on biochemical or physiological aspects of photosynthesis. Therefore, under the levels of elevated temperatures tested in this study, seed set must be affected by factors other than changes in CO2 assimilation. 
Changes in sugar and other nutrient transport from source leaves to developing pods and alterations in stress responsive transcription factors that specifically affect reproductive growth likely contribute to the deleterious effects of elevated temperatures. This is consistent with a study in maize, where heat did not affect the photosynthetic rate, but resulted in lower carbon allocation to the reproductive tissues [14]. Several studies have highlighted the important role of partitioning photosynthates under stress conditions [18, 43, 44]. In rice, drought and heat resulted in sugar starvation in multiple floral organs [43]. In tomato, lower starch accumulation occurred in pollen grains that had developed under heat stress [18]. Interestingly, in heat tolerant tomato cultivars, the starch accumulation in pollen grains was not affected by heat [44]. In our study, we found that sucrose transporter 2 (SUT2) expression was downregulated under elevated temperatures among all genotypes. SUT genes are major H+/sucrose symporters that play an important role in loading sucrose from leaves into the phloem as well as the subsequent translocation of sucrose into sinks [45]. We speculate that down-regulation of SUT2 expression under elevated temperature is accompanied by lower export of carbon from source leaves to reproductive organs, which ultimately translates to yield reduction. Downregulation of sucrose transporter genes under heat stress was also reported in rice [46] and barley [47]. It was reported that SUT expression levels were significantly higher in heat tolerant rice genotypes compared with a susceptible line under elevated temperatures [48]. Under drought stress in maize [49] or heat stress in tomato [50], lower sucrose import was reported in the reproductive tissues, resulting in glucose depletion. This glucose depletion was sensed and triggered programmed cell death (PCD), which resulted in the abortion of reproductive tissues [49, 50]. Other important gene families involved in carbon metabolism are β-amylases (BAM) and invertases. β-amylases catalyze starch breakdown to maltose [51, 52]. We detected increased expression of two β-amylase genes, BAM3 and BAM5, under heat stress. BAM3 is localized in the chloroplast stroma of mesophyll cells [53]. BAM5 is a catalytically active cytosolic enzyme that is predominantly detected in phloem tissues [53, 54]. Higher activity of β-amylase under heat stress has been reported previously [55, 56]. Upregulation of BAM3 and BAM5 may result in increased maltose concentration. Kaplan and Guy [56] suggested that maltose serves as a compatible solute that protects the stromal proteins and the functionality of the photosynthetic electron transport chain under extreme temperatures. Another key gene that was upregulated under elevated temperatures in our experiment is sucrose synthase 6 (SUS6). The SUS family catalyzes the reversible conversion of sucrose to nucleoside diphosphate-glucose and fructose. This reaction consequently results in the accumulation of starch. Thus, SUS family members are considered to be predominant regulators of carbon flow and are involved in both sink strength and phloem loading [57]. Considering the overexpression of SUS6, BAM3, and BAM5, as well as the downregulation of SUT2, it is plausible that under heat stress, carbon flow in the leaves shifts from sucrose export to starch synthesis and maltose formation. Invertase genes are also involved in the regulation of carbon flow.
Members of this family hydrolyze sucrose into hexoses and are classified into three main groups based on their patterns of expression within a plant [58]: cell-wall invertases (CWIN), vacuolar invertases (VIN), and cytosolic invertases (CIN). We found that the bean homolog to the Arabidopsis CWIN1 gene exhibits reduced expression under elevated temperatures for all three genotypes. We observed a five-fold reduction of hexose concentration in the plants grown in the heat treatment. This might, at least partially, result from the lower expression levels of CWIN1. Previous studies have suggested that invertase activity and hexose concentrations have a negative feedback effect on photosynthetic rate [59] through sugar sensing mechanisms [60,61,62]. Beyond carbon allocation, nitrogen accumulation and the right carbon:nitrogen ratio is critical for seed set [63]. Two genes homologous to nitrate transporter 1:2 (NRT1:2) and one gene homologous to chloride channel B (CLC-B) were significantly upregulated under the elevated temperature treatment. These genes are involved in nitrate uptake [64, 65]. Furthermore, we detected higher expression of nitrite reductase (NiR) and glutamine synthetase N-1 (GS) under elevated temperatures. These two enzymes are localized in the chloroplast and play important roles in nitrogen assimilation [66]. A recent study found that overexpression of NiR results in higher chlorophyll content in tobacco leaves [67], which is consistent with the higher chlorophyll concentrations found in our experiment. Heat stress responsive transcription factors and genes involved in protection Protein denaturation is one of the first and most drastic adverse effects of heat stress in biological systems. This can potentially affect several metabolic pathways by reducing the enzymatic activities. Higher presence of disulfide bonds in hyperthermophile organisms suggests the potential role of these bonds in the protein stability in hot environments [68]. In our experiment, we found that the highly up-regulated genes (cluster U-1, Fig. 6) are enriched for protein disulfide oxidoreductase activity. Interestingly the highly down-regulated genes (D-1, Fig. 6) are enriched for the protein-disulfide reductase function. These results indicate that enzymes involved in protein disulfide modifications shift their regulation dramatically under elevated temperatures. Although, upregulated genes showed the same levels of increase among genotypes, greater down-regulation levels were detected in tolerant genotypes particularly for the D-1 cluster (Fig. 6). Heat shock factors (HSF) are among the most crucial transcription factors that orchestrate the physiological responses of plants to heat. Plants have developed a higher diversity of HSFs compared with animals. Although just four HSFs are found in animals [69], plant HSFs consist of multiple families with several members in each family [70]. Heat Shock transcription Factor A2 (HsfA2) is a critical gene induced under heat stress. Two genes homologous to Arabidopsis HsfA2 were significantly overexpressed in our experiment under heat for all three genotypes. Loss of function mutations in this gene are associated with higher sensitivity to heat stress in Arabidopsis [71]. HsfA2 regulates the expression of downstream genes encoding for chaperones and enzymes involved in heat tolerance [69]. Furthermore, HsfA2 plays a role in histone methylation and epigenetic regulation for long-term acclimation to heat stress [72]. 
We detected a significant up-regulation of Growth-Regulating Factor 5 (GRF5) under heat stress for all three genotypes. GRF5 is a transcription factor that has been shown to be involved in positive regulation of leaf growth, chloroplast division, and increased photosynthetic rate [73]. Overexpression of GRF5 in Arabidopsis results in a significant increase in chloroplast number, without any detectible changes in chloroplast size, leaf thickness, or mesophyll cells size. These physiological modifications were associated with higher ETR, qP, CO2 assimilation, and WUA, particularly under higher light intensities (400 μmol photons m− 2, s− 1 or more). In our experiment, we detected the overexpression of GRF5 along with higher chlorophyll content (Fig. 3) and photosynthetic rate under elevated temperatures (Fig. 1). Ectopic expression of GRF5 is associated with overexpression of PORA (NADPH:Pchlide oxidoreductase A), which promotes chlorophyll synthesis and positive regulation of GLK1, a gene involved in chloroplast development. Expression of PORA and GLK1 was significantly higher for Redhawk than the two tolerant genotypes. Two homologues of the transcription factor SQUAMOSA PROMOTER BINDING PROTEIN-LIKE 12 (SPL12) were overexpressed under the elevated temperature treatment for all three genotypes. These genes are involved in thermotolerance and seed production in Arabidopsis and tobacco plants [74]. SPL12 was shown to be expressed in both vegetative and inflorescence organs with a significantly higher expression levels in sepal and petals [74]. Double mutations of spl1 and spl12 resulted in sterility of plants under heat, due to partial failure in flower opening. Lower superoxide dismutase (SOD) and higher ROS accumulation was detected in the inflorescence of these double mutant plants. Improving protective mechanisms to ameliorate the deleterious effects of oxidative stresses can contribute to better adaptation/acclimation of plants to heat stress. HSP21 was upregulated for all three genotypes under heat stress. This gene is involved in chloroplast development under heat [75] and the encoded protein is principally involved in protection of PSII against oxidative stresses [76]. A homolog of the Arabidopsis gene abscisic acid-deficient 4 (ABA4, AT1G67080.1) was upregulated under heat in all three genotypes. ABA4 codes for a protein involved in neoxanthin biosynthesis. Neoxanthin is the ultimate precursor of ABA and is also a carotenoid species that resides in the LHCII complex and protects PSII from photo-oxidative stresses [77]. A gene homologous to AT2G40100.1, which encodes for light harvesting complex photosystem II (LHCB4.3), was upregulated only in the two tolerant varieties under heat stress. Bianchi et al. [78] reported that this protein is a crucial component of PSII and is involved in structural integrity and photoprotection of this photosystem. Overexpression of HSP21, ABA4, LHCB4.3 indicates a crucial role for PSII protection in plant survival under elevated temperatures. Another gene family involved in the oxidative protection of plants is peroxidase family. Peroxidase family members possess diverse functions including involvement in tolerance to abiotic stresses [79]. We detected Peroxidase 47 among upregulated genes in the tolerant genotypes under the elevated temperature condition. This gene was upregulated ~ 5-fold in Sacramento and NY-105, but was not detected among DEG in Redhawk. This gene might be involved in the protection of tolerant genotypes against oxidative stresses. 
Overall, our study suggests that mechanisms other than photosynthesis play a primary role in intraspecific genetic variation in heat tolerance in common bean. For example, we found evidence that lower expression of leaf sucrose transporters (SUT2) at elevated temperatures may limit the transport of photosynthates from source to sinks and consequently starve the developing pollen and newly fertilized seeds. Further, carbon and nitrogen metabolism is likely significantly altered under elevated temperatures, given that associated genes were often differentially expressed in the heat treatment. The lack of heat damage to the photosynthetic apparatus could potentially be explained by genes involved in photoprotection of photosystem II, which were upregulated under elevated temperatures. We also found that genes involved in protein disulfide modifications shift dramatically in their regulation in response to heat, which could contribute to higher protein stability and consequently enzymatic functionality under heat stress. Besides shedding new light on the transcriptomic and physiological aspects of heat stress, the results of our study suggest a potential trajectory for breeding for heat tolerance. The fact that we could not find any relationship between photosynthesis rate and heat tolerance indicates that selection for high photosynthesis under elevated temperatures in breeding programs is unlikely to be an efficient way to develop heat tolerant beans. Instead, more emphasis should be given to other traits, especially those involved in source-sink relationships. Although we could not find any relationship between photosynthesis rate and heat tolerance, we cannot rule out the effect of photosynthesis completely. Variation in the mechanisms of heat tolerance could exist in common bean germplasm and thus, evaluation of more diverse genotypes will be necessary. Considering the multidimensionality of heat stress and the mechanisms of tolerance, an integrative approach should be employed in future studies. To identify the pivotal genetic factors involved in heat tolerance mechanisms, it will be necessary to conduct genetic mapping through Quantitative Trait Locus and/or Genome-Wide Association Studies. Measuring physiological and metabolite parameters in combination with transcriptome and proteome data from several tissues, including source and sink, will provide crucial information about the mechanisms of heat tolerance. Finally, designing experiments to track changes in carbon and nitrogen allocation from sources to sinks should provide insights about nutrient partitioning during heat stress. Prior to conducting a detailed physiological and gene expression analysis, we screened P. vulgaris germplasm to identify the most tolerant genotypes at elevated temperatures. We selected lines for screening based on observations of heat tolerance in the field for these lines made by Dr. Timothy Porch (USDA-ARS; unpublished data). The goal of this pilot experiment was to confirm heat tolerance under growth chamber conditions and select the two most heat tolerant genotypes for further experimentation. All plants were grown under the heat and control conditions, as described in the growth conditions section below. The genotypes in this initial experiment included four candidate heat tolerant genotypes: Sacramento (light red kidney), NY-105 (light red kidney), G122 (cranberry), and TARS HT1 (dark red kidney), as well as two candidate susceptible genotypes: Camelot (dark red kidney) and Lisa (white kidney). Dr.
Phillip Miklas (USDA-ARS WA) and Dr. Timothy Porch (USDA-ARS PR) provided these seeds. Four replicates of each genotype were grown in heat and control conditions. Genotypes were randomized within each treatment. Four traits including the number of filled pods, number of pin pods, total number of seeds and seed weight were measured to evaluate the fitness of these genotypes under elevated temperature (Additional file 6: Figure S1). Two criteria were considered for choosing the most tolerant genotypes for downstream experiments; i) seed and pod set were similar across control and heat stress conditions, indicating tolerance and ii) low variability for each genotype in each treatment (Additional file 6: Figure S1), reflecting higher stability of critical traits. Sacramento and NY-105 were the most heat tolerant varieties with the lowest amount of variation in this initial experiment due to their higher seed set under the elevated temperature conditions. Following this pilot experiment, we set up the main experiment with the two most heat tolerant genotypes and a heat susceptible genotype, Redhawk (dark red kidney). Redhawk was selected because it is the primary line being grown commercially for agricultural production of kidney beans. Dr. James Kelly (Michigan State University) provided the seeds of Redhawk. Sacramento, NY-105, and Redhawk genotypes were germinated under 29/20 °C (day and night temperatures, respectively) in two identical growth chambers (Big foot, Biochambers, Winnipeg, MB, Canada). Plants were grown in 3.79 L pots filled with 2 SUREMIX (SURE, Galesburg, MI, USA):1/2 sand. Eight replicates of each genotype were grown in each chamber with individual plant position randomized within each chamber. Upon reaching the V4 developmental stage, heat treatment plants were exposed to 32/25 °C until physiological maturity. Note, we use the terms "heat" and "elevated temperature" interchangeably to describe the high temperature treatment. In contrast, the control plants were kept at 29/20 °C throughout the duration of the experiment. The control and heat temperatures were selected based on the previous studies showing that bean production is limited by day and night time temperatures above 30 °C and 20 °C, respectively [28]. Plants were frequently watered to avoid any confounding effects of water stress. The photoperiod was set for 16-h days and 8-h nights. For both treatments, the LED light intensity (25% blue, 75% red) was set to 500 μmol photons m− 2 s− 1 and the relative humidity set to 60%. The experiment was conducted as a completely randomized design using eight replicates per treatment and genotype. The plants were rearranged every week within each growth chamber to minimize the effect of location. For each trait, the differences between means was tested by performing ANOVAs in the R environment using the aov() function, followed by post-hoc Tukey tests. The Pearson correlations among physiological traits were calculated and plotted in R using the psych and corrplot packages. Photosynthetic parameters A LI-6800 portable gas exchange system (LI-COR Biosciences, Lincoln, NE) connected to a Multiphase Flash™ Fluorometer (6800-01A) was used to obtain gas exchange and chlorophyll a fluorescence measurements simultaneously. Photosynthetic rates (A), stomatal conductance (gs), operational efficiency of photosystem II in light adapted leaves (ΦPSII), and pre-dawn respiration rates were quantified in this way. Daytime gas exchange measurements were made during the 3rd to the 8th hr. 
of the photoperiod. Predawn respiration rates were measured within two hr. prior to the beginning of the photoperiod. The LI-6800 was set up outside the growth chamber and the following conditions were maintained in the LI-6800 leaf chamber to reflect daytime growth chamber conditions: leaf temperature of 29 °C (control), 32 °C (heat stress); sample [CO2] of 400 μmol mol− 1; 500 μmol m− 2 s− 1 light intensity (25% blue, 75% red); leaf vapor pressure deficit of 1.5–2.0 kPa. During predawn respiration measurements, leaf chamber lights were switched off and the leaf temperature was set to 20 °C (control) and 25 °C (heat stress). To prevent circadian effects on gas exchange measurements, the selection of plants for measurements was alternated between the control and the heat stress chambers. A Fluke infrared thermometer (Model 68, Fluke Corporation, Washington, USA) was used to determine the actual surface temperature of the leaves while plants were inside the growth chambers. These measurements were taken from the same leaves that were used for gas exchange measurements. Leaf surface temperatures were measured during the day, between the 3rd and 8th hr. of the photoperiod, and also during the nighttime within the two hours prior to the beginning of the photoperiod. Chlorophyll and carotenoid content Leaves were ground to a fine powder using a mortar and pestle after flash freezing with liquid nitrogen. One hundred mg of frozen tissue was weighed and transferred to each tube. One ml of 95% ethanol was added to each sample and thoroughly mixed. The tubes were then centrifuged at 17,000 × g at 4 °C for 5 min. The supernatant for each sample was transferred to a fresh tube and diluted by the addition of 1 ml of 95% ethanol. The absorbance of samples was measured by a GENESYS 10S UV-Vis spectrophotometer (ThermoFisher, Waltham, MA) at three wavelengths: 470 nm, 649 nm, and 665 nm. The amounts of chlorophyll a (Ca), chlorophyll b (Cb) and carotenoids (Cc) were estimated using the following equations [80]: $$ C_a = 13.95\,A_{665} - 6.88\,A_{649} $$ $$ C_b = 24.96\,A_{649} - 7.32\,A_{665} $$ $$ C_c = \frac{1000\,A_{470} - 2.05\,C_a - 114.8\,C_b}{245} $$ Leaf area and relative water content Leaf area was quantified for the left leaflet of the fourth trifoliate using the LI-3100 Area Meter (LI-COR INC, Lincoln, NE). Fresh weight (FW) of the same leaflet was measured simultaneously. The leaflet was then placed in a plastic bag containing a saturated paper towel. After 24 h, the turgid weight (TW) of each leaf was measured. The leaves were then placed in a drying oven, set to 60 °C, for five days, at which point the dry weight (DW) was measured. Relative water content (%) was calculated using eq. 4: $$ RWC\,(\%) = \frac{FW - DW}{TW - DW} \times 100 $$ Stomatal density Leaf surface imprints were acquired from the abaxial and adaxial surfaces of the terminal leaflet of the fourth trifoliate leaf. Digital photographs of imprints were used to quantify stomatal density using ImageJ 1.51 [81]. Leaf macro- and micro-nutrient measurement Twelve macro and micro nutrients were measured from the left leaflet of the 4th trifoliate leaf at the Brookside Laboratories, Inc., New Bremen, OH. Tissues were dried at 60 °C overnight and ground using a Cyclotech Mill with a 0.5 mm screen. Total nitrogen was measured by a combustion method.
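The pigment and water-content formulas above translate directly into a short calculation. Below is a minimal sketch in R, assuming a table of absorbance readings per sample; the column names (A470, A649, A665) and the example values are hypothetical placeholders, not data from this study.

```r
# Pigment concentrations from ethanol-extract absorbances, following the
# equations of Lichtenthaler and Wellburn [80] given above.
pigments <- function(A470, A649, A665) {
  Ca <- 13.95 * A665 - 6.88 * A649                     # chlorophyll a
  Cb <- 24.96 * A649 - 7.32 * A665                     # chlorophyll b
  Cc <- (1000 * A470 - 2.05 * Ca - 114.8 * Cb) / 245   # carotenoids
  data.frame(chl_a = Ca, chl_b = Cb, carotenoids = Cc)
}

# Relative water content (%) from fresh, turgid and dry weights (eq. 4).
rwc <- function(FW, TW, DW) (FW - DW) / (TW - DW) * 100

# Example usage with made-up readings:
readings <- data.frame(A470 = c(0.42, 0.51), A649 = c(0.38, 0.45), A665 = c(0.71, 0.83))
with(readings, pigments(A470, A649, A665))
rwc(FW = 0.35, TW = 0.40, DW = 0.08)
```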
To measure the minerals, tissues were digested by nitric acid/hydrogen peroxide in conjunction with microwave heating in closed Teflon vessels. Samples were then analyzed by a Thermo iCAP 6500 spectrometer (Thermo Scientific, Waltham, MA). RNA extraction, library preparation and sequencing Total RNA was extracted from the inner side (the side closer to the terminal leaflet) of the left lateral leaflet of the fourth trifoliate using Spectrum™ Plant Total RNA Kit (Sigma-Aldrich, St. Louis, MO). The quantity and quality of extracted RNA was evaluated using Qubit fluorometer (Invitrogen INC. Carlsbad, CA) and bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA), respectively. Library preparation and sequencing were performed at the RTSF Genomics Core at Michigan State University. Libraries were prepared using the Illumina TruSeq Stranded mRNA Library Preparation Kit. Sequencing was performed on an Illumina HiSeq 4000 flow cell in the 1x50bp single end read configuration. Cleaning and mapping the reads FastQC (v 0.11.3) was used to assess the quality of raw reads and identify the potential overrepresented sequences (adaptors) in each sequencing file. Adaptor sequences were trimmed with Cutadapt (v 1.14) and the quality of reads was confirmed by running additional analyses with FastQC (v 0.11.3). Cleaned reads were then aligned to the P. vulgaris reference genome (v 2.1) using HISAT2 (v 2.1.0, [82]). Htseq-count [83] was used to count the number of reads for each feature of P. vulgaris. The P. vulgaris reference genome and annotated GFF3 files were downloaded from Phytozome (v 12.1). Detecting differentially expressed genes (DEG), GO enrichment and pathway analysis The read count data were imported into R [84] and genes with count per million (cpm) ≥1 in at least eight samples were retained in the analysis. The differentially expressed genes were identified using the limma package [85] after voom transformation [86]. The voom function was used to estimate the relationship between the mean and variance of log-counts. Consequently, a precision weight is estimated and assigned to each observation. After this normalization step, data are ready for normal linear modelling. Log-fold-change of 1 or − 1 was considered the threshold for differentially expressed genes. We used the "global" method of the decideTests function to identify significant DEG. The gene ontology enrichment was performed using Fisher's exact tests with the topGO package [87]. Pathway analysis and corresponding visualizations were performed by MapMan 3.6.0 [88]. Measuring soluble sugars The amount of sucrose, glucose, and fructose were measured from the right leaflet of the fourth trifoliate leaf. Leaf samples were taken between two to three hours after the onset of the photoperiod. Briefly, 300 mg of ground tissue was mixed with 3.5% perchloric acid. Following centrifugation, the supernatant containing the soluble sugars was separated from the pellet, which contained the starch fraction. The supernatant was neutralized using a buffer containing 2 M KOH, 150 mM Hepes, and 10 mM KCl. Five microliters of each sample were then transferred into wells of a 96-well plate. We then transferred a buffer containing 110 mM Hepes, 500 nmol NADP, 500 nmol ATP, and 0.4 units of glucose-6-phosphate dehydrogenase (Sigma G-8529) to each well. The baseline optical density (OD) at 340 nm was measured by an MDS M2 plate reader. One unit of hexokinase (Sigma, I-4504) and 50 units of invertase (Sigma, H-4502) were the added to each well. 
The concentration of samples was calculated using 6220 M− 1 cm− 1 for the extinction coefficient of NADPH. Measuring starch The starch pellet from the previous section was washed three times with 80% ethanol. The pellet was then dried for 5 min at 95 °C. Following desiccation, 0.2 M KOH was added to gelatinize the pellet. The samples were incubated at 95 °C for 30 min. The pH was adjusted to 5 by adding 1 M acetic acid. Starch was degraded by adding 6.6 units of amyloglucosidse and 50 units of α-amylase. Ten μl of supernatant was then transferred to each well and absorption levels were recorded using the same protocol as explained in the previous section. Lesk C, Rowhani P, Ramankutty N. Influence of extreme weather disasters on global crop production. Nature. 2016;529:84–7. Lobell DB, Asner GP. Climate and management contributions to recent trends in U.S. agricultural yields. Science. 2003;299:1032. Lobell DB, Schlenker W, Costa-Roberts J. Climate trends and global crop production since 1980. Science. 2011;333:616–20. Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, et al. IPCC, in climate change 2007: the physical science basis. In: Contribution of working group I to the fourth assessment report of the intergovernmental panel on climate change. Cambridge: Cambridge University Press; 2007. Alexander LV, Zhang X, Peterson TC, Caesar J, Gleason B, Klein Tank AMG, et al. Global observed changes in daily climate extremes of temperature and precipitation. J Geophys Res. 2006;111:D05109. Vacca RA, de Pinto MC, Valenti D, Passarella S, Marra E, De Gara L. Production of reactive oxygen species, alteration of cytosolic ascorbate peroxidase, and impairment of mitochondrial metabolism are early events in heat shock-induced programmed cell death in tobacco bright-yellow 2 cells. Plant Physiol. 2004;134:1100–12. Kobza J, Edwards GE. Influences of leaf temperature on photosynthetic carbon metabolism in wheat. Plant Physiol. 1987;83:69–74. Busch FA, Sage RF. The sensitivity of photosynthesis to O2 and CO2 concentration identifies strong rubisco control above the thermal optimum. New Phytol. 2017;213:1036–51. Law RD, Crafts-Brandner SJ. Inhibition and acclimation of photosynthesis to heat stress is closely correlated with activation of ribulose-1,5-bisphosphate carboxylase/oxygenase. Plant Physiol. 1999;120:173–82. Sharkey TD. Effects of moderate heat stress on photosynthesis: importance of thylakoid reactions, rubisco deactivation, reactive oxygen species, and thermotolerance provided by isoprene. Plant Cell Environ. 2005;28:269–77. Bita CE, Gerats T. Plant tolerance to high temperature in a changing environment: scientific fundamentals and production of heat stress-tolerant crops. Front Plant Sci. 2013;4:273. Hofmann NR. The plasma membrane as first responder to heat stress. Plant Cell. 2009;21:2544. Sharkey TD, Zhang R. High temperature effects on electron and proton circuits of photosynthesis. J Integr Plant Biol. 2010;52:712–22. Suwa R, Hakata H, Hara H, El-Shemy HA, Adu-Gyamfi JJ, Nguyen NT, et al. High temperature effects on photosynthate partitioning and sugar metabolism during ear expansion in maize (Zea mays L.) genotypes. Plant Physiol Biochem. 2010;48:124–30. Pressman E, Harel D, Zamski E, Shaked R, Althan L, Rosenfeld K, et al. The effect of high temperatures on the expression and activity of sucrose-cleaving enzymes during tomato ( Lycopersicon esculentum) anther development. J Hortic Sci Biotechnol. 2006;81:341–8. Kaushal N, Awasthi R, Gupta K, Gaur P, Siddique KHM, Nayyar H. 
Heat-stress-induced reproductive failures in chickpea (Cicer arietinum) are associated with impaired sucrose metabolism in leaves and anthers. Funct Plant Biol. 2013;40:1334. Kumar S, Prakash P, Kumar S, Srivastava K. Role of pollen starch and soluble sugar content on fruit set in tomato under heat stress. Sabrao J Breed Genet. 2015;47:406–12. Pressman E, Peet MM, Pharr DM. The effect of heat stress on tomato pollen characteristics is associated with changes in carbohydrate concentration in the developing anthers. Ann Bot. 2002;90:631–6. Driedonks N, Xu J, Peters JL, Park S, Rieu I. Multi-level interactions between heat shock factors, heat shock proteins, and the redox system regulate acclimation to heat. Front Plant Sci. 2015;6:999. Schöffl F, Prändl R, Reindl A. Regulation of the heat-shock response. Plant Physiol. 1998;117:1135–41. Miller G, Mittler R. Could heat shock transcription factors function as hydrogen peroxide sensors in plants? Ann Bot. 2006;98:279–88. Crawford AJ, McLachlan DH, Hetherington AM, Franklin KA. High temperature exposure increases plant cooling capacity. Curr Biol. 2012;22:R396–7. Bitocchi E, Nanni L, Bellucci E, Rossi M, Giardini A, Zeuli PS, et al. Mesoamerican origin of the common bean (Phaseolus vulgaris L.) is revealed by sequence data. Proc Natl Acad Sci U S A. 2012;109:E788–96. Schmutz J, McClean PE, Mamidi S, Wu GA, Cannon SB, Grimwood J, et al. A reference genome for common bean and genome-wide analysis of dual domestications. Nat Genet. 2014;46:707–13. Gepts P, Osborn TC, Rashka K, Bliss FA. Phaseolin-protein variability in wild forms and landraces of the common bean (Phaseolus vulgaris): evidence for multiple centers of domestication. Econ Bot. 1986;40:451–68. Gepts P, Bliss FA. Dissemination pathways of common bean (Phaseolus vulgaris, Fabaceae) deduced from phaseolin electrophoretic variability. II Europe and Africa Econ Bot. 1988;42:86–104. Wallace DH. Adaptation of Phaseolus to different environments. In: Summmerfield RJ, Banting A, editors. Advances in legume science. Kew: Royal Botanic Gardens; 1980. p. 349–57. Rainey KM, Griffiths PD. Inheritance of heat tolerance during reproductive development in snap bean (Phaseolus vulgaris L.). J Am Soc Hortic Sci. 2005;130:700–6. Gross Y, Kigel J. Differential sensitivity to high temperature of stages in the reproductive development of common bean (Phaseolus vulgaris L.). F Crop Res. 1994;36:201–12. Porch TG, Jahn M. Effects of high-temperature stress on microsporogenesis in heat-sensitive and heat-tolerant genotypes of Phaseolus vulgaris. Plant Cell Environ. 2001;24:723–31. Shonnard GC, Gepts P. Genetics of heat tolerance during reproductive development in common bean. Crop Sci. 1994;34:1168. Cichy KA, Porch TG, Beaver JS, Cregan P, Fourie D, Glahn RP, et al. A Phaseolus vulgaris diversity panel for andean bean improvement. Crop Sci. 2015;55:2149–60. Baiges S, Beaver JS, Miklas PN, Rosas JC. Evaluation and selection of dry beans for heat tolerance. Ann Rep Bean Improv Coop. 1996;39:88–9. Mukankusi C, Raatz B, Nkalubo S, Berhanu F, Binagwa P, Kilango M, et al. Genomics, genetics and breeding of common bean in Africa: a review of tropical legume project. Plant Breed. 2018. p. 1–14. Román-Aviles B, Beaver JS. Inheritance of heat tolerance in common bean of Andean origin. J Agrie Univ PR. 2003;87:113–21. Porch TG, Smith JR, Beaver JS, Griffiths PD, Canaday CH. TARS-HT1 and TARS-HT2 heat-tolerant dry bean germplasm. HortScience. 2010;45:1278–80. Urban J, Ingwers MW, McGuire MA, Teskey RO. 
Increase in leaf temperature opens stomata and decouples net photosynthesis from stomatal conductance in Pinus taeda and Populus deltoides x nigra. J Exp Bot. 2017;68:1757–67. Ruan Y-L, Patrick JW, Bouzayen M, Osorio S, Fernie AR. Molecular regulation of seed and fruit set. Trends Plant Sci. 2012;17:656–65. Koonjul PK, Minhas JS, Nunes C, Sheoran IS, Saini HS. Selective transcriptional down-regulation of anther invertases precedes the failure of pollen development in water-stressed wheat. J Exp Bot. 2004;56:179–90. Oliver SN, Dennis ES, Dolferus R. ABA regulates apoplastic sugar transport and is a potential signal for cold-induced pollen sterility in rice. Plant Cell Physiol. 2007;48:1319–30. Boyer JS. Leaf enlargement and metabolic rates in corn, soybean, and sunflower at various leaf water potentials. Plant Physiol. 1970;46:233–5. Kramer PJ, Boyer JS. Water relations of plants and soils. San Diego: Academic Press, INC; 1995. Li X, Lawas LMF, Malo R, Glaubitz U, Erban A, Mauleon R, et al. Metabolic and transcriptomic signatures of rice floral organs reveal sugar starvation as a factor in reproductive failure under heat and drought stress. Plant Cell Environ. 2015;38:2171–92. Firon N, Shaked R, Peet MM, Pharr D, Zamski E, Rosenfeld K, et al. Pollen grains of heat tolerant tomato cultivars retain higher carbohydrate concentration under heat stress conditions. Sci Hortic (Amsterdam). 2006;109:212–7. Braun DM, Slewinski TL. Genetic control of carbon partitioning in grasses: roles of sucrose transporters and tie-dyed loci in phloem loading. Plant Physiol. 2009;149:71–81. Phan TTT, Ishibashi Y, Miyazaki M, Tran HT, Okamura K, Tanaka S, et al. High temperature-induced repression of the rice sucrose transporter (OsSUT1) and starch synthesis-related genes in sink and source organs at milky ripening stage causes chalky grains. J Agron Crop Sci. 2013;199:178–88. Mangelsen E, Kilian J, Harter K, Jansson C, Wanke D, Sundberg E. Transcriptome analysis of high-temperature stress in developing barley caryopses: early stress responses and effects on storage compound biosynthesis. Mol Plant. 2011;4:97–115. Miyazaki M, Araki M, Okamura K, Ishibashi Y, Yuasa T, Iwaya-Inoue M. Assimilate translocation and expression of sucrose transporter, OsSUT1, contribute to high-performance ripening under heat stress in the heat-tolerant rice cultivar Genkitsukushi. J Plant Physiol. 2013;170:1579–84. McLaughlin JE, Boyer JS. Sugar-responsive gene expression, invertase activity, and senescence in aborting maize ovaries at low water potentials. Ann Bot. 2004;94:675–89. Li Z, Palmer WM, Martin AP, Wang R, Rainsford F, Jin Y, et al. High invertase activity in tomato reproductive organs correlates with enhanced sucrose import into, and heat tolerance of, young fruit. J Exp Bot. 2012;63:1155–66. Kossmann J, Lloyd J. Understanding and influencing starch biochemistry. Crit Rev Biochem Mol Biol. 2000;35:141–96. Scheidig A, Fröhlich A, Schulze S, Lloyd JR, Kossmann J. Downregulation of a chloroplast-targeted beta-amylase leads to a starch-excess phenotype in leaves. Plant J. 2002;30:581–91. Monroe JD, Storm AR, Badley EM, Lehman MD, Platt SM, Saunders LK, et al. β-Amylase1 and β-amylase3 are plastidic starch hydrolases in Arabidopsis that seem to be adapted for different thermal, pH, and stress conditions. Plant Physiol. 2014;166:1748–63. Wang Q, Monroe J, Sjölund RD. Identification and characterization of a phloem-specific beta-amylase. Plant Physiol. 1995;109:743–50. Dreier W, Schnarrenberger C, Börner T. 
Light- and stress-dependent enhancement of amylolytic activities in white and green barley leaves: β-amylases are stress-induced proteins. J Plant Physiol. 1995;145:342–8. Kaplan F, Guy CL. Beta-amylase induction and the protective role of maltose during temperature shock. Plant Physiol. 2004;135:1674–84. Baroja-Fernández E, Muñoz FJ, Li J, Bahaji A, Almagro G, Montero M, et al. Sucrose synthase activity in the sus1/sus2/sus3/sus4 Arabidopsis mutant is sufficient to support normal cellulose and starch production. Proc Natl Acad Sci U S A. 2012;109:321–6. Ruan Y-L, Jin Y, Yang Y-J, Li G-J, Boyer JS. Sugar input, metabolism, and signaling mediated by invertase: roles in development, yield potential, and response to drought and heat. Mol Plant. 2010;3:942–55. Goldschmidt EE, Huber SC. Regulation of photosynthesis by end-product accumulation in leaves of plants storing starch, sucrose, and hexose sugars. Plant Physiol. 1992;99:1443–8. Koch K. Sucrose metabolism: regulatory mechanisms and pivotal roles in sugar sensing and plant development. Curr Opin Plant Biol. 2004;7:235–46. Rolland F, Baena-Gonzalez E, Sheen J. Sugar sensing and signaling in plants: conserved and novel mechanisms. Annu Rev Plant Biol. 2006;57:675–709. Lastdrager J, Hanson J, Smeekens S. Sugar signals and the control of plant growth and development. J Exp Bot. 2014;65:799–807. Lawlor DW. Carbon and nitrogen assimilation in relation to yield: mechanisms are the key to understanding production systems. J Exp Bot. 2002;53:773–87. Huang NC, Liu KH, Lo HJ, Tsay YF. Cloning and functional characterization of an Arabidopsis nitrate transporter gene that encodes a constitutive component of low-affinity uptake. Plant Cell. 1999;11:1381–92. Fan S-C, Lin C-S, Hsu P-K, Lin S-H, Tsay Y-F. The Arabidopsis nitrate transporter NRT1.7, expressed in phloem, is responsible for source-to-sink remobilization of nitrate. Plant Cell. 2009;21:2750–61. Andrews M, Raven JA, Lea PJ. Do plants need nitrate? The mechanisms by which nitrogen form affects plants. Ann Appl Biol. 2013;163:174–99. Davenport S, Le Lay P, Sanchez-Tamburrrino JP. Nitrate metabolism in tobacco leaves overexpressing Arabidopsis nitrite reductase. Plant Physiol Biochem. 2015;97:96–107. Ladenstein R, Ren B. Protein disulfides and protein disulfide oxidoreductases in hyperthermophiles. FEBS J. 2006;273:4170–85. Ohama N, Sato H, Shinozaki K, Yamaguchi-Shinozaki K. Transcriptional regulatory network of plant heat stress response. Trends Plant Sci. 2017;22:53–65. von Koskull-Döring P, Scharf K-D, Nover L. The diversity of plant heat stress transcription factors. Trends Plant Sci. 2007;12:452–7. Charng YY, Liu HC, Liu NY, Chi WT, Wang CN, Chang SH, et al. A heat-inducible transcription factor, HsfA2, is required for extension of acquired thermotolerance in Arabidopsis. Plant Physiol. 2006;143:251–62. Laemke J, Brzezinka K, Altmann S, Baurle I. A hit-and-run heat shock factor governs sustained histone methylation and transcriptional stress memory. EMBO J. 2016;35:162–75. Vercruyssen L, Tognetti VB, Gonzalez N, Van Dingenen J, De Milde L, Bielach A, et al. GROWTH REGULATING FACTOR5 stimulates Arabidopsis chloroplast division, photosynthesis, and leaf longevity. Plant Physiol. 2015;167:817–32. Chao L-M, Liu Y-Q, Chen D-Y, Xue X-Y, Mao Y-B, Chen X-Y. Arabidopsis transcription factors SPL1 and SPL12 confer plant thermotolerance at reproductive stage. Mol Plant. 2017;10:735–48. Zhong L, Zhou W, Wang H, Ding S, Lu Q, Wen X, et al.
Chloroplast small heat shock protein HSP21 interacts with plastid nucleoid protein pTAC5 and is essential for chloroplast development in Arabidopsis under heat stress. Plant Cell. 2013;25:2925–43. Heckathorn SA, Downs CA, Sharkey TD, Coleman JS. The small, methionine-rich chloroplast heat-shock protein protects photosystem II electron transport during heat stress. Plant Physiol. 1998;116:439–44. Dall'Osto L, Cazzaniga S, North H, Marion-Poll A, Bassi R. The Arabidopsis aba4-1 mutant reveals a specific function for neoxanthin in protection against photooxidative stress. Plant Cell Online. 2007;19:1048–64. de Bianchi S, Betterle N, Kouril R, Cazzaniga S, Boekema E, Bassi R, et al. Arabidopsis mutants deleted in the light-harvesting protein Lhcb4 have a disrupted photosystem II macrostructure and are defective in photoprotection. Plant Cell. 2011;23:2659–79. Llorente F, López-Cobollo RM, Catalá R, Martínez-Zapater JM, Salinas J. A novel cold-inducible gene from Arabidopsis, RCI3 , encodes a peroxidase that constitutes a component for stress tolerance. Plant J. 2002;32:13–24. Lichtenthaler HK, Wellburn AR. Determinations of total carotenoids and chlorophylls a and b of leaf extracts in different solvents. Biochem Soc Trans. 1983;11:591–2. Schneider CA, Rasband WS, Eliceiri KW. NIH image to ImageJ: 25 years of image analysis. Nat Methods. 2012;9:671–5. Kim D, Langmead B, Salzberg SL. HISAT: a fast spliced aligner with low memory requirements. Nat Meth. 2015;12:357–60. Anders S, Pyl PT, Huber W. HTSeq--a Python framework to work with high-throughput sequencing data. Bioinformatics. 2015;31:166–9. R Development Core Team R. R: a language and environment for statistical computing. R Foundation for Statistical Computing. 2011;1 2.11.1. Ritchie ME, Phipson B, Wu D, Hu Y, Law CW, Shi W, et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015;43:e47. Law CW, Chen Y, Shi W, Smyth GK. voom: precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 2014;15:R29. Alexa A, Rahnenfuhrer J. topGO: Enrichment analysis for gene ontology. 2016;:R package version 2.26.0. Thimm O, Bläsing O, Gibon Y, Nagel A, Meyer S, Krüger P, et al. MAPMAN: a user-driven tool to display genomics data sets onto diagrams of metabolic pathways and other biological processes. Plant J. 2004;37:914–39. This study was primarily supported by the Plant Resilience Institute at Michigan State University. Partial salary support for TDS comes from AgBioResearch. We would like to thank Dr. James Kelly, Dr. Phillip Miklas, and Dr. Timothy Porch for suggesting and providing the plant materials for this study. We would also like to thank the RTSF Genomics Core and growth chamber facilities at Michigan State University. This study was supported by the Plant Resilience Institute at Michigan State University. The raw FASTQ files, generated in this study can be obtained from NCBI SRA under the accession PRJNA530739 (https://www.ncbi.nlm.nih.gov/sra/PRJNA530739). Department of Plant Biology, Michigan State University, East Lansing, MI, USA Ali Soltani & David B. Lowry Plant Resilience Institute, Michigan State University, East Lansing, MI, USA Ali Soltani, Thomas D. Sharkey & David B. Lowry MSU-DOE Plant Research Laboratory, Michigan State University, East Lansing, MI, USA Sarathi M. Weraduwage & Thomas D. Sharkey Department of Biochemistry and Molecular Biology, Michigan State University, East Lansing, MI, USA Thomas D. Sharkey Ali Soltani Sarathi M. 
Weraduwage David B. Lowry DBL, TDS, and AS designed the experiment. AS, SMW and DBL performed the experiment. AS and SMW analyzed the data. AS wrote the manuscript. All authors read, edited and approved the manuscript. Plant Resilience Institute provided funding to DBL and TDS. Correspondence to Ali Soltani. The authors declare that they have no competing interests. List of up- and down-regulated genes under heat stress condition in each of three genotypes of common bean. (XLSX 535 kb) List of core genes that were up- and down-regulated in all three genotypes. (XLSX 111 kb) List of core genes in the same order as the heatmap in Fig. 6. (XLSX 60 kb) List of genes that were differentially expressed under the heat stress conditions and detected only in the tolerant genotypes (Sacramento and NY-105). (XLSX 1518 kb) List of genes showing genotype × treatment interaction. (XLSX 88 kb) Figure S1. Effect of heat on seed set traits for six bean genotypes (Camelot, Sacramento, NY-105, G-122, TARS-HT1 and Lisa) screened in the pilot experiment. The means for each genotype are indicated by blue (control) and red (heat stress). The bars in all figures represent the 95% confidence intervals. (PDF 63 kb) Figure S2. Leaf stomatal density measured from abaxial and adaxial surfaces of three bean genotypes (Sacramento, NY-105, and Redhawk) at flowering stage grown under control and heat stress condition. In each figure, the left graph represents the means for each genotype grown under control (blue) or heat (red) conditions. The upper right graph represents the main effect of treatments across genotypes and the graph below represents the main effect of genotypes across treatments. The letters on each bar represent the results of post-hoc analysis. The same letter indicates the means are not significantly different at 0.05 probability level. The bars in all figures represent the 95% confidence intervals. S = Sacramento, N = NY-105, R = Redhawk. C = control treatment, H = heat treatment. (PDF 36 kb) Figure S3. Effect of heat on reproduction of three bean genotypes (Sacramento, NY-105, and Redhawk). In each figure, the left graph represents the means for each genotype grown under control (blue) or heat (red) conditions. The upper right graph represents the main effect of treatment across genotypes and the graph below represents the main effect of genotypes across treatments. The letters on each bar represent the results of post-hoc analysis. The same letter indicates the means are not significantly different at 0.05 probability level. The bars in all figures represent the 95% confidence intervals. S = Sacramento, N = NY-105, R = Redhawk. C = control treatment, H = heat treatment. (PDF 61 kb) Figure S4. Leaf macro- and micro-nutrient content of three bean genotypes (Sacramento, NY-105, and Redhawk) grown under control and heat stress condition. In each figure, the left graph represents the means for each genotype grown under control (blue) or heat (red) conditions. The upper right graph represents the main effect of treatments across genotypes and the graph below represents the main effect of genotypes across treatments. The letters on each bar represent the results of post-hoc analysis. The same letter indicates the means are not significantly different at 0.05 probability level. The bars in all figures represent the 95% confidence intervals. S = Sacramento, N = NY-105, R = Redhawk. C = control treatment, H = heat treatment. (PDF 178 kb) Additional file 10: Figure S5.
Summary of read numbers for each of the 48 libraries sequenced for the RNA-seq gene expression analysis. (PDF 3 kb) Figure S6. Overall regulation overview (upper row) and cellular response overview (lower row) of differentially expressed genes in three bean genotypes; Sacramento, NY-105 and Redhawk under heat stress. Red and blue colors indicate up- and down-regulation of genes, respectively. (PDF 366 kb) Figure S7. Correlation heatmap among 11 physiological and metabolite parameters. Positive and negative correlations indicated by blue and red, respectively. LA = leaf area, A = Photosynthesis rate, RES = respiration rate, Gs = stomatal conductance, Ci = internal [CO2], PII = operational efficiency of photosystem II in light adapted leaves (ΦPSII), STM = stomatal density in leaf abaxial, Hex = concentration of free hexoses, Suc = sucrose concentration, Seed = number of seeds per plant, and Chl = total chlorophyll concentration. (PDF 6 kb) Soltani, A., Weraduwage, S.M., Sharkey, T.D. et al. Elevated temperatures cause loss of seed set in common bean (Phaseolus vulgaris L.) potentially through the disruption of source-sink relationships. BMC Genomics 20, 312 (2019). https://doi.org/10.1186/s12864-019-5669-2 Transcriptome Source-sink relationships Plant genomics
Risk and space: modelling the accessibility of stroke centers using day- & nighttime population distribution and different transportation scenarios S. Rauch ORCID: orcid.org/0000-0001-7472-47391, H. Taubenböck1,2, C. Knopp2 & J. Rauh1 Rapid accessibility of (intensive) medical care can make the difference between life and death. Initial care in case of strokes is highly dependent on the location of the patient and the traffic situation for supply vehicles. In this methodologically oriented paper we want to determine the inequivalence of the risks in this respect. Using GIS we calculate the driving time to Stroke Units in the district of Münster, Germany for the population distribution at day- & nighttime. Eight different speed scenarios are considered. In order to gain the highest possible spatial resolution, we disaggregate reported population counts from administrative units with respect to a variety of factors onto building level. The overall accessibility of urban areas is better than that of less urban districts in the base scenario. In that scenario 6.5% of the population at daytime and 6.8% at nighttime cannot be reached within a 30-min limit for first care. Assuming a worse traffic situation, which is realistic at daytime, 18.1% of the population fail the proposed limit. In general, we reveal inequivalence of the risks in case of a stroke depending on locations and times of the day. The ability to drive at high average speeds is a crucial factor in emergency care. Further important factors are the different population distribution at day and night and the locations of health care facilities. With the increasing centralization of hospital locations, rural residents in particular will face a worse accessibility situation. Stroke, a suddenly occurring severe circulatory disorder of the brain due to cerebral infarction or cerebral hemorrhage, is one of the most common neurological diseases (Kolominsky-Rabas/Heuschmann [29]: 658; [3]: 10). It is also one of the main causes of disability and invalidity in adulthood (Busch/Kuhnert [15]: 71, Stahmeyer et al. [52]: 711). Stroke is the second leading cause of death worldwide: health data of the WHO [65] report 5.78 million deaths worldwide for the year 2016 and a crude death rate of 77 per 100,000 inhabitants. The German Federal Statistical Office [58] also lists stroke among the most frequent causes of death in Germany. According to this, 7,302 women and 4,722 men died of a stroke in 2018; this corresponds to a share of 1.3% (female: 1.5%; male: 1.0%) of all causes of death [58]. Despite demographic ageing, the number of deaths with stroke as the cause of death has steadily decreased in recent years (2002: 39,433; 2006: 28,566; 2010: 23,675; 2014: 16,753) [54, 55, 56, 57]. There are numerous reasons for this decline of deaths, but it may be assumed that the establishment of a network of specialized care facilities in Germany ("stroke units") contributed to this development. These stroke units are defined as specialized care centers with appropriate equipment for intensive care and monitoring of affected patients (Hacke and Schuster [26]: 520). The treatment concept includes both the acute treatment of stroke patients as well as early rehabilitation efforts (Ringelstein and Busse [48]: 7). Another important aspect for the care of acute stroke patients and of Emergency Medical Services (EMS) in general is the organization of medical first aid.
There is a correlation between stroke-related mortality and travel time to the nearest stroke unit [1, 9, 10]. In a key issues paper on emergency medical care in the prehospital and clinical phase, Fischer et al. ([20]: 393) make the following recommendation for stroke care: "A prehospital time of maximum 60 min to transfer the patient to the nearest suitable hospital with a certified stroke unit is acceptable". In addition, Kunz et al. [32] showed that treatment of an ischemic stroke within 60 min provided subsequent functional improvement and an improved 3-month survival rate. Figure 1 systematically displays the complete time cycle from an emergency event to treatment in a health care facility (Fischer et al. [20]: 388). In this paper, we focus on the highlighted journey-interval, although the method used is valid for the transport-interval as well.
Fig. 1 Systematic first aid and transportation time cycle of an emergency event (Fischer et al. [20]: 388)
This interval addresses the concept of accessibility. In a broader sense the concept of accessibility is multi-dimensional [44]. Besides the "spatial dimension such as availability and accessibility […], non-spatial dimensions like affordability, acceptability and accommodation" [43] are highly relevant. In the case of an emergency, the rapid accessibility of medical care can decide on the chances of complete recovery and the reduction of negative health consequences or even, in the worst case, on survival. Within the time cycle, the dimension of spatial and temporal accessibility is obviously of great importance for acute stroke care (highlighted blue in Fig. 1). In this study we therefore focus on the concept of "time is brain", i.e. the temporal accessibility to the nearest facility rather than availability [24, 49]. There are other methods, mostly based on 2SFCA approaches, which also consider availability (Fransen et al. [22], Higgs et al. [27], Chen et al. [16]). Parvin et al. [43] present the advantages of geographical information systems (GIS) in medical care planning. The specific GIS-methods are very differentiated [43] and have been widely applied in medical supply planning. Still, some challenges with regard to data and methods for modeling accessibility remain: Firstly, the locations of the patients play an important role regarding the quality of accessibility to medical facilities. In care planning, addresses of potential patients' residences are used. This is mainly for statistical reasons, since population statistics provide information about the address of the place of residence (de jure population). Due to a lack of data, however, aggregated information at administrative levels is generally used. But little is displayed in the statistics about the actual whereabouts of people (de facto population), which vary considerably between day and night. Secondly, in emergencies, rapid medical first contact often plays a decisive role. In the prehospital period (Fig. 1), the journey- and transport-periods are the two time-spatial elements which can be evaluated regardless of the actual emergency event. In order to define certain time zones and to implement them in spatial planning, average speeds need to be assumed. The traffic conditions, however, are strongly dependent on the type of road, speed regulations, (priority) rules for emergency vehicles and the traffic situation [21, 30]. Especially the traffic conditions are subject to very strong fluctuations during the day.
The spatial distribution and density of medical care facilities is a third essential factor in evaluating spatial accessibility. In the last decades a mostly economically motivated thinning of the supply networks of general hospitals, specialist clinics, general practitioners, pharmacies etc. has been observed, with the consequence of increasing travel times and a deterioration in accessibility, especially in peripheral areas [19, 42]. Further hospital closures and cuts in the health care system, which will also affect the area of stroke care, are anticipated [46]. The general idea of this study is to present a method that allows the spatial quantification of the accessibility of stroke units depending on the time of day. Therefore, we take the everyday whereabouts of people and the traffic conditions into account. The method used is a straightforward procedure especially suited for macroscale approaches. In this way, the spatial variability of risks shall be mapped. Staying with the "time is brain" concept [49], we treat risk as a spatial phenomenon; it is therefore defined by not being reached in time for initial care. Since risk is not spatially static, but varies through the movement of individuals over time, the differences between the locations of people during the day and at night will be determined. This is crucial since not only does the daytime population differ from the residential nighttime population, but the potential speeds, and thus accessibility, are also lower during the day [21]. Finally, we also consider the variability of risks by integrating different road transport speeds as a function of traffic volume in general, population density and road type. Study area, data and methods We deal with the following methodical approaches of accessibility analysis using the case study of stroke care: The day population (considering the mobility of individuals) differs greatly from the night population (residential information) [61]. By using daytime and night-time population distributions at very high spatial resolution, we consider a temporal dimension as well as a more realistic version of the spatial population distribution over the day than simply using places of residence. The risk of stroke, as well as the mortality rate, increases with age [14]. Nevertheless, cases also occur in younger age groups. Therefore, within this methodologically oriented study, all potential residences will be considered. Furthermore, time-based average car speeds for each class of road are added, in order to reflect daytime and weekly varying traffic load. To evaluate the dynamic effects of different traffic situations on the accessibility situation, we create different scenarios by implementing multiple networks using GIS. Only certified stroke units are considered in our models. There should be no major differences in the provision of services between the stroke units. Analyses that also take the second or third closest center into account are therefore not carried out. We exemplify the spatial risk for a stroke event in the administrative district of Münster, North Rhine-Westphalia, Germany. The district is quite heterogeneous in terms of its settlement structure. It is divided into eight counties (3 city regions, 5 regions with urbanization approach/rural–urban transition). A detailed overview of the population distribution is given in the section entitled Experiment and Results.
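To make the core accessibility computation concrete, the following is a minimal sketch in R on a toy road graph (igraph package): edge travel times are derived from segment lengths and an assumed scenario speed, and each population location is assigned the driving time to its nearest stroke unit. All node names, lengths and speeds are invented for illustration; the study itself performs this step with GIS network analysis on the full road network.

```r
library(igraph)

# Toy road network: segments with length (km) and an assumed scenario speed (km/h).
edges <- data.frame(
  from = c("A", "B", "B", "C", "D"),
  to   = c("B", "C", "D", "E", "E"),
  km   = c(2.0, 5.5, 3.0, 4.0, 6.5),
  kmh  = c(50, 80, 50, 80, 100)
)
edges$minutes <- edges$km / edges$kmh * 60   # travel time per segment

# Undirected for simplicity; a directed graph would be needed to respect one-way streets.
g <- graph_from_data_frame(edges, directed = FALSE)

pop_nodes    <- c("A", "D")   # locations of the (disaggregated) population
stroke_units <- c("E")        # stroke unit locations

# Shortest driving times from every population node to every stroke unit ...
tt <- distances(g, v = pop_nodes, to = stroke_units, weights = E(g)$minutes)

# ... and the time to the nearest unit, which is compared against the 30-min limit.
nearest_min <- apply(tt, 1, min)
nearest_min <= 30
```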
In order to reduce border problems, in addition to the 15 stroke units located in the area, 23 units from the surrounding districts were considered as well (status 2015). We rely on population projections of the German Federal Statistical Office for 2020, which are based on the 2011 census [56] and the known population dynamics (i.e. migration, births, deaths). Individual addresses of residents are not publicly accessible in full spatial resolution. However, the German Federal Statistical Office provides a summarized, INSPIRE-compliant 100 m × 100 m grid data set holding the results of the 2011 census [56]. This corresponds to the population distribution at the place of residence. In our study, however, we refine the model by a temporal and a spatial component: temporally, we model the variation of the population distribution for typical day- and nighttime situations, and spatially we disaggregate the data to a resolution of 20 m. To this end, we incorporate up-to-date statistical information and very detailed building data to obtain time-dependent spatial starting points for the accessibility analysis.

We rely on the following demographic data: total population, employees and commuters per economic sector [34, 35, 36], children in schools and day care centers [37, 38] and the relative share of care-dependent elderly people [39]. The data is distributed for different administrative units following the Nomenclature of Territorial Units for Statistics (NUTS) developed by Eurostat [18]. The data collected was reported on the Local Area Unit (LAU), i.e. municipality level, as well as on the NUTS-3 (county) and NUTS-2 (district) levels. We also rely on cadastral Level-of-Detail-1 (LoD-1) building data provided by the Federal Agency for Cartography and Geodesy (BKG). The dataset provides information on the building ground floor and height as well as on the predominant building usage [2]. This enables us to distinguish usage types such as 'residential', 'commercial and industrial', 'schools' and 'other'. We trained random forest regression models [12] for the different functional types. For example, buildings classified as residential are found to have a median storey height of 3.4 m with a standard deviation of approximately 0.7 m, while buildings that were assigned commercial or service functions have a considerably higher median storey height of 3.8 m as well as a higher standard deviation of 1.2 m. The retrieved random forest regression models explain 89% of the variance in the test set with a mean absolute error (MAE) of 0.14. This enables us to estimate the gross floor area per building.
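A rough sketch of how such a storey and floor-area estimation could look in practice is given below. The file name, column names and the idea of regressing the number of storeys directly are illustrative assumptions for this sketch; this is not the authors' actual implementation, which is not published with the paper.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

# Hypothetical LoD-1 building table: measured height, footprint area, usage class,
# and (for a training subset only) the known number of storeys.
buildings = pd.read_csv("lod1_buildings.csv")  # assumed file name

train = buildings.dropna(subset=["n_storeys"])
X = pd.get_dummies(train[["height_m", "footprint_m2", "usage"]], columns=["usage"])
y = train["n_storeys"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)

pred = rf.predict(X_te)
print("R^2:", r2_score(y_te, pred), "MAE:", mean_absolute_error(y_te, pred))

# Apply the model to all buildings and derive the gross floor area
# as predicted storeys times footprint area.
X_all = pd.get_dummies(buildings[["height_m", "footprint_m2", "usage"]], columns=["usage"])
X_all = X_all.reindex(columns=X.columns, fill_value=0)
buildings["gross_floor_area_m2"] = rf.predict(X_all) * buildings["footprint_m2"]
```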
Assessment of day- and night-time population

In order to gain the highest possible spatial resolution for the population distribution, we disaggregate the collected statistical information from administrative units (serving as source units) to the single building level (serving as target units) following a top-down approach (for reference [4, 23, 67]). Besides the living space per building, we integrate ancillary information about the predominant building usage as well as knowledge about the socio-economic setting within the municipalities, including the number of employees per economic sector, the gross commuting balance per economic sector, the number of children and pupils, and the relative share of care-dependent elderly people.

For daytime modelling the following core assumptions have been made: during the day, pupils are located in schools; employed persons are located in the building types linked to their respective economic sector; care-dependent persons are located in elderly home facilities/assisted living; and non-employed persons are located in residential buildings. For the night situation, our main assumption is the exclusive residence of the population in residential buildings. Following this assumption, one density value is calculated taking the reported total population counts and the respective residential buildings in the LoD-1 building stock into account. According to the Federal Institute for Occupational Safety and Health [5], overall 80% of the employed population works between 7 a.m. and 7 p.m. on working days. Approximately 20% of employees have staggered working hours, i.e. working hours outside 7 a.m. to 7 p.m. In order to keep the model as simple as possible, we define the timeframe for daytime modelling as 7 a.m. to 7 p.m. and vice versa for nighttime. The disaggregated population is thus, compared to other data sets, available in its temporal variability (day and night) as well as at a higher spatial resolution on a 20 m × 20 m raster.

We use a multi-scale approach to validate these data: by carrying out the disaggregation from the NUTS-3 and NUTS-2 levels to the building level and then aggregating the result back to the independent and previously unused data of the LAU level, validation becomes possible. In addition to the approach described above, we also perform a "linear" estimation without additional information on building use. Thus, the added value of integrating additional socio-demographic and economic data is demonstrated. The deviations of the modelled values are shown as box plots in Fig. 2. The disaggregation of NUTS-2 data results in a mean absolute deviation of 17.9% and 9.3% relative to the figures for LAU in the night and day scenarios, respectively. The disaggregation of NUTS-3 input data reduces the mean absolute deviation to about 6.4% and 5.9%, respectively.

Fig. 2: Source data was disaggregated to the single building level, aggregated back to municipality (LAU) level and compared to the reported numbers. Left: relative deviation of the aggregated modelled numbers from the reported numbers at LAU level; right: validation approach considering multiple spatial scales, described as NUTS and LAU units.
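A compact sketch of the top-down disaggregation and the re-aggregation check described above might look as follows. The table layout, column names and the use of residential floor area as the only weighting variable are simplifying assumptions for illustration; the actual model additionally uses building usage and socio-economic data.

```python
import pandas as pd

# Assumed inputs: buildings with county (NUTS-3) id, municipality (LAU) id and
# residential gross floor area; reported population per NUTS-3 and per LAU.
bld = pd.read_csv("buildings.csv")         # columns: nuts3, lau, floor_area_m2
pop_nuts3 = pd.read_csv("pop_nuts3.csv")   # columns: nuts3, population
pop_lau = pd.read_csv("pop_lau.csv")       # columns: lau, population (held out for validation)

# Top-down disaggregation: share each county's population across its buildings
# proportionally to residential floor area (nighttime assumption).
bld["weight"] = bld["floor_area_m2"] / bld.groupby("nuts3")["floor_area_m2"].transform("sum")
bld = bld.merge(pop_nuts3, on="nuts3")
bld["pop_night"] = bld["weight"] * bld["population"]

# Validation: aggregate modelled building populations back to LAU and compare
# against the independent reported LAU figures (mean absolute percentage deviation).
check = bld.groupby("lau")["pop_night"].sum().rename("modelled").reset_index()
check = check.merge(pop_lau, on="lau")
mad = (check["modelled"] - check["population"]).abs().div(check["population"]).mean() * 100
print(f"Mean absolute deviation vs. reported LAU population: {mad:.1f}%")
```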
Spatial accessibility

Focusing on accessibility by car, a street network was derived from OpenStreetMap (OSM) data. Neis et al. [41] mention that the dataset is becoming comparable in quality to geodata from commercial providers (also see [11, 51]), especially in countries with active communities such as Germany. The successful use of OSM has been demonstrated for accessibility analyses [42, 46, 53, 66]. The influence of one-way streets was also considered with the help of these data. For each road segment we calculate a specific driving speed as a function (F1) of the maximum speed (\(V_{max}\)), a space-dependent parameter cfa (proposed for the study area as 0.85) and the surrounding population density, using the method for OSM data suggested by BBSR [6]. The constant value for k is either 5,000 for highways and highway-like routes or 10,000 for all other routes.

$$v = V_{max} \cdot cfa \cdot \left(1 - \frac{\text{population density within a 1 km radius}}{k}\right)$$

The goal of this form of network attribution is to represent the traffic jam risk in more densely populated communities and settlement areas, i.e., to simulate a more stressed road network. BBSR [6] and Schwarze/Spiekermann [50] compare this method with floating car data (FCD) and Google Maps data. They concluded that the approach used here, when compared to measured travel speeds, lies within the range established by extreme network conditions (disturbed network, free-flowing network) and shows a high correlation with the real-time data. Besides the given infrastructure, this estimated value is the central factor in accessibility analyses. This method yields the base scenario we used for the day- and nighttime analyses.

The method used here is a straightforward procedure that considers essential influencing factors and is therefore particularly suitable for macroscale approaches. All accessibility models are based on assumptions concerning the network state at a certain time. In reality, there are significant differences in daily and weekly accessibility. The traffic flow is susceptible to disruptions and is sometimes subject to very large fluctuations. Therefore, we introduce further scenarios by reducing (−30%, −20%, −10%) or increasing (+10%, +20%, +30%, +40%) the speed for each edge, in order to determine how different daytime mobility might affect the overall accessibility situation. In addition, the respective scenarios (P10–P40) simulate priority rules for emergency vehicles that allow higher speeds on particular road types in the event of an emergency [45]. Other forms of transport, such as helicopters, are also relevant in the emergency treatment of strokes. However, since their share in first aid is low and the availability and cost of helicopters are problematic [33, 47], the study focuses exclusively on car transport. Table 1 shows the modeled scenarios. The travel time to the closest stroke unit was calculated for each day- and nighttime population point in each scenario. The outcomes of the resulting 16 calculations are discussed below.

Table 1: Scenarios by varying velocities
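To make the speed attribution (F1) and the scenario construction concrete, here is a minimal sketch in Python. The function mirrors the formula above with cfa = 0.85 and k = 5,000 (highways) or 10,000 (other roads) as given in the text; the example values and the helper for scenario scaling are illustrative assumptions, not code from the study.

```python
def segment_speed(v_max_kmh, pop_density_1km, is_highway, cfa=0.85):
    """Driving speed for one road segment following formula F1."""
    k = 5000 if is_highway else 10000
    return v_max_kmh * cfa * (1 - pop_density_1km / k)

def scenario_speed(base_speed_kmh, change_pct):
    """Scale a base-scenario edge speed, e.g. -30 for M30 or +40 for P40."""
    return base_speed_kmh * (1 + change_pct / 100.0)

# Example: an urban arterial with a 50 km/h limit and 4,000 inhabitants/km^2 nearby.
v_base = segment_speed(50, 4000, is_highway=False)
print(v_base)                       # base-scenario speed (25.5 km/h)
print(scenario_speed(v_base, -30))  # congested worst case (M30)
print(scenario_speed(v_base, +40))  # free-flowing best case (P40)
```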
Experiment and results

We focus on three essential elements of spatial supply: first, a representation of the accessibility of the day- and nighttime population; secondly, the effect of different traffic situations; and finally the supply situation in different settlement categories. Figure 3 shows the modeled population distribution on which the following accessibility analyses are based. An additional high-resolution image of the maps helps to visualize the regional differences (see Additional files 1, 2).

Fig. 3: Results of modeled daytime (top) and nighttime (bottom) populations

The overall modelled daytime population in our research area is 2,527,000; the night population is 2,619,000.

Focusing day- & night-time population

Using the base scenario, we find that 6.5% of the people (day) and 6.8% (night) are not within a 30-min driving distance of a stroke unit. The 30-min limit for first aid in case of an emergency was proposed by a joint initiative of the Institute of Emergency Medicine and Medical Management, the University Hospital of Munich, and the Association of Southwest German Emergency Physicians [13]. For further differentiation, the 20-min interval was also considered. The number of people not reached for first care (journey-interval) within 20 min increases to 21.0% (day) and 21.5% (night). We find a mean travel time of 14.6 min by day and 13.8 min by night.

Figures 4 and 5 depict the cumulated reached population for each model. The graphs are similar due to the macroscale view. The only slight differences in accessibility are caused by the fact that the study area is quite urban. Therefore, many population points are spatially identical (daytime population 1,638,487 pixels, nighttime population 1,156,200 pixels, of which 1,035,087 are spatially identical). However, 669,000 modeled individuals reside at night in locations where no one is found during the day. The share of people modeled in the other locations varies, in some cases significantly. There are two slope changes in the graphs. The first change, at around 1.4 million people, is due to people living in densely populated urban areas. The accessibility situation becomes worse for people living and working less centrally. In addition, the two figures (Figs. 4 & 5) show the gap between the M30 scenario and the P-models, especially for a 20-min journey period. Table 2 presents the values and attributes of each scenario.

Fig. 4: Daytime: cumulative population by driving time to the nearest stroke unit
Fig. 5: Nighttime: cumulative population by driving time to the nearest stroke unit
Table 2: Results of the accessibility analyses in the region of Muenster (in minutes) (N(Day) = 2,527,000; N(Night) = 2,619,000)

The maps (Fig. 6) show the positions of the existing stroke units used in the model and the varying accessibility situations at day- (top) compared to nighttime (bottom). In addition to the poorer supply situation in peripheral areas, it is also evident that the night population is less centrally allocated.

Fig. 6: Accessibility of stroke units: day- (top) and nighttime (bottom) accessibility using the base speed model

Focusing different traffic situations

A comparison between expected traffic peaks during rush hours by day (M30) and more relaxed situations at night (P40) shows a significant difference (Fig. 7). 39.8% of the people cannot be reached within a 20-min driving time (M30) during the day, while a comparatively low 8.6% cannot be accessed in the P40 scenario. The maximum driving time during the day is 78.5 min (M30), during the night only 39.3 min (P40). In particular, this difference shows that the ability to drive fast, and therefore to reduce the journey- but also the transport-period, is crucial. It also shows that different traffic situations can have a decisive influence on the driving time of ambulances.

Fig. 7: Accessibility of stroke units: worst-case scenario (M30) by day (top) and best-case scenario (P40) by night (bottom)

Although the median travel time in this scenario (M30) is slightly worse during the day, the number of potential patients affected is lower due to the population distribution (workplaces, leisure activities, etc.). To highlight the influence of the overall traffic situation, Fig. 8 shows the difference between the worst-case scenario (M30) during the day and the best-case scenario (P40) during the night. Due to dense traffic or traffic jams, especially in urban regions during the day and the rush hours, the worst-case scenario is a realistic assumption, while by night the traffic situation is less stressed because no traffic jams of this level are expected at this time.

Fig. 8: Population by driving time to the nearest stroke unit: worst-case scenario (M30) by day and best-case scenario (P40) by night
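The headline figures in this section (shares of the population beyond the 20- and 30-minute thresholds, and the cumulative curves in Figs. 4 and 5) can be derived from a simple travel-time table. The sketch below assumes one row per populated grid cell with its modelled population and the travel time to the nearest stroke unit for a given scenario; the file and column names are placeholders, not outputs of the study.

```python
import pandas as pd

# Assumed result table: population per 20 m cell and travel time (minutes)
# to the nearest stroke unit under one scenario (e.g. base scenario, daytime).
res = pd.read_csv("traveltimes_base_day.csv")   # columns: population, minutes

total = res["population"].sum()
for limit in (20, 30):
    share_not_reached = res.loc[res["minutes"] > limit, "population"].sum() / total
    print(f"> {limit} min: {share_not_reached:.1%} of the modelled population")

# Cumulative reached population over driving time, as plotted in Figs. 4 and 5.
curve = (res.sort_values("minutes")
            .assign(cum_pop=lambda d: d["population"].cumsum()))
```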
Focusing different spatial categories

Overall, the marginally better accessibility during daytime can also be explained by workplace agglomerations outside of city centers. However, these agglomerations are more likely to be found in urban areas than in less densely populated regions. The median accessibility (base scenario) of 7.2 min (day) and 6.9 min (night) in urban areas is significantly lower than that in less urbanized regions with a lower population density (16.3 min (day); 15.8 min (night)) (Tables 3 & 4).

Table 3: Results of the accessibility analyses in the region of Muenster by day and type of region
Table 4: Results of the accessibility analyses in the region of Muenster by night and type of region

These results reflect the location policy of mostly very central locations for the stroke units. In urban districts, both day and night populations are reached within 30 min in the base scenario. While in the best case this also applies to both examined points in time, at nighttime 2.2% of the population is already not reached within 20 min. With significantly reduced driving speeds, less densely populated regions show a worse supply situation. Even in the best-case scenario (P40), 2.7% of the population are not reached within 30 min. Assuming a very dense traffic situation (M30), over 50% of the population, both during day and night, cannot be accessed within 20 min for potential first care.

An essential human right is the "right to health", which includes the concept of equality of living conditions [62, 64]. However, we show in this study that varying locations of stay can influence the risk of receiving adequate help quickly in the event of a stroke. Even though we found that accessibility to stroke units can overall be considered good, inequalities in space and time are evident for the majority of the population. Using spatial-quantitative methods and GIS, the central results confirm that space has a decisive influence on personal health risk. Previous accessibility analyses in emergency care mostly use residential populations to estimate supply potentials. Thus, they assume the nighttime scenarios as used in our study, but disregard spatiotemporal variations in where people stay. By using state-of-the-art macroscale accessibility methods and a very detailed time-dependent population distribution, it is possible to make more accurate estimates of the temporal accessibility of the entire population than with fixed catchment areas alone. With this, we document that risk is variably distributed in space over the course of a day.

However, the approach has some limitations due to the data and their accuracy. In general, we assume that in case of an emergency the closest facility will be chosen [42, 46, 59, 60]. We are aware that this assumption is not always true in reality. Nevertheless, the presented type of accessibility analysis is considered an objective, location-based approach [17, 25], and specific spatial insight is possible due to the strongly disaggregated population data. Of course, the approach includes errors due to the spatially and thematically highly resolved population disaggregation. Nevertheless, the validation results show that this approach makes it possible to estimate the spatiotemporal distribution of the population with high accuracy.
With respect to population data, 0.3% of the day- and 0.2% of the nighttime population was not considered in our results due to topological errors within the network. Human dynamics linked with, e.g., leisure and consumer behavior, as well as the partial temporal overlap of the defined daytime (7 a.m. to 7 p.m.) with real school hours, among other aspects, mean that our assessment still relies on generalized assumptions. Nevertheless, the low MAE values testify to the high accuracy achievable under these data settings, and we believe our estimates feature even higher accuracies when using the LAU input data.

In our experiment, the nighttime shows a slightly worse health supply coverage. This can be attributed to the concentration of jobs in central locations, and thus rather close to health care facilities, because centrally oriented planning of stroke units, as in general health care planning, focuses mainly on urban areas with both many residential locations and workplaces in the immediate vicinity. Due to the slightly higher number of people who have no access within 20 min at night, we assume that with the increasing centralization of hospital locations, rural residents in particular will face a worse accessibility situation in the future. The centralization of hospital locations certainly contributes to quality assurance and perhaps even improvement. However, it also leads to a significant deterioration of the accessibility situation, especially in less densely populated, peripherally located areas, where first aid for stroke patients is becoming increasingly important due to aging populations. According to the current population forecast 2040 of the BBSR [8], demographic aging in Germany will continue (average age: 2017: 44.3 years; 2040: 45.9 years). Centrally located (urban) regions show a significantly more favorable development until 2040 (average age 2017: 43.4 years; 2040: 44.4 years) than peripherally located regions (2017: 47.3 years; 2040: 50.3 years) (BBSR [8]: 4–5).

This analysis reveals the insufficient care capabilities in peripheral locations. Since stroke units can hardly be operated there for economic reasons, new techniques are needed to treat patients in a specialized, remote manner using telemedicine [31]. Another promising concept is the use of Mobile Stroke Units (MSU). These vehicles provide prehospital care using tools for the diagnosis and treatment of a stroke and are therefore a valuable instrument for rural areas, where patients face worse access to stationary stroke care (Kunz et al. [32], Mathur et al. [40]: 1). Especially for remote regions, the Air-Mobile Stroke Unit approach is also promising [63].

In general, we reveal an inequality of risks in case of a stroke depending on location and time of day. The ability to drive at high average speeds is a crucial factor in emergency care. This is the only way to ensure that patients receive the right initial treatment in time. Thus, accessibility is not only an important criterion for decision-making by health professionals and policy makers; measures of accessibility (like the travel time to the nearest hospital) also offer individuals the opportunity to review their care situation. In addition, high-resolution spatial population data is an elementary component of care analyses and spatial epidemiology. The ability to clearly locate specific population groups provides the opportunity to identify risk areas in preventive research.
The effects of different population distributions reinforce previous findings on inequality in medical care [28, 53]. Thus, we show that a multitemporal view of the population distribution can produce variations in the accessibility situation in the same way as the use of different speed scenarios. Therefore, with a static population distribution, an adapted speed scenario always has to be chosen depending on the time of investigation. Even if this approach cannot exactly describe individual paths in reality, it creates a picture that is closer to actual practice. In methodological terms, it has been shown that the combination of heterogeneous free data sets from censuses or OSM makes it possible to map reality at ever better spatial and temporal resolution. The combination of spatially high-resolution population data and different speed scenarios opens up numerous benefits for the planning process. The method is applicable to strokes, but also to other emergency events or even to general care analyses involving car travel.

The accessibility network is based on OpenStreetMap data and is therefore open access. The modeled population distribution was created by DLR and is therefore not publicly available.

Footnote 1: Region types as classified by the German Federal Institute for Research on Building, Urban Affairs and Spatial Development [7]: City regions: regions in which at least 50% of the population lives in large and medium-sized cities and in which a large city with a population of 500,000 or more is located, as well as regions with a population density of at least 300 inhabitants per km2, not including large cities. Regions with urbanization approach: regions in which at least 33% of the population lives in large and medium-sized cities with a population density of between 150 and 300 inhabitants per km2, and regions in which at least one large city (100,000 or more inhabitants) is located and which have a population density of at least 100 inhabitants per km2 excluding large cities. Further, we will use the term "rural–urban transition" for these regions.

References

Ader J, Wu J, Fonarow GC, Smith EE, Shah S, Xian Y, Bhatt DL, Schwamm LH, Reeves MJ, Matsouaka RA, Sheth KN. Hospital distance, socioeconomic status, and timely treatment of ischemic stroke. Neurology. 2019;93(8):747–57. https://doi.org/10.1212/WNL.0000000000007963. AdV. Data format description of Official 3D Building Model LoD1 of Germany (LoD1-DE) Version 1.4. Working Committee of the Surveying Authorities of the Laender of the Federal Republic of Germany (AdV); 2019. AQUA – Institut für angewandte Qualitätsförderung und Forschung im Gesundheitswesen. Versorgungsqualität bei Schlaganfall. Konzeptskizze für ein Qualitätssicherungsverfahren. https://www.g-ba.de/downloads/39-261-2283/2015-06-18_AQUA_Abnahme-Konzeptskizze-Schlaganfall.pdf (13.05.2020). Aubrecht C, Özceylan D, Steinnocher K, Freire S. Multi-level geospatial modeling of human exposure patterns and vulnerability indicators. Nat Hazards. 2013;68:147–63. https://doi.org/10.1007/s11069-012-0389-9. Backhaus N, Tisch A, Wöhrmann AM. BAuA-Arbeitszeitbefragung: Vergleich 2015–2017, 2018; https://doi.org/10.21934/BAUA:BERICHT20180718. BBSR (Research on Building, Urban Affairs and Spatial Development) (Ed.). Methodische Weiterentwicklungen der Erreichbarkeitsanalysen des BBSR, BBSR-Online-Publication Nr. 09/2019, https://www.bbsr.bund.de/BBSR/DE/veroeffentlichungen/bbsr-online/2019/bbsr-online-09-2019.html (20.05.2021). BBSR (Research on Building, Urban Affairs and Spatial Development).
Laufende Raumbeobachtung – Raumabgrenzungen, 2020 https://www.bbsr.bund.de/BBSR/DE/forschung/raumbeobachtung/Raumabgrenzungen/deutschland/regionen/siedlungsstrukturelle-regionstypen/regionstypen.html (09.11.2020). BBSR (Research on Building, Urban Affairs and Spatial Development). Raumordnungsprognose 2040. Bevölkerungsprognose: Ergebnisse und Methodik, Bonn. https://www.bbsr.bund.de/BBSR/DE/veroeffentlichungen/analysen-kompakt/2021/ak-04-2021.html (23.05.2021). Bekelis K, Marth NJ, Wong K, Zhou W, Birkmeyer JD, Skinner J. Primary stroke center hospitalization for elderly patients with stroke: implications for case fatality and travel times. JAMA Intern Med. 2016;176(9):1361–8. https://doi.org/10.1001/jamainternmed.2016.3919. Berlin C, Panczak R, Hasler R, Zwahlen M. Do acute myocardial infarction and stroke mortality vary by distance to hospitals in Switzerland? Results from the Swiss National Cohort Study. BMJ Open. 2016. https://doi.org/10.1136/bmjopen-2016-013090. Boeing G. Street network models and indicators for every urban area in the world. Geographical Anal. 2021. https://doi.org/10.1111/gean.12281. Breiman L. Random forests. Mach Learn. 2001;45:5–32. https://doi.org/10.1023/A:1010933404324. Bundesärztekammer (2007): Eckpunkte Notfallmedizinische Versorgung der Bevölkerung in Klinik und Präklinik. http://www.bundesaerztekammer.de/fileadmin/user_upload/downloads/Eckpunkte_Med_Notfallversorgung.pdf (11.05.2021). Busch MA, Schienkiewitz A, Nowossadeck E, Gößwald A. Prevalence of stroke in adults aged 40–79 years in Germany. Bundesgesundheitsbl. 2013;56:656–60. https://doi.org/10.1007/s00103-012-1659-0. Busch MA, Kuhnert R. 12-Monats-Prävalenz von Schlaganfall oder chronischen Beschwerden infolge eines Schlaganfalls in Deutschland. J Health Monitor. 2017. https://doi.org/10.17886/RKI-GBE-2017-010. Chen BY, Cheng XP, Kwan MP, Schwanen T. Evaluating spatial accessibility to healthcare services under travel time uncertainty: A reliability-based floating catchment area approach. J Transp Geogr. 2020;87:102794. https://doi.org/10.1016/j.jtrangeo.2020.102794. Curl A. Measuring what matters: Comparing the lived experience to objective measures of accessibility, Doctoral dissertation. University of Aberdeen; 2013. Commission E. Regions in the European Union: nomenclature of territorial units for statistics, NUTS 2016/EU 28: edition 2018. LU: Publications Office; 2018. Eyding J, Krogias C, Weber R. Versorgungsrealität des Schlaganfalls in Deutschland. Nervenarzt. 2020;91:875–6. https://doi.org/10.1007/s00115-020-00987-w. Fischer M, Kehrberger E, Marung H, Moecke H, Prückner S, Trentzsch H, Urban B, Fachexperten der Eckpunktepapier-Konsensus-Gruppe. Eckpunktepapier, . zur notfallmedizinischen Versorgung der Bevölkerung in der Prähospitalphase und in der Klinik. Notfall Rettungsmedizin. 2016;2016(19):387–95. https://doi.org/10.1007/s10049-016-0187-0. Fleischman RJ, Lundquist M, Jui J, Newgard CD, Warden C. Predicting ambulance time of arrival to the emergency department using global positioning system and Google maps. Prehosp Emerg Care. 2013;17(4):458–65. https://doi.org/10.3109/10903127.2013.811562. Fransen K, Neutens T, De Maeyer P, Deruyter G. A commuter-based two-step floating catchment area method for measuring spatial accessibility of daycare centers. Health Place. 2015;32:65–73. https://doi.org/10.1016/j.healthplace.2015.01.002. Freire S, Aubrecht C. Integrating population dynamics into mapping human exposure to seismic hazard. Nat Hazard. 2012;12:3533–43. 
https://doi.org/10.5194/nhess-12-3533-2012. Freyssenge J, Renard F, Schott AM, Derex N, Nighoghossian N, Tazarourte KE, Khoury C. Measurement of the potential geographic accessibility from call to definitive care for patient with acute stroke. Int J Health Geogr. 2018;17:1. https://doi.org/10.1186/s12942-018-0121-4. Geurs KT, Wee BW. Accessibility evaluation of land-use and transport strategies: review and research directions. J Transport Geography. 2004;12(2):127–40. https://doi.org/10.1016/j.jtrangeo.2003.10.005. Hacke W, Schuster HP. Schlaganfallstationen (Stroke Units)–Zankapfel zwischen Internisten und Neurologen oder gemeinsame Aufgabe? Intensivmedizin und Notfallmedizin. 1998;35(7):519–22. Higgs G, Langford M, Jarvis P, Page N, Richards J, Fry R. Using Geographic Information Systems to investigate variations in accessibility to 'extended hours' primary healthcare provision. Health Soc Care Community. 2019;27(4):1074–1084. https://doi.org/10.1111/hsc.12724. Kapral M, Hall R, Gozdyra P, Yu A, Jin A, Martin C, Silver FL, Schwartz RH, Manuel DG, Fang J, Porter J, Koifman J, Austin P. geographic access to stroke care services in rural communities in Ontario, Canada. Can J Neurol Sci J Can Des Sci Neurologiques. 2020;47(3):301–8. https://doi.org/10.1017/cjn.2020.9. Kolominsky-Rabas PL, Heuschmann PU. Inzidenz, Ätiologie und Langzeitprognose des Schlaganfalls. Fortschritte der Neurologie Psychiatrie. 2002;70(12):657–62. Kommer GJ, Zwakhals SLN, Over E. Modellen referentiekader ambulancezorg. 2016: Ontwikkeling modellen voor DAM, B-vervoer en rijtijden. https://www.rivm.nl/publicaties/modellen-referentiekader-ambulancezorg-2016-ontwikkeling-modellen-voor-dam-b-vervoer-en (10.05.2021). Kraft P, Kleinschnitz C, Wiedmann S, Heuschmann PU, Volkmann J. Transregionales Netzwerk für Schlaganfallintervention mit Telemedizin (TRANSIT-Stroke). 2014 https://www.transit-stroke.de/pdf/Artikel_TRANSIT-Stroke.pdf (20.10.2020). Kunz A, Nolte CH, Erdur H, Fiebach JB, Geisler F, Rozanski M, Scheitz JF, Villringer K, Waldschmidt C, Weber JE, Wendt M, Winter B, Zieschang K, Grittner U, Kaczmarek S, Endres M, Ebinger M, Audebert HJ. Effects of Ultraearly Intravenous Thrombolysis on Outcomes in Ischemic Stroke: The STEMO (Stroke Emergency Mobile) Group. Circulation. 2017;2;135(18): 1765–1767. https://doi.org/10.1161/CIRCULATIONAHA.117.027693. Leira EC, Stilley JD, Schnell T, Audeber HJ, Adams HP Jr. Helicopter transportation in the era of thrombectomy: the next frontier for acute stroke treatment and research. Eur Stroke J. 2016;1(3):171–9. https://doi.org/10.1177/2396987316658994. LDB NRW a Statistik d. sozialversicherungspfl. Beschäftigten [WWW Document]. Statistik d. sozialversicherungspfl. Beschäftigten (13111). https://www.ldb.nrw.de/ldbnrw/online/data?operation=statistic&levelindex=0&levelid=1604048888631&code=13111&option=table&info=on (30.10.2020). LDB NRW b. Pendlerrechnung in Nordrhein-Westfalen [WWW Document]. Pendlerrechnung in Nordrhein-Westfalen (19321). https://www.landesdatenbank.nrw.de/ldbnrw//online?operation=statistic&code=19321 (30.10.2020). LDB NRW c. Fortschreibung des Bevölkerungsstandes [WWW Document]. Fortschreibung des Bevölkerungsstandes (12411). https://www.landesdatenbank.nrw.de/ldbnrw/online/data?operation=statistic&levelindex=0&levelid=1604050534628&code=12411 (30.10.2020). LDB NRW d. Tageseinrichtungen für Kinder [WWW Document]. Tageseinrichtungen für Kinder, tätige Personen, genehmigte Plätze und Kinder in Tageseinrichtungen nach Altersgruppen (22541–01i). 
https://www.landesdatenbank.nrw.de/ldbnrw/online/data?operation=previous&levelindex=1&step=1&titel=Tabellenaufbau&levelid=1604051467899&acceptscookies=false (30.10.2020). LDB NRW e. Statistik der allgemeinbildenden Schulen [WWW Document]. Statistik der allgemeinbildenden Schulen (21111). https://www.ldb.nrw.de/ldbnrw/online/data?operation=statistic&levelindex=0&levelid=1604051020002&code=21111 (30.10.2020). LDB NRW f. Bevölkerungsstand nach Altersjahren [WWW Document]. Bevölkerungsstand nach Altersjahren (12411–09iz). https://www.landesdatenbank.nrw.de/ldbnrw//online/data?operation=table&code=12411-09iz&levelindex=0&levelid=1604051847999 (30.10.2020). Mathur S, Walter S, Grunwald IQ, Helwig SA, Lesmeister M, Fassbender K. (2019): Improving prehospital stroke services in rural and underserved settings with mobile stroke units. Front Neurol. 2019;10:159. https://doi.org/10.3389/fneur.2019.00159. Neis P, Zielstra D, Zipf A. The street network evolution of crowdsourced maps: OpenStreetMap in Germany 2007–2011. Future Internet. 2012;4:1–21. Neumeier S. Accessibility to services in rural areas. disP Plan Rev. 2016;52(3):32–49. https://doi.org/10.1080/02513625.2016.1235877. Parvin F, Ali SA, Hashmi SNI, Khatoon A. Accessibility and site suitability for healthcare services using GIS-based hybrid decision-making approach: a study in Murshidabad, India. Spat Inf Res. 2020. https://doi.org/10.1007/s41324-020-00330-0. Penchansky R, Thomas JW. The concept of access: definition and relationship to consumer satisfaction. Med Care. 1981;19:127–40. Petzäll K, Petzäll J, Jansson J, Nordström G. Time saved with high speed driving of ambulances. Accid Anal Prev. 2011;43:818–22. Rauch S, Rauh J. Verfahren der GIS-Modellierung von Erreichbarkeiten für Schlaganfallversorgungszentren. Raumforschung und Raumordnung Spatial Res Plann. 2016;74(5):437–50. https://doi.org/10.1007/s13147-016-0432-5. Reiner-Deitemyer V, Teuschl Y, Matz K, Reiter M, Eckhardt R, Seyfang L, Tatschl C, Brainin M. Helicopter transport of stroke patients and its influence on thrombolysis rates: data from the Austrian Stroke Unit Registry. Stroke. 2011;42(5):1295–300. Ringelstein EB, Busse O. Stroke Units in Deutschland Gefährdung eines Erfolgsrezeptes? Gesundheit und Gesellschaft: das AOK-Forum für Politik, Praxis und Wissenschaft. 2004;4(3):7–13. Saver JL. Time is brain quantified. Stroke. 2006;37:263–6. Schwarze B, Spiekermann, K. Flächendeckender Vergleich der Straßennetzmodelle. Projektnotiz MORO ACC PN 7. 2018 Dortmund: S&W. Sehra SS, Singh J, Rai HS. A Systematic Study of OpenStreetMap Data Quality Assessment. In: Proceedings of the 2014 11th International Conference on Information Technology: New Generations, Las Vegas, NV, USA,7–9 April 2014; IEEE: Las Vegas, NV, USA, 2014; 377–381. Stahmeyer JT, Stubenrauch S, Geyer S, Weissenborn K, Eberhard S. The frequency and timing of recurrent stroke—an analysis of routine health insurance data. Deutsches Ärzteblatt International. 2019;116:711–7. https://doi.org/10.3238/arztebl.2019.0711. Stangl S, Rauch S, Rauh J, Meyer M, Müller-Nordhorn J, Wildner M, Wöckel A, Heuschmann PU. Disparities in accessibility to evidence-based breast cancer care facilities by rural and urban areas in Bavaria. Germany Cancer. 2021;127(13):2319–32. https://doi.org/10.1002/cncr.33493. Statistisches Bundesamt (Ed.). Gesundheitswesen, Todesursachen in Deutschland 2002. 2004; Fachserie 12 Reihe 4. Wiesbaden. Statistisches Bundesamt (Ed.). 
Todesursachen in Deutschland, Gestorbene in Deutschland an ausgewählten Todesursachen 2006. 2007 Fachserie 12 Reihe 4. Wiesbaden. Statistisches Bundesamt (Ed.). Gesundheit, Todesursachen in Deutschland 2010. 2012 Fachserie 12 Reihe 4. Wiesbaden. Statistisches Bundesamt (Destatis). 23211–0002: Gestorbene: Deutschland, Jahre, Todesursachen, Geschlecht. 2020 https://www-genesis.destatis.de/genesis//online?operation=table&code=23211-0002&bypass=true&levelindex=0&levelid=1604832497883#abreadcrumb (19.05.2021). Tao Z, Yao Z, Kong H, Duan F, Li G. Spatial accessibility to healthcare services in Shenzhen, China: improving the multi-modal two-step floating catchment area method by estimating travel time via online map APIs. BMC Health Serv Res. 2018;18:345. https://doi.org/10.1186/s12913-018-3132-8. Tao Z, Cheng Y, Zheng Q, Li G. Measuring spatial accessibility to healthcare services with constraint of administrative boundary: a case study of Yanqing District, Beijing, China. Int J Equity Health. 2018;17:7. Taubenböck H, Roth A, Dech S. Linking structural urban characteristics derived from high resolution satellite data to population distribution. In: Urban and Regional Data Management. Coors, Rumor, Fendel & Zlatanova (Eds). Taylor & Francis, London, 2007;35–45. United Nations. Universal Declaration of Human Rights. 1948. Walter S, Zhao H, Easton D, Bil C. Air-Mobile Stroke Unit for access to stroke treatment in rural regions. Int J Stroke. 2018;13(6):568–75. https://doi.org/10.1177/1747493018784450. WHO. Constitution of the World Health Organization, 2006 https://www.who.int/governance/eb/who_constitution_en.pdf (19.05.2021). WHO. The Global Health Observatory. Causes of deaths. 2018 https://www.who.int/data/gho/data/themes/topics/causes-of-death/GHO/causes-of-death (30.10.2020). Wieland T. Modellgestützte Verfahren und big (spatial) data in der regionalen Versorgungsforschung I. Monitor Versorgungsforschung. 2018;2:41–5. https://doi.org/10.24945/MVF.0218.1866-0533.2072. Wright JK. A method of mapping densities of population: with Cape Cod as an example. Geogr Rev. 1936;26:103–10.

Nonduplication criteria: The paper submitted is original and was not published or submitted for publication elsewhere.

Funding: Open Access funding enabled and organized by Projekt DEAL. Furthermore, this publication was supported by the Open Access Publication Fund of the University of Wuerzburg. No further funding was received.

Affiliations: Institute for Geography and Geology, Julius-Maximilians-Universität Würzburg, 97074 Würzburg, Germany (S. Rauch, H. Taubenböck & J. Rauh); German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), Oberpfaffenhofen, 82234 Wessling, Germany (H. Taubenböck & C. Knopp).

Author contributions: SR: conceptualization, methodology, data curation, writing, reviewing and editing. HT: methodology, data curation, writing, reviewing and editing. CK: methodology, data curation, writing. JR: writing, reviewing and editing. Correspondence to S. Rauch.

Additional file 1: The map shows the modeled population distribution during the day in high resolution. Additional file 2: The map shows the modeled population distribution during the night in high resolution.

Citation: Rauch, S., Taubenböck, H., Knopp, C. et al. Risk and space: modelling the accessibility of stroke centers using day- & nighttime population distribution and different transportation scenarios. Int J Health Geogr 20, 31 (2021). https://doi.org/10.1186/s12942-021-00284-y

Keywords: Accessibility analysis; High resolution population data
Yield Variance
Reviewed by Marshall Hargrave

What Is Yield Variance?

Yield variance is the difference between the actual output and the standard output of a production or manufacturing process, based on standard inputs of materials and labor. The yield variance is valued at standard cost. Yield variance is generally unfavorable, where the actual output is less than the standard or expected output, but it can also be favorable when output exceeds expectations.

Yield variance measures the difference between actual output and standard output of a production or manufacturing process. It contrasts with mix variance, which is the difference in overall material usage. Yield variance will be above or below zero if a firm overestimates or underestimates how much material it takes to generate a certain amount of output.

$$\text{Yield Variance} = \text{SC} \times \left(\text{Actual Yield} - \text{Standard Yield}\right)$$
where: SC = standard unit cost

How to Calculate Yield Variance

Yield variance is calculated as the actual yield minus the standard yield, multiplied by the standard unit cost.

What Does Yield Variance Tell You?

Yield variance is a common financial and operational metric within manufacturing industries. To improve or enhance the measure, it's fairly common for an analyst to adjust inputs for special scenarios. For instance, during a raw material price spike, it may not make sense to use temporary price inputs experiencing short-term jumps in prices, as these results would be distorted from normal levels. Here, like any other analysis, it is part art and part science.

Generally, yield variance uses direct materials, which are raw materials that are made into finished products; it does not use indirect materials that are merely consumed during the production process. Direct materials are goods that physically become the finished product at the end of the manufacturing process. In other words, these are the tangible pieces or components of a finished product.

If a firm overestimates or underestimates how much material it takes to generate a certain amount of output, the material's yield variance will be less than or greater than zero. If the standard quantity is equal to the quantity actually used, then the variance will be zero. If the direct materials yield variance shows that the company is producing less than originally planned for a given level of input, the company can review its operations for ways to become more efficient. Intuitively, producing more products with the same level of inventory while keeping quality constant can help the organization improve profitability.

Example of How to Use Yield Variance

If 1,000 units of a product are the standard output based on 1,000 kilograms of materials in an 8-hour production unit, and the actual output is 990 units, there is an unfavorable yield variance of 10 units (1,000 - 990). If the standard cost is $25 per unit, the unfavorable yield variance would be $250 (10 x $25).

Or consider company ABC, which will produce 1,000,000 units of a toy for every 1,500,000 units of specialized plastic parts. In its most recent production run, Company ABC used 1,500,000 plastic units, but only produced 1,250,000 toys. The cost of plastic units is $0.50 per unit.
The yield variance is:

(1.25M actual toy output − 1.5M expected toy output) × $0.50 per unit cost = $125,000 unfavorable yield variance

The Difference Between Yield Variance and Mix Variance

Yield variance is a measure of the difference in output. Meanwhile, mix variance is the difference in overall material usage or inputs. Specifically, material usage can vary because a mix of products or inputs is used that differs from the standard mix.

Limitations of Using Yield Variance

While a yield variance may tell you whether or not your output is efficient or as expected, it can't tell you why the variance occurred or what contributed to it.
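As a quick illustration, the calculation above can be reproduced with a few lines of code. This is only a sketch of the formula given earlier, with the Company ABC figures from the example plugged in.

```python
def yield_variance(actual_yield, standard_yield, standard_unit_cost):
    """Yield variance = standard unit cost * (actual yield - standard yield).
    A negative result is unfavorable (actual output below standard output)."""
    return standard_unit_cost * (actual_yield - standard_yield)

# Company ABC example: 1.25M toys actually produced vs. 1.5M expected,
# at a standard cost of $0.50 per unit.
print(yield_variance(1_250_000, 1_500_000, 0.50))  # -125000.0 -> $125,000 unfavorable
```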
Splash Spring 2015

Biological and Medical Science

B4091: How to Distinguish Medical Knowledge from Hoaxes?
Difficulty: **
Teachers: Kun-Hsing Yu
There are lots of newspaper articles talking about the "recent advances in health sciences". Some suggest "beer helps to prevent cancer", others assert "beer increases the risk of getting cancer". Which one should we believe? How do doctors decide what to do when coming across contradictory scientific evidence? And what don't doctors know about the surgeries they perform or the drugs they prescribe? We will do some hands-on experiments on drawing conclusions in a world of uncertainty, and take a quick survey of current methodologies in the medical sciences.
Have flipped coins, or played any other games involving probability or uncertainty.
Section 1: Sat 10:00am--10:45am
Section 1: 15 (max 50)

B4084: Food, Health, and the Rhetoric of Nutrition Facts
Difficulty: *
Teachers: Adrienne Rose Johnson
KFC's chicken pot pie is made from more than 100 ingredients, including Tricalcium Phosphate, sodium chloride, and something called "dough conditioner." The average American meal travels 1500 miles from farm to table. Starbucks' mocha cookie crumble frappuccino contains 105 grams of sugar. Experts now worry that bananas are going extinct. The Federal Food and Drug Administration regulates the labeling of "organic" food -- but similar terms such as "natural" and "healthy" are vague and can be misleading. What's going on with American food? Where does it come from and what is it made of? And, if we don't know, how can we find out? This short class will introduce students to the fiction of nutrition. We will learn how to decode an ingredient list for common household foods and read labels carefully. We will take a rhetorical approach to food, food labels, and packaging. We will also look closely at our own food stories by investigating the blurry lines between science and story in modern American food.

B4164: Epigenetics: it's not just about DNA
Teachers: Fiorella Grandi
It's all about DNA, right? But, if that's true, why are identical twins not exactly the same? Why are calico cats different colors? Learn about epigenetics, the science of what's modifying your genes! We will explore how cells use epigenetic mechanisms to make decisions about becoming a neuron or a liver cell even though both types have the same DNA. We'll also look at the role epigenetics can play in diseases such as cancer.
Basic understanding of biology (should know what DNA is).
Section 1: Full! (max 25)

B4167: Hepatitis B
Teachers: Benjamin Yeh
Learn about Hepatitis B! Come hear about what the virus is, what the signs and symptoms are, and what YOU can do to protect your community from Hepatitis B.

B4280: Introduction to Animal Behavior
Teachers: Jesus Madrid
Why do beavers build dams? Do emperor penguins fall in love? Why do male deer fight? Do elephants mourn the death of pack members? Why do bees form hives? If you are fascinated by animal behavior, join us! We will learn the basic principles that will help us think like an ethologist.
B4285: The Great Plagues: The History and Biology of Epidemics
Teachers: Maggie Martins, Amy Tarangelo
This course will review the history and science of major epidemics from the Middle Ages to the modern era. We will also explore the medical technologies that allowed for the eradication of some of history's deadliest illnesses and discuss the importance of vaccines for preventing future outbreaks of disease.
Background in introductory biology is recommended.

B4437: Bioethics Basics
Teachers: Johnathan Bowes
The study of ethics seeks to answer what is right and wrong, whether in a given scenario or in life in general. When that study focuses on biology, medicine, and the intersections of the two, it's called bioethics. This course will talk about some of the basic principles that guide bioethics as it's practised today as well as let you take on the role of ethicists tackling some of the most famous cases in the field.

B4425: From Fibroblasts to Retinal Neurons: Can Stem Cells Treat Blindness?
Teachers: Lauren Killingsworth
Learn how stem cells can be developed into photoreceptors, the main light-sensing cells of the eye, to potentially restore vision. This class will include an introduction to stem cell biology, then dive into applications to blindness-related diseases such as macular degeneration. This class is designed for students with very little or no knowledge of stem cell biology.
All levels welcome, class will be at a very introductory level. No biology or stem cell background necessary!
Section 1: Sat 12:00pm--12:45pm
Section 1: 80 (max 200)

B4100: Hijacked! Why it is so hard for our immune system to fight cancer
Teachers: Suparna Dutt, Amani Makkouk
Where is our immune system when we need it? Why is it so hard for our immune system to fight cancer? Come and learn in this interactive session about how our immune system distinguishes our own cells from foreign cells, how cancer hijacks and manipulates the immune system to its advantage, and the new discoveries and therapies that are allowing the immune system to regain control and eradicate cancer.
Section 1: Sat 1:00pm--1:45pm

B4227: Schizophrenia: There's More Than One Side to the Story
Teachers: Gabriella Godines, Jimmy He, Vivian Lam, Virginia Wang
What really is schizophrenia? To many people who don't know very much about the specifics of mental illness, schizophrenia is the disease whose symptoms are most predominantly stereotyped as characteristics of "crazy" or "insane" people. People often confuse schizophrenia with Multiple Personality Disorder. However, schizophrenia actually involves a disconnect between thought, emotion, and behavior. 50% of people diagnosed with schizophrenia have not received treatment. Schizophrenia can affect anyone, no matter their age, talent, and social status. In this class, we will learn about the subtypes and symptoms of schizophrenia and focus on understanding what it means to live with schizophrenia.
No prerequisites - just eagerness to learn :)
TRIGGER WARNING: We greatly appreciate your willingness to share your personal experiences with us and/or the class if you wish. However, we fully understand that not everyone is comfortable doing so, and we would further like to note that some of the content we will be discussing in class may be triggering for some individuals. Please let us know if this is the case for you, and we would be happy to accommodate your needs.
B4287: The Science of Willpower
Teachers: David Carreon
From ancient sages to modern science, we'll talk about this most central of human virtues. Why don't we do what we want? Why do we procrastinate? Check FB? Eat more than we wanted? We'll talk about the neuroscience of it, the biochemistry of it, and the practice of it. We've been at this a long time, so you'll hear from the Greeks, Eastern sages, modern scientists, and willpower Olympians.
-Why your willpower is powered by sugar
-The top exercises proven to increase your willpower
-The crazy scientist who showed that willpower was like a muscle
-People whose willpower puts us all to shame
Whatever it is you want to accomplish, willpower will help you do it. And this class will help you build willpower.
Section 1: 123 (max 167)

B4099: Funky Fungi of the Amazon
Teachers: Giovanni Forcina
Explore the microscopic world by examining some of the most varied and awesome creatures: fungi. In particular, we will look at fungi that live within plants that can also produce medically relevant natural products.
Some knowledge of biology and/or chemistry would be helpful, but not necessary.

B4122: Snails, Seastars, and Slime: Adaptation and Behavioral Ecology in the Ocean
Teachers: Diana Li, Crystal Ng, Diana Rypkema, Jacob Winnikoff
Learn about life in the ocean through interactive, hands-on activities! The first section of the class will cover adaptation and natural selection, allowing students to participate in a creative activity based on the organisms in our touch tank. The second section of the class will involve an experiment on animal behavior, allowing students to once again utilize the touch tank animals.
Section 1: 5 (max 23)

B4143: Molecular Imaging
Teachers: Aaron Mayer, Arutselvan Natarajan
Molecular imaging emerged in the early twenty-first century as a discipline at the intersection of molecular biology and in vivo imaging. It enables the visualization of cellular function and the follow-up of molecular processes in living organisms without perturbing them. Positron emission tomography (PET) is a nuclear medicine imaging method and an important molecular imaging technique that produces a three-dimensional image of functional processes in the body. The theory behind PET is simple enough: briefly, a tracking molecule is tagged with a positron-emitting isotope, and the body is then scanned with PET-CT. PET imaging has many advantages. The most important is its sensitivity: a typical PET scanner can detect concentrations between 10^-11 mol/L and 10^-12 mol/L. Dr. Arutselvan Natarajan, a Stanford staff scientist, will give an overview of PET, a key imaging modality for cancer staging and therapy.

B4147: The (Personalized) Genomic Revolution
Teachers: Jesse Marshall
DNA and RNA are the molecules that code for all life. Much of our appearance, feelings and behavior is hard wired into our genetics, making understanding of the genome an important goal for biologists and doctors around the world. The Human Genome Project first sequenced the human genome fifteen years ago, at a cost of $3,000,000,000. Since then, the cost has dropped to ~$1000! This has created an enormous possibility for doctors and scientists to understand the genetic nature of disease and tailor treatments to those with specific genetic mutations -- a revolution that is sure to last for decades to come.

B4175: Clinical Theory: What You Learn as a Medical Student
Teachers: Jonathan Lee
Interested in medicine?
Or just curious why your doctor asks you certain questions or performs certain maneuvers? In this session, you will learn how doctors approach the patient interview, just as one would in medical school!

B4308: Lies Our Brains Tell Us: Neuroscience and Sensory Perception
Teachers: Vania Cao, Srishti Gulati
The world we experience is constructed by our brains using the information that our senses provide and perceive. How does this influence our behavior? What happens when some senses overpower others, or when some senses are missing? We will explore interesting situations showcasing how much we rely on sensory perception in our everyday lives, and how our brains can sometimes play tricks on us.

B4322: Addiction to Schizophrenia: An Overview of Abnormal Psychology
Teachers: David Altman
We will cover the basics of a few key psychological disorders, including their etiology (biological and cognitive basis), symptoms, and treatment. The subject matter of this class may be sensitive for some people, and it is recommended that students be comfortable with learning about serious and sometimes upsetting psychological conditions.

B4326: Adult Neurogenesis and Aging
Teachers: Chandresh Gajera
This class will explore how aging, specifically brain aging, is affected by an imbalance in homeostasis. Immortal youth may be science fiction, but current science is attempting a systematic approach to extending youthful years. Modern biology is tackling major questions at the molecular, cellular, systemic, and organismal levels. The idea is to add life to years rather than years to life. This class will give a quick overview of aging, particularly brain aging, and will then concentrate on the latest research.

B4396: Introduction to the biology of cancer
Teachers: Delaney Sullivan
In this class, we will explore the molecular biology of cancer. What types of genes are implicated in carcinogenesis? What exactly causes a cell to become cancerous?
A basic understanding of biology

B4415: Fungal Interactions: Friendly or Deadly?
Teachers: Matthew Nelsen
Explore a variety of fungal interactions with other organisms, ranging from beneficial to deadly.

B4168: Neuroscience and Epigenetics
This class will focus on epigenetics, the science of how environment influences the genetic code, in the context of neurons and neuroscience. We'll explore how neurons are shaped by chemical changes to DNA, how epigenetics and memory formation may be linked, and what role epigenetics plays in neurological and psychological diseases.
Basic understanding of biology.

B4098: Evolution through Pokemon
This course will provide a brief introduction to the concept of evolution and how it acts as a force to shape all life. All explanations and examples will be taught using Pokemon.
Some knowledge of Pokemon may be necessary.

B4251: Biotic Video Games
Teachers: Honesty Kim
Learn about biology as you play! In this class, you will learn basic microscopy and about the organism Euglena gracilis by exploring how they respond to light through an interactive video game. Real, living Euglena are part of the gameplay mechanics!
Section 1: Full! (max 7)
Section 2: 6 (max 7)

B4387: Designing for Mindfulness
Teachers: Sarah McDevitt
Have you heard of mindfulness or mindfulness-based stress reduction? What is it and why is it useful? Learn the basics of what your brain and body do when responding to stress and practice what you can do to be in control. Then, design a physical object that you'd want to use during a mindfulness exercise!
B4171: Modern Techniques in Biology Teachers: Michael Dubreuil This is an informal class that will go over a few techniques that are used in biology laboratories from Stanford to Beijing, from small academic labs to biotechnology giants like Genentech. Examples include PCR, sequencing, and antibody-based procedures Basic Biology class, basic knowledge about DNA B4217: Ethics of Scientific and Medical Research Teachers: Paul Nuyujukian Learn about the core ethical ideas that govern all scientific and medical research. Discover the criteria that must be met for medical research and clinical trials. Explore the level of adherence of various forms of scientific research to these core principles and the means of oversight setup to ensure research is conducted in an ethical manner. We will also explore specific topics, examples, and cases; where the ethics of research are non-trivial to evaluate and often accompanied with social controversy. We will apply the core principles learned to actively debated areas of scientific and medical research. B4270: Neuroscience and Religion What is meditation? Why do people believe in God? What's it like to have an ecstatic experience? This course will talk about the latest science of religion, pulling in anthropological, psychological, and neurological perspectives. We'll talk about why belief in something like God has been so darned persistent throughout human evolution. We'll discuss what it's like for a "believer," and also take a look at religious brains in action. We'll discuss both Western and Eastern experiences including prayer, ecstatic experience, meditation and ritual. We'll also talk about practical take-home lessons we can learn from these extreme brain states. -Why babies are religious. -What it's like to die. -What the "God spot" is in the brain and whether or not magnetic stimulation can cause religious experience. -The difference between mindfulness, meditation and prayer and why it's important. [Warning: Close-minded people should not take this class. It will involve objectively evaluating the claims of various religious people] B4329: How Eye See: The Biology of Vision and Perception Teachers: Brian Do, Clara Fannjiang All day long, our retinas are bombarded with endless streams of photons. How does the eye and the brain translate these signals into meaningful, recognizable objects and scenes, allowing us to recognize a four-legged blob as a dog despite innumerable variations in shape, viewpoint, and lighting? We will paint a broad picture of the mechanisms that allow humans to see, and more importantly, understand what we see. First, we will explore how the eyes and the brain learn to talk to each other during the first year of a child's life. Second, we will discuss how the brain integrates information from individual neurons to represent objects, and we'll see how functional imaging can reveal how the brain encodes what someone's seeing. Throughout, we will emphasize how scientists designed the critical experiments to make these discoveries, and we'll try our hand at brainstorming experiments ourselves! B4426: Microorganisms – Friend or foe Teachers: Rajiv Gaur Basic biological class about microorganisms and their relationship with human. Basic knowledge of biology. Section 1: Sun 10:00am--10:45am B4077: Crash Course to Food Science Teachers: Brian Chau Let us openly discuss food science in all its forms and functions from food chemistry to food safety, food technology to sensory science and from school to jobs/more schooling. 
We will discuss some current trends in food science and technology, especially, with the growing news of Big Food and food entrepreneurs from the tech world. We will use C-mapping tools to develop a concept map in cataloging ideas and for you to take home. Most importantly, we will have a better understanding of what is food science and what it can be. Join me on this adventure because it might be a crazy ride. Bring a helmet, if you think this food science talk is risky! Active participation. Curiosity. Interest in food. B4196: Microfluidics - Play with very small water and oil droplets. Teachers: Lukas Gerber, Honesty Kim You will build your own microfluidic device from scratch to mix colored fluids, make colorful bubbles, and learn about fluids, mixing behavior, and why mayonnaise is white. B4202: Mushroom Mania! Teachers: Laura Bogar, Nora Dunkirk, Kenneth Qin What do cheese, zombie ants and the biggest, oldest living thing have in common? Fungi! Mushrooms are the part of the a fungus that we see, but they're just the tip of the iceberg. Most of the action takes place out of sight. Come learn about the hidden world of fungi in this interactive course. You will learn about the ecology, evolution, and human uses of fungi, and will get hands-on practice identifying mushrooms on your own! B4286: Mind and Body: How Your Mind Makes It Real (Extended!) Can a sugar-pill cause morphine release? Can hypnosis cure blindness? Can looks kill (literally)? Can getting shot not hurt? We'll talk about old history and new science developing around "mind-body" medicine, how your mind and brain affect your body in really interesting ways. We'll meet people with paralysis who can regain their movement, and blind people can regain their sight by the power of words. We'll see how the brain can produce pain completely independently of any "physical" cause. We'll discuss theories of how the brain might be involved in diseases like fibromyalgia and irritable bowel syndrome. If that's not enough, you'll learn about ritual executions that rely on the victim's expectation, and soldiers in WWII who get shot but don't seem to mind. In short, we'll explore the strange and perplexing frontier where Mind meets Body. [For those who took this already, I am adding a number of slides on the science of placebo, too!] B4293: What is Biophysics? Teachers: Rikki Garner, Carlos Hernandez, Andrew Kennard, Andrew Price, Andrew Savinov Biologists study living systems that function through a vast variety of complex mechanisms. Physicists search for fundamental, mathematical laws of nature that drive physical phenomena. Learn how Stanford biophysicists are using physical tools to understand the complexity of life. Topics ranging from the atomic-level description of biological molecules to the surprisingly clever behavior of cells will be discussed. Selected topics in biophysics will be presented by Stanford graduate students. Some background in biology, chemistry, and/or physics is helpful, but not required. B4294: How to build an organ. An introduction on tissue stem cells. Teachers: Astrid Gillich, Elisabete Nascimento Have you ever wondered how our organs are built? Do you know how many different types of tissue stem cells are needed to maintain and repair an organ? Amazingly, it takes only about a week to regenerate our intestines but months to replace our lungs! In this class we will cover the different types of tissue stem cells and you will learn where they reside by observing their location under the microscope. 
It is recommended that students have taken high-school biology. B4336: DNA and Chromatin In this class, we will explore the structure of DNA and how DNA is packaged into chromatin. We will also discuss the basics of epigenetic control of gene expression. B4284: What your body looks like on the inside Teachers: Alanna Coughran, Rebecca Gao, Jaclyn Konopka, Laura Lu, Nitya Rajeshuni, Ani Saraswathula, Caroline Yu, steven zhang Students will learn about human anatomy using cadavers and 3-D visual tools. Various anatomical regions will be covered including the abdomen, upper limb, lower limb, back, and heart/lung. Caution: Real human cadavers are used in the teaching of this course. Section 1: Sun 10:00am--12:45pm B4129: Extreme Life of the Sea Teachers: Jake Gold Ever see a shrimp break solid glass? Fish that can't close their mouths because their teeth are so large? The marine biosphere contains some of the most diverse and interesting organisms on the planet. Come travel through the water column as we explore the strangest and most fascinating creatures that have evolved under the sea. Biology background (some). Section 1: Sun 12:00pm--12:45pm Section 2: Sun 1:00pm--1:45pm B4257: Chocolate Food of the Gods Teachers: Howard Peters A fun look at the history, biology, biochemistry, health benefits and trivia of chocolate. Some samples and a raffle for free chocolate items. B4276: Practical Neuroscience What is the brain? How does it work? How can I make it stronger? You'll learn about awesome experiments that show that the brain can be rewired, remolded and strengthened. You'll meet someone operating with half a brain (literally), people who built physical strength just by thinking about it, and people who rewired their brains and cured mental illnesses with the power of thought. We'll cover what you need to know about the brain. If you've got a brain, you should take this course! :) B4316: From Bench to Bedside: Translational Research for Medical Therapeutics Teachers: Bruce Tiu How do current drugs and treatments develop? What are the steps leading from a scientific finding to a valid and useful therapy? What are some current issues and challenges in the development of new medical treatments? How do scientific research and business finance interplay to impact and influence the whole process? In this class, we'll talk about how the drugs that make up a critical component of our healthcare come into existence. By analyzing the paths of both common and specialty therapies, for the common malady to novel treatments for cancer, such as cellular therapy, we will discuss the many different elements and factors in discovering a potential solution and bringing it to a patient. We'll talk about difficulties stemming from the institutional design of the medical system such as clinical trials as well as different nonmedical issues that impact our potential to deliver the best treatments possible. Afterwards, we will debate topics of interest and issues such as policy, patents, and ethics. Some general biology knowledge would be helpful, but is not required. B4328: A shot in the dark, the history of vaccines Teachers: Michal Tal From the first Smallpox vaccine to current efforts to design vaccines against HIV and Ebola, we will discuss how vaccines work and the current controversy surrounding them. B4102: Flavor Flav Got Nothin' on Flavor Chemistry. Difficulty: *** We will go through chemical reactions that create flavors like good ol' Maillard-Browning. 
We will talk about aromatics, interaction with our senses, namely aroma and taste. We will go into details about the analytical equipment and theories behind taste and flavors. We will definitely disprove the idea that tastebuds are segmented into parts of the tongue. Flavor compounds and their groupings followed with tastings of our own and little experiments with some food in our mouths, we definitely can enjoy flavors. Basic understanding of chemistry. B4104: Would you want to know? Exploring genetic testing and Huntington's disease Teachers: Kristen Powers If you could find out whether you will develop a disease with no known cure, would you want to know? This course will provide an introduction to the science behind Huntington's Disease, which is a genetically inherited disease that affects both the mind and the body. After an overview of cool topics like DNA and genetics, we will talk about genetic testing – what that is, and how it applies to parents or children who may have Huntington's Disease. You will then use this knowledge to debate the ethical concerns that arise when screening for inherited diseases. If your parents have Huntington's Disease, would you get tested? If you were going to have children but did not know if you had the disease, would you get tested? Would you have kids if you tested positive? Basic biology knowledge is helpful, but we'll provide a quick intro at the beginning of the course for context! B4271: DNA Origami: Exploring the Past and the Future of Genetic Research Teachers: Songhee Han, Emma Pair Make your own DNA origami while learning about the history and science behind the most fundamental building block of life! The questions we'll explore are: Who discovered the DNA? How was the DNA discovered? Why is the DNA shaped like a helix? ...and last, but not least.... Is it possible for DNA to have more than two strands???? B4333: Food, food everywhere, but is it safe to eat? Teachers: Rebecca Gilsdorf, Jessica Grembi, Angela Harris, Laura Kwong Come to this session where you can learn more about some of the primary causes of produce contamination that occur as your food goes from "farm to table." We'll also discuss some of the many stakeholders responsible for keeping our food safe and methods for preventing/reducing food contamination. All of these ideas will be further explored through a full-group role-playing game. B4072: The Power of Memory: Beyond the Brain Bee Teachers: Lucy Li What did you have for dinner exactly one year ago from today? What was the weather like? What day of the week was it? Chances are, unless you have hyperthymesia, you can't remember the answers to any of these questions. After a brief overview of how memory works, we'll explore the feats human memory can accomplish and even take a stroll through your own memory palace. You'll meet people like Louise Owen, who remembers details from every day of her life, Stephen Wiltshire, who can draw entire cities after a single helicopter ride, and Joshua Foer, who trained his memory for one year and accidentally won the USA Memory Championship. B4290: The Neuroscience of Happiness: The Art and Science of a Great Life What makes humans happy? Don't we all do whatever we think makes us happy? Yes, and we're often wrong. So then how do we get this most important of questions right? We will explore the big ideas on how to be awesome at life, from ancient Greece to the latest neuroscience. Hear about the best things thought and said about how to flourish as a person and live the Happy life. 
This class draws from many of my other lectures. You will have a better foundation if you've taken them, but can still attend if you haven't.
B4325: New Neuron in Old Brain Brains were long thought to be fixed, with no new neurons born after birth. Research in the last few decades has redefined this fixed view: the brain is rather plastic, and new neurons are born and integrated into the old brain in two regions. But this may be just the tip of the iceberg. Much fundamental discovery is yet to be made that will lay the foundation for cures and treatments for devastating brain diseases in young and aging populations. This class will include an introduction to adult neurogenesis and the historical background leading to the latest findings.
B4366: Biology (or Lack Thereof) in Hollywood Teachers: Andres Baresch, William Gearty What would life really be like if you were shrunk down to the size of an ant or even smaller? Did dinosaurs really look and behave as they are portrayed in Jurassic Park? Would King Kong really have been able to stand up, let alone climb the Empire State Building? These questions and more will be answered. Some interest in biology. Some interest in movies.
B4375: Symbiosis: a love story between corals and dinoflagellates Teachers: Lorraine Ling Around the world, coral reefs are experiencing severe environmental stress and turning white, a.k.a. "bleaching." What's going on during bleaching? What role do dinoflagellates (microscopic algae) play? And how can we test it? Students will explore the science of symbiosis and current research through hands-on demonstrations with sea anemones. None, just be curious and willing to get salty. Please wear closed-toe shoes (i.e. no flip-flops)
B4399: Quantifying Biology Teachers: Henry Li, Jillynne Quinn This class will teach students how to estimate the quantities of life. Example questions include: How many leaves are on a tree? How many cells are there in a human body? How much does all the DNA in a human being weigh? (A short back-of-the-envelope sketch of this kind of estimate follows this entry.)
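As a taste of the kind of back-of-the-envelope estimate practiced in Quantifying Biology, here is a minimal Python sketch. The input numbers (body mass, average cell mass, DNA per cell) are rough illustrative assumptions rather than figures from the course, and the answers are only meant to land within an order of magnitude.

```python
# Fermi-style estimates: rough orders of magnitude, not precise answers.

body_mass_kg = 70                # assume a ~70 kg adult
avg_cell_mass_kg = 1e-12         # assume a "typical" cell weighs about a nanogram
dna_mass_per_cell_g = 6.5e-12    # assume ~6.5 picograms of DNA per diploid human cell

cell_count = body_mass_kg / avg_cell_mass_kg          # roughly 7e13 cells
total_dna_mass_g = cell_count * dna_mass_per_cell_g   # total DNA, in grams

# Note: red blood cells carry no nuclear DNA, so the DNA figure is an overestimate.
print(f"Estimated cells in a human body: {cell_count:.0e}")
print(f"Estimated total DNA mass: {total_dna_mass_g:.0f} g")
```

Published estimates put the cell count near 3.7 x 10^13, so a one-line estimate like this lands within a factor of a few, which is exactly the point of a Fermi problem.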
B4071: Advanced Topics in Neuroscience: Beyond the Brain Bee Teachers: Thanh-Liem Huynh-Tran So you've taken an intro neuro/psych class and know the basics of how your brain works. What's left to learn? A lot, actually. Heard that information flows from dendrites to the cell body and down the axon to the postsynaptic cell, and not the other way around? Fetuses would die before birth and marijuana wouldn't be so popular if that were actually true. Each half of the brain controls the opposite side of the body? Nope. Neurons all have axons and dendrites? Lies. Join us to learn about what was too hard, too mysterious, or too recently discovered to be included in your previous neuro classes! In this discussion-based session, students will learn about advanced topics in neuroscience as they are asked to solve some of the most thought-provoking questions of the field. How can memories last a lifetime when the molecules that constitute them are being recycled on a daily basis? Can you design a viable method of mind control (yes, the technology already exists)? How could you deliberately create a false memory, or manipulate existing ones? And finally, the big question: What is consciousness? For advanced high school students with prior exposure to neuroscience, either through coursework, self-studying, or enrichment programs (e.g., other Splash! courses). They should understand the basics of neurons, action potentials, neurotransmitters, sensorimotor systems, and neural development. They should also be comfortable with cell biology, genetics, protein synthesis, evolutionary biology, and viral life cycles.
B4137: What is a mass extinction? Teachers: William Gearty Ever wonder what really happened to all of the dinosaurs? What caused them to go extinct? What happened afterwards? Mass extinctions have occurred numerous times in Earth's history. We will learn about the "Big Five" extinctions, exploring their causes, effects, and repercussions. To wrap up, we will apply what we have learned about past extinctions to the modern day and determine if we are now in the sixth mass extinction. Are WE the cause? What will be the effects and repercussions? Will we survive to find out?
B4301: Molecular Biophysics: How Life Works at the Smallest Scale Full! Teachers: Andrew Savinov At the smallest scale, life is made possible by very special molecules, including DNA, RNA, and proteins. Yet though they are special, these molecules follow the same physical rules as the rest of the universe. Molecular biophysics is the study of how these molecules of life physically work. In this class we will explore selected topics in molecular biophysics, looking at different examples of how biological molecules function and what experiments we can do to uncover these molecules' mysteries.
B4423: Who likes lichens? What are those splashes of color on the rocks and tree branches? They're lichens! Learn more about this symbiotic association between fungi and algae, and why you should like them!
B4078: Culinary Science & Product Development You walk down the aisle of a supermarket and you stumble upon these weirdly flavored chips. Ever wonder how someone's crazy idea makes it from concept to product? Culinary arts + food science = AWESOME PRODUCT DEVELOPMENT. You take science and add it to cooking and bam, that delicious or sometimes bizarre flavored chip comes to a store near you.
B4192: A Brief Introduction to Population Genetics Teachers: Alison Feder, Jonathan Kang, Rohan Mehta The human genome is composed of three billion base pairs, of which 99% are identical across the entire human population. Only a very small fraction of the genome harbors any variation. It is this small, variable fraction that plays an important role in natural selection and can inform us about events such as demography. With genomic sequencing technology becoming increasingly cheap and accessible to the public, we are now entering an exciting era of personalized genomics and medicine. In this course, we will learn about the signatures of genetic variation that can help us understand our susceptibility to diseases and our human demographic history.
B4213: Explore the Heart: Dissection & Lecture Teachers: Martha Dadala, Beulah Dadala We all know what the heart is, what it does, and why it's important. But why does it perform its tasks in the way that it does? Why does it have multiple chambers? Why not just one? Or none? Why does the blood flow up, down, around and through it again? Why not straight through? Learn the answers to all of these questions and learn about the new technologies that are available! After taking a look at technology in cardiology, we'll dissect and take a peek at cow hearts, follow their blood flows, and learn the similarities and differences between cow and human hearts. [Note: We will be handling fresh organs of animals, please keep this in mind before coming to class.]
B4228: Minding Your Health: Rising Above the Stigma of Mental Illness Did you know that mental health issues affect one in every five American families? Mental health is often a difficult subject to speak openly about. This may be for several reasons, including the unwillingness and fear of individuals to see themselves or others close to them as "diseased", the lack of a culturally sensitive, mainstream vocabulary for the discussion of mental health issues, and the stigma of seeking aid or treatment for psychiatric disorders. Unfortunately, by not speaking openly and competently about these issues, we as a society risk leaving many individuals untreated, endangering their lives and damaging their communities and families, and holding back on potential advancements in care. The aim of this class is to promote more open and informed conversations about mental health issues and their impacts with your friends, families, schools, and the larger community. We hope to shed some light on different types of mental health disorders, their current care and treatment methods, and perhaps most importantly, how we as students can serve as allies to those who seek to make mental health a priority in their lives and to those who are struggling with mental health issues. B4273: Addiction and Neuroscience Why do we do what we don't want to? Or not do what we want to? This class will explore the strange, universal human experience of being out of control of our actions. We'll look at big ways this happens with alcohol or drugs, but we'll also look at "behavioral addictions" like Facebook, nail-biting, pornography, gambling and cutting. We'll look at the basic science, as well as some of the ways people treat addictions, big and small. Section 1: Sat 11:00am--12:45pm B4339: It Looks Human: Exploring Bad Biology in Movies and Television Teachers: Mike Brown A lot of the science fiction you watch has, well, less science than fiction. In this course, we'll: -discuss how bad science in movies and TV can have a negative effect on our culture -see some specific examples of bad biology in popular media -talk about the real science behind these misrepresentations -show how it would be possible to fix these problems without affecting artistic integrity -learn how to identify good and bad science on your own B4178: Welcome to your brain Teachers: Eddy Albarran, Jesus Madrid Ever wonder how your brain helps make you who you are? How does your brain help you see and move? Can we come up with a cure for brain diseases? This class is a hands-on introduction to the brain and its various functions. You'll get to see and touch real human brains and ask your burning questions to Ph.D. students who are becoming brain experts! B4394: Sustainable Food - What it is, and what it should be Teachers: Hannah Naughton Ever wonder what those little frog, bird and leaf symbols on your coffee bag or chocolate bar mean? If so, come join this overview, critique and discussion about environmentally and socially acceptable food production. E4073: What the heck is Engineering? Teachers: Zachary del Rosario "You're good at math and science - you should be an engineer!" That was about the extent of my career counseling when I was in high school. If you're in a similar position, then this class is for you! This will be a short, broad lecture on what engineers actually do, drawing on case studies from mechanical, electrical, and product design viewpoints. 
A slight-to-severe confusion about what engineering is, and a desire to disabuse oneself of that notion
E4182: PROBABILITY DISTRIBUTIONS APPLIED IN SPORTS Teachers: FRANCISCO ZARAGOZA We aim to give an interesting and exciting class in which students see how important tools like probability and statistics can be applied to sports, both to support important decisions and to make forecasts. (A short illustrative forecasting sketch appears at the end of this block of courses.) KNOWLEDGE OF ALGEBRA, LINEAR EQUATIONS, BASIC PROBABILITY, DIFFERENTIAL CALCULUS
E4278: California Water Resources Teachers: Jessica Watkins Where does our drinking water come from and how does it get here? Why does the drought matter and who is it impacting? Why is there enough water to keep lawns green, pools full, and golf courses open, yet some farms are left unwatered? The short answer – it's complicated! Come explore the politics, history, and engineering behind California's water infrastructure.
E4405: Motorcycles and Mechanisms Full! Teachers: Joe Johnson We'll be taking apart my 1964 Honda Dream motorcycle and exploring how it works. Students will get hands-on experience taking things apart and putting them back together. Here's a picture of a motorcycle similar to mine: http://www.rcycle.com/Ken_Fisher_Honda_305_Dream_068_cropped_op_800x512.jpg Be Hands-On
E4180: Sailboat Physics: Lecture and Workshop Teachers: Janine Birnbaum, Elizabeth Hillstrom How do sailboats use geometry and physics to turn wind into usable energy? In the first part of this three-hour class, students will be introduced to some of the basic physics of fluids and statics as they apply to sailing. We'll learn about buoyancy, Bernoulli's principle, center of mass, and moment of inertia. We'll also talk about the role of an engineer in balancing design constraints within a project. In the second part of the class, students will use what they've learned to build balsa-wood model sailboats and test them in a simulated wind environment. Students will work in teams to make engineering decisions in the process of designing, constructing, and testing their boats. An understanding of basic calculus (derivatives and integrals) is recommended, but not required.
E4378: Paper Airplane Showdown: What makes airplanes work? Teachers: Matthew Berk, Alex Wolff Learn the basics behind airplane stability and efficiency, then put that knowledge to the test by trying to build a paper airplane which flies furthest, fastest, or "best" by your own criteria. Covers the basics of why airplanes fly, how a paper glider can fly without control, and why airplanes for different missions look the way they do. After some illustrations of these concepts, students will be invited to build their own airplanes and experiment with designs to try to design the "ultimate" paper airplane. Some sort of kinematics-based physics, understanding of force and pressure.
E4145: Using Marshmallows to Build Understanding of Materials Science Teachers: Enze Chen An introduction to the interdisciplinary field of materials science. You will use marshmallows and toothpicks to construct crystal models and explore their physical and chemical properties. All are welcome! Some chemistry background is helpful, but not essential.
E4195: Cool Polymer Science Teachers: Dara Bobb-Semple, Benjamin Elling, Andrea Fisher, Wakuna Galega, Dan Hunt, Will Murch Come and learn about some of the wonderful applications of polymers, from elastic materials to electronic devices to strange fluids. Concepts will be illustrated through hands-on activities. None. General knowledge in science might be useful
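For the probability-in-sports course above (E4182), here is one minimal illustration of how a probability distribution can drive a forecast. It assumes a very simple model in which each team's goals follow an independent Poisson distribution; the scoring rates passed in at the bottom are made-up placeholders, not real team statistics.

```python
# Toy soccer forecast: model each team's goals as an independent Poisson variable.
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of scoring exactly k goals when the expected number is lam."""
    return lam**k * exp(-lam) / factorial(k)

def home_win_probability(lam_home, lam_away, max_goals=10):
    """P(home scores strictly more goals than away), truncated at max_goals."""
    total = 0.0
    for h in range(max_goals + 1):
        for a in range(h):  # away must score fewer goals than home
            total += poisson_pmf(h, lam_home) * poisson_pmf(a, lam_away)
    return total

# Hypothetical expected goals per match for two imaginary teams.
print(f"Home win probability: {home_win_probability(1.8, 1.1):.2f}")
```

The same two steps, pick a distribution and then sum over the outcomes you care about, extend to draws, point spreads, and other sports.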
E4108: Introduction to Spaceflight Teachers: Jan Kolmas Overview of rocket propulsion, introduction to the counter-intuitive world of orbital mechanics, and discussion of other concepts such as staging, aerobraking and reentry. High school physics (Newton's laws, conservation of momentum)
E4291: Making a computer out of really simple parts Full! Teachers: Omar Rizwan Maybe you've heard about how computers are just giant calculators at heart. Actually, you can build one out of tiny, simple parts. We'll download a simulator that lets you make pieces of a computer by dragging and dropping and wiring stuff together. Then you'll build little machines that can add numbers together, and finally see how we can work our way up to a computer. You'll come out understanding why binary is such a big deal, how computers work at the most fundamental level, and that this stuff isn't as complicated as you think! Basic computer skills. (No programming experience necessary!)
E4181: From Transistors to iPhone: The Amazing Journey of Clueless Teenage Electrons Teachers: Benjamin Ting From the discovery of the electron in 1897, to the invention of the transistor in 1947, followed by the birth of Silicon Valley in the 1970s, and the arrival of the iPhone in 2007. 110 years in the making, the teenage electrons have finally arrived in your friendly neighborhood. Come and find out how these teenagers are shaping everything you do in your life, and what lies ahead when these electrons grow up to become adults! Ability to stay awake during the class
E4205: Electricity for All Teachers: Kristen Pownell Ever wondered what really makes your computer turn on? What makes your car radio turn off when you're driving through the mountains? How do iPhone touchscreens work? The answer to all these questions and more can be found in electrical engineering! During this class, you'll get a brief overview of electricity and gain an understanding of what it means to be an electrical engineer. You'll build a lamp to test the concepts we discuss in class.
E4331: Water for the World Teachers: Tallie Faircloth, Lauren Steinbaum Water is essential to life. But over 700 million people do not have access to clean water that is safe to drink. Most are living in low-income countries. Come learn about technologies for treating water, and engage in a hands-on activity to design, build and test your own water treatment technology.
E4440: Materials Gone Wrong! Teachers: Urusa Alaan, Matthew Gray, Allen Pei, Melissa Wette It's a bad day if your phone screen cracks, but it's something else entirely if your airplane falls out of the sky. Learn about the science and engineering behind materials failures in history through demonstrations and hands-on experiments. We'll explore the enormous demands we place on materials in applications like space exploration, as well as how they work. We'll also show how many materials can change dramatically with changes in temperature and other conditions. You'll walk away from this class with greater knowledge of the atomic structure of materials and a deeper appreciation for the diverse properties of the materials that surround our everyday life.
E4204: Engineering Stories How would you describe a transistor to your five-year-old cousin? What about the inner workings of a CD player? Analogies have enormous power for bridging the gap between engineers and non-engineers, yet we rarely practice creating them.
In this class, you'll learn how to explain scientific concepts to anyone on the street. You'll build your own invention out of Legos, then use an analogy to describe how it works and what it's used for. Sign up if you want to gain invaluable communication skills, and play with Legos at the same time! E4176: 3D Printing: Hands on and More Teachers: Dave Lewis Participants will get the chance to use a 3D printer to turn a virtual (CAD) item into a physical object. The session will cover: - Work flow - Processing the Design - Working with the Printer Each participant will leave with a custom 3D printed item. E4210: Introduction to Earthquake Engineering Teachers: Cristian Acevedo Earthquakes are one of Earth's most devastating phenomena. Come learn about earthquake mechanisms and design of structures in earthquake prone areas (like California) and experience shaking first hand! The class will cover the basic physics behind structural earthquake engineering design; the focus will be on explaining concepts through demonstrations. E4438: Space Communications Teachers: Sawson Taheri This class will cover the basics of radio communication, with an emphasis on space based radio communication. Learn about: -Radio theory -Antennas -Time domain vs Frequency Domain -Digital communication -How to track and communicate with satellites Students will get a chance to make their own amateur radio satellite contact! -Completion of beginning Algebra -Motivation to learn E4248: Build Your Own Speaker Teachers: Anjali Datta Each student will build a simple styrofoam cup speaker. We will learn about sound, how speakers work, and basic circuits. Please bring a portable music player such as an MP3 player or smartphone if you have one. A few extras will be available to use if you do not have one. H4154: Crocheting for Beginners Teachers: Kaitlyn Gee, Grace Young Ever wanted to embrace your inner grandma, but simply couldn't pull off the cane and sweater set look? Well, this class is for you! Sign up to experience the joys and wonders of crocheting, from learning to cast on to making a simple project or two. No experience necessary! The class will be taught assuming no skills. H4161: Math-y Beading Teachers: Vivian Wang Beads are pretty, but polyhedra are prettier. We'll learn to make buckyballs (a.k.a. truncated icosahedra for math folks or C60 for chem folks) out of beads and string. By the end of the class, you'll have your own shiny geometric trinket to keep! Depending on time and interest, we might learn to make other geometric things...A fractal dodecahedron? Polyhedral carbon nanotori? The possibilities are (almost) endless. For an idea of what we'll be making, see here: https://db.tt/NPha2NOi. We'll be working with seed beads (which are pretty small), so a little finger dexterity and a lot of patience will go a long way! H4191: Introduction to Sabermetrics Teachers: Rohan Mehta An introduction to the statistical analysis of baseball. Learn how we evaluate players, project outcomes, and calculate statistics like BABIP, FIP, and WAR. Some basic (very basic) probability and statistics useful. Familiarity with standard baseball terminology necessary. H4201: Introduction to Bridge (by a World Champion!) Teachers: Adam Kaplan, Ted Sanders Question: What do Bill Gates and Warren Buffet have in common? Answer: They're both billionaires and they both love the card game bridge. If you too aspire to become a bridge-playing billionaire, then the first step is to learn how to play bridge! 
(Sadly, the second step is not covered in this class.) Bridge is a fun and brainy card game somewhat like hearts and spades. It's played 2 vs 2, so good communication and teamwork are key to victory. (Another benefit of bridge being played 2 vs 2 is that if you ever lose - hypothetically, of course - you've got someone other than yourself to blame!) This class is for anyone and everyone who wants to learn bridge. No experience necessary! ***SPECIAL ANNOUNCEMENT*** A WORLD CHAMPION and team of bridge-loving Stanford students will be teaching this year! Come rub shoulders with greatness and hope that greatness rubs off on you! H4402: Ice Cream! Full! Teachers: Celine Liong, Melissa Wette We will be exploring the different techniques used to make ice cream throughout history. Making and tasting of ice cream required. Food allergy warning: Please do not consume ice cream if you have any dairy or vanilla allergies. Must like ice cream H4190: Astrology and the Signs of the Zodiac Teachers: Derry Akin So what exactly is Astrology, after all? What's a sun sign, and a moon sign? What's my zodiac sign and how is it different from the others? Come to our class and learn the answers to these questions, as well as much more about astrology and the signs of the zodiac! H4312: Science of Star Wars Teachers: Tori Bahe, Aunika Swenson Like science? Like Star Wars? Come explore the two worlds together over fun times and good cookies. H4324: Henna Art Teachers: Marcella Anthony Learn a brief history of Mehandi, popularly known as Henna - a traditional body art. Explore traditional and modern uses and designs, the science of henna, then practice designing your own henna tattoo and apply to your hand! No prerequisites. Dress in clothes you don't mind staining. Do not sign up for this workshop if you are attending a cooking class. H4345: Martial Arts 101 Teachers: Christine Jarjour Learn some karate basics! In under 2 hours we'll go over basic throws and self defense techniques, teach you how to throw a killer roundhouse kick and even how to break a board like a pro. No experience needed! Dress comfortably. H4413: Understanding Diplomacy Through Wargaming Teachers: Daniel Whalen Much of historical European politics would have made more sense if you were there at the time. This class will give you a chance to recreate those politics. Take command of countries in a simple war game and learn about the balance of power by seeing it play out in action. H4173: Yoga for All Bodies Teachers: Paige Nethercutt We will cover a brief background on the history and purpose of the practice of yoga. Then I will lead students in stretching, breathing exercises, and an introductory sequence of yoga poses. If you think you're not flexible, think again! Come ready to fully engage, have fun, and embrace our bodies! H4249: Intro to SCUBA Diving: SCUBA In a Bucket Teachers: David Fairburn Have you ever wanted to breathe underwater, swim with the fish, or be an exciting other-world explorer? Come to SCUBA in a bucket where you can learn the basics of SCUBA diving, what gear divers use, and even try to breathe underwater in a small bucket -A Sense of adventure H4382: How to be the Ultimate Tennis Fan: From Stanford Tennis to Serving it with Serena Williams Teachers: Oscar Wong Do you still wonder how scoring in tennis works? Do you think tennis is the best sport in the world? What is life like as a Stanford student and as the (self-proclaimed) biggest tennis fan of Serena Williams? 
If you're up to learning more about tennis and how it has taken me all over the world, join me! Interest in tennis a big plus!
H4424: Puzzle Hunts 101 Teachers: Rafael Cosman, Benjamin Cosman, Kate Rudolph Enter a weird world where a puzzle can be a list of pictures, a gibberish sound file, or just six words. What are the rules? Figure them out! None. This class is not meant for anyone who has participated in a puzzle hunt already (e.g. DASH, Shinteki Decathlon, MIT Mystery Hunt).
H4174: Quidditch for Muggles Teachers: Janos Barbero, Hailey Clonts, Sam Fischgrund, Maryna Kapurova, Roman Khromenko, Ginny Tice We'll go over the basics of quidditch as it is played by high schools, colleges, and community teams throughout the world, and play some scrimmages. We'll provide the brooms! Wear shoes you can run in (e.g. sneakers). Bring a water bottle so you can keep hydrated. Sunscreen is recommended.
H4430: Boba Tea: Analyzing the Tapioca Trend Through Taste-Testing Bubble tea, pearl milk tea, boba: it's quite the rising fad! Come learn a bit about the history and background of this drink over a boba tea party! All are welcome!
H4436: Cup Stacking Teachers: Nick Troccoli What is cup stacking? Watch this video: http://www.youtube.com/watch?v=xNPAF4sSAH0&list=UU_I1OD_vuDDIU0dStjTRV2A&feature=share&index=2 That's the current world-record holder for the "cycle" routine in cup stacking, a sport where you race to finish "upstacking" and "downstacking" certain pyramids of cups as fast as you can. (And no, that video isn't fast-forwarded.) The best part is, cup stacking is super easy to learn (but hard to master!). In this class, you'll learn how to cup stack - the basic rules, routines, and tips and tricks on how to improve. Everyone will be given their own set of cups for the duration of the class so that you can practice individually. We'll also learn a few fun cup stacking relays to see how you can cup stack with other people.
H4383: 6 Continents and Counting: How to Check Traveling off the Bucket List Have you ever traveled outside your home state, country, or continent? What is life like as a full-time Stanford student studying abroad? Whether you're a seasoned traveler or someone flicking through the Travel Channel, come learn (or share) some handy travel tips and tricks. From Africa to Australia to South America (and many fascinating places in between), we'll cover most of the globe - just back in time for your next session! No traveling experience needed!
H4141: Board Game Design Teachers: Sarah Edwards, Colin Thom Do you enjoy playing board games? Have you ever considered making your own? In the first half of class, we will discuss how to design, playtest and publish your own board games. In the second half, we will break into teams to play and then redesign different board games.
H4230: Taste of Thai Teachers: Rongrong Cheacharoen, Umnouy Ponsukcharoen Did you know that Thai food is ranked by CNN Go as the world's best food? How about an electronic tongue that judges the authenticity of Thai food, as featured in the New York Times? Let's explore the taste of Thai beyond Pad Thai or anything with peanut sauce. We will learn what makes Thai food taste delicious, mysterious or even crazy! We will also learn tips and tricks for buying Thai groceries and ordering Thai food. Finally, let's actually cook a simple Thai snack/dessert. Allergen Note: the ingredients may contain the following: fish, shellfish, gluten, and peanuts. Interest in Thai culture, especially Thai cuisine and cooking.
Food provided in class contains peanuts, dried seafood and Thai chili.
H4117: Radio 4 Sports Teachers: Kenneth Huo ra·di·o :ˈrādēˌō - verb : communicate or send a message by radio. Do You Want to Be a Sports Radio Personality? A sports radio personality will provide commentary during games or talk sports on a radio show. Some of these professionals are former athletes who have several years of playing experience. On the other hand, those who are not former players or coaches will need to have a strong knowledge of sports and broadcasting experience. Those who have their own shows will interview athletes and coaches and give their opinions about player performance and personnel moves. Some live shows take place early in the morning or late at night. Looming deadlines can make this occupation stressful at times. While these announcers will have a vast knowledge about sports in general, they may also focus their attention on a specific sport, such as baseball, basketball or football.
H4353: Chess Puzzles: Proof Games Teachers: Theodore Hwa Given a chess position, can you find a game that leads to it? Can you find the shortest possible game? If you enjoy logic puzzles, and know the rules of chess, you should find this class fun! No particular skill level in chess is needed because we consider all possible games, not just "well-played" games. Many beautiful ideas and tricks will arise when we find a short (or shortest) game leading to a position. Knowledge of the rules of chess, but no particular skill level is required.
H4232: Introduction to Monopoly Strategy Teachers: Bradley Emi The classic board game Monopoly has a rich history, dating back to 1903, when Elizabeth Magie self-published a property trading game called The Landlord's Game. It was further refined by Charles Darrow and published by Parker Brothers to become one of the world's most popular games, with over 200 million copies sold; the phrases "Go directly to jail" and "Do not pass Go" have become embedded in American culture. But Monopoly is also a deeply strategic game, requiring complex valuations of property and skilled negotiation. In this class, for the first hour, we will examine the basic strategies of Monopoly: most importantly, how to evaluate property and trades effectively, and how to protect value for the long term. While we won't have time to complete full games of Monopoly in class, during the second hour you will have an opportunity to test out your new skills by playing out an unfinished game of Monopoly, and at the end of class we will compare the various strategies your classmates use, and their overall effectiveness. (A small dice-probability sketch illustrating this kind of property valuation follows this entry.)
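To make the property-valuation idea in the Monopoly entry concrete, here is a minimal sketch of the kind of arithmetic involved. It uses only the two-dice roll distribution and a hypothetical rent value, and it deliberately ignores jail, Chance/Community Chest cards, and doubles, which a fuller analysis (like the one in class) would need to handle.

```python
# Expected rent from one opponent roll, for a property a given number of
# squares ahead of the opponent's current position.
from fractions import Fraction

# Probability of each two-dice sum from 2 to 12: (6 - |s - 7|) ways out of 36.
roll_prob = {s: Fraction(6 - abs(s - 7), 36) for s in range(2, 13)}

def expected_rent(squares_ahead, rent):
    """Expected rent collected on the opponent's next roll (single roll only)."""
    p_land = roll_prob.get(squares_ahead, Fraction(0))
    return float(p_land) * rent

# Hypothetical example: a property 7 squares ahead (the most likely roll)
# charging $200 rent is worth about $33 per opponent roll in expectation.
print(expected_rent(7, 200))
```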
H4263: This. Is. Jeopardy! Teachers: Cameron Kim Are you a trivia genius? Do you have random useless facts stuck in your head that you need to get out? Has it been a dream of yours to make it a "true daily double"? Why not try your luck on America's Favorite Game Show, Jeopardy! Created by Merv Griffin after a suggestion from his wife, Jeopardy! has become an American cultural icon, spawning celebrities like Ken Jennings and Julia Collins and parodies on Saturday Night Live. Come learn about the rich history of this game show, how to play it, how to get on the real thing, and play a real game of Jeopardy! with a former Jeopardy! contestant. Just a passion for trivia! We'll be using previous Teen Tournament questions for these games.
H4370: Intro to Photography Teachers: Noah Zallen New to photography, and want to get started? Like taking photos, but wish you were better at it? Experienced photographer, but want to improve your skills? Whatever your skill level, this class is for you! Each student will learn to harness their own creative photographic potential through a hands-on class where students will practice everything they're taught. The beginning of class will be inside, after which students will partner up and head outside for the rest of the session. It's strongly recommended that each student brings a camera (any kind will do, but something other than a phone camera is preferable). If you don't have a camera, don't worry! The instructor will bring his own camera for you to share. Topics covered will include camera basics (such as depth of field, shutter speed, aperture, ISO, and exposure compensation), as well as photography essentials (such as lighting, perspective, framing, portraits, nature, action shots, and much more!). By the end of class you'll be ready to show off your new skills to all your friends!
L4441: Do Anything Teachers: Bryan Quintanilla Come discover what you can do with the most incredible and most ordinary items! Participants will explore the significance and power of the ordinary objects all around them.
L4105: Memory Techniques Teachers: Gail Wilson Learn memory and study techniques that will give you the important edge you will need to help find success in all aspects of your life. Find out about simple ways to memorize any amount of information without repetition. Remember names; remember important facts. Learn how to memorize hundreds of definitions in time to ace the next exam. Create a powerful brain. Create a powerful life! A sincere desire to improve your memory!
L4242: Let's Design a Satisfying Sustainable Life Teachers: Tom Kabat Let's brainstorm to design a satisfying and sustainable life. We'll have group discussion of values, choices and results. We'll explore the intersection of satisfaction, sustainability, consumption and community. Let's explore the balance of many possible roles in our lives and how they can sustainably add satisfaction.
L4299: Study Skills for Life: Learning How You Learn Which style(s) of learning suit you best? How exactly do people learn, anyway? What other study techniques are out there that I don't already know of? In our class, we'll go over the basics of the learning process (theories, mechanisms, related brain areas, etc.), the different styles of learning and learning techniques, and discuss and share with each other which study techniques work best for each of us!
L4107: Leadership/Managerial Skills Teachers: Melisa Rillera Discuss different leadership and managerial skills you will need not only in your professional career but in many aspects of your life. Talk about how to bring these aspects out of you. We'll go over the difference between leadership and management, discuss these skills, what it will take to develop and refine them, and how to apply them to your daily life. L4133: How To Be A Better Listener Teachers: Elizabeth Softky Learn how good listening skills can improve your relationships at home or school, and even make you a better problem solver. We'll uncover some myths about listening, and discover what people with good listening skills do in this fun,interactive workshop. L4140: The Practice of Everyday Happiness Teachers: Carter Osborne This class explores one simple question: how do we become more happy? We will use fun activities and interactive practices to learn about the many components of happiness: compassion, gratitude, relaxation, and more. How can we better express gratitude for others? What is "self-compassion," and how can we use it to enrich our lives? Does being happy actually produce benefits in all the aspects of our daily lives: social, personal, academic, etc? (hint: it does) Students will leave the class with a number of skills and practices to more effectively manage stress, stay resilient during challenging times, and (most importantly) enrich everyday happiness. Positive energy and an open mind! L4288: Justice - What's the Right Thing to Do? Bank bailouts. Stealing to feed your hungry kid. Lying to save Jews in your basement. Waterboarding. What's right and what's wrong? And how do we know? This will be a crash course in Ethics, the rigorous discipline of determining what's right. This lost science will be critical for anyone who will have to make decisions in their life. I'm modeling this course after the enormously popular class and book taught by Michael Sandel at Harvard (Google my course title). L4302: Inside the World of Harry Potter Teachers: Travis Lanham, Addison Leong We will explore critical life skills through the eyes of the Wizarding World. We will learn about banking and personal finance at Gringotts, probability and statistics with the Triwizard Cup, ethics, rhetoric, and much more! Interest and familiarity with Harry Potter. L4106: Why and How to Volunteer Locally and Abroad? Teachers: Michelle Leporini, Melisa Rillera Why should you volunteer local or abroad? How and Where? During this class we will answer these questions and also touch on how to use volunteer experience on college applications and job resumes. We'll provide personal examples of volunteering as co-organizers of our own volunteer group and unique experiences such as volunteering in prisons and various countries abroad. L4069: Intro to Personal Finance Teachers: Melissa Ko The average American household is thousands of dollars in debt. Learning early on about personal finance can help you avoid money problems down the road! In this class, we will discuss what you will want to know about finances to make smart decisions with your money. Come find out little steps that you can take now to practice and build better money habits. Please bring any questions you would like to discuss about personal finance. Note that this is NOT purely an investing class, though we will talk a little about how investing may fit into your saving strategy. This class is geared towards complete beginners. 
If you don't know or understand credit scores, budgeting, or interest rates, then this is the class for you. Materials for this class include: Personal Finance Lecture Notes L4220: Peaceful Communication Teachers: Mango Martin You will learn to use mindfulness and empathy to communicate honestly and non-violently. L4235: Life as a Stanford Student Teachers: Amy S. Stanford students will discuss their first year experiences and answer any questions about Stanford and college life. L4082: Interview Skills for Internship, College, and Job Applications Teachers: Oriana Li Halevy Are you anxious about internship, college, or job interviews? Come to this interactive course to receive solid tips from a Class of 1992 Harvard College pre-med turned United Nations intern turned US Department of State diplomatic interpreter turned multinational law firm corporate attorney turned venture investor, cross-border business consultant and strategist, and bilingual communications specialist and published translator/editor who has been on both sides of these interviews since high school. This course is for anyone wishing to develop and fine-tune interviewing skills that can be applied in a variety of settings. Topics will include: Preparation Presentation Common interview questions Common pitfalls Closing the interview Thank you notes. L4348: Decision Adventure Plus Teachers: Chris Spetzler Students participate in a group project where they are students trekking in Nepal and face a difficult decision. They learn a decision framework to handle the situation and future decisions they face in life. Compared with the shorter version of this course (45 minutes), students will have an additional fun activity that expands their understanding of decision making. L4090: Public Speaking Teachers: Michael Phillips A fun public speaking workshop focused on improving speaker delivery. Everyone will be giving and listening to speeches. Students should expect active participation and should posses a desire to improve as a public speaker. L4342: Social Styles and Communication Strategy Almost without exception, today's business professionals attribute their success largely to their ability to write well, to speak dynamically, and to cultivate business relationships. In this class you will learn basic theories on communicating strategically. You will learn how to plan a persuasive message and analyze your audience, while also learning about your own behavioral style and how you interact with others to solve problems. L4365: Brownies to the Future! Teachers: Yan Yan Many students entering high school harbor anxieties about fitting in, relationships, academics, extracurriculars, and college. This class is for any student struggling with the stresses of life, who just want to relax, have fun, feel good, and be open to some positive guidance. This class is a unique fusion of cooking, nutrition, improvisation, psychology, mindfulness and a little bit of simple optimism. The goal is for you to leave this class feelin' good! Must have an open-mind and be comfortable sharing and participate in discussion. Should be interested in improving one's mindset and eating brownies. Preference: 8th and 9th grade L4395: PARTY WITH TREES Teachers: Jessica Chow Wanna learn about trees? Potentially increase your Hunger Games survival rate by knowing some edible fruit trees? Impress people with your tree identification skills? Be able to casually point out "oh, that's a Coast Live Oak, that's a redwood"? 
Or just want to stare at trees, walk around the beautiful Stanford campus and forget about life for a while? This course will draw on the current Stanford Introductory Seminar "Party With Trees" for Stanford students and translate the main objectives and ideas of that course into a middle school and high school student friendly course that consists of walking around Stanford's campus, looking at trees, learning some basic identification, potentially snacking on some fruit, and having a good time. Another goal of the course is to foster a greater understanding and appreciation for the environment around us and add to students' knowledge of trees in the Stanford, Palo Alto, and general California area. An interest in learning about trees, interesting trivia, or discovery of a new field!
L4231: A Crash Course in Alternative Education Teachers: Bradley Emi, Christine Jarjour "I have never let my schooling interfere with my education." -Mark Twain This class is meant for students who do not feel challenged or engaged by traditional education. Traditional education is the paradigm that is entirely shaped by going to class, doing required homework, taking tests, and doing extracurricular activities. While this works for many students, some feel trapped and constrained by the norms and values school imposes upon people. In this class, we will discuss several strategies for breaking out of the passive-mindset approach to education, which is common for many students who feel trapped by school. We will focus instead on an active-mindset approach to learning, teaching ways for the high school student to work within the current educational constraints imposed by school to take control of their own learning and academic growth. Topics we will cover include: how to use online resources to supplement learning, how to seek inexpensive or free opportunities to allow for intellectual growth, how to take on meaningful educational projects, and deciding how to pursue further education beyond high school. Interest in becoming an independent self-learner. Interest in pursuing self-education outside of high school.
L4321: The Worst Case Scenario: what would you do? Teachers: Carlos Aguilar, Daimen Sagastume Danger! It lurks around every corner. Earthquakes. Plane Crashes. Snakes. Car Accidents. You're in a plummeting elevator with seconds to act. What do you do? This class is here to help: jam-packed with step-by-step, hands-on activities and simulations, we're here to show you what to do when life takes a sudden turn for the worse. An essential class for a perilous age because you never know...
L4408: The P in Poker Teachers: Nandita Bhaskhar, Ashwin Paranjape Poker is often seen in a bad light because of gambling addiction. However, when viewed as a game, it is a great pastime and a learning experience. To play good poker you need to know probability and psychology. All good poker players know their odds, and that's what we'll learn in this class. Psychology is a life skill which can only be learnt by experience. Of course, we will cover the rules of poker as well. Knowledge of the deck of cards
L4097: Let Your Creativity Flow! Teachers: Jaclyn Chiew And flow it must when you and your teammates are tasked with confronting challenging, time-sensitive trials that test the boundaries of your imaginative capabilities. How would you build a bridge of straws? or create a load-bearing container made of newspaper?
or develop a non-verbal communication code? The mission will be revealed. And then, if you tap abilities you never thought you had and appreciate the fact that a team is greater than its parts, you will discover that creativity just doesn't flow, it gushes. L4243: Bicycle Maintenance Let's adjust gears, brakes, and the way a bike fits so your ride improves. We even patch tires, and fix klunky, squeaky things. Bring your bike to class if you can. L4296: College and you: how to be a spectacular success Teachers: Evan Boyle, Jessica Ribado Getting into a good college is widely hailed as critical for success later in life... but why? Is it really true? And what exactly are you supposed to be doing once you get into college anyway? It turns out that there are numerous career paths and student resources available to you regardless of which university you attend. Your success will depend primarily on your ability to navigate these resources and discover what career path suits your passions. Come fill in the details you WON'T hear in high school! Materials for this class include: Slides L4379: How to BS Full! Teachers: Benjamin Yang Facts. They are useful. But unfortunately, not always available. Luckily for you, this class is all about how to break your crippling dependency on facts. Come learn all about creating information without worrying about inconsequential things such as "reality", or "truth". Become an expert at being an expert. I know what I'm talking about, and so can you! L4080: How to Talk Your Way into a Job. We need to talk. Those are words you generally don't want to hear. Seriously, we need to talk about talking...or should I say, networking? So, let us practice networking and pick up a few tips on business etiquette, the interview process and the elevator pitch. If time permits, we can discuss about LinkedIn etiquette along with e-mails, thank you notes and other business etiquette. If you are considered to be an introvert, do not fret! You, too, can be skilled in networking. L4233: Miss CEO: Becoming An Effective Leader Teachers: Andreina Parisi-Amon, Nita Singh Kaushal The world needs great leaders to tackle its biggest problems… and that starts with YOU! Although women are underrepresented in today's leadership ranks, this class will inspire and teach you how to position yourself as a leader in high school, college, and beyond. Come learn about relevant leadership skills such as effective negotiation and clear communication that will help you excel in a variety of academic, personal, and professional situations. More importantly you will also learn how to put these skills into practice starting today --including securing dream mentors, finding internships, navigating the college application process, and getting on the right trajectory for career achievement early on. The instructors for this class feature women from the Stanford community who have extensive experience leading and making innovative contributions to their fields. They also have a passion for helping students achieve their leadership potential, which you can learn more about at www.missceo.org. L4403: The Chemistry of Baking Teachers: Gabriele Fuchs In this class we will learn what ingredients are used for baking cakes and cookies, and why we use those ingredients. We will learn about different measurement systems frequently found in recipes, and how to convert them. Together we will prepare a cake batter and make cookies. 
Each student will have a chance to participate in the class, and decorate their own cookies. None Allergy information: The recipe I will be using to make cookies will require eggs, flour (gluten) and milk. No nuts of any kind will be used! L4119: TED 4 TEENS TED (Technology, Entertainment, Design): Ideas worth spreading teens: tēnz/noun : the years of a person's age from 13 to 19. TED (Technology, Entertainment, Design) is an invitation-only event where the world's leading thinkers and doers gather to find inspiration. Are there Unbreakable Laws of Communication? Depending on your perspective, 2006 was either a really bad year for public speaking or the start of a world-changing transformation. In that year the famous TED conference began streaming 18-minute presentations from the world's top minds for free. Today TED talks are viewed more than two million times a day, and they have become the gold standard in public speaking and presentation skills. It also means that, like it or not, your next presentation will be compared to a TED talk. L4187: Nutrition Label reading + Introduction to Chi Quong exercise Teachers: May To Come and learn what you are eating!! We will explore label reading on packaged foods - fresh, frozen, canned, as a meal, desserts, cereals and more. There will be samples and hands-on practice. At the end, let's have some fun and strengthen a little. There will be a short but fun session on introduction to Chi Quong for health and everyday exercise. It is simple and easy to do. L4349: Decision Adventure Take part in a Decision Simulation where students trek in Nepal and face a challenging decision situation. Learn a framework for decision making that helps you get more of what you want in life. L4212: Military Education 101 Teachers: Kaitlyn Benitez-Strine, Nicolas Lozano-Landinez Military Personnel make up less than 1% of the entire US population. Perhaps 1 of every 3 of you know an immediate family member in the military. The point is, you're learning about the military mostly via video games and other media, which might not be the best interpretation. Curious about what the military does and how its soldiers are trained? Come with questions and an open mind to see what else the military does besides infantry operations, and how we train up for those operations. L4406: Green thumb farming Teachers: annika alexander-ozinskas Come check out the *new* Stanford Farm! Start your own seeds, get your hands dirty, learn about growing your own food, roll in the mud if you like. Bring home a succulent garden of your very own design. L4444: Lunch Period Difficulty: None Enjoy a break for lunch with your friends! Please register for at least one lunch period on each day of the program.
Section 1: 397 (max 2600) M4074: To Infinity and Beyond! Teachers: Jonathan Kang Ever wondered what is the biggest number? That's easy! There's no biggest number! But the notion of infinity is more than meets the eye. In this course, we will attempt to answer questions such as: What do we really mean when we say there are infinitely many natural numbers? How did we arrive at our present understanding of infinity? Are there different kinds of infinities? The infinite has preoccupied mathematicians and philosophers of centuries past. Come learn more about this fascinating topic! Familiarity with algebra, comfort with basic mathematical proofs. M4225: Conjecture and Proof Teachers: Jeremy Booher The number 41 is a sum of two squares (25+16). Can you write 37 as a sum of two squares? How about 43 or 47? To a mathematician, the next obvious step is to find the pattern and make a conjecture. Only once we know what is true is it possible to prove it. We will illustrate how mathematical research is done by finding an answer to the question of which numbers are a sum of two squares and then proving it using techniques called the geometry of numbers. This question has a pretty answer which leads to many fruitful generalizations, but the goal of this class is to illustrate the process of mathematics. A desire to search for patterns is enough to get a feel for the process of learning and doing mathematics. Optionally, experience with mathematical proofs will help understand this particular problem. M4267: How To Make a Video Game Teachers: Gregory Bentsen Back by popular demand! Ever wanted to design and build your own video game? Do you want to learn how to code? Do you dream of mapping out your own loot-filled, monster-infested dungeon? Then this is the class for you. In this course you will build your own 2D video game. Along the way, you will learn to program in Javascript, how to design levels, and basics of good game design. A love for great video games. Design elements of classic video games will guide our discussion. In order to facilitate critical discussion of game design, students should be familiar with some of the most famous and highly-regarded games. If they haven't already, students should play through some of these (The Legend of Zelda, Half Life, and Super Mario 64 are great examples) M4359: How to break the Internet Teachers: Lavanya Jose Come learn about ideas that keep the Internet working and then have a wild discussion about ways to break the internet. We'll first learn about packet switching, end-to-end communication, DNS, routing. Then we'll talk about recent incidents/ situations where large parts of the Internet were disrupted e.g., in 2008 all the world's YouTube traffic was accidentally routed to Pakistan (and YouTube was down for a full two hours!) M4332: The Art of Summation Teachers: Omkar Deshpande, Vivek Kaul Summing the numbers from 1 to 5 can be done quickly. Summing the numbers from 1 up to 100 would take a lot more time. Or is there a quick way to do it? How about the general problem of summing the first N numbers? Drawing inspiration from Pythagoras and his followers, and a precocious elementary school kid who grew up to become the "Prince of Mathematicians", we will discover a number of different approaches to the problem. We will generalize those approaches to compute the sum of an arithmetic series and geometric series. 
We will also play with other summations: summing the first N squares, or the first N cubes, and try to discover connections between these different series. We will follow bacteria as they grow and divide, and ask why they don't conquer the world. We will trace back a children's nursery rhyme in English all the way back to a mathematical papyrus roll from ancient Egypt. We will compute the number of squares and rectangles on a chessboard, and we will learn the legend of a wise man who used a chessboard and the power of geometric growth to fool a king into promising something that was impossible for any earthly king to fulfill. And in the process, we will learn the art of summation. M4341: Weird Spaces Teachers: Ying Hong Tham We will go through a basic introduction to topological spaces, and take a look at some weird, hence interesting, spaces, e.g. the Klien bottle, the Long Line, Alexander Horned Sphere etc. These counter-intuitive examples of spaces are not only fun to analyse, but also deepens our understanding of topology, making us reevaluate assumptions that we may take to be obvious. Strong visualization and imagination, can 'manipulate' objects in your mind (ability to visualize in 4 dimensions not necessary) Not afraid of the word 'infinity' M4330: The Pigeonhole Principle & Its Applications Teachers: David Hyde The pigeonhole principle, in its namesake form, states that if you have $n$ pigeons trying to fit into $m < n$ holes, then at least two pigeons must be put into the same hole. While this is a simple idea, the pigeonhole principle is actually a very powerful mathematical tool that we can use to find surprisingly simple solutions to seemingly complex problems. We will go over a few examples of the pigeonhole principle together, and then we will spend the rest of the time in groups working on progressively harder problems. This class should be fun as long as you are interested in math, puzzles, and logic. This class is a must for those interested in math contests! While we won't be relying on a lot of standard school math, having good problem solving / critical thinking skills will make this class more enjoyable. As long as you are curious, though, most of this class should be accessible to you. M4350: Introduction to Cloud Computing Teachers: Vaishali Deshpande 1. Evolution of Computer Industry 2. Different technologies involved in Cloud Computing 3. What is Cloud Computing? 4. Different types of Cloud Computing 5. Cloud service models 6. Benefits of Cloud Computing 7. Challenges in the Cloud 8. Future of Cloud Basic computer industry knowledge M4420: Website Development in 45 Minutes Teachers: Chaitanya Asawa Ever wanted to build your own website? Ever wanted to easily advertise for something? Ever wanted to create a business? Ever wanted to show off? You're going to need to build a website to do all that! You'll learn how to create a static webpage very quickly and the basics of web development -- and, most importantly, how it's not too bad. We'll make a fun, little homepage for someone/something to demonstrate that! M4157: Making realtime websites with Nodejs Teachers: Alvin Sng In this course, we learn to make interactive & realtime websites with Node.js, we will build a working chat website so if you have a laptop handy it can be used in demos. I will also show how you can easily upload your web app using Heroku. M4200: Mad Hatter Mathematics Teachers: Zandra Vinegar There is math. Like no math in school. And proofs full of wonder, mystery, and danger! 
Some say to survive them, you need to be as mad as a hatter! M4340: Drinking Donuts and Eating Coffee Mugs A classic math joke goes as follows:"A topologist is someone who can't tell a donut from a coffee mug." Ever wonder what that means? How can a rigid coffee mug be mistaken for a tasty donut? In this class we will explore the natural concept of a 'homotopy': deforming one object into another. Using this concept, one can say that two objects are 'similar' when the first object can be deformed into the second. For example, you might consider all pants to be similar because they all have a similar shape: two openings for feet and a bigger opening for the waist. However, homotopy goes even further: pipes of any length are considered 'similar' because they all have exactly two openings (one at each end), and can be stretched or shrunk to resemble each other. Essentially, homotopy completely forgets about lengths and distances, and only cares about the 'intrinsic' qualities of the object. You'll be using playdough to perform these deforming operations, so get ready to get your hands dirty! I will keep mathematical formalism to a minimum, and only give 'hand-waving' proofs of important results. All you really need is imagination and willingness to stretch your mind! M4158: Hour of code! Have you ever wanted to learn how to code? If so then this is the perfect class for you. Coding can be fun and easy to learn. This class is designed for those who have never coded before. We will use the course material from code.org M4199: How to win ALL the time (and not by cheating) Do you like playing games? Winning games? ALL the games? Come learn how to play some common paper-and-pencil games (Dots and Boxes, Say 16, and Nim included) so well that no one will ever beat you again. And the game strategies I'll cover can be applied to everything from chess to the stock exchange. M4209: Mining your online data Teachers: Michael Wei If you search on Google, shop on Amazon, or post on Facebook, these companies likely know more about you than you expect. Such as how likely you are to pay your bills on time, who you'll vote for in the next presidential election, or how likely you are to buy tickets to the next Selena Gomez concert. We'll take a look at how technology companies are applying data science to personalize their products, solve interesting challenges, and make money. M4306: The Latest Craze in Image Recognition Teachers: Manu Chopra, Barak Oshri In the past few years, there have been dramatic improvements in our ability to learn from data off Facebook, Twitter, and other media. Today, we can do truly amazing things, such as understanding language and detecting objects from images. We'd like to teach you how to do this. Come learn what makes our algorithms "intelligent", or why Silicon Valley is euphoric about a simple but powerful idea that is leading to state-of-the-art performances. We will teach you this from the very beginning, without assuming any prior understanding of how we can make sense of images, but prepare to be challenged! M4368: How the heck does e^(pi*i) make any sense? Teachers: Grant Sanderson You may have heard that e^(pi*i) = -1. What?!?! I made a video giving a non-calculus intuition for why this is true here: http://youtu.be/F_0yfvm0UoU, and in this class I will go into more detail and discuss the intuition outlined there. Surprisingly, not calculus. 
M4088: Voting Theory Teachers: Rafael Cosman, Benjamin Cosman In the standard Plurality voting system, whoever gets the most votes wins. When there are many candidates this can get silly - a candidate that the vast majority of voters _hate_ could win with just 10% of the vote as long as ten other generally agreeable candidates split the other 90%. If voters supply not just their top choice but a ranking of all the candidates, a whole world of other voting systems become possible. In this class we will come up with those other systems and discuss their pros and cons. None. This class may have little to offer if you are already familiar with the common systems and criteria in this table: en.wikipedia.org/wiki/Voting_system#Compliance_of_selected_systems_.28table.29 M4096: Methods of Mathematical Proofs Teachers: Nicholas Dwork This class will introduce the students to the logic that is used to prove mathematical statements. Examples will include direct proof, proof by contradiction, and proof by induction. The student will be exposed to pure mathematics, and gain an understanding of the difference between pure and applied mathematics. The student should have taken algebra, and be familiar with functions. M4160: Hacking the Internet Teachers: David Eng, Sherman Leung Learn more about the wizardry of the Internet! In this course, we will apply basic programming concepts to improve our lives in many contexts. We will write a script to prank your friends, modify an online game to win faster, and launch a website! Hard prerequisites: Students should be able to open a browser and type words into Google. Soft prerequisites: Directed to students with basic programming experience (i.e. variables, types, loops, conditionals). Mostly accessible to students with no coding experience. M4183: When a Line Isn't a Line Teachers: Kenny Chang, Vinson Luo The shortest path between two points is a line… or is it? In this course, we will question the assumptions underlying our everyday notions of distance – but not just the distance you're used to. We'll measure distances between people, places, and things, whether in the context of aviation, social media, cosmology, or cartography. Finally, we'll take a look at alternate formulations of geometry, as well as the mathematical concept of a metric space. Basics of geometry M4380: Intro to Password Security Passwords. We use them a lot. We also hear about them getting stolen a lot. There's even several xkcd comics about them (538,792,936,1286). Come learn a bit about best practices for safely handling passwords, both from the user and service sides. M4109: Cabinet of Mathematical Curiosities Teachers: Amy Liu Have you ever wondered how to cut a strip of paper in half into a single piece? Do you spend your evenings pondering how to add up an infinite number of things and arrive at a finite number? Do you aspire to build a bicycle that rolls on square wheels? Behold, the Cabinet of Mathematical Curiosities! In this class, we will take a brief stroll through recreational mathematics, exploring fractals and infinity, "proofs" that 1=0 and other paradoxes, clever new ways of adding and multiplying numbers, and more! Willingness to engage in mathematical play! Some knowledge of basic high school math (algebra, geometry, etc.) is helpful, but not necessary. M4334: The Story of e Starting from the evolution of interest rates in the Greek, Roman and Christian worlds, students will learn how Euler's number e emerged in the context of calculating compound interest. 
The relationship between natural logarithms and e will also be looked at. M4369: A Zeroth Introduction to Groups, From Dummy to Dummies Teachers: Peng Hui How, Ho Chung Siu Group theory is nothing but the study of symmetry. Groups are ubiquitous. A double-sided square flips along the diagonals and the vertical lines that bisects itself, rotates four-fold, to come back to its original position. Rotation composed with reflection gives reflection, reflection composed with rotation gives rotation. What about other compositions? What about the operations on a cube instead? On a rubiks cube? On an infinite lattice? What are the common traits they share? What is the theory that unifies these sets of operations? The most fundamental answer is group theory. This class does not intend to indoctrinate you with theorems; it is more like a discussion session. This class will hopefully be conducted outdoor, when we think while enjoying fresh air. This class does not intend to introduce pedantic mathematical equations. Thus, no formalism prerequisite is required. If you have the image of a square/cube in mind, and knows addition and subtraction, you are all good. In fact, the less you think you know, the more you reap in the journey of mathematical exploration. M4411: Fractals! Teachers: Vineet Gupta Fractals are these crazy objects which stretch our understanding of shape and space, moving into the weird world of infinity. We will look at examples of fractals such as the Koch snowflake and the Sierpinski's triangle. We will talk about making fractals, and think about the various dimensions of a fractal. Does it make sense to talk about its dimensions? Can we call it a 2-dimensional or a 3-dimensional object? Look forward to stretching your imagination and playing with mathematics! Materials for this class include: Fractals! PowerPoint for Splash, Fractals! Lesson Plan M4428: Become an Imagineer! Teachers: Dan Yu Learn the basics of modeling and mapping in an animated environment. This course gets you started with using Blender, one of the industry's premier applications for 3D-rendering. If you've ever loved or still love Disney or Pixar movies, then take this class! M4236: Mathemagic Teachers: Jake Hillard, William Kuszmaul We'll be doing fun hands-on mathemagic tricks. Learn how to: fit your entire body through an index card; magically cut a paper loop in half, just to get another paper loop; win money from your friends; and more! M4416: Statistics 101 Teachers: Benjamin Cosman, Kate Rudolph When you do an experiment, you're taught to control as many variables as possible. For example, if you want to know whether playing music helps plants grow (please don't actually try this), then the presence or absence of music should be the only thing you change - you should grow everything in the same amount of sun, the same amount of water, etc. But what you aren't always taught is that it's a losing battle - no matter how perfectly you try to grow two plants the same way, they won't grow to exactly the same height! So when your results come in and the musical plants grew a tenth of a centimeter taller, have you discovered a new phenomenon, or did it just happen by chance? There is a powerful tool that we can use to answer this question - the statistical significance test. There are a bunch of such tests, actually, and computers are quite good at doing them for you, so we won't delve into the details of any single one but instead we'll focus on the intuition behind all of them. 
Taking this class should help you use and interpret any significance test and be a more discerning consumer of statistical information (and do much better in science fairs, if you're planning to do one of those)*. *according to purely anecdotal evidence None. If you are familiar with significance tests then this class probably (p < .05) does not have much to teach you. M4087: Unrelated Math I For too long have we submitted to the tyranny of unifying themes. How many bears can you run away from forever? How can electrons prove inequalities for us? Why is traffic so bad on your favorite roads? Are there theorems that are true but can't be proven? How can physics prove the Pythagorean Theorem? And most importantly, how many of these kinds of things can I answer in under an hour? M4115: Comparison Logic Puzzles Teachers: Dima Kamalov We'll spend the class solving the following logic puzzle: You have a balance with two sides; it can determine which of the two sides is heavier. You also have some coins, and one of them is slightly lighter or heavier than the rest. How many times do you need to use the balance to find the defective coin? You like solving logic puzzles M4170: Using Recursion Using Recursion Teachers: Travis Chen This class will introduce fundamental concepts in recursion and how to apply them to solve problems in math and computer science. It will include both discussion and hands-on problems-solving sections. None except interest in math/CS and algorithms! If you already know how to recursion past the AP CS/AIME math level, this class will be a review M4295: Measuring Distance Teachers: Anna Thomas How can we measure the similarity between two images? Users in a social network? Cell populations? Books? Colors? Translations? Formalizing a distance metric between objects is critical for many applications in science and engineering, ranging from image search to recommendation engines. You will learn about different methods of computing distance, how to automatically learn a distance metric based on user input, and applications to machine learning. M4417: Practical Programming with Python Teachers: Rohit Talreja Python is one of the easiest programming languages to learn! This class will not teach you Python from the ground up, but it will show you some of the awesome things you can do with a rudimentary knowledge of the language. You'll learn how to make a very simple website, send emails and write a simple interactive program. Basic knowledge of the Python programming language (or Java, C, or C++) will be very helpful, but is not required. M4206: Cryptography Role Play Teachers: Josh Alman, Timothy Chu Crypto isn't just for computers! In this class, we'll try to tell each other secrets while our classmate (and nemesis) Eve listens in. M4211: Introduction to Python! Teachers: Sam Redmond Have you ever wanted to control a computer? How about augmenting your brain with near-infinite memory and lightning-fast speed? Python is a general-purpose programming language rapidly growing in popularity that focuses on ease-of-use over execution speed. In this interactive class, you'll learn the basics of this language, why everyone loves it so much, and you'll even get the chance to make your own programs! An emphasis will be placed on learning Pythonic techniques by tackling hard tasks that Python makes very easy. Interest in computers and/or programming. Prior experience with computer science is helpful but not required. M4269: Mathemagics: Teachers: Otto Zhen Are you interested in magic? 
Do you want to learn shortcuts so that you can multiply large number without a calculator? Are you a human being? Do you like having fun? If the answer to any of those questions is yes, you will enjoy this class which focuses on exploring the relationship between mathematics and magic in a fun and interactive way. The first half of the class will focus on various impressive magic tricks which have a foundation in mathematical principles. Such tricks will include work on magic squares, combinations and permutations, and sequences and series. That last sentence was deliberately general to not reveal the actual tricks. At the end of this section, you will understand the basic math behind these magic tricks and also have three cool tricks that you can perform for your friends and family. The second half of this class focuses on mathematical tips and tricks that will make you a human calculator. Imagine multiply five digit numbers in your head. Basically picture yourself as a version of Will from Good Will Hunting! Many different methods of multiplication and division will be explored! Basic understanding of multiplication and division M4412: Unrelated Math II Same idea as Unrelated Math I (M4087) except the topics will be - you guessed it - totally unrelated! So sign up for either or both of these; there will be no overlap between the two. M4239: If Google had a brain Teachers: Liam McCarty Want free food? How about a free book? This class offers both. Google has a TON of information, but information isn't knowledge, knowledge isn't understanding, and understanding isn't wisdom. Computers are dumb, at least for now. All they have is raw data. But what if they had more? What if Google had a brain? Silicon Valley has largely been so successful because it goes against the tide--it does what no other place on Earth is doing. I want you to do the same. Be a contrarian. Don't accept Silicon Valley at face value. Think harder about the social implications of technological innovation and startup culture. In this class, we will explore how big data, artificial intelligence, and the NSA fit into modern America. We will go against the tide and look at Silicon Valley from an unconventional perspective. We won't just debate privacy and big data--we'll look at the future of our country. A brain | Note: this class does not involve swimming, though it does involve a Splash M4256: Beautiful Evidence - Intro to Data & Information Visualization Teachers: Brian Do, Paula Kusumaputri The amount of data in our world is continually increasing. Good visualization of complex data aids understanding and comprehension. This class will be a fun introduction to creating beautiful and informative data visualization. We will learn how humans process and perceive visual information/images. We will also explore design principles, good design practices for visualizations, and various visualization tools used in various fields, and we will put them into practice for our own mini projects. A small studio-based, interactive session that combines programming, data science and graphic design, the class will be a fun and educational experience for those who want to learn more about visualization in the age of Big Data! Basic programming experience M4277: Vectors with Video Games Teachers: Will Monroe Video games are probably the most fun and creative projects in the world of computer programming. 
It is not hard to get into computer programming with very little math background--lots of people pick up programming as a hobby before taking high school math--and some very popular games (Tetris, chess, even Mario or Pokémon) can be programmed with only basic math skills. Many of the most successful games, however, take place in three-dimensional worlds. If you want to make a game like Minecraft, Skyrim, or Call of Duty (to name a few), there's one bit of math that can be incredibly useful to know: vectors. This is a topic that is frequently glossed over in high school math but shows up in a surprising variety of subjects. This class will cover some of the intuitions and applications of vectors that are needed to build video games, while along the way discussing the video game industry and computer science in general. This will be a lecture-style class; due to equipment and time constraints, we won't be able to do individual, "hands-on" programming, but the class will include bits of realistic code and some cool demos. I'll put any code I write online so students can look at it after the class is over. First-year algebra and geometry. Some exposure to computer programming is recommended. M4427: Building iPhone Apps Ever wondered how the apps on an iPhone work? Ever wanted to go behind the scenes and make your own? We'll take a look at how an iPhone app is created, from designing the interface to writing the code. Get a taste of what real developers do on a daily basis, and how you can make your own apps! Core programming knowledge, including functions and variables (knowledge of object-oriented programming, including classes and methods, recommended). M4092: Experience Virtual Reality Teachers: Ruth Bram Have you ever wanted to experience Virtual Reality? We will be looking at how education is represented in VR and talk about virtual reality platforms that are making a difference today in education, military, health, and gaming. M4208: Extreme Math This class is mostly an excuse for us (the teachers) to watch you (the students) flail while you try to solve tricky math problems on the spot. This is how it will work. We will give you a math problem, and you'll have to immediately present a solution on the blackboard. You'll have up to eight minutes to present your proof, but you need to continuously be presenting. Then our panel of judges will award you a score based on how correct, confusing, and amusing your solution was. There may or may not be teams, depending on how many students we get. Some familiarity with contest math. Ideally you can solve some AIME problems. M4338: The story of pi How was the ratio of the circumference to the diameter of a circle understood in Egypt, Mesopotamia, India and China? Students will learn about the approximation of pi derived by Archimedes. Modern developments will include the examination of the infinite series of Leibniz, Gregory and Madhava. M4390: Introduction to Modern Geometry Teachers: Kyler Siegel We will explore the vast jungle in mathematics known as "geometry", explaining some of the key conceptual breakthroughs in the last century, leading up to major unsolved problems that geometers are currently hard at work trying to solve. We will encounter exotic structures, multi-dimensional spaces, and a whole zoo of beautiful yet unfamiliar objects. We will also take a peek at some of the high-tech abstract machinery that modern mathematicians have at their disposal.
M4393: Introduction to Machine Learning with Chocolate Teachers: Mahalia Miller, Daniel Wiesenthal The field of Machine Learning is getting a lot of attention, and indeed, it's pretty cool. It's a field in which computers can actually teach themselves to do things that not even their programmers are capable of. This is your opportunity to learn about what Machine Learning is, where it came from, and form an introductory understanding of how it works. We'll go through several examples, run some demos using chocolate bars, and, for those interested in programming, we'll discuss implementation in code. M4414: Enhancing technological access for people with disabilities Teachers: Elliott Lapin, Kartik Sawhney This class explores the social, ethical, and technical challenges surrounding the design, development, and use of technologies that improve the lives of people with disabilities and older adults. This will not only introduce state-of-the-art research and innovation in this area, but will also provide the students with hands-on experience with accessible coding practices. None (some experience with HTML/CSS/JavaScript might help) M4422: Computability and Complexity 101 What questions can computers solve quickly? In fact, what questions can computers solve at all? We will cover models of computation (what's a computer anyway?), examples of undecidable problems, and what's up with the famous open "P vs NP" problem. M4250: Mobile Apps, Marketing for Good Causes Teachers: Kenneth Fax In this class, we plan to sit, listen and discuss what students care about when it comes to using mobile apps, consumers and business that like to help others for good causes as well. A desire to learn, give ideas, share and learn. P4283: Dreaming in Color: The Science of Light and Matter Teachers: Vijay Narasimhan, Jonathan Scholl Why is the sky blue? Why do leaves change colors in the fall? How can you tell the color of dinosaurs just from their fossils? How can the metal gold become green, blue, or red? In this interactive class, you'll discover the answers to questions like these with demonstrations and hands-on activities. You'll also find out how the answers to these questions are helping scientists and engineers discover more about outer space, create more efficient solar cells, and treat cancer. Some basic knowledge of physics would be helpful but not necessary. If you know that light can be described as a wave, then you should be fine. P4120: Exploring the scale of the universe Teachers: Kate Follette, Natasha Holmes Can you imagine how big an astronomical unit is? What about the size of the Sun? Making sense of the size of the universe is tough. This lesson will use hands-on activities to help put the grand scale of the universe into some perspective. P4124: Gravitational Lensing Teachers: Alfred Zong You probably know that light usually travels in a straight line, unless a magnifying glass (i.e. a convex lens) bends it. But why do physicists claim that our Sun can also act like a lens? In this class, you will be introduced to Einstein's famous general theory of relativity and you will learn the reason for this magic lensing effect! No advanced math or physics knowledge required. If you know F = ma and you're ready to embrace some really weird (but true) phenomenon, this class is for you! 
P4198: Maxwell's Equations \begin{equation} \varepsilon \varoiint \mathbf E \cdot ds = \iiint \mathbf q_\mathbf v dv \end{equation} \begin{equation} \oint \mathbf B \cdot dl = \mathbf I + \varepsilon \frac{d}{dt} \iint \mathbf E \cdot ds \end{equation} \begin{equation} \oint \mathbf E \cdot dl = - \mu \frac{d}{dt} \iint \mathbf B \cdot ds \end{equation} \begin{equation} \mu \varoiint \mathbf B \cdot ds = 0 \end{equation} These four equations describe one of the most universal and elegant relations in physics. They are Maxwell's equations, unifying all observations of relativity, electricity, and magnetism. Don't let the notation scare you off – this class has no prerequisites (as in, just be able to graph a function), but we will rigorously derive Maxwell's explanation of electromagnetic phenomena (including light, electricity, magnets, …). "Derive" with the catch that, as I don't believe in writing long equations on the board, everything in this class will be presented as a series of intuitive /and/ rigorous deductions, preserving concepts rather than constants. We will begin with only two observations. First, the relativistic nature of light: you can't catch up to a light beam – it will always move away from you at speed c. Second, our observations of the force between two charges. From these two observations, we will DERIVE the explanation of everything else. Aka, the world will unfold before you and it will be beautiful. All this said, and there being no "hidden prerequisites," the world will need to unfold before you /very/ quickly. I basically just claimed that I would introduce all of single-variable calculus and about half of multivariable calculus in the first hour of class – which I believe is an attainable goal – but this class will be rigorous, will be extremely intense, and will require the full two blocks. P4356: Topics in Modern Physics Teachers: Amara McCune Have you ever wondered why general relativity and quantum mechanics are incompatible? What string theory is? Why nothing can travel faster than the speed of light? If so, then this class is for you! We will be exploring the foundations of modern experimental and theoretical physics, focusing on topics including the standard model of particle physics, fundamental forces, special and general relativity, quantum mechanics, and introducing string theory. High school algebra, basic understanding of physics concepts, and curiosity. P4125: How do we "see" atoms? Even though atoms are too small to be observed by naked eyes, scientists can still "see" the microscopic structure using a variety of techniques. In this class, I will introduce many cutting-edge technologies such as scanning tunneling microscopy (STM), angle-resolved photoemission spectroscopy (ARPES), and resonant X-ray scattering (RXS), which are employed to investigate many fascinating properties of matter. Don't be scared by the jargons, and no advanced math or physics knowledge required. However, I'll assume you have heard of terms such as "atom", "electron", and "photon" but you don't have to know what they mean. P4149: Fun with Chemistry! Teachers: Su Hyun Hong, Shyam Iyer, David Kanno, Binhong Lin, Michael Melfi, Donald Ripatti, Thomas Robbins, Steven Ryckbosch, Katherine Walker Chemistry is exciting and it happens all around us every day. In this class we will talk about the states of matter, a little about polymers (like plastics), and really anything else that gives us a chance to do cool demos for you all! 
You may get a chance to make a souvenir to take home, too. You'll just have to come and find out! P4310: Climate Change 101-Identifying the link between temperature & carbon dioxide Teachers: Jagruti Vedamati A climate change 101 course that focuses to provide students with a first-hand experience in analyzing the link between atmospheric temperatures and carbon dioxide (CO2) concentrations by looking at ice core data spanning hundreds of thousands of years. P4344: The Chemistry of Candy Making Come learn about one of the most delicious applications of chemistry: candy making! We'll talk about why and how sugar crystallizes (or doesn't) and how various kinds of candy are made by heating and cooling sugar in different ways. Then, we'll use our newfound knowledge to discuss why making fudge is so tricky and come up with strategies to make the best fudge possible -- using observations and scrupulous taste-testing to enhance the discussion. A year of chemistry is recommended. You should know the basic intermolecular forces and basic thermodynamics. P4114: Liquid Nitrogen Ice Cream Teachers: Jeremy Lai The chemistry behind liquid nitrogen ice cream. We'll be applying the scientific method to cooking. Experimental design: hypothesis, testing, and discussion of results. There will be a quiz. Yes, it's ice cream. Optional: On Food and Cooking by Harold McGee http://www.amazon.com/On-Food-Cooking-Science-Kitchen/dp/0684800012 P4126: An Introduction to Superconductivity As we coo
CommonCrawl
General technique to find partial sum formula for series such as $\sum n^3/2^n$ Background. Working in a secondary school class on random walks that could only head in two directions (e.g., South and West) we stumbled upon the following summation to be evaluated: $$\sum_{n=0}^{\infty} \frac{(n-1)n(n+1)}{2^n}$$ We were able to "solve" this in three ways: using some probabilistic intuition, plugging the formula directly into Mathematica/Wolfram Alpha, and using the identity: $$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots$$ differentiated a few times over to find (verification pasted from Wolfram Alpha): $$\sum_{n=0}^{\infty}(n-1)n(n+1)x^{n}=\frac{6x^{2}}{(1-x)^{4}},$$ after which plugging in $x = 1/2$ gives the series about which we were curious. Another instructor was able to tinker with the above-mentioned series and figure out a closed form for the $m$th partial sums, which could then be verified by induction (after which taking the limit as $m \rightarrow \infty$ resolves the matter). In fact, Wolfram Alpha produces the closed form for this partial sum immediately upon input. For example, inputting the series above yields: $$\sum_{n=0}^{m} \frac{(n-1)n(n+1)}{2^n} = 24 - \frac{m^3+6m^2+17m+24}{2^m}.$$ Question: Given an infinite series that consists of the ratio of a polynomial in $n$ (numerator) to a constant raised to some power that is linear in $n$ (denominator), what is a general technique to produce the closed formula corresponding to the series' $m$th partial sums? Given the context, material at the level of strong secondary mathematics (or early undergraduate mathematics) would be ideal, but mathematics at any level - references, related problems and/or solutions, and extensions - are all welcome, too. I would especially appreciate answers that "fill in" all details, e.g., by including a worked example, so as to make this post more pedagogically effective in a self-contained manner. sequences-and-series reference-request power-series generating-functions random-walk Benjamin Dickman Benjamin DickmanBenjamin Dickman Let's say you want to compute $$ \sum_{n=0}^{m}\frac{a_0+a_1n+\dotsb+a_kn^k}{c^n} =\sum_{n=0}^{m}\frac{P(n)}{c^n}. $$ First note that $$ \sum_{n=0}^{\infty}(a_0+a_1n+\dotsb+a_kn^k)x^n=P(xD)\left(\frac{1}{1-x}\right)=G(x) $$ where $xD$ is the operator $x\frac{d}{dx}$. Then $$ G(x/c)=\sum_{n=0}^{\infty}\left(\frac{a_0+a_1n+\dotsb+a_kn^k}{c^n}\right)x^n;\quad |x|<|c|. $$ In particular $$ \frac{1}{1-x}G(x/c)=\sum_{n=0}^{\infty}\left(\sum_{j=0}^{n}\frac{a_0+a_1j+\dotsb+a_kj^k}{c^j}\right)x^{n}. $$ Hence it suffices to compute the coefficient of $x^{n}$ in $\frac{1}{1-x}G(x/c)$, which is typically done (by hand) with partial fractions. Sri-Amirthan TheivendranSri-Amirthan Theivendran
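As a worked instance of this recipe, take the series from the question, with $P(n)=(n-1)n(n+1)=n^3-n$ and $c=2$. Applying $xD$ repeatedly to $\frac{1}{1-x}$ gives $$G(x)=\sum_{n=0}^{\infty}(n-1)n(n+1)x^n=\frac{6x^2}{(1-x)^4},\qquad \frac{1}{1-x}\,G\!\left(\frac{x}{2}\right)=\frac{24x^2}{(1-x)(2-x)^4},$$ and partial fractions give $$\frac{24x^2}{(1-x)(2-x)^4}=\frac{24}{1-x}-\frac{24}{2-x}-\frac{24}{(2-x)^2}-\frac{96}{(2-x)^4}.$$ Expanding each term on the right as a series in $x/2$ and reading off the coefficient of $x^m$ recovers the closed form quoted in the question, $$\sum_{n=0}^{m}\frac{(n-1)n(n+1)}{2^n}=24-\frac{m^3+6m^2+17m+24}{2^m}\;\longrightarrow\;24\quad\text{as } m\to\infty.$$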
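For readers who would like to see the same recipe carried out mechanically, here is a small SymPy sketch (an illustration only; the helper name `apply_xD` and the overall structure are mine, not part of the original answer). It builds $G(x)$, forms $\frac{1}{1-x}G(x/c)$, and checks its Taylor coefficients against directly computed partial sums:

```python
# Sketch of the generating-function recipe above, using SymPy.
import sympy as sp

x, n, m = sp.symbols('x n m')

P = (n - 1)*n*(n + 1)   # the polynomial from the question
c = sp.Integer(2)       # the base of the geometric denominator

def apply_xD(f, times):
    """Apply the operator xD = x*d/dx to f the given number of times."""
    for _ in range(times):
        f = sp.expand(x*sp.diff(f, x))
    return f

# G(x) = sum_{n>=0} P(n) x^n, assembled coefficient by coefficient from P
coeffs = sp.Poly(P, n).all_coeffs()[::-1]          # [a_0, a_1, ..., a_k]
G = sum(a*apply_xD(1/(1 - x), i) for i, a in enumerate(coeffs))

F = sp.together(G.subs(x, x/c)/(1 - x))            # (1/(1-x)) * G(x/c)

# Check: the coefficient of x^j in F equals the j-th partial sum, for small j
poly_part = sp.series(F, x, 0, 8).removeO()
for j in range(8):
    direct = sum(P.subs(n, i)/c**i for i in range(j + 1))
    assert sp.simplify(poly_part.coeff(x, j) - direct) == 0

# SymPy can also evaluate the m-th partial sum in closed form, and the limit:
print(sp.simplify(sp.summation(P/c**n, (n, 0, m))))
print(sp.summation(P/c**n, (n, 0, sp.oo)))          # -> 24
```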
CommonCrawl
Is anti-matter matter going backwards in time? Some sources describe antimatter as just like normal matter, but "going backwards in time". What does that really mean? Is that a good analogy in general, and can it be made mathematically precise? Physically, how could something move backwards in time? quantum-field-theory time antimatter causality arrow-of-time GerardGerard $\begingroup$ So antiparticles go back in time and still preserve causality! $\endgroup$ – Manas Dogra To the best of my knowledge, most physicists don't believe that antimatter is actually matter moving backwards in time. It's not even entirely clear what would it really mean to move backwards in time, from the popular viewpoint. If I'm remembering correctly, this idea all comes from a story that probably originated with Richard Feynman. At the time, one of the big puzzles of physics was why all instances of a particular elementary particle (all electrons, for example) are apparently identical. Feynman had a very hand-wavy idea that all electrons could in fact be the same electron, just bouncing back and forth between the beginning of time and the end. As far as I know, that idea never developed into anything mathematically grounded, but it did inspire Feynman and others to calculate what the properties of an electron moving backwards in time would be, in a certain precise sense that emerges from quantum field theory. What they came up with was a particle that matched the known properties of the positron. Just to give you a rough idea of what it means for a particle to "move backwards in time" in the technical sense: in quantum field theory, particles carry with them amounts of various conserved quantities as they move. These quantities may include energy, momentum, electric charge, "flavor," and others. As the particles move, these conserved quantities produce "currents," which have a direction based on the motion and sign of the conserved quantity. If you apply the time reversal operator (which is a purely mathematical concept, not something that actually reverses time), you reverse the direction of the current flow, which is equivalent to reversing the sign of the conserved quantity, thus (roughly speaking) turning the particle into its antiparticle. For example, consider electric current: it arises from the movement of electric charge, and the direction of the current is a product of the direction of motion of the charge and the sign of the charge. $$\vec{I} = q\vec{v}$$ Positive charge moving left ($+q\times -v$) is equivalent to negative charge moving right ($-q\times +v$). If you have a current of electrons moving to the right, and you apply the time reversal operator, it converts the rightward velocity to leftward velocity ($-q\times -v$). But you would get the exact same result by instead converting the electrons into positrons and letting them continue to move to the right ($+q\times +v$); either way, you wind up with the net positive charge flow moving to the right. By the way, optional reading if you're interested: there is a very basic (though hard to prove) theorem in quantum field theory, the TCP theorem, that says that if you apply the three operations of time reversal, charge conjugation (switch particles and antiparticles), and parity inversion (mirroring space), the result should be exactly equivalent to what you started with. 
We know from experimental data that, under certain exotic circumstances, the combination of charge conjugation and parity inversion does not leave all physical processes unchanged, which means that the same must be true of time reversal: physics is not time-reversal invariant. Of course, since we can't actually reverse time, we can't test in exactly what manner this is true. David ZDavid Z $\begingroup$ I read somewhere that the "one electron going back and forth in time" idea was Wheeler's, not Feynman's originally. (Wheeler was Fenyman's thesis advisor.) Unfortunately I forgot where I heard this, but it was probably in one of Feyman's lectures/writings. $\endgroup$ – Nathan Reed $\begingroup$ Just a note: If Freymann's view (a single electron bouncing back and forth in time) is true, then we would have exactly the same amount of electrons and positrons... Don't we? $\endgroup$ – Calmarius $\begingroup$ @Tobia antimatter has been used for many things, and nobody has ever seen any evidence of it carrying information backwards in time. If anyone had an idea of how antimatter could be capable of carrying information back in time, in a way that would not have been noticed, someone would certainly do an experiment to test it, but that's never happened as far as I know. $\endgroup$ – David Z $\begingroup$ Yes @Calmarius , that's why Feynman gave up the idea I seem to remember. $\endgroup$ – Andrea $\begingroup$ I'm pretty sure it was Wheeler who originated the idea of there being one electron, just crossing our timeline a collossal number of times. IIRC Feynman talks about this in his Nobel Speech, saying that he "stole" the idea from Wheeler to spawn many of his own. $\endgroup$ – Selene Routley Antimatter is in every precise meaningful sense matter moving backward in time. The notion of "moving backward in time" is nonsensical in a Hamiltonian formulation, because the whole description can only go forward in time. That's the definition of what the Hamiltonian does – it takes you forward in time a little bit. So if you formulate quantum mechanics the Hamiltonian way, this idea is difficult to understand (still it can be done – Stueckelberg discovered this connection before the path integral, when field Hamiltonians were the only tool). But in Feynman's particle path-integral picture, when you parametrize particles by their worldline proper time, and you renounce a global causal picture in favor of particles splitting and joining, the particle trajectories are consistent with relativity, but only if the trajectories include back-in-time trajectories, where coordinate time ticks in the opposite sense to proper time. Looked at in the Hamiltonian formalism, the coordinate time is the only notion of time. So those paths where the proper time ticks in the reverse direction look like a different type of particle, and these are the antiparticles. Sometimes there is an identification, so that a particle is its own antiparticle. Precise consequence: CPT theorem The "C" operator changes all particles to antiparticles, the P operator reflects all spatial directions, and the T operator reflects all motions (and does so by doing complex conjugation). It is important to understand that T is an operator on physical states, it does not abstractly flip time, it concretely flips all momenta and angular momenta (a spinning disk is spinning the other way) so that things are going backwards. The parity operator flips all directions, but not angular momenta. 
The CPT theorem says that any process involving matter happens exactly the same when done in reverse motion, in a mirror, to antimatter. The CPT operator is never the identity, aside from the case of a real scalar field. CPT acting on an electron produces a positron state, for example. CPT acting on a photon produces a photon going in the same direction with opposite polarization (if P is chosen to reflect all spatial coordinate axes, this is a bad convention outside of 3+1 dimensions). This theorem is proved by noting that a CPT operator corresponds to a rotation by 180 degrees in the Euclidean theory, as described on Wikipedia. Precise consequence: crossing Any amplitude involving particles A($k_1,k_2,...,k_n$) is analytic in the incoming and outgoing momenta, aside from pole and cut singularities caused by producing intermediate states. In tree-level perturbation theory, these amplitudes are analytic except when creating physical particles, where you find poles. So the scattering amplitudes make sense for any complex value of the momenta, since going around poles is not a problem. In terms of mandelstam variables for 2-2 scattering, s,t,u (s is the CM energy, t is the momentum transfer and u the other momentum transfer, to the other created particle), the amplitude is an analytic function of s and t. The regions where the particles are on the mass shell are given by mandelstam plot, and there are three different regions, corresponding to A+B goes to C+D , Cbar+B goes to Abar+D, and A+Dbar goes to C+Bbar. These three regimes are described by the exact same function of s,t,u, in three disconnected regions. In starker terms, if you start with pure particle scattering, and analytically continue the amplitudes with particles with incoming momentum k's (with positive energy) to negative k's, you find the amplitude for the antiparticle process. The antiparticle amplitude is uniquely determined by the analytic continuation of the particle amplitude for the energy-momentum reversed. This corresponds to taking the outgoing particle with positive energy and momentum, and flipping the energy and momentum to negative values, so that it goes out the other way with negative energy. If you identify the lines in Feynman diagrams with particle trajectories, this region of the amplitude gives the contribution of paths that go back in time. So crossing is the other precise statement of "Antimatter is matter going back in time". Causal pictures The notion of going back in time is acausal, meaning it is excluded automatically in a Hamiltonian formulation. For this reason, it took a long time for this approach to be appreciated and accepted. Stueckelberg proposed this interpretation of antiparticles in the late 1930s, but Feynman's presentation made it stick. In Feynman diagrams, the future is not determined from the past by stepping forward timestep by timestep, it is determined by tracing particle paths proper-time by proper-time. The diagram formalism therefore is philosophically very different from the Hamiltonian field theory formalism, so much so Feynman was somewhat disappointed that they were equivalent. They are not as easily equivalent when you go to string theory, because string theory is an S-matrix theory formulated entirely in Feynman language, not in Hamiltonian language. The Hamiltonian formulation of strings requires a special slicing of space time, and even then, it is less clear and elegant than the Feynman formulation, which is just as acausal and strange. 
The strings backtrack in time just like particles do, since they reproduce point particles at infinite tension. If you philosophically dislike acausal formalisms, you can say (in field theory) that the Hamiltonian formalism is fundamental, and that you believe in crossing and CPT, and then you don't have to talk about going back in time. Since crossing and CPT are the precise manifestations of the statement that antimatter is matter going back in time, you really aren't saying anything different, except philosophically. But the philosophy motivates crossing and CPT. feetwet Ron MaimonRon Maimon $\begingroup$ "Antimatter is in every precise meaningful sense matter moving backward in time" ... except the thermodynamic-arrow sense, which is the one that people normally mean when they talk about "going backward in time". I think this answer is correct as far as it goes, but totally wrong as an answer to the real question, at least when a layperson is asking it. $\endgroup$ – benrg $\begingroup$ In the traditional sense, if a particle were going backwards in time, it would only appear in our universe for a brief instant before it would be in the past. In contrast, the antimatter that we see stays around - and thus it moves oppositely to matter but doesn't actually go backwards in time. However you could probably create a different notion of what a particle "is" as some kind of persistent object in space AND time in a way that causes it to exist at all times at once, to fix this problem and keep the notion of backward-moving antimatter. $\endgroup$ – doublefelix This refers to Feynman's 1949 theory. See http://www.upscale.utoronto.ca/PVB/Harrison/AntiMatter/AntiMatter.htmllink text From there: "Feynman's Theory of Antimatter In 1949 Richard Feynman devised another theory of antimatter. The spacetime diagram for pair production and annihilation appears to the right. An electron is travelling along from the lower right, interacts with some light energy and starts travelling backwards in time. An electron travelling backwards in time is what we call a positron. In the diagram, the electron travelling backwards in time interacts with some other light energy and starts travelling forwards in time again. Note that throughout, there is only one electron. A friend of mine finds the image of an electron travelling backwards in time, interpreted by us as a positron, to be scary. Feynman in his original paper proposing this theory wrote: "It is as though a bombardier flying low over a road suddenly sees three roads and it is only when two of them come together and disappear again that he realizes that he has simply passed over a long switchback in a single road." (Physical Review 76, (1949), 749.) Note that Feynman's theory is yet another echo of the fact, noted above, that a negatively charged object moving from left to right in a magnetic field has the same curvature as a positive object moving from right to left. Feynman's theory is mathematically equivalent to Dirac's, although the interpretations are quite different. Which formalism a physicist uses when dealing with antimatter is usually a matter of which form has the simplest structure for the particular problem being solved. Note that in Feynman's theory, there is no pair production or annihilation. Instead the electron is just interacting with electromagnetic radiation, i.e. light. 
Thus the whole process is just another aspect of the fact that accelerating electric charges radiate electric and magnetic fields; here the radiation process is sufficiently violent to reverse the direction of the electron's travel in time. Nambu commented on Feynman's theory in 1950: "The time itself loses sense as the indicator of the development of phenomena; there are particles which flow down as well as up the stream of time; the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from past to future, or from future to past." (Progress in Theoretical Physics 5, (1950) 82). About Formally Equivalent Descriptions ...." Then you mix in another very interesting problem, namely the origin of the apparent matter antimatter asymmetry in the observable universe (observed absence of annihilation radiation except in special circumstances) and point out that it may be related to a very very hard problem indeed, namely the origin of time asymmetry. One problem at a time, please. Maybe separate questions, but the answers will likely be more or less over your head since, to the extent that they are even partially understood, they are still being figured out. sigoldberg1sigoldberg1 $\begingroup$ Why would your friend find electrons moving backwards in time scary when it is identical to positrons moving forward in time? $\endgroup$ – Gerard $\begingroup$ There is still an important aspect missing from that page and all the other explanations I have read. Has anybody ever tried to convey information using an antimatter particle, to see if that information is conveyed forwards or backwards in time? $\endgroup$ – Tobia $\begingroup$ @Gerard I imagine it could be scary for someone because, if it is true, it means that the matter that we are made of will need to turn back in time one day - annihilate with antimatter. $\endgroup$ – Pawel Welsberg There is one technical inaccuracy in saying that antimatter moves back in time (whatever it might mean). In quantum field theory we get positive energy solutions (usual particles) and negative energy solutions. Negative energy solutions behave in time as if they were propagating backward in time. But they are not the antiparticles, they are just the "negative-energy particles". Antiparticles are positive energy solutions, and they are obtained by acting with charge conjugation operator on the negative-energy solutions. So, antiparticles move forward in time, as usual particles. Igor IvanovIgor Ivanov $\begingroup$ Note the subtile difference between backwards in time: from the future to the present, and back in time: from the present to the past. I feel that the mathematical formalism alone cannot decide on the issue. $\endgroup$ $\begingroup$ This is interpreting "going back in time" differently than anyone else does. Going back in time, as it is usually meant, requires flipping the sign of the energy as measured along the proper time and the sign of the proper time as measured relative to coordinate time. This operation preserves the sign of the energy. $\endgroup$ – Ron Maimon Antimatter is such a misleading term. It's not the opposite of "real" matter. It is made whenever particles are made. But it's just a function of the conservation of properties. Is like saying when one particle going left in pair production will go backwards in time but the one going right is going forward. Antimatter and matter annihilate for similar reasons. 
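To make the distinction drawn in the last answer concrete (a standard textbook illustration, not part of the original post): a free mode carrying negative energy $-E$ (with $E>0$) evolves as

$$\psi\;\propto\;e^{-i(-E)t+i\vec p\cdot\vec x}\;=\;e^{-iE(-t)+i\vec p\cdot\vec x},$$

i.e. it is indistinguishable from a positive-energy mode evaluated at reversed time. The Feynman-Stueckelberg prescription keeps all physical energies positive and instead reinterprets such a solution, after acting with charge conjugation, as an antiparticle of energy $+E$ propagating forward in time, which is precisely the sense in which "negative-energy particles" and antiparticles must not be conflated.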
A negatively charged particle that interacts with a positive can't have a charge. So... Boom. They go away and usually a photon comes out (this is being very simplistic. But that is the root issue). If we called antimatter "opposite charge matter" no one would think it was so special. Yes. According to the CPT theorem, antimatter is matter going backwards in time, but when viewed through a mirror. Correct me if I get this wrong. TLR 7 8 agonistTLR 7 8 agonist In short, no. There is nothing backwards about antimatter. The Latin letter forms b and d are mirror images of each other. It's correct to say that b is a mirror image of d, and to say that d is a mirror image of b. It's wrong to say that one of them is the mirror image, or that one of them is backwards. They are related by a symmetry, but the relationship is, itself, symmetrical. Muons and antimuons are related by CPT symmetry, and that symmetry includes time reversal, so in a certain precise sense they are time reversals of each other. But neither one is the reversed one. "Anti-" in particle physics is like the anti- in anticlockwise, not the anti- in antibacterial. It doesn't mean that the named thing has an intrinsic property of antiness. It's just a way to avoid inventing new names for the mirror reflections of things that already have names. Not only do particles with "anti" in their name not go backward in time, "going backward in time" doesn't even make sense. "Go backward" is a description of motion. Motion occurs over time. Going backward in time would mean you're at earlier times at later times. In time travel fiction, going backward in time means that the psychological time of the protagonist is reversed relative to everyone else's. Particles don't have psychological time. You could bring this into the realm of physics by replacing psychological time with thermodynamic time, i.e., the direction in which entropy increases. Then it can be checked experimentally whether particles with "anti" in their names have a reversed thermodynamic arrow of time relative to other particles, and the empirical result is that they don't. An antimuon usually decays into a positron, neutrino and antineutrino. It doesn't usually spontaneously form from an positron, neutrino, and antineutrino that converge on a point with apparent intent. benrgbenrg It's not really that antiparticles are travelling backwards in time. But mathematically speaking, an antiparticle travelling forwards in time is indistinguishable from the corresponding particle travelling backwards in time. They're just different ways of understanding the same physical situation. $\begingroup$ -1 This was said several times in other answers already. You didn't even say anything about symmetry. $\endgroup$ – Brandon Enright No. If antimatter is going backwards in time, where did it go at the beginning of time (if indeed there is a beginning of time)? Len LokerLen Loker $\begingroup$ Maybe time is running in opposite directions simultaneously? $\endgroup$ – Len Loker $\begingroup$ That would answer the questions about what happened to the antimatter created at the beginning of time. We will never see it. But what we can see is antimatter created after the big bang running backwards in time. $\endgroup$ $\begingroup$ If we treat space and time on an equal footing then it would appear time running in both directions simultaneously would be consistent with space expanding simultaneously in all directions. I doubt Einstein understood all the consequences of his theories. 
$\endgroup$ $\begingroup$ Was the concept of the Dirac sea actually equivalent to the Higgs field? $\endgroup$ $\begingroup$ The Big Bang might be happening all time, in past present and future, adding to opposite time flowing matter and antimatter, expanding the universe, two perspectives of time in the same space time. Then we don't really need a start or an end, and space-time might even fold into a circle on a large scale. $\endgroup$ – Enos Oye I have been investigating if tachyon faster than light speed is just an illusion of perspective. I used a hypothetical approach which avoids breaking both the speed of light boundary and causality. One of the solutions I got is that antimatter is tachyons. So it is great fun to find this question here, backed up by so many good answers. It seems like antimatter indeed is going backwards in time, and according to my research I might state it even more accurately: Anti-matter particles carry a reversed arrow of time. The reason why matter and antimatter can't coexist seems to be because they have oppositely directed arrows of time, and will upon interaction annihilate back into energy. We might say it so simple that that time itself nulls out for both particles and they both dissolve into pure energy. A tachyon is said to have greater than light speed velocity, and then it has according to special relativity faster than light backwards time travel. Every tachyon is then constantly traveling backwards in time. And as our arrow of time propagates forward in time, the tachyons arrow of time propagates backwards in time. Normally we should then not be able to observe tachyons, as they are in an opposite time perspective with a reversed arrow of time. This led me to wonder if there could be time symmetry in the universe, where tachyons are existing in a backwards time perspective, while we exist in a forwards time perspective. Two perspectives of space-time with oppositely directed arrows of time. With such a time symmetry a tachyon with infinite speed will not really have infinite speed, as this is just an illusion of perspective, in reversed time a tachyon will instead have the opposite of infinite speed, which is being at rest. And tachyon theory already state that a tachyon with infinite speed have energy as it was at rest. Tachyon theory also state that a tachyon gains energy as it decelerate towards the speed of light boundary, but seen from a reversed time perspective the tachyon actually gains energy as it accelerates towards the speed of light boundary, which is just like normal particles in our time perspective. So the calculated faster than light speeds of tachyons might be an illusion of perspective. The imaginary tachyon mass which is a result of faster than light speed, is then also an illusion of perspective. And tachyons neither breaks causality, as cause is happening before effect in their reversed time perspective. By adding symmetric time to super symmetry it seems like the physical problems of tachyons get solutions. If a tachyon, in some way, comes into our observable reality, it is then likely to have a velocity corresponding to the velocity it had in reversed time. It will also carry with it a reversed arrow of time, and can't then coexist with the particles of this opposite time perspective. If two opposite time particles meet, time will null out, and they will both transform into pure energy. This is when it struck me, what if the reversed time particles actually are antimatter particles? 
I did not know much about antimatter, so I googled antimatter and backward time, and found this question, where many answers suggest there is a relation. Great fun! And if there is a whole lot of tachyon antimatter existing in a reversed time perspective that could also resolve the antimatter asymmetry problem. How these opposite time perspectives might interact is also fascinating. The instant speed of the quantum link, measured to be close to infinite speed, might for instance also be an illusion of reversed time. But interaction between the two time perspectives, if possible, may also create problems with causality. So this hypothetical approach seems to make some sense, and does not seem to be in conflict with physics. We only have to add symmetric time to super symmetry, so everything can become more symmetrical. There might be some conflict with some theories, like the Big Bang theory, which again has problems to explain the antimatter asymmetry. With symmetric time there might even be a possibility that the Big Bang is sort of happening all the time, when energy transforms into matter and antimatter which end up in their opposite perspectives of time. That might again explain why the universe is expanding with an accelerating speed. We might also wonder if matter that goes into a wormhole, might shift into antimatter. As in a one way wormhole we get faster than light time travel, which shift the arrow of time direction for matter, which may cause matter to shift into pure energy and then into antimatter. What comes out of such a wormhole through a white hole could then mostly be antimatter and/or pure energy. Maybe we can even talk about sort of a spectrum of matter, going between matter, pure energy and antimatter. So there is a lot of possibilities here for being carried away with excitement, as if we can add symmetric time to super symmetry, this might open up a whole new avenue of physics, where we might find answers to many problems in physics and get a more fundamental understanding of our reality. Enos OyeEnos Oye $\begingroup$ $\sqrt{1-\frac{v^2}{c^2}}$ doesn't give a negative result for $v \gt c$, it gives imaginary result. $\endgroup$ It might boil down to the definition of time. For example if time is a measure of change then by that definition for a particle that has only constant properties, including position and speed, time has stopped which cannot be achieved physically due to the Heisenberg uncertainty principle By that same definition a particle evolving a certain way in spacetime would be "traveling in forwards time" And the same particle evolving that certain way but "backwards" in spacetime would be "traveling backwards in time" for example as we grow up we mature from a baby to a kid to a teen and eventually to an adult or nicely said we evolve over the years from a baby to an full grown adult. Then "traveling backwards in time" would be an full grown adult devolving backwards to a baby, kinda like watching a video backwards. This would make traveling backwards in time impossible for humans but not for particles That said if that definition were to be true then we'd be observing time dilation effects near absolute zero which I haven't heard of yet FuseteamFuseteam Not the answer you're looking for? Browse other questions tagged quantum-field-theory time antimatter causality arrow-of-time or ask your own question. How is relativity related to anti-particles? 
Verold > Blog > General > hocn lewis structure bond angles hocn lewis structure bond angles 2 Answers. Janice Powell June 10, 2019. 31. The bond angle in SCl2 is 103 degrees ( Source ). Let's try and do that. Thus, the electron-pair geometry is tetrahedral and the molecular structure is bent with an angle slightly less than 109.5°. Indeed, the bond angle is significantly less than it would be in tetrahedral, since the lone pairs take up more space than bonding pairs. 10th Edition. This is a review for the advanced placement class. It is possible to draw the structure with two electrons in place of the lines to represent the covalent bonds, which would result in there being six shared electrons between the carbon and nitrogen. Part 1: Lewis structures, polarities, VSEPR geometries and bond angles These species will be explored: NH3, SO3, CH2I2, C2H4, 1. Lewis structure and valence bond theory. Anonymous. (See Exercises 25 and $26 .$ ) We expect a bond angle of 109.5 . Then complete the octect of sorrounding atoms of the central atom but not the central atom. what about for other molecules? bond angle of 109.5°. There are 3 sigma bonds and one pi bond. 10th Edition. Write Lewis structures and predict whether each of the following is polar or nonpolar. Favorite Answer. Use Lewis structures and bond energies to determine H for the reaction below. Lewis Structure PBr5 Molecular Geometry, Lewis structure, Shape, Bond Angle, And More. The Image Shows The Lewis Dot Structure Of Isoflurane. A Lewis structure, as shown above, is a topological portrayal of bonding in a molecule. like, idk, cinnamaldehyde, or caffeine. The carbons in this Lewis dot structure have 3 bonds 120 apart and are sp2 hybridized. O + 2 (F) = OF 2 H + C + N = HCN. (in Kj) HCN(g) + 2 H2(g) CH3NH2(g) Answer Save. Board index Chem 14A Molecular Shape and Structure Determining Molecular Shape (VSEPR) Email Link. HCOOH Bond Angles. 2nd Edition. 90 degrees. 2. ##CCl_4## has a tetrahedral geometry with bond angles of 109.5 . Join. What is the C-O-H bond angle be from the lewis structure for methanol? So we're going to need to move some valence electrons from the center to form a double bond with Carbon. Favorite Answer. 4 To rationalize di erences in predicted and measured values. Thus far, we have used two-dimensional Lewis structures to represent molecules. 9.28 Draw the Lewis structure for each of the following molecules or ions, and predict their electron-domain and molecular geometries: (a) AsF 3, (b) CH 3 +, (c) BrF 3, (d) ClO 3 –, (e) XeF 2, (f) BrO 2 –. Favorite Answer - N = C= O. the bond angle between C and O is 180° the bond angle between C and N is 180° 0 0. We will indicate that the bond angle deviates from the predicted value with a '˘' in front of the angle. The molecular dipole points away from the hydrogen atoms. a. HOCN (exists as HO—CN) b. COS c. XeF 2 d. CF 2 Cl 2 e. SeF 6 f. H 2 CO (C is the central atom) Buy Find arrow_forward. Home/Lewis Structure/ PBr5 Molecular Geometry, Lewis structure, Shape, Bond Angle, And More. In each case, the predicted angle is less than the tetrahedral angle, as is observed experimentally. Draw a skeleton structure of the molecule or ion, arranging the atoms around a central atom and connecting each atom to the central atom with a single (one electron pair) bond. 1. the angle of the bond matters when its structural formula is drawn, but now i see that for its Lewis structure, the angles are mostly just 90 degrees. b) nonpolar. 
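To spell out the electron bookkeeping behind the "O + 2 (F) = OF2" and "H + C + N = HCN" shorthand above (a routine count, added here for clarity):

OF2: 6 (O) + 2 × 7 (F) = 20 valence electrons. Two O–F single bonds use 4 of them; the remaining 16 sit as lone pairs (three pairs on each F, two pairs on O), so the molecule is bent with a bond angle somewhat below 109.5° (about 103°).

HCN: 1 (H) + 4 (C) + 5 (N) = 10 valence electrons. These accommodate one H–C single bond, one C≡N triple bond, and a lone pair on N, so the molecule is linear with a 180° bond angle.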
HCOOH Bond Angles (Polar molecules, Non-polar molecules, etc.) Post by Emma Edmond 3E » Tue Oct 27, 2015 5:46 am . The Lewis structure is made from three units, but the atoms must be rearranged: 29. 3 Answers. Consider the molecule carbon monoxide, shown below. So if we're surrounded. 1 decade ago. Being an intelligent and well-practiced human being, you must know what is molecular geometry, but let me revise it for the all young students out diagramweb.netlar geometry is the three-dimensional structure of the atoms which helps in the constitution of a molecule. There are no empty p orbitals because all 3 p orbitals were used to hybridize into the tetrahedral molecular shape. CO a. Still have questions? Then, identify the correct the molecular shape and bond angle.Sulfur Trioxide Molecular Geometry. For every species listed above, use the model kit to construct the species. QUESTION (2018:2) (a) Draw the Lewis structure (electron dot diagram) for the following molecules and name their shapes. What Is The Bond Angle Around Each Carbon Center? If there is hydrogen bonded to the central atom then draw the C-H bond. What are the approximate bond angles in XeCl4? The Lewis structure of BeF 2 (Figure 2) shows only two electron pairs around the central beryllium atom. Steven S. Zumdahl + 2 others. XeCl4 molecule is a) polar. Lewis Structure Here are the steps that I follow when drawing a Lewis structure. Chemistry: An Atoms First Approach. It's like the geometry. 6.1-6. 27. In the model mode, each electron group occupies the same amount of space, so the bond angle is shown as 109.5°. Question: Isoflurane Is Used As An Inhaled Anesthetic. However, molecular structure is actually three-dimensional, and it is important to be able to describe molecular bonds in terms of their distances, angles, and relative arrangements in space ().A bond angle is the angle between any two bonds that include a common atom, usually measured in degrees. Include the total number of valence electrons used in the structure. Every bond angle is approximately 109.5°. With two bonds and no lone pairs of electrons on the central atom, the bonds are as far apart as possible, and the electrostatic repulsion between these regions of high electron density is reduced to a minimum when they are on opposite sides of the central atom. Get your answers by asking now. Chemistry. I can't do the Lewis structures on here but you can draw them yourself. 90° 120° 180° 109.5° … Anonymous. Some of these shorthand ways of drawing molecules give us insight into the bond angles and relative positions of atoms in the molecule, while some notations eliminate the carbon and hydrogen atoms and only indicate the heteroatoms (the atoms that are NOT carbon or hydrogen). thanks. Chemistry. Now to draw Lewis structure, you can first place the atoms in there respective places. It is one of the few which features a nitrogen triple bond. Thus, all bond angles around atoms with lone pairs are preceded by a '˘'. The Lewis structure of H 2 O describes the bonds as two sigma bonds between the central oxygen atom and the two peripheral hydrogen atoms with oxygen having two lone pairs of electrons. Ask Question + 100. Publisher: Cengage Learning. Publisher: Cengage Learning. Draw the Lewis Structure of the OF2 molecule. (OCN)- lewis structure and bond angle? Buy Find arrow_forward. Write Lewis structures and predict whether each of the following is polar or nonpolar. 
The only bond energy tables I have are in kcal so I will work in those units and convert to kJ at the end. 3 To obtain bond angle, bond length, and hybridization data for molecules. 1 decade ago. Update: (isocyananate) Answer Save. Buy Find arrow_forward. _____ 9.29 The figure that follows shows ball-and-stick drawings of three possible shapes of an AF 3 molecule. Now for completing the octect of central atom, you can consider some electrons as shared. For every species listed above, write one valid Lewis structure. A copy of the "Rules for Drawing Lewis Structures" may be found on page 4 of the Procedure Handout. 1 Answer. So when we look at the Lewis structure, Nitrogen had eight valence electrons, but the Carbon only has four. So for this carbon were surrounded by a three elect electron domains with no loan pears. Steven S. Zumdahl + 1 other. Tags bond angle of SCl2 , hybridization of SCl2 , lewis structure of SCl2 , sulfur dichloride , VSEPR shape of SCl2 With two bonds and no lone pairs of electrons on the central atom, the bonds are as far apart as possible, and the electrostatic repulsion between these regions of high electron density is reduced to a minimum when they are on opposite sides of the central atom. Select all that apply. It definitely won't be a 180 degree angle from C-O-H. You can draw it out that way, but that's not how it exists in pure form. That is why your structure fails. So Aceto Saleh Cilic Acid better known as aspirin as the loose structure I've shown here, whether you present bond but angles labeled one, two and three. The structures are very similar. Prelab Assignment: Lewis Structures and Molecular Shapes 1. That will be the least electronegative atom (##C##). Tweet. 1 decade ago. 120 degrees. Relevance. 2 To visualize the three-dimensional structures of some common molecules. Trending Questions. Lewis structures are good for figuring out how atoms are bonded to each other within a molecule and where any lone pairs of electrons are. Lewis Structures are used to represent covalently bonded molecules and polyatomic ions. ISBN: 9781305957404. For … However, most of it was discussed druring Honors chemistry. Predict the molecular structure (including bond angles) for this ion. The Lewis structure of H 2 O indicates that there are four regions of high electron density around the oxygen atom: two lone pairs and two chemical bonds: Figure \(\PageIndex{9}\). 0 9,641 3 minutes read. 2. This is the total number of electrons that must be used in the Lewis structure. The molecule HCN has one of the most fascinating lewis structures. 180 degrees. So we have to go to a molecular geometry table, and we're going to, and we're going to look. Emma Edmond 3E Posts: 29 Joined: Fri Sep 25, 2015 10:00 am. 2 posts • Page 1 of 1. Relevance. Now you can see that Nitrogen has eight valence electrons and Carbon has six. Phosphorus pentabromide written as PBr5 in the chemistry equations is a reactive yellow solid. 109.5° 180° 120° 90° What Is The Bond Angle Around The Oxygen Center? Ordering Bond Angles Exercise EXERCISE 6.2: Consider the Lewis structures of CF 4, SO 3, SO 2, NF 3, and OF 2, which are given below. a. HOCN (exists as HO—CN) b. COS c. XeF 2 d. CF 2 Cl 2 e. SeF 6 f. H 2 CO (C is the central atom.) 
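Picking up the bond-energy question quoted earlier, HCN(g) + 2 H2(g) → CH3NH2(g), the answer above breaks off before the arithmetic, so here is one way to finish it. The bond energies below are typical textbook averages assumed for illustration; they are not the kcal table the answerer refers to, and other tables will shift the result by a few tens of kJ.

Bonds broken: C≡N (≈ 891 kJ/mol) + 2 H–H (2 × 436 kJ/mol) ≈ 1763 kJ
Bonds formed: 2 C–H (2 × 413 kJ/mol) + C–N (≈ 305 kJ/mol) + 2 N–H (2 × 391 kJ/mol) ≈ 1913 kJ
ΔH ≈ Σ(bonds broken) − Σ(bonds formed) ≈ 1763 − 1913 ≈ −150 kJ per mole of reaction,

so the hydrogenation is predicted to be moderately exothermic.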
In this example, we can draw two Lewis structures that are energetically equivalent to each other — that is, they have the same types of bonds, and the same types of formal charges on all of the structures. Both structures (2 and 3) must be used to represent the molecule's structure. The actual molecule is an average of structures 2 and 3, which are called resonance structures. Dr.A. Consider the following Lewis structure where $\mathrm{E}$ is an unknown element: What are some possible identities for element E? Problem: Draw the Lewis structure of XeCl4 showing all lone pairs. So let's move another pair to the center. The structure of H3N–BF3: the B and N atoms each have four single bonds, so their hybridizations are sp3 with bond angles of 109.5°. In fact, the bond angle is 104.5°. The Lewis structure of CO2 (Figure 2) shows only two electron pairs around the central carbon atom. Cyanic acid is a chemical compound with the formula HOCN, corresponding to the structure H–O–C≡N. It is an unstable acid that can be obtained from cyanates, for example by reacting potassium cyanate KNCO with formic acid HCOOH, although in practice almost only its tautomer, isocyanic acid HNCO, with the structure H–N=C=O, is obtained. Justify this statement by referring to the factors that determine the shape of each molecule. 1 To compare Lewis structures to three-dimensional models. 5 To learn how to use computer modeling software. 6e− + (2 × 7e−) = 20e−; 1e− + 4e− + 5e− = 10e−. 2. They're quite flexible in terms of how the atoms can be arranged on the page: bond length and angles between bonds don't necessarily have to match reality. Decide which atom is the central atom in the structure. It ascribes bonding influences to ... As a result, the H―N―H bond angle decreases slightly. 109 degrees. 2.
Modeling and Stochastic Learning for Forecasting in High Dimensions Modeling and Stochastic Learning for Forecasting in High Dimensions pp 213-241 | Cite as Spot Volatility Estimation for High-Frequency Data: Adaptive Estimation in Practice Till Sabel Johannes Schmidt-Hieber Axel Munk Part of the Lecture Notes in Statistics book series (LNS, volume 217) We develop further the spot volatility estimator introduced in Hoffmann et al. (Ann Inst H Poincaré (B) Probab Stat 48(4):1186–1216, 2012) from a practical point of view and make it applicable to the analysis of high-frequency financial data. In a first part, we adjust the estimator substantially in order to achieve good finite sample performance and to overcome difficulties arising from violations of the additive microstructure noise model (e.g. jumps, rounding errors). These modifications are justified by simulations. The second part is devoted to investigate the behavior of volatility in response to macroeconomic events. We give evidence that the spot volatility of Euro-BUND futures is considerably higher during press conferences of the European Central Bank. As an outlook, we present an estimator for the spot covolatility of two different prices. Wavelet Coefficient European Central Bank Microstructure Effect Mean Integrate Square Error Heston Model These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves. Support of DFG/SNF-Grant FOR 916, DFG postdoctoral fellowship SCHM 2807/1-1, and Volkswagen Foundation is gratefully acknowledged. We appreciate the help of CRC 649 "Economic Risk" for providing us with access to Eurex database. Parts of this work are taken from the PhD thesis Schmidt-Hieber [48]. We thank Marc Hoffmann, Markus Reiß, Markus Bibinger, and an anonymous referee for many helpful remarks and suggestions. Appendix A: Proof of Lemma 1 The proof is the same as in the PhD thesis Schmidt-Hieber [48]. We include it for sake of completeness.To keep notation simple, we use the following quantitiesin the spirit of the definitions of Sect. 2.3: For any process(A i, n ) ∈ { (Y i, n ), (ε i, n ), (X) i, n }, define $$\displaystyle{ \overline{A}_{i,m} = \overline{A}_{i,m}(\lambda ):= \frac{m} {n} \sum _{\frac{j} {n}\in [\frac{i-2} {m}, \frac{i} {m}]}\lambda \big(m\tfrac{j} {n} - (i - 2)\big)A_{j,n}. }$$ $$\displaystyle{ \mathfrak{b}(A)_{i,m} = \mathfrak{b}(\lambda,A_{\cdot })_{i,m}:= \frac{m^{2}} {2n^{2}}\sum _{ \frac{j} {n}\in [\frac{i-2} {m}, \frac{i} {m}]}\lambda ^{2}\big(m\tfrac{j} {n} - (i - 2)\big)\big(A_{j,n} - A_{j-1,n}\big)^{2}. }$$ Further, recall that our estimator for the integrated volatility is given by \(\widehat{\langle 1,\sigma ^{2}\rangle } =\sum _{ i=2}^{m}\overline{Y }_{i,m}^{2} - \mathfrak{b}(Y )_{i,m}\). To prove the lemma, let us first show that the bias is of smaller order than n −1∕4. In fact, note that \(\mathbb{E}\big[\ \overline{Y }_{i,m}^{2}\big] = \mathbb{E}\big[\ \overline{X}_{i,m}^{2}\big] + \mathbb{E}\big[\overline{\epsilon }_{i,m}^{2}\big].\) Clearly, one can bound $$\displaystyle{ \Big\vert \mathbb{E}\big[\overline{\epsilon }_{i,m}^{2}\big] - \mathbb{E}\big[\mathfrak{b}(\lambda,Y )_{ i,m}\big]\Big\vert = O( \tfrac{1} {n}). 
}$$ Further, Lipschitz continuity of λ together with a Riemann approximation argument gives us $$\displaystyle{ \big\vert \mathbb{E}\big[\ \overline{X}_{i,m}^{2}\big] - \tfrac{\sigma ^{2}} {m}\big\vert =\Big \vert \tfrac{\sigma ^{2}} {m}\int _{0}^{2}\int _{ 0}^{2}\lambda (s)\lambda (t)(s \wedge t)dtds - \tfrac{\sigma ^{2}} {m}\Big\vert + O( \tfrac{1} {n}) = O( \tfrac{1} {n}). }$$ Here, the last equation is due to partial integration and the definition of a pre-average function (cf. Definition 2.1). Since both approximations are uniformly in i, this shows that the bias is of order O(n −1∕2). For the asymptotic variance, first observe that \(V ar(\sum _{i=2}^{m}\mathfrak{b}(\lambda,Y )_{i,m}) = o(n^{-1/2}).\) Hence, $$\displaystyle{V ar(\widehat{\langle 1,\sigma ^{2}\rangle }) = V ar(\sum _{ i=2}^{m}\overline{Y }_{ i,m}^{2}) + o\Big(n^{-1/4}\big(V ar(\sum _{ i=2}^{m}\overline{Y }_{ i,m}^{2})\big)^{1/2} + n^{-1/2}\Big),}$$ by Cauchy-Schwarz inequality. Recall that for centered Gaussian random variables U and V, Cov(U 2, V 2) = 2(Cov(U, V ))2. Therefore, it suffices to compute \(Cov(\overline{Y }_{i,m},\overline{Y }_{k,m}) = \mathbb{E}[\overline{Y }_{i,m}\overline{Y }_{k,m}]\). By the same arguments as above, that is Riemann summation and partial integration, we find $$\displaystyle{ \mathbb{E}\Big[\Big\vert \overline{X}_{i,m}\overline{X}_{k,m} -\int _{0}^{1}\varLambda (ms - (i - 2))dX_{ s}\int _{0}^{1}\varLambda (ms - (k - 2))dX_{ s}\Big\vert \Big] \lesssim n^{-1}. }$$ $$\displaystyle{ \mathbb{E}\big[\overline{X}_{i,m}\overline{X}_{k,m}\big] =\sigma ^{2}\int _{ 0}^{1}\varLambda (ms - (i - 2))\varLambda (ms - (k - 2))ds + O(n^{-1}), }$$ where the last two arguments hold uniformly in i, k. In order to calculate \(\mathbb{E}[\overline{Y }_{i,m}\overline{Y }_{k,m}],\) we must treat three different cases, | i − k | ≥ 2, | i − k | = 1 and i = k, denoted by I, II and III. ad I. In this case \((\tfrac{i-2} {m}, \tfrac{i} {m}]\) and \((\tfrac{k-2} {m}, \tfrac{k} {m}]\) do not overlap. By the equalities above, it follows \(Cov(\overline{Y }_{i,m},\overline{Y }_{k,m}) = O(n^{-1}).\) ad II. Without loss of generality, we set k = i + 1. Then, we obtain $$\displaystyle\begin{array}{rcl} & & Cov(\overline{Y }_{i,m},\overline{Y }_{i+1,m}) = \mathbb{E}\big[\overline{X}_{i,m}\overline{X}_{i+1,m}\big] + \mathbb{E}\big[\overline{\epsilon }_{i,m}\overline{\epsilon }_{i+1,m}\big] {}\\ & =& \sigma ^{2}\int _{ 0}^{1}\varLambda (ms - (i - 2))\varLambda (ms - (i - 1))ds + O(n^{-1}) {}\\ & & +\tau ^{2}\frac{m^{2}} {n^{2}} \sum _{ \tfrac{j} {n} \in \big(\tfrac{i-2} {m}, \tfrac{i} {m}\big]}\lambda (m\tfrac{j} {n} - (i - 2))\lambda (m\tfrac{j} {n} - (i - 1)) {}\\ & =& \frac{\sigma ^{2}} {m}\int _{0}^{1}\varLambda (u)\varLambda (1 + u)du +\tau ^{2}\frac{m} {n} \int _{0}^{1}\lambda (u)\lambda (1 + u)du + O(n^{-1}), {}\\ \end{array}$$ where the last inequality can be verified by Riemann summation. Noting that λ is a pre-average function, we obtain λ(1 + u) = −λ(1 − u) and $$\displaystyle\begin{array}{rcl} & Cov(\overline{Y }_{i,m},\overline{Y }_{i+1,m}) = \frac{\sigma ^{2}} {m}\int _{0}^{1}\varLambda (u)\varLambda (1 - u)du -\frac{\tau ^{2}m} {n} \int _{0}^{1}\lambda (u)\lambda (1 - u)du + O(n^{-1}).& {}\\ \end{array}$$ ad III. 
It can be shown by redoing the arguments in II that $$\displaystyle\begin{array}{rcl} & V ar(\overline{Y }_{i,m}) = V ar(\overline{X}_{i,m}) + V ar(\overline{\epsilon }_{i,m}) = \frac{\sigma ^{2}} {m}\int _{0}^{2}\varLambda ^{2}(u)du +\tau ^{2}\frac{m} {n} \int _{0}^{2}\lambda ^{2}(u)du + O(n^{-1}).& {}\\ \end{array}$$ Note that \(\|\varLambda \|_{L^{2}[0,2]} = 1.\) Since the above results hold uniformly in i, k, it follows directly that $$\displaystyle\begin{array}{rcl} & & V ar(\sum _{i=2}^{m}\overline{Y }_{ i,m}^{2}) {}\\ & =& \sum _{i,k=2,\ \vert i-k\vert \geq 2}^{m}2\big(Cov(\overline{Y }_{ i,m},\overline{Y }_{k,m})\big)^{2} {}\\ & & +2\sum _{i=2}^{m-1}2\big(Cov(\overline{Y }_{ i,m},\overline{Y }_{i+1,m})\big)^{2} +\sum _{ i=2}^{m}2\big(V ar(\overline{Y }_{ i,m})\big)^{2} {}\\ & =& O(n^{-1}) + 4\Big( \frac{\sigma ^{2}} {\sqrt{c}}\int _{0}^{1}\varLambda (u)\varLambda (1 - u)du -\tau ^{2}c^{3/2}\int _{ 0}^{1}\lambda (u)\lambda (1 - u)du\Big)^{2}n^{-1/2} {}\\ & & +2\Big( \frac{\sigma ^{2}} {\sqrt{c}} + 2\tau ^{2}c^{3/2}\|\lambda \|_{ L^{2}[0,1]}^{2}\Big)^{2}n^{-1/2}. {}\\ \end{array}$$ Ait-Sahalia, Y., & Jacod, J. (2009). Testing for jumps in a discretely observed process. The Annals of Statistics, 37, 184–222.zbMATHMathSciNetCrossRefGoogle Scholar Ait-Sahalia, Y., & Yu, J. (2009). High frequency market microstructure noise estimates and liquidity measures. The Annals of Applied Statistics, 3, 422–457.zbMATHMathSciNetCrossRefGoogle Scholar Ait-Sahalia, Y., Mykland, P. A., & Zhang, L. (2005). How often to sample a continuous-time process in the presence of market microstructure noise. The Review of Financial Studies, 18, 351–416.CrossRefGoogle Scholar Ait-Sahalia, Y., Fan, J., & Xiu, D. (2010). High-frequency covariance estimates with noisy and asynchronous financial data. Journal of the American Statistical Association, 105, 1504–1517.MathSciNetCrossRefGoogle Scholar Ait-Sahalia, Y., Jacod, J., & Li, J. (2012). Testing for jumps in noisy high frequency data. Journal of Econometrics, 168, 207–222.MathSciNetCrossRefGoogle Scholar Andersen, T. G., & Bollerslev, T. (1998). Deutsche Mark-Dollar volatility: Intraday activity patterns, macroeconomic announcements, and longer run dependencies. The Journal of Finance, 53, 219–265.CrossRefGoogle Scholar Autin, F., Freyermuth, J. M., & von Sachs, R. (2011). Ideal denoising within a family of tree-structured wavelet estimators. The Electronic Journal of Statistics, 5, 829–855.zbMATHCrossRefGoogle Scholar Bandi, F., & Russell, J. (2008). Microstructure noise, realized variance, and optimal sampling. The Review of Economic Studies, 75, 339–369.zbMATHMathSciNetCrossRefGoogle Scholar Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A., & Stephard, N. (2008). Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise. Econometrica, 76(6), 1481–1536.zbMATHMathSciNetCrossRefGoogle Scholar Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A., & Shephard, N. (2011). Multivariate realised kernels: Consistent positive semi-definite estimators of the covariation of equity prices with noise and non-synchronous trading. Journal of Econometrics, 162, 149–169.MathSciNetCrossRefGoogle Scholar Bibinger, M. (2011). Efficient covariance estimation for asynchronous noisy high-frequency data. Scandinavian Journal of Statistics, 38, 23–45.zbMATHMathSciNetCrossRefGoogle Scholar Bollerslev, T., & Todorov, V. (2011). Estimation of jump tails. Econometrica, 79, 1727–1783.zbMATHMathSciNetCrossRefGoogle Scholar Cai, T., & Wang, L. (2008). 
Adaptive variance function estimation in heteroscedastic nonparametric regression. The Annals of Statistics, 36, 2025–2054.zbMATHMathSciNetCrossRefGoogle Scholar Cai, T., & Zhou, H. (2009). A data-driven block thresholding approach to wavelet estimation. The Annals of Statistics, 37, 569–595.zbMATHMathSciNetCrossRefGoogle Scholar Cai, T., Munk, A., & Schmidt-Hieber, J. (2010). Sharp minimax estimation of the variance of Brownian motion corrupted with Gaussian noise. Statistica Sinica, 20, 1011–1024.zbMATHMathSciNetGoogle Scholar Carr, P., & Wu, L. (2004). Time-changed Lévy processes and option pricing. Journal of Financial Economics, 71(1), 113–141.CrossRefGoogle Scholar Christensen, K., Kinnebrock, S., & Podolskij, M. (2010). Pre-averaging estimators of the ex-post covariance matrix in noisy diffusion models with non-synchronous data. Journal of Econometrics, 159, 116–133.MathSciNetCrossRefGoogle Scholar Cohen, A. (2003). Numerical analysis of wavelet methods. Amsterdam/Boston: Elsevier.zbMATHGoogle Scholar Cohen, A., Daubechies, I., & Vial, P. (1993). Wavelets on the interval and fast wavelet transforms. Applied and Computational Harmonic Analysis, 1, 54–81.zbMATHMathSciNetCrossRefGoogle Scholar Cont, R., & Tankov, P. (2004). Financial modelling with jump processes. Boca Raton: CRC Press.zbMATHGoogle Scholar Dahlhaus, R., & Neddermeyer, J. C. (2013). On-line spot volatility-estimation and decomposition with nonlinear market microstructure noise models. ArXiv e-prints. arXiv:1006.1860v4.Google Scholar Daubechies, I. (1992). Ten lectures on wavelets. Philadelphia: SIAM.zbMATHCrossRefGoogle Scholar Delbaen, F., & Schachermayer, W. (1994). A general version of the fundamental theorem of asset pricing. Mathematische Annalen, 300, 463–520.zbMATHMathSciNetCrossRefGoogle Scholar Delbaen, F., & Schachermayer, W. (1998). The fundamental theorem of asset pricing for unbounded stochastic processes. Mathematische Annalen, 312, 215–250.zbMATHMathSciNetCrossRefGoogle Scholar Donoho, D., & Johnstone, I. M. (1994). Ideal spatial adaptation via wavelet shrinkage. Biometrika, 81, 425–455.zbMATHMathSciNetCrossRefGoogle Scholar Donoho, D., Johnstone, I. M., Kerkyacharian, G., & Picard, D. (1995). Wavelet shrinkage: Asymptopia? Journal of the Royal Statistical Society: Series B Statistical Methodology, 57, 301–369.zbMATHMathSciNetGoogle Scholar Dorfleitner, G. (2004). How short-termed is the trading behaviour in Eurex futures markets? Applied Financial Economics, 14, 1269–1279.CrossRefGoogle Scholar Ederington, L. H., & Lee, J. H. (1993). How markets process information: New releases and volatility. The Journal of Finance, 48, 1161–1191.CrossRefGoogle Scholar Ederington, L. H., & Lee, J. H. (1995). The short-run dynamics of the price adjustment to new information. Journal of Financial and Quantitative Analysis, 30, 117–134.CrossRefGoogle Scholar Fan, J., & Wang, Y. (2007). Multi-scale jump and volatility analysis for high-frequency financial data. Journal of the American Statistical Association, 102, 1349–1362.zbMATHMathSciNetCrossRefGoogle Scholar Fan, J., & Wang, Y. (2008). Spot volatility estimation for high-frequency data. Statistics and Its Interface, 1, 279–288.zbMATHMathSciNetCrossRefGoogle Scholar Gloter, A., & Jacod, J. (2001). Diffusions with measurement errors. I. Local asymptotic normality. ESAIM Probability and Statistics, 5, 225–242.Google Scholar Gloter, A., & Jacod, J. (2001). Diffusions with measurement errors. II. Optimal estimators. 
ESAIM Probability and Statistics, 5, 243–260.Google Scholar Hasbrouck, J. (1993). Assessing the quality of a security market: A new approach to transaction-cost measurement. The Review of Financial Studies, 6, 191–212.CrossRefGoogle Scholar Hautsch, N., & Podolskij, M. (2013). Pre-averaging based estimation of quadratic variation in the presence of noise and jumps: Theory, implementation, and empirical evidence. Journal of Business and Economic Statistics, 31(2), 165–183.MathSciNetCrossRefGoogle Scholar Hayashi, T., & Yoshida, N. (2005). On covariance estimation of non-synchronously observed diffusion processes. Bernoulli, 11, 359–379.zbMATHMathSciNetCrossRefGoogle Scholar Hoffmann, M., Munk, A., & Schmidt-Hieber, J. (2012). Adaptive wavelet estimation of the diffusion coefficient under additive error measurements. Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, 48(4), 1186–1216.zbMATHMathSciNetCrossRefGoogle Scholar Jacod, J., Li, Y., Mykland, P. A., Podolskij, M., & Vetter, M. (2009). Microstructure noise in the continuous case: The pre-averaging approach. Stochastic Processes and Their Applications, 119(7), 2249–2276.zbMATHMathSciNetCrossRefGoogle Scholar Jansen, D., & de Haan, J. (2006). Look who's talking: ECB communication during the first years of EMU. International Journal of Finance and Economics, 11, 219–228.CrossRefGoogle Scholar Li, W. V., & Shao, Q. M. (2002). A normal comparison inequality and its applications. Probability Theory and Related Fields, 122, 494–508.zbMATHMathSciNetCrossRefGoogle Scholar Li, Y., Zhang, Z., & Zheng, X. (2013). Volatility inference in the presence of both endogenous time and microstructure noise. Stochastic Processes and Their Applications, 123, 2696–2727.zbMATHMathSciNetCrossRefGoogle Scholar Lunde, A., & Zebedee, A. A. (2009). Intraday volatility responses to monetary policy events. Financial Markets and Portfolio Management, 23(4), 383–299.CrossRefGoogle Scholar Madahavan, A. (2000). Market microstructure: A survey. Journal of Financial Markets, 3, 205–258.CrossRefGoogle Scholar Munk, A., & Schmidt-Hieber, J. (2010). Nonparametric estimation of the volatility function in a high-frequency model corrupted by noise. The Electronic Journal of Statistics, 4, 781–821.zbMATHMathSciNetCrossRefGoogle Scholar Munk, A., & Schmidt-Hieber, J. (2010). Lower bounds for volatility estimation in microstructure noise models. In J.O. Berger, T.T. Cai, & I.M. Johnstone (Eds.) Borrowing strength: Theory powering applications - a festschrift for Lawrence D. Brown (Vol. 6, pp. 43–55). Beachwood: Institute of Mathematical StatisticsGoogle Scholar Podolskij, M., & Vetter, M. (2009). Estimation of volatility functionals in the simultaneous presence of microstructure noise and jumps. Bernoulli, 15, 634–658.zbMATHMathSciNetCrossRefGoogle Scholar Reiß, M. (2011). Asymptotic equivalence for inference on the volatility from noisy observations. The Annals of Statistics, 39(2), 772–802.zbMATHMathSciNetCrossRefGoogle Scholar Schmidt-Hieber, J. (2010). Nonparametric methods in spot volatility estimation. PhD thesis, Georg-August-Universität Göttingen.Google Scholar van der Ploeg, A. (2005). Stochastic volatility and the pricing of financial derivatives (n∘366 of the Tinbergen Institute research series, No. 366).Google Scholar Veraart, A. E., & Winkel, M. (2010). Time change. In R. Cont (Ed.), Encyclopedia of quantitative finance (pp. 1812–1816). Chichester: Wiley.Google Scholar Wassermann, L. (2010). 
All of nonparametric statistics (Springer texts in statistics). New York/London: Springer.Google Scholar Zhang, L. (2006). Efficient estimation of stochastic volatility using noisy observations: A multi-scale approach. Bernoulli, 12, 1019–1043.zbMATHMathSciNetCrossRefGoogle Scholar Zhang, L. (2011). Estimating covariation: Epps effect and microstructure noise. Journal of Econometrics, 160, 33–47.MathSciNetCrossRefGoogle Scholar Zhang, L., Mykland, P., & Ait-Sahalia, Y. (2005). A tale of two time scales: Determining integrated volatility with noisy high-frequency data. Journal of the American Statistical Association, 472, 1394–1411.MathSciNetCrossRefGoogle Scholar Zhou, B. (1996). High-frequency data and volatility in foreign-exchange rates. Journal of Business and Economic Statistics, 14, 45–52.Google Scholar © Springer International Publishing Switzerland 2015 1.Department of Mathematics and Computer Science, Institute for Mathematical StochasticsGeorg-August-University GöttingenGöttingenGermany 2.University of LeidenLeidenNetherlands Cite this paper as: Sabel T., Schmidt-Hieber J., Munk A. (2015) Spot Volatility Estimation for High-Frequency Data: Adaptive Estimation in Practice. In: Antoniadis A., Poggi JM., Brossat X. (eds) Modeling and Stochastic Learning for Forecasting in High Dimensions. Lecture Notes in Statistics, vol 217. Springer, Cham. https://doi.org/10.1007/978-3-319-18732-7_12 DOI https://doi.org/10.1007/978-3-319-18732-7_12 Publisher Name Springer, Cham eBook Packages Mathematics and Statistics Mathematics and Statistics (R0) Cite paper Buy paper (PDF)
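Stepping back from the bibliography, the integrated-volatility estimator $\widehat{\langle 1,\sigma^2\rangle}=\sum_{i=2}^m \overline{Y}_{i,m}^2-\mathfrak b(Y)_{i,m}$ defined at the start of Appendix A can be illustrated numerically. The sketch below is not part of the chapter: the weight function is simply one admissible choice consistent with the properties used in the proof ($\lambda(1+u)=-\lambda(1-u)$ and $\|\Lambda\|_{L^2[0,2]}=1$), the block count is taken as $m\approx c\sqrt n$, and all simulation parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the noisy high-frequency model Y_{j,n} = X_{j/n} + tau * eps_j with constant volatility,
# so the true integrated volatility <1, sigma^2> equals sigma^2.
n, sigma, tau = 100_000, 0.5, 0.01
t = np.arange(1, n + 1) / n
X = sigma * np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))
Y = X + tau * rng.normal(0.0, 1.0, n)

# One admissible pre-average weight on [0, 2]: lambda(u) = sqrt(3/2) for u < 1 and -sqrt(3/2) for u >= 1,
# which satisfies lambda(1 + u) = -lambda(1 - u) and ||Lambda||_{L^2[0,2]} = 1.
def lam(u):
    return np.sqrt(1.5) * np.where(u < 1.0, 1.0, -1.0)

c = 1.0
m = int(c * np.sqrt(n))

est = 0.0
for i in range(2, m + 1):
    idx = np.flatnonzero((t >= (i - 2) / m) & (t <= i / m))   # observations with j/n in [(i-2)/m, i/m]
    w = lam(m * t[idx] - (i - 2))
    Ybar = (m / n) * np.sum(w * Y[idx])                       # local pre-average
    prev = Y[idx[0] - 1] if idx[0] > 0 else Y[idx[0]]
    dY = np.diff(Y[idx], prepend=prev)                        # increments Y_j - Y_{j-1}
    bias = (m**2 / (2 * n**2)) * np.sum(w**2 * dY**2)         # bias correction b(Y)_{i,m}
    est += Ybar**2 - bias

print("true integrated volatility:", sigma**2)
print("pre-averaging estimate    :", round(est, 4))
```

The printed estimate can be compared directly with the true value $\sigma^2=0.25$; its fluctuation across seeds reflects the $n^{-1/4}$ convergence rate discussed in the lemma.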
Plant Methods Diverse maize hybrids are structurally inefficient at resisting wind induced bending forces that cause stalk lodging Christopher J. Stubbs1, Kate Seegmiller1, Christopher McMahan2, Rajandeep S. Sekhon3 & Daniel J. Robertson ORCID: orcid.org/0000-0003-1089-02491 Plant Methods volume 16, Article number: 67 (2020) Cite this article Stalk lodging (breaking of agricultural plant stalks prior to harvest) results in millions of dollars in lost revenue each year. Despite a growing body of literature on the topic of stalk lodging, the structural efficiency of maize stalks has not been investigated previously. In this study, we investigate the morphology of mature maize stalks to determine if rind tissues, which are the major load bearing component of corn stalks, are efficiently organized to withstand wind induced bending stresses that cause stalk lodging. 945 fully mature, dried commercial hybrid maize stem specimens (48 hybrids, ~ 2 replicates, ~ 10 samples per plot) were subjected to: (1) three-point-bending tests to measure their bending strength and (2) rind penetration tests to measure the cross-sectional morphology at each internode. The data were analyzed through an engineering optimization algorithm to determine the structural efficiency of the specimens. Hybrids with higher average bending strengths were found to allocate rind tissue more efficiently than weaker hybrids. However, even strong hybrids were structurally suboptimal. There remains significant room for improving the structural efficiency of maize stalks. Results also indicated that stalks are morphologically organized to resist wind loading that occurs primarily above the ear. Results are applicable to selective breeding and crop management studies seeking to reduce stalk lodging rates. Stalk lodging (permanent displacement of plants from their vertical orientation) severely reduces agronomic yields of several vital crop species including maize. Yield losses due to stalk lodging are estimated to range from 5 to 20% annually [6, 16]. Stalk lodging (as opposed to root lodging) occurs when the structural stability of the plant is lost due to structural or material failure of the plant stem [3, 28]. Root lodging occurs when the stalk or stem of the plant remains intact and failure occurs at the root soil interface (i.e., the plant is uprooted). This manuscript is focused on the problem of stalk lodging. Several internal and external factors contribute to a plant's propensity to stalk lodge. External factors include wind speed [38], pest damage [11], disease [10, 22], and canopy airflow [2]. Internal factors include the plant's morphology and material properties [13, 27, 33]. Despite a growing body of literature surrounding the topic of maize stalk lodging, a detailed morphological investigation of the taper of maize stalks has not been reported. The purpose of this paper is to quantify changes in diameter and rind thickness of maize stalks as a function of plant height (i.e., taper) and to determine the structural efficiency of the taper of maize stalks. This study investigates stalk taper from a purely structural standpoint to determine the structural efficiency of maize stalks in the absence of external abiotic (i.e., air currents) and biotic (i.e., agronomic management) factors. To determine the structural efficiency of maize stalks one must both quantify the stalk taper and define probable wind loading scenarios. 
An efficiently tapered stalk is defined as one in which uniform mechanical stresses are produced when the plant is subjected to probable wind loading scenarios. In other words, the shape of the stalk is optimal, meaning that loads are supported with as little tissue as possible. An inefficient taper is one in which non-uniform mechanical stresses are produced. Inefficient stalks utilize more structural tissue than is necessary in some areas and less structural tissue than is necessary in other areas to withstand the loads to which they are subjected. In other words, for inefficient stalks the amount of structural tissue could be reduced without affecting the load bearing capacity of the stalk. The structural efficiency of maize stalks is of interest because efficient stalks would theoretically have more available biomass and bioenergy to devote to grain filling as compared to inefficient stalks (i.e., efficient stalks would have a higher harvest index). As mentioned previously, both the taper and probable wind loading scenarios must be defined to determine the structural efficiency of maize stalks. The wind load exerted on a plant stalk, known as the drag force (Df), can be approximated as [25]: $$D_{f} = 0.5\rho u^{2} A_{p} C_{D}$$ where ⍴ is the density of air, u is the local wind speed, Ap is the projected area of the structure, and CD is the drag coefficient. While this equation appears fairly simple at first glance, it is complicated by the fact that the variables on the right-hand side of the equation are functions that can vary both temporally and spatially. For example, the drag coefficient changes spatially along the length of the stalk and is also a function of the local wind speed. As the local wind speed increases, the angle of the leaf blades and tassel change (known as flagging), which alters the drag coefficient [3, 25]. The strong interrelationships between the factors of Eq. 1 complicate attempts to directly measure wind forces on maize stalks. Direct measurements of wind speeds have successfully been used to estimate drag forces in past studies of trees and cereal crops [25, 26, 36]. However, the large ratio of leaf area to stalk area, close proximity of maize plants to one another in commercial fields, and other confounding factors imply that a direct measurement of the wind speed near a maize stalk is not necessarily a good predictor of the drag force experienced by the stalk. Detailed computational engineering models that capture the interplay between fluid dynamics and structural deformations (i.e., fluid–structure interaction models [41]) could potentially be used to calculate the drag force experienced by maize stalks over time. However, such models are highly complex and computationally expensive, due to difficulties in estimating the constantly changing drag force and orientation of the plant tissues (projected area) with respect to wind vectors that vary in three dimensions. In summary, accurately measuring drag forces in crop canopies is challenging and remains an active area of research. An overview of this topic is given by Finnigan [15]. While direct measurement of exact wind forces on maize stalks is challenging, defining the realm of probable wind loading scenarios is less so. To define the realm of probable wind loading scenarios we assume the wind speed acts in the same direction along the length of the stalk. In other words, the wind does not blow in one direction at the bottom of the stalk and in a different direction at the top of the stalk. 
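Equation 1 itself is simple to evaluate once its inputs are fixed; the short sketch below does exactly that. All parameter values are illustrative assumptions for a single cross-section, not measurements from this study.

```python
def drag_force(rho, u, A_p, C_D):
    """Wind-induced drag force from Eq. 1: D_f = 0.5 * rho * u**2 * A_p * C_D (SI units)."""
    return 0.5 * rho * u**2 * A_p * C_D

# Illustrative values: air density 1.2 kg/m^3, 10 m/s gust, 0.05 m^2 projected area, drag coefficient 1.0.
print(drag_force(rho=1.2, u=10.0, A_p=0.05, C_D=1.0), "N")   # 3.0 N
```

The difficulty described in the surrounding text is not this arithmetic but knowing how u, A_p, and C_D vary in time and along the stalk.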
Although there are times when hairpin eddies can form as air circulates through the canopy [14], the assumption of uniform wind direction was made for purposes of the optimization algorithm used in this study. This assumption is described in more detail in "Limitations" section. We can also bound the degree of change in the magnitude of the wind force along the length of the plant. For example, previous studies and engineering fluid mechanics theory dictate that the local wind speed in crop canopies increases with height [9, 38, 39]. A simple examination of corn stalks also suggest that the combination of the drag coefficient and projected area increases with plant height (i.e., the leaves near the bottom of mature maize plant are often dead and fall off whereas the top leaves remain structurally robust). Thus both the local wind speed and the combined effect of the drag coefficient and projected area can be assumed to increase with plant height. Combining these insights with Eq. 1, we can determine that the wind force (i.e., drag force) increases with plant height (at a time prior to stem deflection). The stalk lodging resistance of a plant can be defined as the plants ability to withstand these externally applied loadings. We can therefore gain insights into a plant's stalk lodging resistance by looking at a bounded range of potential wind-loading scenarios. At the upper bound of probable wind loading we assume all of the wind force acts at the top of the plant as a point load [17]. At the lower bound of probable wind loading we assume a uniform load is applied to stalk along its entire length (i.e., the drag force at the top of the plant is the same as the drag force in every other cross-section of the plant including the bottom of the plant). These bounds allow for all probable wind-loading scenarios and exclude improbable scenarios. Figure 1 visually represents each of these assumptions. For example, probable wind loading profiles such as a quadratic increase in wind loading from the bottom to the top of the plant, would fall within the white region of Fig. 1-top. Conversely, improbable wind loading profiles, such as the drag force being higher at the base of the plant than at the top of the plant, would fall outside of the white region in Fig. 1-top. Structural efficiency along the length of the specimen as a function of probable wind loadings (top); loading diagrams for a point load and uniform drag force (bottom). Plant height (x-axis) is normalized to the height of the plant specimen. Section modulus (y-axis) is normalized by the difference between the section modulus at the top internode (SMtop) and the bottom internode (SMbottom) The structural efficiency of maize stalks can be determined by using Engineering equations that relate stem morphology and mechanical stress to probable wind loading scenarios presented in Fig. 1. In particular, the maximum stress in any cross-section (σ) (typically presented in units of Pascal) due to wind-induced bending is calculated as [4]: $$\sigma = \frac{{\mathop \smallint \nolimits_{0}^{L} D_{f} dx}}{{S_{x} }}$$ where Df is the drag force (see Eq. 1) and Sx is the section modulus (typically presented in units mm3) at a distance x along the stalk, and L is the length of the stalk. Section modulus is an engineering term that quantifies the morphology of the cross-section [4]. Perhaps somewhat surprisingly, Eq. 2 does not include an elastic modulus term or any other material or tissue property terms. 
This is because for a structure with a single homogeneous linear elastic orthotropic material, the relationship between externally applied loads and stress is independent of material properties. Maize stalks possess elliptical cross-sections; therefore, the section modulus of each cross-section is a function of the ellipse's major diameter (D), minor diameter (d), and the thickness of the rind (t) in the form [40]: $$S_{x} = \frac{\pi}{32d}\left(Dd^{3} - \left(D - t\right)\left(d - t\right)^{3}\right).$$ By combining Eqs. 2 and 3, we can calculate the drag force and section modulus combination that results in a uniform stress along the length of the plant. Figure 1-top displays a visual representation of the range of possible tapers for maize stalks that could produce uniform stresses during probable wind loading scenarios. In other words, it displays the combination of external loadings and section moduli that Eq. 2 dictates would result in uniform stress along the length of the stem. The range of probable wind loadings is defined with an upper bound (red curve) of a point load applied to the top of the plant, and a lower bound (blue curve) of a uniform drag force applied to the entire length of the specimen. The graph depicts the most efficient plant tapers (white area) from the ground (x = 0) to the top of the plant specimen (x = 1), and from the section modulus at the uppermost internode of the plant specimen (y = SMtop) to the section modulus at the bottommost internode of the plant (y = SMbottom). If the section modulus of an internode falls above the red curve, then that internode will experience a lower maximum stress than the rest of the plant, as it has more structural tissue (i.e., mass) than is necessary. If the section modulus of an internode falls below the blue curve, then that internode will experience a higher maximum stress than the rest of the internodes of the plant, as it has less structural tissue than is efficient. If the section modulus of an internode falls between the red and blue curves (white area), then that internode will experience a similar level of mechanical stress as compared to the rest of the plant, and therefore has an efficient allocation of structural tissues. To determine the structural efficiency of maize plants, a select group of maize stalks was analyzed. Their major and minor diameters and rind thicknesses were measured at each internode and compared to Fig. 1. In addition, a custom optimization algorithm was employed to determine the exact drag force profile for each plant that would produce the most uniform mechanical stress possible for the given stalk structure. The details and results of these experiments are presented in the following sections.
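The two equations above are simple to evaluate for a hypothetical stalk. In the MATLAB sketch below all tapers are illustrative assumptions (not data from the study), and, following Fig. 1, the stress of Eq. 2 is evaluated for the two bounding load cases by taking the load carried at a given height to be the drag force acting above that height:

```matlab
% Minimal sketch of Eqs. 2 and 3 for one hypothetical stalk.
x  = linspace(0, 1, 100);              % normalized height along the stalk
D  = 24e-3 - 8e-3*x;                   % assumed major diameter taper [m]
d  = 20e-3 - 7e-3*x;                   % assumed minor diameter taper [m]
t  = 3e-3  - 1e-3*x;                   % assumed rind thickness taper [m]

% Eq. 3: section modulus of the elliptical cross-section at each height
Sx = (pi ./ (32*d)) .* (D.*d.^3 - (D - t).*(d - t).^3);

% Eq. 2 evaluated for the two bounding load cases of Fig. 1, assuming the
% load carried at height x is the drag force integrated above x.
F = 1;                                        % total drag force [N] (normalized)
sigma_point   = (F * ones(size(x))) ./ Sx;    % point load at the top of the plant
sigma_uniform = (F * (1 - x))        ./ Sx;   % uniform drag force along the stalk
```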
Box 1: Glossary of terms
Ap: Projected area of the structure
CD: Drag coefficient
D: Major diameter of the cross-section
d: Minor diameter of the cross-section
Df: Drag force
F0: Positive resolved force applied to the top of the specimen
f(x): Loading profile along the length of the specimen
M0: Positive resolved moment applied to the top of the specimen
Sx: Section modulus at the location x along the length of the specimen
s: Bending stress vector for the optimization algorithm
σ: Bending stress
ρ: Density of air
t: Rind thickness
u: Local wind speed
v: Residual vector for the optimization algorithm
X: Input loading vector for the optimization algorithm
x: Location along the length of the specimen
Y: Total residual scalar for the optimization algorithm
All maize specimens in this study were subjected to the following battery of tests. First, major and minor diameters of each internode were measured with calipers and internode lengths were measured with a ruler. Second, bending strength was measured in three-point bending. Third, rind thickness was measured through rind penetration tests [29]. Each stalk was then analyzed, and an optimization algorithm was employed to determine the theoretical drag force profile that would produce the most uniform stresses along the entire length of the stalk. Finally, statistical analyses were performed to investigate the relationship between each specimen's strength and its structural efficiency. Forty-eight maize hybrids, chosen to represent a reasonable portion of maize genetic diversity, were evaluated for variation in stem morphology and structural tissue distribution. The hybrids were planted at the Clemson University Simpson Research and Education Center, Pendleton, SC, in well-drained Cecil sandy loam soil. The hybrids were grown in a Randomized Complete Block Design with two replications. In each replication, each hybrid was planted in two-row plots with a row length of 4.57 m and a row-to-row distance of 0.76 m with a targeted planting density of 70,000 plants ha−1. The experiment was surrounded by non-experimental maize hybrids on all four sides to prevent any edge effects. To supplement nutrients, 56.7 kg ha−1 nitrogen, 86.2 kg ha−1 of phosphorus and 108.9 kg ha−1 potassium were added at the time of soil preparation, and an additional 85 kg ha−1 nitrogen was applied 30 days after emergence. Standard agronomic practices were followed for crop management. Stalks used for this study were harvested when all the hybrids were either at or past physiological maturity (i.e., 40 days after anthesis), determined based on our previous studies on senescence and patterns of sugar accumulation in maize [30]. Ten competitive plants (i.e., plants free of disease and of structural damage) from each replication were harvested by cutting just above ground level, stripped of all the leaves and ears, and transferred to a forced air dryer for drying at 65 °C. Some plots lacked 10 competitive plants (i.e., lacked 10 plants that were structurally robust enough to be tested) and, therefore, the total number of plants evaluated for each hybrid varied slightly. In total, 945 fully mature, dried commercial hybrid maize stalks were used in this study (48 hybrids, ~ 2 replicates, ~ 10 samples per hybrid). Three-point-bending tests Specimens were tested in three-point-bending using an Instron Universal Testing System (Instron Model # 5944, Norwood, MA). Specimens were supported at their bottom and top node, and loaded at their middle node.
Care was taken to ensure that the specimens were both loaded and supported at nodes, and that the span lengths were maximized. This was done to obtain the most natural possible failure modes [28, 33]. Specimens were loaded at a rate of 2 mm s−1 until structural failure. Additional details on the three-point-bending test protocol were documented in a previous study [27]. Short span 3-pt bend tests (i.e., testing of a single internode with supports placed at adjacent nodes) were not employed in this study as they have been shown to produce unnatural failure patterns and result in inaccurate bending strength measurements [28]. Morphology measurements Internode lengths of each specimen were measured with a ruler. Other morphology measurements were taken at the midspan of each internode of every specimen. In particular, caliper measurements were used to obtain the minor and major diameters of each internode. Rind penetration tests were used to obtain the rind thickness of each internode. Rind penetration tests were performed using an Instron universal testing machine at the midspan of each internode of every specimen. A probe was briefly forced through the specimen at a rate of 25 mm s−1, and the resulting force–displacement curve was analyzed using a custom MATLAB algorithm to calculate the rind thickness (t) of the stalk cross-section. Additional details on the rind penetration test protocol are documented in Seegmiller et al. [29]. Optimization of loading condition An optimization algorithm was employed to determine the drag force profile (f(x)) for each stem specimen that would produce the most uniform stress along the length of the stalk. As the specimens examined only spanned the bottom half of the stalk (from the ground to the ear), the loading above the ear was resolved into a single positive force (F0) and positive moment (M0) applied to the top of the specimen as described by Beer et al. [4] and shown in Fig. 2. For the hybrids investigated, the ear was an average of 46.1% (± 7.72% standard deviation; 95% confidence interval around the mean [45.46%, 46.76%]) of the way up the stalk. A loading diagram of the wind on the plant stalk with an unknown load distribution along the length of the stalk (left); the wind loading above the ear can be resolved as an unknown positive force (F0) and positive moment (M0) applied to the top cross section (right) [4] Based on this setup, we can now calculate the single positive resolved force applied to the top of the specimen (F0) and the loading profile (f(x)) for each specimen that results in the most uniform stress state in each particular specimen. This was accomplished through the use of an optimization algorithm. In particular, a custom code was developed in Matlab to perform an fmincon optimization for each stem specimen [7, 20, 32]. The optimization algorithm fmincon is a function that finds a minimum of a constrained nonlinear multivariate function. This optimization function was used to minimize the variation in mechanical stress across the length of the specimen by changing the values of the input parameters F0 and f(x) (see Fig. 3). In other words, the algorithm found the loading condition that produced the most uniform stress along the entire length of the stem. 
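A minimal sketch of such an fmincon call is given below. Here stressAlongStalk is a hypothetical placeholder for the Eq. 2 stress evaluation at the partitioned cross-sections (the study's actual implementation is not reproduced), Sx is one specimen's measured section modulus profile, the total residual is taken as the summed absolute deviation from the uniform target stress, and the tolerances follow the values quoted below:

```matlab
% Minimal sketch of the loading-profile optimization, assuming a hypothetical
% helper stressAlongStalk(X, Sx) that evaluates Eq. 2 at each cross-section.
nSec   = 100;                         % cross-sections per specimen (see below)
X0     = ones(nSec, 1);               % initial guess for [F0; f(x) at the other sections]
target = 1.00;                        % normalized uniform (target) stress

objectiveY = @(X) sum(abs(stressAlongStalk(X, Sx) - target));   % total residual Y

opts = optimoptions('fmincon', ...
    'OptimalityTolerance', 1e-6, ...
    'FunctionTolerance',   1e-6, ...
    'StepTolerance',       1e-10);

lb    = zeros(nSec, 1);               % assume non-negative drag forces
Xbest = fmincon(objectiveY, X0, [], [], [], [], lb, [], [], opts);
```

In practice the routine is restarted from several initial guesses to avoid local minima, and the resulting loading profile and residual are recorded for each specimen, as described next.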
To accomplish this, each specimen was computationally partitioned into 100 cross-sections and the optimization routine would then: (1) take in user-supplied initial estimations of F0 at the top of the specimen and f(x) at the other 99 cross-sections (100 × 1 vector X), (2) calculate bending stress at every cross section using Eq. 2 (100 × 1 vector s), (3) calculate the variance between the bending stress at each cross-section and the uniform stress state (i.e., the distance between the "Uniform Stress State" and "Stress State Resulting from fmincon Optimization Procedure" curves in Fig. 3), (100 × 1 vector v), (4) sum all the elements of vector v (i.e., the total area between the "Uniform Stress State" and "Stress State Resulting from fmincon Optimization Procedure" curves in Fig. 3) (scalar value Y), (5) iterate on X until the variance Y was minimized. The code would then give the drag force profile X that produced the most uniform stress state along the specimen (s), and the total variance in the bending stress (Y), which is a quantitative assessment of how efficiently the specimen's structural tissues (i.e., rind) were allocated. Figure 4 presents a visual depiction of this optimization procedure. The optimization routine was conducted with several different initial starting points for each specimen (i.e., initial values for F0 and f(x)) to ensure the global optimal solution was found as opposed to a local minimum. Tolerances and stopping criteria were set to 1E−6 (first order optimality tolerance), 1E−6 (function tolerance), and 1E−10 (step size tolerance) [24]. A typical specimen output, showing the stress state resulting from the fmincon optimization procedure, which attempts to create the most uniform possible stress state along the length of the specimen by altering values for F0 and f(x) (top); the analytical maize stalk model partitioned into 100 cross-sections along its length (bottom). Plant height (x-axis) is normalized to the height of the plant specimen. Maximum stress of each cross-section (y-axis) normalized to a target stress of 1.00 A flowchart of the optimization algorithm procedure Both the three-point-bending tests and rind penetration tests yielded the expected results, based on previous studies [27, 28, 33]. The three-point-bending force–deflection responses were linear in nature until failure and demonstrated failure patterns that occur in naturally lodged maize plants [28, 33]. The rind penetration tests gave results characteristic of the protocol. The bending strength of each specimen, as well as the section modulus, length, major diameter, minor diameter, and rind thickness of each specimen internode is presented in Fig. 5. The geometric characterization of all 945 specimens; histograms of the specimen strengths (a) and specimen lengths (b); plots of the section modulus (c), rind thickness (d), minor diameter (e), and major diameter (f) along the lengths of each specimen Structural efficiency Section modulus values for each stalk were analyzed to determine structural efficiency (i.e., how structurally efficient the taper of each stalk was). The median taper of the population was calculated by determining the median section moduli at each specimen height, among all specimens. It was found that the median taper of all stalks demonstrated an efficient allocation of structural tissues for probable wind loadings (see Fig. 6). However, many internodes fell well outside the range of structural efficiency (i.e., outside of the white area in Fig. 6). 
In particular, 35% of the measured internodes in the study fell within the most efficient range, 38% of measured internodes fell below the blue curve (too little structural tissue), and 27% of the measured internodes fell above the red curve (too much structural tissue). The measured section moduli of the specimens as compared to the region of structural efficiency. Section modulus (y-axis) is normalized by the difference between the section modulus at the top internode (SMtop) and the bottom internode (SMbottom) Optimal drag force profile for each stalk The optimization procedure was performed on all 945 stalks. The fmincon procedure successfully determined the drag force profile (X) that produced the most uniform stress state for each specimen. Figure 7 depicts histograms of the resulting stress states of the specimens. In particular, the overall average stress along the length of each specimen (n = 945) and the stress at every cross-section of each specimen (n = 94,500) is presented in Fig. 7. To enable all specimens to be plotted on the same graph the stress of each specimen/cross-section was normalized to a target stress of 1.00. In other words, a stress state different than a stress of 1.00 represents a suboptimal allocation of structural tissues. A histogram of the average stress along the length of each of the 945 specimens after optimization, n = 945 (left); a histogram of the average stress at all 100 cross-sections along the length of each of the 945 specimens after optimization, n = 94,500 (right). All stress are normalized to a target stress of 1.00. Thus, stress states that deviate from 1.00 represent a suboptimal structural allocation of biomass Are stronger stalks more efficient? To test the hypothesis that stronger plants allocate structural tissues more efficiently (i.e., they produce uniform stresses under probable wind loading scenarios), a series of statistical analyses were performed on the data. Figures 8 and 9 provide a depiction of the variation (via boxplots) in bending strength and structural efficiency measurements stratified by hybrid type. Figures 10 and 11 provide the same but further stratified by hybrid type and replication. As discussed previously, the level of structural efficiency can be determined by calculating the area between the curves shown in Fig. 3 (Y). For example, a Y value of zero represents a perfectly efficient structure, and the larger the Y value, the less efficiently the stalk tissues are organized. These figures indicate substantial variation in bending strength and structural efficiency across hybrids. Moreover, these measures vary within hybrid type; i.e., across replication. 
To formally examine this, we posit a model of the following form: $$Y_{i} = \beta_{0} + \sum h_{ij} \beta_{j} + r_{i} \alpha_{0} + \sum r_{i} h_{ij} \alpha_{k} + \varepsilon_{i} ,$$ where \(Y_{i}\) is the response variable of interest (e.g., log transformed bending strength (log(σ)) and log transformed efficiency (log(Y))) as measured on the ith observation, \(\beta_{0}\) is an intercept parameter, \(h_{ij}\) is a dummy variable that encodes the hybrid type (i.e., if the ith observation was taken on a plant belonging to the jth hybrid then \(h_{ij} = 1\) and \(h_{{ij^{\prime}}} = 0\) for all \(j^{\prime} \ne j\)), \(\beta_{j}\) is the effect associated with the jth hybrid, \(r_{i}\) is a dummy variable that encodes replication (i.e., if the ith observation was taken on a plant grown in replication 1 then \(r_{i} = 1\) and \(r_{i} = 0\) otherwise), \(\alpha_{0}\) is the replicate effect, and \(\varepsilon_{i}\) is the error term. To avoid identifiability issues, the dummy variables were constructed with respect to a chosen hybrid baseline. Tables 1 and 2 summarize the findings of this analysis. In particular, these tables display the ANOVA results as obtained from the anova function in R; which present the usual sequential sums of squares, where p-values are for the tests that compare the models against one another in the order specified. From these results we find that hybrid type (p-value ≤ 2e−16), replicate (p-value = 0.000285), and their interaction (p-value ≤ 2e−16) are highly significant for log-strength. Similarly, for log-efficiency we find that both hybrid type (p-value = < 2e−16) and the interaction term between hybrid type and replicate (p-value = 1.46e−05) are highly significant, while replicate (p-value = 0.0882) is not. These findings indicate that both genetics (i.e., hybrid type) and environment (i.e., plot location) are associated with bending strength and structural efficiency. It is worthwhile to point out that standard model diagnostics (e.g., residual plots, QQ-plots, etc.) were conducted to assess the validity of each of these models, as well as those discussed below. Boxplot of bending strength, stratified by hybrid type Boxplot of structural efficiency, stratified by hybrid type Boxplot of bending strength, stratified by hybrid type and replication Boxplot of structural efficiency, stratified by hybrid type and replication Table 1 Results of analysis for log(strength) Table 2 Results of analysis for log(Y) Delving deeper, we consider the regression analysis of log-efficiency (denoted \(Y_{i}\)) under the following multiple linear regression model: $$Y_{i} = \beta_{0} + S_{i} \delta_{1} + S_{i}^{2} \delta_{2} + \sum h_{ij} \beta_{j} + r_{i} \alpha_{0} + \sum r_{i} h_{ij} \alpha_{k} + \varepsilon_{i} ,$$ where \(S_{i}\)(\(S_{i}^{2}\)) denotes the strength (squared) of the ith observation and \(\delta_{1}\)(\(\delta_{2}\)) denotes the corresponding effect size. Note, strength was entered into the model in a quadratic form due to the findings of model diagnostics. Table 3 presents a summary of this analysis. This table displays the ANOVA results as obtained from the anova function in R. From these results we see that strength is significantly related to efficiency, even when controlling for hybrid type, replication, and their interaction. 
When conducting the individual tests of significance (results not shown) about \(\delta_{1}\) and \(\delta_{2}\) we obtain p-values of < 2e−16 and 1.81e−07, respectively; i.e., strength seems to be significantly (quadratically) related to efficiency, while controlling for hybrid type, replication, and their interaction. Further, we see that hybrid type, replicate, and their interaction continue to be highly significant for log-efficiency. This tends to suggest that other genetic components and environmental conditions might be at play with respect to fully explaining the structural efficiency of maize stalks. The \(R^{2}\) (adjusted \(R^{2}\)) for this model was 0.5139 (0.4493). Table 3 Results of analysis for log(Y) regression For completeness, we also conducted the regression analysis of the section modulus of the 2nd and 3rd internode down from the ear. This analysis was done using a model of the form: $$Y_{i} = \beta_{0} + \sum h_{ij} \beta_{j} + r_{i} \alpha_{0} + \sum r_{i} h_{ij} \alpha_{k} + \varepsilon_{i} ,$$ where \(Y_{i}\) again represents the response of interest (i.e., log transformed section modulus of the 2nd and 3rd internode) as measured on the ith observation, with all other variables being defined as above. Tables 4 and 5 report the results of the analysis of the section modulus of the 2nd and 3rd internodes, respectively. From these results, we find that hybrid type and the interaction term between hybrid type and replicate are highly significant, while replicate is not, for the log-section modulus of both internodes. Table 4 Results of analysis for log(section modulus) for internode 2 Table 5 Results of analysis for log(section modulus) for internode 3 Improving structural efficiency in maize plants could simultaneously enhance yield and stalk lodging resistance. However, there have been no previous investigations of structural efficiency in maize stalks. Consequently, plant scientists have not directly bred or managed for plants that are structurally optimized. Results from this study suggest that the majority of modern maize hybrids may possess suboptimal stalk structures (see Fig. 6). In other words, most maize plants utilize bioenergy and structural biomass inefficiently. This reduces the amount of potential biomass and bioenergy available for grain filling (i.e., lowers harvest index) and simultaneously makes stalks more susceptible to stalk lodging. Of the 94,500 cross-sections analyzed in this study 65% were structurally suboptimal, with 38% having too little structural tissue and 27% having too much structural tissue. Analysis of the drag force profiles for each specimen that would produce the most uniform stress in the specimen revealed that the resolved force F0 was far larger than the drag force profile below the ear (see Fig. 12). These data imply that stalks allocate structural tissues for wind loading that primarily occurs above the ear (e.g., the drag force increases exponentially with height). This does not imply that there is no wind below the ear, but that the drag force (determined by the local wind speed, projected stalk and leaf area, and drag coefficient) is much less below the ear as compared to the drag force above the ear. Note that this does not imply the bending stresses are lower at the base of the stalk. Bending stresses are determined by forces (i.e., f(x) and F0) and moment arms (i.e., the distance at which the force is applied). Thus, bending stresses are always higher at the base of the stalk even if the drag force profile is lower at the base of the stalk.
The drag force profile for all 945 specimens (f(x) and F0) (top); a loading diagram of f(x) and F0 (bottom). The 75th and 25th percentile values are generally line-on-line with the median values along the length of the specimen. Plant height (x-axis) is normalized to the height of the plant specimen. Force (y-axis) is normalized to the maximum calculated force Three-point bending tests are the most commonly employed test to quantify bending strength in plant stalks [28, 33, 34]. However, results from this study highlight several shortcomings of the three-point-bending test approach. In particular, most plant stalks are tapered and researchers typically opt to place the loading anvil from a three-point-bending test at the same anatomical location for each stem specimen in a given study (e.g., the third internode). Thus, the failure location is artificially imposed by the researcher since failure always occurs near the loading anvil, whereas in nature, the failure location is determined by local material weakness and imperfections (i.e., suboptimal allocation of structural tissues). By artificially imposing the failure location the researcher is inducing failure in a cross-section that may have more structural tissue than is optimal in some specimens and less structural tissue than is optimal in other specimens. This confounds comparisons of bending strength among different specimens in a given study, as the measured bending strength could vary substantially for any given specimen depending on the structural optimality of the failed cross-section. A better approach is to apply bending loads that replicate natural loading patterns. Such loading conditions produce natural failure types and failure patterns in plant specimens (i.e., failure occurs at the cross-section with the least optimal allocation of structural tissue). Several devices have been recently developed which accomplish this task [5, 8, 12, 18, 19,20,, 21, 30]. In particular, they utilize the natural anchoring of the maize roots and apply a point load to a cross-section near the ear (very similar to the loading profile shown in Fig. 12). Thus these devices simulate the loading conditions experienced by plants in their natural environment and consequently produce natural failure types and patterns [8]. These devices are therefore expected to provide more distinguishing power than three-point-bending test methods. This study elucidates the direct connection between stalk geometry (i.e., rind thickness and diameter) and stalk lodging resistance and demonstrates that key insights into stalk lodging resistance can be obtained through morphological phenotyping. However, most current morphological phenotyping tools require destructive sectioning and imaging procedures that induce plant fatality. These tools therefore prevent measurement of other important crop breeding metrics such as yield. Several nondestructive methods of measuring stalk geometry (i.e., rind thickness and diameter) have been developed (e.g., x-ray computed tomography) but these methods are usually limited to laboratory or greenhouse settings and cannot easily be implemented in an agricultural field setting (e.g., [23, 27, 29]). Future studies seeking to develop high throughput phenotyping methods capable of economically measuring internal and external stalk morphologies in the field are warranted and will lead to increased understanding of stalk lodging resistance. 
Results showed that lodging resistant hybrids (i.e., those with higher average bending strengths) were more structurally efficient than hybrids that were weaker. The hybrids with higher average bending strengths also displayed less plant to plant variation in structural efficiency. In other words, strong hybrids were more structurally optimized and more consistently optimized than weaker plants (i.e., demonstrated less plant to plant variability). These same findings are true when analyzing individual plants. For example, if the hybrid factor is ignored and each stalk is analyzed as an individual specimen (i.e., no averaging of results across hybrids) the stronger stalks were more structurally efficient than weaker stalks. These results are likely due in part to breeding techniques used in the past. In particular, applied selective breeding pressure based on counts of lodged stalks at harvest time is expected to produce hybrids that are both strong and exhibit minimal plant to plant variance in strength. That is to say that a variety with high average strength but also high standard deviation in strength will have higher lodging rates than a variety with a similar average strength but a lower standard deviation in strength. In this study, an optimization routine was used to determine the wind loading profile that would produce the most uniform mechanical stresses along the length of maize stalks. Based upon Fig. 12, which depicts the single resolved load (F0) far exceeding the rest of the loading profile (f(x)), it was found that maize stalks are structurally optimized for wind loadings that occur primarily above the ear. This is consistent with the authors' observations in field conditions; although plants at the border of the field may experience loading along the full length of the stalk, the majority of maize plants appear to be primarily subjected to wind loads at or above the ear. The optimization method used in this study was robust and has the potential to be applied to other plants in which the structure of the plant may be predictive of its loading environment. Using optimization methods to infer a plants wind loading environment has several advantages over traditional measurement techniques used to determine wind loads on plants. In particular, it is computationally efficient (as compared to fluid–structure interaction models) and can infer the aggregate loading over time, taking into account the wind profile and fluid–structure interaction between the wind and the plant stalk. The primary limitation of the current study is that the rind of the stalk was assumed to be a homogeneous, isotropic, linear elastic material subjected to pure bending. The inclusion of heterogeneity, anisotropy, non-linear material properties, and the addition of the pith material could change the behavior of the analyzed stalks. However, previous research has shown the inclusion of these effects to be small and in many cases insignificant. The authors do not believe inclusion of such effects would change the overall conclusions of this paper [1, 37]. This study is deliberately limited in its scope to structural efficiency. Other abiotic and biotic considerations can affect stalk morphology/anatomy and should be considered in future studies [31]. In addition, this study utilized three-point-bending test to measure the bending strength of stem specimens. However, as mentioned previously these tests are less than ideal. 
Unfortunately, at the time the study was conducted by the authors we were not fully aware of the limitations of three-point-bending test. In particular, we did not expect to find that the majority of maize stalk cross-sections are structurally suboptimal. Future studies are warranted which utilize in-field phenotyping devices [12] to assess structural efficiency and its relations to stalk strength, harvest index, etc. The assumption of the wind speed acting in the same direction along the length of the stalk is not necessarily valid [14], and as such was a key assumption made in this study. This assumption was used for this study, to avoid the trivial solution during the fmincon optimization routine, i.e., loading that frequently alternates directions to artificially reduce the residual values down to nearly zero in all cases. However, further improvements to the optimization parameters may yield more refined results in the future. The modeling of maize stems as a structure comprised of a single homogeneous linear elastic orthotropic material is a simplifying assumption. Further analysis, including the inclusion of the pith material or the refinement of the model to incorporate the heterogeneous nature of the rind and pith tissue [35] would provide further insights into the structural efficiency of maize stems. The morphology of physiological mature maize stalks was characterized, and the loading environments that result in the most uniform maximum stresses along the length of maize stalk were investigated. It was found that maize stalks are morphologically organized to resist wind loading that occurs primarily above the ear. It was also found that plants with higher bending strengths were more structurally efficient than weaker plants. However, even strong plants allocated structural tissues in a suboptimal manner. There exists much room for improvement in the area of structural optimization of maize stalks. These findings are relevant to crop management and breeding studies seeking to improve stalk lodging resistance. The data sets obtained and analyzed during the current study are available from the corresponding author upon reasonable request. Al-Zube L, Sun W, Robertson D, Cook D. The elastic modulus for maize stems. Plant Methods. 2018;14:11. Baker CJ. The development of a theoretical model for the windthrow of plants. J Theor Biol. 1995;175(3):355–72. Baker CJ, Sterling M, Berry P. A generalized model of crop lodging. J Theor Biol. 2014;363:1–12. Beer FP, Johnston E, Dewolf JT. Mechanics of materials. 3rd ed. New York: McGraw-Hill; 2002. Berry PM, Spink JH, Gay AP, Craigon J. A comparison of root and stem lodging risks among winter wheat cultivars. J Agric Sci. 2003;141:191–202. Berry P, Sylvester-Bradley R, Berry S. Ideotype design for lodging-resistant wheat. Euphytica. 2007;154:165–79. Chuan W, Lei Y, Jianguo Z. Study on optimization of radiological worker allocation problem based on nonlinear programming function-fmincon. In: Proceedings of the 2014 IEEE international conference on mechatronics and automation. IEEE, 2014. p. 1073–1078. Cook DD, de la Chapelle W, Lin T-C, Lee SY, Sun W, Robertson DJ. DARLING: a device for assessing resistance to lodging in grain crops. Plant Methods. 2019;15:102. Cionco RM. A mathematical model for air flow in a vegetative canopy. J Appl Meteorol. 1965;4:517–22. Dudley JW. Selection for rind puncture resistance in two maize populations. Crop Sci. 1994;34:1458–60. Echezona BC. 
Corn-stalk lodging and borer damage as influenced by varying corn densities and planting geometry with soybean (Glycine max. L. Merrill). Int Agrophys. 2007;21:133–43. Erndwein L, Cook D, Robertson D, Sparks E. Field-based mechanical phenotyping of cereal crops to assess lodging resistance. 2019. arXiv:1909.08555. Esechie HA. Relationship of stalk morphology and chemical composition to lodging resistance in maize (Zea mays L.) in a rainforest zone. J Agric Sci. 1985;104:429–33. Finnigan JJ. Turbulence in waving wheat. Bound Layer Met. 1978;16:181–211. Finnigan J. Turbulence in plant canopies. Annu Rev Fluid Mech. 2000;32:519–71. Flint-Garcia SA, Jampatong C, Darrah LL, McMullen MD. Quantitative trait locus analysis of stalk strength in four maize populations. Crop Sci. 2003;43:13–22. Grafius JE, Brown HM. Lodging resistance in oats. Agron J. 1954;46:414–8. Guo Q, Chen R, Ma L, Sun H, Weng M, Li S, Hu J. Classification of corn stalk lodging resistance using equivalent forces combined with SVD algorithm. Appl Sci. 2019;9:640. Guo Q, Chen R, Sun X, Jiang M, Sun H, Wang S, Ma L, Yang Y, Hu J. A non-destructive and direction-insensitive method using a strain sensor and two single axis angle sensors for evaluating corn stalk lodging resistance. Sensors. 2018;18:1852. Han SP. A globally convergent method for nonlinear programming. J Optim Theory Appl. 1977;22(3):297–309. Heuschele DJ, Wiersma J, Reynolds L, Mangin A, Lawley Y, Marchetto P. The stalker: an open source force meter for rapid stalk strength phenotyping. HardwareX. 2019;6:e00067. Holbert JR, Burlison WL, Biggar HH, Koehler B, Dungan GH, Jenkins MT. Early vigor of maize plants and yield of grain as influenced by the corn root, stalk, and ear rot diseases. J Agric Res. 1923;23:0583–630. Mairhofer S, Zappala S, Tracy SR, Sturrock C, Bennett M, Mooney SJ, Pridmore T. RooTrak: automated recovery of three-dimensional plant root architecture in soil from X-ray microcomputed tomography images using visual tracking. Plant Physiol. 2012;158:561–9. MATLAB. Find minimum of constrained nonlinear multivariable function—MATLAB. 2020. http://www.mathworks.com/help/optim/ug/fmincon.html. Niklas KJ. Computing factors of safety against wind-induced tree stem damage. J Exp Bot. 2000;51:797–806. Niklas KJ, Spatz H. Methods for calculating factors of safety for plant stems. J Exp Biol. 1999;202:3273–80. Robertson DJ, Julias M, Lee SY, Cook DD. Maize stalk lodging: morphological determinants of stalk strength. Crop Sci. 2017;57:926. Robertson DJ, Smith SL, Cook DD. On measuring the bending strength of septate grass stems. Am J Bot. 2015;102:5–11. Seegmiller WH, Graves J, Robertson DJ. A novel rind puncture technique to measure rind thickness and diameter in plant stalks. Plant Methods. 2020;16:44. Sekhon R, Joyner C, Ackerman A, McMahan C, Cook D, Robertson D. Stalk bending strength is strongly associated with maize stalk lodging incidence across multiple environments. Field Crops Res. 2020;249:107737. Sekhon R, Saski C, Kumar R, Flinn B, Luo F, Beissinger T, Ackerman A, Breitzman M, Bridges W, de Leon N, Kaeppler S. Integrated genome-scale analysis identifies novel genes and networks underlying senescence in maize. Plant Cell. 2019;31(9):1968–89. Sreeraj P, Kannan T, Maji S. Prediction and optimization of weld bead geometry in gas metal arc welding process using RSM and fmincon. J Mech Eng Res. 2013;5(8):154–165. Stubbs C, Baban N, Robertson D, Al-Zube L, Cook D. Bending stress in plant stems: models and assumptions. In: Geitmann A, Gril J, editors. 
Plant biomechanics—from structure to function at multiple scales. Berlin: Springer; 2018. p. 49–77. Stubbs CJ, Larson R, Cook DD. Maize stem buckling failure is dominated by morphological factors. BioRxiv. 2019. p. 833863. Stubbs CJ, Larson R, Cook DD. Mapping spatially distributed material properties in finite element models of plant tissue using computed tomography. 2020. bioRxiv. Susko A. Deciphering lodging resistance in oat and other cereal crops. 2019. Von Forell G, Robertson D, Lee SY, Cook DD. Preventing lodging in bioenergy crops: a biomechanical analysis of maize stalks suggests a new approach. J Exp Bot. 2015;66:4367–71. Wen W, Gu S, Xiao B, Wang C, Wang J, Ma L, Wang Y, Lu X, Yu Z, Zhang Y. In situ evaluation of stalk lodging resistance for different maize (Zea mays L.) cultivars using a mobile wind machine. Plant Methods. 2019;15:1–16. Yi C. Momentum transfer within canopies. J Appl Meteorol Climatol. 2008;47:262–75. Young WC, Budynas RG. Roark's formulas for stress and strain. New York: McGraw-Hill; 2002. Zienkiewicz O, Taylor R, Nithiarasu P. The finite element method for fluid dynamics. Amsterdam: Elsevier; 2014. The authors would like to acknowledge the field staff at Clemson University who maintained the research plots. This work was supported by Grants from the National Science Foundation (#1826715), and the USDA-National Institute of Food and Agriculture (#2016-67012-28381). Any opinions, findings, conclusions, or recommendations are those of the author(s) and do not necessarily reflect the view of the funding bodies. Department of Mechanical Engineering, University of Idaho, 875 Perimeter Dr. MS0902, Moscow, ID, 83844, USA Christopher J. Stubbs, Kate Seegmiller & Daniel J. Robertson School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC, 29634, USA Christopher McMahan Department of Genetics and Biochemistry, Clemson University, Biosystems Research Complex, Clemson, SC, 29634, USA Rajandeep S. Sekhon Christopher J. Stubbs Kate Seegmiller Daniel J. Robertson All authors were fully involved in the study and preparation of the manuscript. All authors read and approved the final manuscript. Correspondence to Daniel J. Robertson. Stubbs, C.J., Seegmiller, K., McMahan, C. et al. Diverse maize hybrids are structurally inefficient at resisting wind induced bending forces that cause stalk lodging. Plant Methods 16, 67 (2020). https://doi.org/10.1186/s13007-020-00608-2 Received: 30 January 2020 Strength stress
Neuroinformatics December 2011, Volume 9, Issue 4, pp 401–425 Inverse Current Source Density Method in Two Dimensions: Inferring Neural Activation from Multielectrode Recordings Szymon Łęski Klas H. Pettersen Beth Tunstall Gaute T. Einevoll John Gigg Daniel K. Wójcik The recent development of large multielectrode recording arrays has made it affordable for an increasing number of laboratories to record from multiple brain regions simultaneously. The development of analytical tools for array data, however, lags behind these technological advances in hardware. In this paper, we present a method based on forward modeling for estimating current source density from electrophysiological signals recorded on a two-dimensional grid using multi-electrode rectangular arrays. This new method, which we call two-dimensional inverse Current Source Density (iCSD 2D), is based upon and extends our previous one- and three-dimensional techniques. We test several variants of our method, both on surrogate data generated from a collection of Gaussian sources, and on model data from a population of layer 5 neocortical pyramidal neurons. We also apply the method to experimental data from the rat subiculum. The main advantages of the proposed method are the explicit specification of its assumptions, the possibility to include system-specific information as it becomes available, the ability to estimate CSD at the grid boundaries, and lower reconstruction errors when compared to the traditional approach. These features make iCSD 2D a substantial improvement over the approaches used so far and a powerful new tool for the analysis of multielectrode array data. We also provide a free GUI-based MATLAB toolbox to analyze and visualize our test data as well as user datasets. Current source density Local field potentials Evoked potentials Inverse problems Rat Hippocampus Subiculum Cortical model To understand the brain at the systems level one needs precise information regarding the spatial and temporal activation of different neuronal populations. The recent developments in multielectrode construction have opened new possibilities for the electrophysiologist, providing the means to record extracellular potentials at tens to hundreds of closely spaced positions simultaneously (Csicsvari et al. 2003; Barthó et al. 2004; Buzsáki 2004; Blanche et al. 2005; Du et al. 2008). To take full advantage of this new technology, new data analysis methods must be developed to extract useful and precise information from the masses of data that we can now record relatively easily. The high-frequency part of extracellular signals contains information about firing of action potentials in neurons within a distance of 0.1 mm or so around the individual recording contacts (Buzsáki 2004; Pettersen and Einevoll 2008). The low-frequency part, the local field potential (LFP), thought to be generated by synaptic inputs and their dendritic return currents, has a larger spatial spread due to volume conduction (Mitzdorf 1985; Pettersen et al. 2008; Katzner et al. 2009; Xing et al. 2009). The standard method of analysis for LFP has been to estimate the current-source density (CSD), i.e., the net volume density of current entering or leaving the extracellular tissue (Lorente de No 1947; Pitts 1952; Plonsey 1969; Nicholson and Freeman 1975; Freeman and Nicholson 1975; Mitzdorf 1985).
The Poisson equation provides the connection between the extracellular potential Φ and the current source density C under the assumption of passive spread in an homogeneous and isotropic medium: σ ΔΦ = –C, where Δ is the Laplace operator. To estimate C one may use the numerical second derivative in place of the Laplace operator (Pitts 1952; Nicholson and Freeman 1975; Freeman and Nicholson 1975). This approach has been commonplace in the analysis of recordings from linear (laminar) electrodes inserted perpendicular to cortical layers (e.g. Haberly and Shepherd 1973; Mitzdorf 1985; Schroeder et al. 1992; Ylinen et al. 1995; Lakatos et al. 2005; Lipton et al. 2006; Rajkai et al. 2008; de Solages et al. 2008). In this setting one has assumed that the extracellular potential (and by implication the current-source density) is constant in the lateral directions, an approximation that cannot always be justified (Nicholson and Freeman 1975; Pettersen et al. 2006; Einevoll et al. 2007; Pettersen et al. 2008; Katzner et al. 2009; Xing et al. 2009). Another problem with the standard numerical derivative approach is that it is impossible to estimate CSD at the outermost electrode contact positions unless one makes assumptions unjustified with complex electrode geometries (Vaknin et al. 1988; Łęski et al. 2007). Giving up the boundary becomes a particularly severe issue in two- and three-dimensional electrode geometries where the relative number of such boundary contacts becomes large (Łęski et al. 2007). The estimation of current sources from recorded potentials has a long history in the interpretation of EEG (Guljarani 1998; He and Lian 2005; Nunez and Srinivasan 2006) and ECoG signals (e.g. Freeman 1980; Zhang et al. 2008). The inverse methods involve calculating a forward model of propagation of electric fields from the sources inside the brain to the recording electrodes on the scalp (EEG) or cortical surface (ECoG). The model is then inverted to estimate the sources from the recorded potentials. For these signals, which are recorded outside the source region, one common approach is to assume their sources to be a small number of mesoscopic dipoles located so far away from the electrode contacts that the far-field approximation can be evoked. However, more general source distributions have also been considered (for review see e.g. He and Lian 2005; Nunez and Srinivasan 2006). For multielectrode contacts positioned inside neural tissue in the immediate vicinity of the neuronal sources, the far-field dipole approximation for calculating the forward solution is not applicable. Moreover, the approximation where one assumes the neuronal sources to be built up of pairs of two balanced current monopoles is unsuitable. In Lindén et al. (2010) (see Fig. 6) it was shown that both of these simplified source representations provided poor approximations of the local field potential (LFP) generated by neurons within distances of a millimeter or so. A better source representation for this situation is the continuous current-source density, and we have developed a forward-inverse scheme for CSD estimation called the inverse current source density (iCSD) method (Pettersen et al. 2006; Łęski et al. 2007; Wójcik and Łęski 2009). A main advantage of the iCSD method compared to the standard CSD analysis is that assumptions about the geometry of the CSD as well as electrical boundary conditions can be incorporated explicitly in the estimator. 
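On a rectangular grid this traditional estimate amounts to a discrete in-plane Laplacian of the potentials. The MATLAB sketch below is only an illustration of that idea (it is not code from the toolbox described below); following Vaknin et al. (1988), the grid is padded by replicating the boundary potentials so that an estimate can also be produced at the outermost contacts:

```matlab
% Minimal sketch of the traditional CSD estimate on a 2D grid.
% Phi is an ny-by-nx matrix of potentials, dx and dy the contact spacings,
% sigma the assumed (homogeneous, isotropic) conductivity.
PhiPad = Phi([1 1:end end], [1 1:end end]);   % replicate boundary rows/columns
d2x = (PhiPad(2:end-1, 3:end) - 2*PhiPad(2:end-1, 2:end-1) ...
       + PhiPad(2:end-1, 1:end-2)) / dx^2;
d2y = (PhiPad(3:end, 2:end-1) - 2*PhiPad(2:end-1, 2:end-1) ...
       + PhiPad(1:end-2, 2:end-1)) / dy^2;
CSDstd = -sigma * (d2x + d2y);                % C = -sigma * (in-plane Laplacian)
```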
The iCSD method assumes a specific form of the distribution of the current sources, for instance, linearly varying between the recording points. It connects parameters of the CSD with the potentials at the grid through forward calculation of the potential generated by the assumed distribution. While the idea is general and applicable to different geometries, until now it has been developed only for recordings with laminar (linear) electrodes (Pettersen et al. 2006) and recordings on a three-dimensional grid of electrode positions (Łęski et al. 2007). Given the growing availability of two-dimensional electrode arrays, such as multi-shank laminar electrodes (Buzsáki 2004), it is presently important to provide efficient and general methods for CSD estimation from such recordings. Since the 3D iCSD cannot be applied directly to such recordings (as it requires at least a few contacts in all three spatial directions) in the present paper we develop a specific iCSD framework for the analysis of data from two-dimensional multielectrode arrays. This 2D iCSD method is validated on LFP data generated by several types of test sources where we know the true underlying source distribution and, thus, can make a quantitative assessment of the estimation accuracy. We first consider various types of Gaussian current sources, and then move on to analyze LFP generated by synaptically activated model populations consisting of about one thousand morphologically-reconstructed layer 5 neocortical pyramidal neurons (Mainen and Sejnowski 1996; Pettersen et al. 2008). For all tests we compare our iCSD estimates with estimates from the traditional CSD method. The proposed framework is further illustrated by application to a set of recordings with an 8 × 8 multi-shank electrode from the rat subiculum. A key feature of the iCSD method is the explicit incorporation of assumptions regarding geometries of the underlying CSD sources. For the present situation with a 2D multi-electrode grid, a key parameter is the assumed spatial spread of CSD in the direction perpendicular to that grid (the standard CSD method neglects the variation of extracellular potential in this direction, an assumption that is physically unrealistic in that it does not correspond to any known CSD distribution). This is also the main difference between the 2D and the 3D iCSD; in the 3D case the behavior of CSD in the direction perpendicular to the main plane is calculated, whilst here it must be modeled. Usually, it is not clear a priori, which value of this parameter leads to optimal reconstruction. To circumvent this we found it useful to make iCSD estimates assuming different parameter values, and investigate how the salient features of the CSD pattern depend on the choice made (see Freeman 1980 for similar analysis of the influence the "focal depth" parameter on the analysis of ECoG data). To facilitate this approach we have developed a MATLAB toolbox with a simple graphical user interface to allow users to easily and rapidly investigate the dependency of the 2D iCSD estimates on both this parameter as well as the choice of boundary conditions. The toolbox bundled with three of the datasets used in this paper is provided under GNU General Public License v.3 or later and is available from the INCF software repository, http://software.incf.org/. The software and data can be used in published research provided this article is cited. 
Inverse Current Source Density in Two Dimensions Suppose that we measure the electric potential Φ at N points \(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N\). Potentials are functions of the density of current sources C according to the Poisson equation (Nicholson and Freeman 1975): $$ \sigma \, \Delta \Phi = -C, $$ where σ is the conductivity of the tissue, here assumed to be homogeneous and isotropic. The main idea behind iCSD is to assume a specific distribution C(x,y,z) of the current sources from a class parameterized with parameters \(\mathbf{C} = [C_1, C_2, \ldots, C_N]\). Once a model of CSD is assumed, it is a simple matter to evaluate the potentials, which would be measured at any point in space, by summing the contributions from every point source: $$ \Phi(x,y,z) = \int \frac{C(x', y', z', \mathbf{C})\, dx'\, dy'\, dz'}{4\pi\sigma\sqrt{(x - x')^2 + (y - y')^2 + (z - z')^2}} $$ This leads to the relation $$ \Phi = \left[\Phi(\mathbf{x}_1), \Phi(\mathbf{x}_2), \ldots, \Phi(\mathbf{x}_N)\right] = F[\mathbf{C}], $$ and if we choose the appropriate model of the CSD distribution, the operator F is linear and can be inverted. We thus obtain values of the N parameters in terms of the measured potentials: $$ \mathbf{C} = F^{-1}[\Phi], $$ which gives the model CSD in its whole domain. In this work we consider potentials measured on a two-dimensional regular grid of electrodes of \(N = n_x n_y\) points. Let us position the axes so that the plane of the grid corresponds to z = 0 and the electrode positions are $$ (x,y,z) = (m\,\Delta x,\, n\,\Delta y,\, 0), $$ where Δx and Δy are the spacings in the x and y directions, \(m = 1, \ldots, n_x\), \(n = 1, \ldots, n_y\). We can uniquely decompose the CSD that generated the measured potentials into parts symmetric and anti-symmetric in z $$ C(x,y,z) = C_S(x,y,z) + C_A(x,y,z), $$ where \( C_S(x,y,z) = \left(C(x,y,z) + C(x,y,-z)\right)/2 \), \( C_A(x,y,z) = \left(C(x,y,z) - C(x,y,-z)\right)/2 \). Substitution into Eq. 1b shows that the potentials measured in the plane of the grid come only from the symmetric part, so this is the only thing we can hope to reconstruct. Our goal is to recover \(C_S(x,y,0)\). In the generic situation, we have no additional information about the distribution of CSD along the z axis, apart from the fact that it has finite extent. Therefore, we make the simplest possible assumption of a model CSD distribution which is a product of an a priori unknown two-dimensional profile c(x,y) and a specific symmetric profile in the perpendicular direction, i.e. $$ C(x,y,z) = C_S(x,y,z) = c(x,y)\, H(z). $$ It is convenient to normalize the z-profile H(z) so that H(0) = 1. The proposed approach would work best if the anatomy of the probed volume suggests such a product structure of the CSD, for example, in the case of a grid inserted perpendicularly to a layered brain structure. This is similar to the one-dimensional case where the most meaningful estimates of CSD can be obtained in laminar structures, such as cortex (Nicholson and Freeman 1975; Pettersen et al. 2006). If substantial extra knowledge of the anatomy of the neighboring tissue or of the processes taking place is available, one can develop specific variants of the method building more complex models of the assumed CSD.
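The forward calculation in Eq. 1b can be evaluated numerically for any assumed source distribution. The MATLAB sketch below does so for a single three-dimensional Gaussian test source (an illustrative choice, not one of the test sets used later); the 1/r singularity at the electrode position is integrable, so adaptive quadrature can handle it:

```matlab
% Minimal sketch of the forward model (Eq. 1b): the potential at one
% electrode position generated by an assumed CSD distribution.
sigma = 0.3;                                              % conductivity [S/m]
csd   = @(x,y,z) exp(-(x.^2 + y.^2 + z.^2)/(2*0.2^2));    % assumed Gaussian source
xe = 0.2; ye = 0; ze = 0;                                 % electrode position (grid plane)
integrand = @(x,y,z) csd(x,y,z) ./ ...
    (4*pi*sigma*sqrt((xe - x).^2 + (ye - y).^2 + (ze - z).^2));
phi = integral3(integrand, -1, 1, -1, 1, -1, 1);          % potential at the electrode
```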
In general, however, we believe the assumptions we make are reasonable and, as our tests in the following pages show, the proposed approach does work adequately on a number of model sources. For H(z) we take here either the step function, H(z) = 1 for − h ≤ z ≤ h and H(z) = 0 otherwise, or a Gaussian: H(z) = exp(−z 2 /2h 2 ). We choose the normalization H(0) = 1 so that c(x,y) = C(x,y,0). The function c(x,y) is in the simplest case an interpolation between the nodes. This could be either nearest neighbor interpolation (we set the value at any spatial point equal to the value at the nearest node), linear interpolation (the function describing values between the nodes is piecewise linear), or spline interpolation (which uses third-degree polynomials to produce distributions which vary more smoothly than in the linear case). In the simplest case we set c(x,y) to zero for positions outside the grid (we will refer to this case as "no boundary conditions"). We showed for the three-dimensional case (Łęski et al. 2007) that the "no boundary conditions" assumption can lead to large reconstruction errors if some of the sources generating the fields are located outside the grid. This is because the reconstructed CSD tries to compensate for the effect of external sources with components within the grid, which leads to artifacts. One possible solution is to consider an extended layer of grid points, as shown in Fig. 1. Then at each of these additional points we can either (i) set the CSD to zero (which we denote by B boundary conditions), or (ii) copy the value from the nearest point of the original grid (denoted by D). For the latter case the values at additional points are not fixed. Such a procedure is well-defined for all points of the extended grid, including corners. We found that both variants of this approach improve the reconstruction fidelity within the grid (Łęski et al. 2007). In all cases the distribution of sources is completely described with N parameters. Comparison of 'no boundary treatment' vs. b or d distribution of sources. In the distribution with no boundary treatment (a) the CSD is non-zero only inside the rectangular grid. To accommodate for sources lying outside of the grid, and to avoid artifacts in the reconstructed CSD, a grid with an additional layer of nodes is used (b), which results in an additional layer of non-zero CSD We now consider the situation where the CSD is assumed to be 'step-wise' constant (Pettersen et al. 2006), i.e., the CSD is assumed to be constant within the rectangle in the xy-plane assigned to each grid point. For this model the matrix F can be calculated as follows: Denote by (x i , y i ) and (x j , y j ) the positions of the i-th and j-th grid points, respectively, i and j go from 1 to N. The N parameters of the CSD distribution will be the values of C at the nodes, which we denote by C i , i goes from 1 to N. The potential at the node i, Φ i , is equal to $$ {{\Phi_i} = \sum\limits_{{j = 1}}^N {{F_{{ij}}}} }{C_j}. $$ The matrix element F ij is: $$ {{F_{{ij}}} = \frac{1}{{4\pi \sigma }}\int\limits_{{{x_j} - \Delta x/2}}^{{{x_j} + \Delta x/2}} {dx} }\int\limits_{{{y_j} - \Delta y/2}}^{{{y_j} + \Delta y/2}} {dy} \int\limits_{{ - \infty }}^{\infty } {dz} \frac{{H(z)}}{{\sqrt {{{{({x_i} - x)}^2} + {{({y_i} - y)}^2} + {z^2}}} }}. $$ The construction of the matrix F for linear and spline interpolation is given in the Appendix. The procedure is analogous, but the calculations are more complex. 
To compare the accuracy of the iCSD method quantitatively against the traditional CSD estimation method, we had to extend the traditional CSD method to predict spatially continuous CSD profiles. The following procedure was used: the grid of electrodes was extended (cf. Fig. 1) and at every point of this extension we copied the potential from the nearest point of the original boundary (Vaknin et al. 1988), which is non-ambiguous. The numerical second derivative was then calculated at the points of the initial grid and spline-interpolated in between. Generation of Population Model Data In order to test the new 2D iCSD method we generated model data for columnar populations of reconstructed layer-5 pyramidal cells receiving a combination of excitatory and inhibitory inputs resembling stimulus-evoked activation (Pettersen et al. 2008). In the simulation, we know the actual CSD generating the local-field potentials recorded (virtually) at the grid points of the multielectrode. We can use these to quantify the quality of CSD estimates from the recorded LFPs reliably. A similar procedure was used by Pettersen et al. (2008) to test the 1D iCSD method. We studied CSD and LFP generated by three such model columns spaced equally along a line as one "central" column surrounded by two "lateral" columns, and we assumed two positions of the virtual electrode grid with respect to these columns (see text below and Fig. 6a, b, for details). Two synaptic stimulation patterns were considered. Neuron templates A cell population resembling a layer-5 pyramidal-cell network in a neocortical column was built based on the compartmental model from Mainen and Sejnowski (1996). The compartmental model was downloaded from SenseLab's ModelDB (http://senselab.med.yale.edu/; Hines et al. 2004; Migliore et al. 2003), and the simulation tool NEURON (http://www.neuron.yale.edu/; Carnevale and Hines 2006) was used to compute the neuronal dynamics. The neuron model has various active conductances in the axon segments, axon hillock, soma and dendrites. Similar to Pettersen et al. (2008), the original model of Mainen and Sejnowski (1996) was modified as follows: the active conductances in the dendrites were removed, the electrode was removed from the soma, the whole neuron was rotated so that the primary dendrite was aligned to the positive y-axis, and the axon was then aligned straight downward along the negative y-axis. Two different stimulus patterns were used for the pyramidal neuron model: one with combined apical excitation and basal inhibition, and one with combined basal excitation and basal inhibition. These two patterns correspond to stimulus patterns 2 (SIP2) and 4 (SIP4) in Pettersen et al. (2008). In both cases excitation and inhibition were tuned so that the soma potential was just below the threshold value so that the neuron did not produce any action potentials. Synaptic input was density-based, i.e., not based on point processes. Thus, the synaptic input was considered to be scattered throughout the whole branch to which it was applied, similar to Holt and Koch (1999). The excitatory synaptic input was conductance-based with an exponentially decaying temporal profile, $$ g_j^{\mathrm{e}} = g_{\max}^{\mathrm{e}}\, \delta_j^{\mathrm{e}}\, \frac{1}{\tau_{\mathrm{e}}}\, e^{-(t - \Delta_j^{\mathrm{e}})/\tau_{\mathrm{e}}}\, \theta(t - \Delta_j^{\mathrm{e}}). $$
$$ Here \( g_j^{\rm{e}} \) is the synaptic membrane conductance in branch j of the neuron, \( g_{\max}^{\rm{e}} \) is the maximum conductance, τ e is the time constant of the excitatory input, \( \Delta_j^{\rm{e}} \) is the onset time of synaptic input in branch j, and θ(t) is the Heaviside unit step function. \( \delta_j^{\rm{e}} \) is unity if branch j is set to receive excitatory input in the model, and zero if not. The synaptic input patterns, SIP2 and SIP4, both had the same excitatory time constant, τ e = 5 ms. The maximum conductance for the apically excited neurons (SIP2) was \( g_{\max}^{\rm{e}} = 0.001\ {\hbox{S/cm}}^2 \), while the maximum conductance for the basally excited neurons (SIP4) was \( g_{\max}^{\rm{e}} = 1 \times 10^{-4}\ {\hbox{S/cm}}^2 \). The basal inhibitory input was similarly given by $$ g_j^{\rm{i}} = g_{\max}^{\rm{i}}\delta_j^{\rm{i}}\frac{1}{\tau_{\rm{i}}}e^{-(t - \Delta_j^{\rm{i}})/\tau_{\rm{i}}}\theta(t - \Delta_j^{\rm{i}}). $$ An inhibitory time constant τ i = 10 ms was used. The maximum conductance \( g_{\max}^{\rm{i}} \) was adjusted so that the neuron did not produce action potentials. SIP2 had a maximal inhibitory membrane conductance of \( g_{\max}^{\rm{i}} = 5 \times 10^{-4}\ {\hbox{S/cm}}^2 \), while for SIP4 \( g_{\max}^{\rm{i}} = 3 \times 10^{-4}\ {\hbox{S/cm}}^2 \). For both stimulus patterns the inhibitory synaptic stimulus was applied to the soma and each branch of the basal dendrites, i.e., \( \delta_j^{\rm{i}} \) was unity only for the soma and these dendritic branches j. The basal excitation in SIP4, in contrast to the basal inhibition, did not include the soma, the first branch of the dominant apical dendrite, or the side branches of this apical dendrite. The apical excitation of SIP2 was applied to all dendritic branches above the first branching point of the apical dendrite. The total synaptic transmembrane current in segment k, being part of branch j, is then given by $$ i_{{\rm{syn}},k} = g_j^{\rm{e}}(V_k - E^{\rm{e}}) + g_j^{\rm{i}}(V_k - E^{\rm{i}}), $$ where E e = 0 mV is the reversal potential for the excitatory synapses, and E i = −70 mV is the reversal potential for the inhibitory synapses. V k is the membrane potential in segment k of the neuron. The soma resting potential for the pyramidal neuron templates was −64.5 mV for both stimulus patterns. To include some temporal jitter in the onset of synaptic inputs, each branch's synaptic input was started separately in time, i.e., the Δ j 's were slightly different: the onset was chosen stochastically assuming a Gaussian distribution around the mean \( {\overline \Delta_{\rm{syn}}} \) with a standard deviation of \( {\sigma_{\rm{syn}}} = \sqrt {5} \) ms. The dynamical response of the pyramidal neuron to a particular synaptic-input pattern was computed in NEURON, and the transmembrane currents for all segments were written to file with ten-digit precision. The extracellular potential for the neuronal templates was then computed using the line-source method (Holt and Koch 1999). 
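Before turning to the extracellular potentials, the conductance time course defined above can be written out directly. The snippet below is a minimal sketch, not a reproduction of the NEURON simulation; the onset time is an arbitrary placeholder, and the parameter values are simply the SIP2 numbers quoted in the text.

```python
import numpy as np

def syn_conductance(t, g_max, tau, onset):
    """Exponentially decaying synaptic conductance with a Heaviside onset,
    g(t) = (g_max / tau) * exp(-(t - onset) / tau) for t >= onset, else 0,
    as in the equations above."""
    t = np.asarray(t, dtype=float)
    g = (g_max / tau) * np.exp(-(t - onset) / tau)
    return np.where(t >= onset, g, 0.0)

# Illustrative time courses (time in ms, parameters as quoted in the text);
# the 10 ms onset is a placeholder, not a value from the paper.
t_ms = np.linspace(0.0, 50.0, 501)
g_exc_apical = syn_conductance(t_ms, g_max=1e-3, tau=5.0, onset=10.0)   # SIP2 excitation
g_inh_basal = syn_conductance(t_ms, g_max=5e-4, tau=10.0, onset=10.0)   # SIP2 inhibition
```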
The extracellular potential \( {\varphi_n}({{\bf r}},t) \) from a neuronal template n is then found by a sum over all segments of this neuron, i.e., $$ {\varphi_n}({{\bf r}},t) = \sum\limits_{{k = 1}}^{{{N_{\rm{k}}}}} \frac{{{I_{{nk}}}(t)}}{{4\pi \sigma \Delta {s_k}}}ln\left| {\frac{{\sqrt {{h_{{nk}}^2 + r_{{nk}}^2}} - {h_{{nk}}}}}{{\sqrt {{l_{{nk}}^2 + r_{{nk}}^2}} - {l_{{nk}}}}}} \right|, $$ where N k is the number of segments in the pyramidal neuron, \( \Delta {s_k} \) is the length of line segment k of this neuron, r nk is the radial distance (perpendicular to the segment) from segment number k, h nk is the longitudinal distance (parallel to the dendritic segment) from the end of segment number k, \( {l_{{nk}}} = \Delta {s_k} + {h_{{nk}}} \) is the longitudinal distance from the start of the segment to the recording point, and I nk (t) is the transmembrane segmental current (the ionic currents plus the capacitive current). The extracellular conductivity used in the simulations was σ = 0.3 S/m (Hamalainen et al. 1993). The extracellular population activity was calculated by linear superposition of the single pyramidal-neuron templates. Structure of Population and Population Activity The modeled layer-5 pyramidal populations have the typical sizes and spatial extensions of cortical columns, e.g., as seen for the layer-5 pyramidal population in rat barrel cortex (Beaulieu 1993; Feldmeyer and Sakmann 2000). They contained 1,000 pyramidal neurons, each randomly rotated around their primary dendrite and receiving the same spatial stimulus pattern. Their somas were positioned stochastically with uniform probability density within a cylinder oriented along the y-axis with height 0.4 mm and a diameter of 0.4 mm (cf. Fig. 5a). The mean synaptic onset times \( {\overline \Delta_{{{\rm{syn}},n}}} \) of the 1000 neurons in each population were chosen stochastically from a Gaussian distribution with a standard deviation of σsp = 5 ms. The probability distribution was truncated, i.e., set to zero, for times 2σsp = 10 ms smaller or larger than the mean value. This gives a maximum temporal translation of 20 ms between the neurons within the population. Computation of Extracellular Potentials from Population Activity Based on the set of stochastically chosen mean synaptic onset times (\( {\overline \Delta_{{{\rm{syn}},n}}} \)) for all 1,000 neurons in each population and the extracellular single-neuron templates in Eq. 4, the extracellular potential from the entire population was calculated as $$ \Phi ({{\bf r}},t) = \sum\limits_{{n = 1}}^N {\varphi_n}({{\bf r}},t - {\overline \Delta_{{{\rm{syn}},n}}}), $$ where N = 1,000 is the number of neurons in the population, and ϕ n (r, t) is the extracellular signature of neuron n with mean synaptic onset at time zero. The potential was computed at 1656 assumed recording positions defined by a 9 × 23 × 8 grid where the assumed recording positions in x-direction were from −0.7 mm to 0.7 mm with inter-contact distance of 0.2 mm, in the y-direction from −0.8 mm to 1.4 mm with an inter-contact distance of 0.1 mm, and in z-direction from −0.8 mm to 0.8 mm with inter-contact distance of 0.2 mm. In the y-direction the populations were centered so that the average soma position was in y = 0. We tested several spatial placements of the populations of model neurons, which we call 'central' and 'lateral' (Fig. 6a, b). 
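As an illustration of the line-source formula above, the following minimal sketch evaluates the potential from a set of straight segments at a single time step. It is not the authors' simulation code; the segment geometry and currents are hypothetical inputs, and a small minimum radial distance is applied as a simple guard against the singularity (cf. the 5 μm limit described below).

```python
import numpy as np

def line_source_potential(r_point, seg_start, seg_end, I_seg, sigma=0.3):
    """Extracellular potential at r_point from line segments with currents I_seg,
    following the line-source expression above (Holt and Koch 1999).
    seg_start, seg_end: (N, 3) arrays of segment end points [m];
    I_seg: (N,) transmembrane segment currents [A]; sigma: conductivity [S/m]."""
    r_point = np.asarray(r_point, dtype=float)
    d = seg_end - seg_start                        # segment vectors
    ds = np.linalg.norm(d, axis=1)                 # segment lengths Delta s
    u = d / ds[:, None]                            # unit vectors along the segments
    # longitudinal distance from the end of each segment to the recording point
    h = np.einsum('ij,ij->i', r_point - seg_end, u)
    l = h + ds                                     # longitudinal distance from the start
    # squared radial (perpendicular) distance to the segment line, with a 5 um guard
    r2 = np.sum((r_point - seg_end) ** 2, axis=1) - h ** 2
    r2 = np.maximum(r2, (5e-6) ** 2)
    log_arg = (np.sqrt(h ** 2 + r2) - h) / (np.sqrt(l ** 2 + r2) - l)
    return np.sum(I_seg / (4.0 * np.pi * sigma * ds) * np.log(np.abs(log_arg)))
```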
For the central population the horizontal centering was in (x,z) = (0,0), while the lateral populations were centered in (x,z) = (0,–0.6) mm and (x,z) = (0,0.6) mm. The lateral populations centered in (x,z) = (0,0.6) were produced by mirroring the populations centered in (x,z) = (0,–0.6) mm about the xy-plane, i.e. they shared the (mirrored) stochastic parameters discussed above (somatic placement, orientation, synaptic onsets and neuronal time-shifts). The center population was created with a new set of stochastic parameters. To avoid singularities in Eq. 9 no dendritic segments were allowed to be closer to the assumed recording contacts than 5 μm; dendritic segments closer than this distance were given a radial distance of r nk = 5 μm when the extracellular potential was computed through Eq. 9. Computation of the Actual Model CSD When computing the potentials at assumed recording positions, all neural segments except for the somas were treated as linear current sources (Eq. 9). However, when computing the actual CSD for these populations, the current-sources were treated as point sources to improve computational efficiency. Each point source was then positioned at the center of its corresponding line segment. The CSD is defined as the total current source within a small volume element, divided by the volume of this element. Because of the inhomogeneous nature of the CSD one cannot choose this volume element to be arbitrarily small without accepting a high degree of spatial noise due to the high variance in the number of sources in neighboring cubes. For this model study we computed the real CSD based on the total current-source within cubes with sides of 50 μm, which gave an acceptable high resolution without too much spatial noise. Experimental Procedure Rat Subicular Recordings Adult male Wistar rats (200–400 g) were anaesthetized using urethane (1.5 g/kg; 30% w/v i.p.) and then placed in a stereotaxic frame with body temperature maintained at 37°C using a homoeothermic blanket (Harvard Apparatus, UK). A midline scalp incision was made to expose the skull and craniotomies were made above dorsal CA1 (AP: −4.5 mm, ML: 3.0 mm) and subiculum (AP: −8.4 mm, ML 3.5 mm) relative to Bregma (Gigg et al. 2000). The dura was then excised to allow insertion of electrodes. All procedures were in accordance with the Animals (Scientific Procedures) Act 1986, UK. Electrodes and Stimulations Dorsal subicular recording were made using a 64-contact probe (a8 × 8–5 mm 200-200-413, NeuroNexus Technologies, Michigan, USA). The configuration of the probe provided eight 413 μm2 contacts at 200 μm spacing on each of the eight 5 mm long shanks. The probe was inserted at a 25° angle from vertical in the sagittal plane so that at target the shanks of the electrode were approximately perpendicular to the main axis of the subicular cell layer. A bipolar stimulating electrode consisting of two, twisted Teflon coated, tungsten microwires (125 μm diameter, insulated to the tips; Advent RM, UK) was placed in the alveus above dorsal CA1 (Fig. 2). Histological verification of stimulation and recording electrodes (a). White circles show placement of electrode contacts (−8.2 from Bregma, 2.9 lateral to midline). The white square in the inset shows the position of the stimulating electrode during the recording (−4.5 mm from Bregma, 2.9 mm lateral to midline). The b panel shows superimposed mean evoked voltage responses at each electrode position. Maximal deflections occur mainly in the subicular region. 
Stimulating and recording electrode sites are identified on the nearest slice image to both points using Paxinos and Watson 1998. Black bars show 1 mm scale. Alv: alveus, CA1, CA3, DG: dentate gyrus, Post: postsubiculum Stimuli were triggered by 5 V analog pulses from a National Instruments card (PCI-6071E), controlled by programs written in LabVIEW (v8, National Instruments, USA). These pulses triggered a constant-current, isolated stimulator (DS3, Digitimer Ltd., UK). Stimuli were of fixed duration (0.2 ms). Once electrodes were at target an input–output curve was plotted of alvear stimulus intensity versus subicular response amplitude (e.g., Commins et al. 1998). All subsequent pulses were then set at half the intensity required to elicit the maximum response from the curve, in this case, 70 μA. Single stimulation pulses were presented at a rate of 0.33 Hz for one minute. Perievent histograms of the mean fEPSP voltage response from these were calculated for every channel. As two channels had to be used for stimulation, the final result was an eight-by-eight contact profile minus two silent contacts. For the application of iCSD methods we filled these missing grid points with mean voltages from nearest neighbors. We have shown elsewhere that this procedure does not introduce much distortion for a small number of missing channels (Wójcik and Łęski 2009). Signals were acquired using a Recorder64 (Plexon, USA) recording system, referenced to ground and amplified at source using a 20x gain AC-coupled headstage followed by pre-amplifier conditioning (total gain of 2500x). Other than the fixed system low-pass (6 kHz) no other filtering was applied. Local field potential signals were digitized at a sampling rate of 10 kHz per channel at 12 bit resolution and stored for offline analyses. Histological Verification At the end of the experiment, recording sites were determined by a combination of visual analyses of electrode tracks and lesions placed on the upper-most and lowest electrode contacts using a 30 μA positive current for 3 seconds (Townsend et al. 2002). Animals were subject to terminal anesthesia (2–3 ml of 30% urethane i.p.) and transcardially perfused with 100 ml of 0.9% saline followed by 150 ml of 10% formalin. The brain was removed and stored in 10% formalin for 24 hours, followed by immersion in 30% sucrose solution. Frozen 100 μm sections were made in the sagittal plane. and stained with Cresyl violet. Electrode placements were assigned with reference to the rat brain atlas of Paxinos and Watson (1998). In this section we study the properties of the proposed method on several different datasets before we finally apply it to experimental data. We start with tests of reconstruction quality for surrogate data with a simple structure (Gaussian sources) and compare variants of the iCSD method with the traditional approach for CSD analysis. Then we analyze more complex data from simulation of columnar populations of layer-5 pyramidal neurons. Two different patterns of stimulation (basal and apical) are tested. We then observe and discuss the asymptotic independence of reconstructed error (large h limit). With the viability of our approach assured by tests on these model data, we then use the best variant of our method to analyze responses evoked by alvear stimulation recorded on 8 by 8 grid in the rat subiculum. 
Tests on Gaussian Sources Throughout the tests on Gaussian sources we assume that the potentials are measured on a grid of 8×8 electrodes, spaced by 0.2 mm in x and y directions. This choice was motivated by the experimental conditions used later. Two-Dimensional Gaussian Sources First we test the method on surrogate sources which have a product structure c(x,y)H(z), where c(x,y) is a sum of Gaussians, and H(z) is a step function with h = 0.5 mm (the exact formula for c(x,y) is given in the Appendix). We calculate the potentials generated by such sources on the assumed electrode grid. In the simplest case we assume that sources are non-zero only for x and y inside the electrode grid (Fig. 3a). For this case both linear and spline interpolation iCSD methods perform very well, provided we assume the correct value for h (Fig. 3c, d). The traditional (numerical second derivative) method also locates the sources, however, their shapes are distorted (Fig. 3b). To quantify the reconstruction fidelity we use normalized squared error: Test of iCSD methods on sources based on two-dimensional Gaussian functions (arbitrary units). a original Gaussian sources (zero outside the box, product structure, h = 0.5 mm, potentials are measured on a grid of 8×8 electrodes), b reconstruction using traditional CSD method, that means numerical double derivative (smoothed), c reconstruction with linear iCSD method, d reconstruction with spline iCSD method e–g: the sources are the same as in a, but with h = 0.1 mm. Reconstruction of CSD (spline interpolation) with assumed h equal to 0.05 mm (e), 0.1 mm (f), 0.2 mm (g). Note different scales of the three plots: besides distortion of shapes of the sources there is also a global scaling due to different assumed values of h h–k: the sources inside the box are the same as in a, but now they are also non-zero outside the box. Reconstruction with spline iCSD method: h with no boundary treatment, i with b boundary conditions, j with d boundary conditions, k reconstruction using smoothed numerical double derivative $$ {e_1} = \frac{1}{M}{\iint {\left( {C(x,y,z = 0) - \hat{C}(x,y)} \right)}^2}dx\;dy, $$ $$ M = \iint {C{{(x,y,z = 0)}^2}}dx\;dy, $$ where C(x, y, z = 0) are original sources and \( {{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}\over C} }}(x,y) \)is the reconstructed CSD in the plane; M is a normalization constant. Since we are mostly interested in the spatial profile of the CSD, as the precise values are unavailable because the conductivity is usually unknown, we also studied the differences between the original profiles and scaled reconstructions. Specifically, as another measure of reconstruction error, we took: $$ {e_2} = \mathop{{\min }}\limits_{\alpha } \frac{1}{M}{\iint {\left( {C(x,y,z = 0) - \alpha \hat{C}(x,y)} \right)}^2}dx\;dy, $$ where the integrals are over the area spanned by the electrode grid, and M is a normalization constant defined as previously. For the traditional CSD method, the error e 1 is 34%, while it is only 0.097% for the linear iCSD method and 0.019% for the spline iCSD method. Thus, if the assumed distribution is sufficiently smooth, one can reconstruct the original pattern faithfully from the limited information provided by the finite set of measurements. Some of the large error inherent in the traditional CSD method is due to incorrect estimation of the CSD amplitude and not the form of the spatial profile. However, even if the CSD amplitude is adjusted to minimize the error according to Eq. 
13 one still finds an error of e 2 = 32%, i.e., much larger than for the iCSD methods. One of the reasons for using the inverse CSD method instead of the numerical second derivative, especially in two- or three-dimensional situations, is that the boundary data are better utilized. To illustrate this we calculated the errors (e 1 ) of the reconstruction of the central part of the grid, i.e., the grid spanned by the electrodes 2–7 both in x and y. Not surprisingly, in all cases the errors are smaller: : e1 = 8.1% for the traditional approach, 0.069% for the linear iCSD and 0.0063% for the spline iCSD. Usually we do not know the correct h and we must form an educated guess based on the available information (Fig. 3e, f, g). In the source data we used h = 0.1 mm and for reconstructions in Fig. 3e, f, g we took h equal to 0.05 mm, 0.1 mm and 0.2 mm, respectively. If the assumed h is different from the true h, then slight distortions appear in the shape of the reconstructed CSD. The errors are: e 2 = 0.4% for assumed h = 0.05 mm, e 2 = 0.019% for the assumed h equal to the true h (= 0.1 mm), and e 2 = 2.1% if we assume h = 0.2 mm. Moreover, the amplitude of the reconstructed distribution varies with the assumed h (the CSD is scaled by a global factor which in this case is approximately equal to the ratio of true and assumed h). In the experimental setting the sources often extend beyond the electrode grid. Hence, we tested the iCSD methods on potentials calculated using the complete spatial extent of the Gaussian sources. The situation now changes dramatically and the variants of the iCSD method assuming non-zero distribution only inside the grid work very poorly (Fig. 3h): the reconstruction error is almost 500%. However, we can overcome this by using B or D boundary conditions (Fig. 3i, j), which leads to reconstruction errors e 1 of 8.4% and 2.4%, respectively. The traditional method (Fig. 3k) gives a reconstruction error of e 1 = 24% (e 2 = 19%), which is much better than iCSD with no boundary conditions, but substantially worse than iCSD with proper boundary treatment. If we consider only the central grid, then the e 1 error for iCSD assuming no sources on the outside is 19%, for B and D boundary conditions e 1 is 1.3% and 0.29%, respectively, and for the traditional CSD we get e 1 = 11%. Three-Dimensional Gaussian Sources The most interesting test case of Gaussian data consists of truly three-dimensional sources. The set of test sources C we use here is a sum of four, three-dimensional Gaussians whose centers are at z ranging from −0.3 mm to 0.6 mm. The sources were chosen in such a way that in the plane of the grid, z = 0, the distribution was the same as in the product case studied before (see Fig. 3a, the exact formula is given in the Appendix). We also used the same two-dimensional 8×8 grid of electrodes. The questions of interest are now what h and which boundary conditions would work best. Our tests on product sources (Fig. 3e, f, g) suggest that the reconstructed CSD will be defined up to a multiplicative constant depending on the assumed h. Therefore, to compare the quality of reconstruction for different h we have to scale the reconstructed CSD by a constant and so in this section we use the error e 2 as defined by Eq. 13. We performed the reconstructions assuming h = (0.05 × 2n) mm, n = 0,…,6. The reconstruction errors for the method with D boundary conditions are shown in Fig. 4d where the two curves are for H(z) being either a step function or a Gaussian (see figure caption). 
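Since the minimization over α in the error measure e2 (Eq. 13) is quadratic, the optimal scaling has a closed form. The snippet below is a minimal discretized sketch with hypothetical array inputs, not the toolbox implementation; on a grid, the integrals become sums.

```python
import numpy as np

def scaled_reconstruction_error(C_true, C_rec):
    """Normalized squared error e2 between the true CSD and a reconstruction,
    minimized over a global scaling alpha (cf. Eq. 13); both inputs are 2D arrays
    sampled on the same grid."""
    C_true = np.asarray(C_true, dtype=float).ravel()
    C_rec = np.asarray(C_rec, dtype=float).ravel()
    # e2 is quadratic in alpha, so the minimizer is <C_true, C_rec> / <C_rec, C_rec>
    alpha = np.dot(C_true, C_rec) / np.dot(C_rec, C_rec)
    M = np.dot(C_true, C_true)                     # normalization constant
    e2 = np.sum((C_true - alpha * C_rec) ** 2) / M
    return e2, alpha
```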
To understand the meaning of different values of reconstruction error we plot the actual CSD in the plane of the (virtual) recording grid (Fig. 4a) and two examples of reconstructed CSD (with α set to the values found to minimize the error for the two different choices of h). Fig. 4b shows results for h = 1.6 mm which corresponds to the smallest error of reconstruction in the studied regime (e 2 ~ 10%). Figure 4c shows the reconstructed CSD for h = 0.1 mm which corresponds to large reconstruction error (e 2 = 20%). From this test one may conclude that these two different choices for the H(z) profile (step and Gaussian profiles) give similar results. The reconstruction error is high for very small h and reaches minimum for h ~ 1 mm, which is roughly the extent of the test Gaussian sources. For comparison, the error for the traditional CSD method is in this case e 2 = 26%. Reconstruction error (Eq. 13) for inferring two-dimensional current source-density distribution on a 2D section through a set of three-dimensional Gaussian sources. The spline iCSD method with D boundary conditions was used. a Original CSD in the plane of the measurement grid (arbitrary units). b Reconstructed CSD for h = 1.6 mm which corresponds to the smallest error of reconstruction in the studied regime (e 2 ~ 10%). c Reconstructed CSD for h = 0.1 mm which corresponds to large error of reconstruction (e 2 ~ 20%). d Dependence of reconstruction error (e) on the assumed width of the sources (h). The dotted curve is for H(z) being a step function, for the solid curve H(z) is a Gaussian. The reconstructions in b and c are scaled with a parameter α to minimize the reconstruction error, see text for details. In b and c the function h(z) is a step function Tests on Population Model Data For tests on our population model data, i.e., the data from forward modeling on a columnar population of layer-5 pyramidal neurons, we used potentials calculated on an array of 8 × 23 positions, spaced by 0.2 mm in x direction and by 0.1 mm in y direction. Here, as in the case of three-dimensional Gaussian sources, we can reconstruct the CSD up to a multiplicative constant (at least if we are not in the large-h limit, see the next section). However, unlike the previous case, we can now reconstruct the whole time-course of the activity. Therefore, as the error measure we can take $$ {e_3} = \mathop{{\min }}\limits_{\alpha } \frac{1}{M}\int {dt} {\iint {\left( {C(x,y,z = 0) - \alpha \hat{C}(x,y)} \right)}^2}dx\;dy, $$ where the double integral is over the area spanned by the electrode grid. To calculate this integral we need the actual model sources in the grid plane, i.e., C(x,y,z = 0), and for this we used the actual model sources averaged locally in cubes of edge length 0.05 mm (see Materials and Methods). To obtain \( \hat{C}(x,y) \) we simply used the value of the reconstructed sources in the middle of the cube. The normalization constant M was $$ M = \int {dt} \iint {C{{(x,y,z = 0)}^2}}dx\;dy. $$ The (time-independent) constant α in Eq. 14 was chosen to minimize the error e 3 . In Fig. 5 we present reconstructions of activity of a simulated single column of layer-5 pyramidal neurons (Fig. 5a) in the planar section defined by the virtual recording electrodes. The electrode grid is placed along the axis of the apical dendrites of the simulated population and passes through the center of the cylinder in which the somas are located. Placement of the simulated population of 1000 layer-5 pyramidal neurons a. 
All somas are contained within the cylinder. Three example neurons are shown. Panels b and d show the actual CSD generated by the population stimulated apically (b) or basally (d) (excitatory synaptic input). c, e: reconstruction of the CSD with the spline iCSD method, boundary conditions D, h = 0.2 mm. In the figure we present the reconstruction at one particular point in time for each type of stimulation (apical (5 B, C) or basal (5 D, E) excitation); the complete time course of the simulated activity compared with the best reconstruction is available as a video file in the supplementary material, also available at http://www.youtube.com/icsd2d. We can see that for both types of stimulation the estimation of CSD using splines effectively smooths the real activity. This is very pronounced in the case of basal excitation (Fig. 5d), as here the actual CSD activity is sparser. The reconstruction errors are 3% for apical stimulation (Fig. 5c vs. b), and 58% for basal stimulation (Fig. 5e vs. d). Note that an error on the order of 50% indicates inadequacy of the chosen electrode grid to resolve the structures arising at the very small spatial scales in the second case. As seen in Fig. 5d and e the iCSD method is nevertheless able to reconstruct the gross features of the actual CSD distribution. We compared the quality of reconstruction for different numbers of simulated barrel columns, different types of stimulation, different placements of the electrode grid with respect to the sources, and different assumed h (Fig. 6). Panels A and B in Fig. 6 show the placement of the electrode grid in the xz-plane. The 8 × 23 grid of electrodes extends for 1.4 mm in the x (Fig. 6a) or z (Fig. 6b) direction and for 2.2 mm in the y-direction. Panels C and D correspond to apical excitatory stimulation, while panels E and F show the results for basal excitatory stimulation. The spatial setup shown in panels A and B is the same for both stimulation schemes (panels C and E are for the setup shown in panel A, panels D and F correspond to the setup depicted in panel B). The results show that for each configuration there appears to be an optimal value of h, as in the case of Gaussian sources (Fig. 6c, d; Fig. 6e, f). The large difference in CSD reconstruction errors between apical (Fig. 6c, d) and basal (Fig. 6e, f) stimulation is a consequence of the very different spatial structure of the CSD in these two cases. For apical stimulation, we observe CSD activity of high absolute values over a large spatial extent, slowly varying in space (Fig. 5b); therefore, the spline approximation between points on the scale set by the electrode grid is reasonable. On the other hand, for basal stimulation, we observe fine-grained activity, i.e., activity at spatial scales smaller than the grid spacing (Fig. 5d), and the spline-based approximation is, as expected, less able to account for the actual activity. In the case with one central column the optimal h is 0.1–0.3 mm; this translates to an assumed thickness of the CSD distribution 2h of 0.2–0.6 mm, which is close to the diameter of the cylinder containing the somas used in the simulation. Reconstruction fidelity of model CSD data for different placements of sources and recording grid. a, b: top-view of the grid of electrodes (x's) and cylinders containing somas of the simulated layer-5 pyramidal cells (circles). The two setups a and b differ in the placement of the electrode grid with respect to the axis going through the centers of the three columns. 
Note that in the setup b the electrode grid is slightly off-center, which allows the impact of such placement on the reconstruction error to be estimated. c–f: reconstruction error of the whole time-course of CSD for varying number of active columns and h. c and e correspond to setup a, d and f correspond to setup b. c and d: apical excitatory stimulation, e and f: basal excitatory stimulation. If we now consider the configuration with lateral sources, the situation depends on whether the grid is placed in the orthogonal plane (Fig. 6a) or aligns with the new columns (Fig. 6b). In the first case (Fig. 6a), the optimal h grows slightly, which is the effect of the increasing size of the sources perpendicular to the grid. In the latter case (Fig. 6b), the optimal h changes very little with the addition of new columns. This is consistent with the constant extent of the sources in the direction perpendicular to the grid. The dependence of the reconstruction error on h is more pronounced in the case of apical excitation, but for both types of stimulation the minimum is obtained for values close to the size of the source generators. This confirms our conclusions from the tests on Gaussian sources that the choice of h should be based on the expected extent of the activity along the axis orthogonal to the grid. Note also that for the setup shown in Fig. 6b, where the electrode grid is slightly off-center, the errors for the single central column are higher than for the setup shown in Fig. 6a. This is because the CSD distribution assumed in our iCSD scheme is symmetrical along the axis perpendicular to the grid. Indeed, the LFP generated by any CSD distribution is equal to that generated by a distribution symmetrized with respect to the plane of the electrodes. The recorded LFP will be the same whether all the CSD is 'behind' or 'in front of' the electrode grid, or shared in any fraction between the two sides. Therefore, for any data set, we can only hope to reconstruct the symmetrized part of the CSD, unless there is additional anatomical or physiological information indicating a specific asymmetry, which can then be included in the method explicitly. We also calculated the reconstruction errors for the CSD estimated using the traditional method for the situations depicted in Fig. 6. Typically, the errors of the traditional method are slightly higher than those of the inverse method for the best choice of h. For example, the errors for the configuration shown in Fig. 6a and apical excitation (Fig. 6c) are 13.0%, 11.7%, and 11.2% for a single column, two columns, and three columns, respectively. One situation where the traditional method gives significantly higher errors is when there is activity at the boundary of the grid (Fig. 6b, lowermost of the three columns). For three active columns (circles in Fig. 6d) the error of the reconstruction obtained with the traditional method is as high as 19.6%. These findings are compatible with the results for Gaussian sources, where there was always significant activity at the boundary. Thus, in this case we see that the main benefit of using iCSD instead of the traditional approach is a significant gain in precision at the boundaries. Selection of h The parameter h is the main a priori unknown parameter specifying the 2D iCSD method, and the choice of h is, therefore, an important issue in practical applications. 
Optimally, h should be chosen based on the expected extent of neural activity, known anatomy, or the results of forward modeling studies according to the expected size of a typical active population. As seen in Figs. 4 and 6 the assumed activity depth influences the estimation error. However, in these examples the estimation errors seem to converge towards a constant value above h = 1 mm, which corresponds to an assumed activity depth 2 h of 2 mm. This is particularly visible for the case of basal stimulation. This reflects that the 2D iCSD estimator itself approaches a fixed 'large-h' form when h becomes large enough. It is shown in the Appendix that the typical large-h transition value depends on the geometry of the multielectrode, in particular the distance between recording grid points. This large-h transition value is potentially of practical importance: in a system where neural activity is more widespread than this value in the direction orthogonal to the 2D grid, the 'large-h' 2D iCSD method can be used and the uncertainty due to lack of knowledge of the true effective value of h will be minimal. The large-h limit of the 2D iCSD method is studied in the Appendix. Analysis of Experimental Data To test the utility of the proposed method in analysis of real data we applied it to the set of simultaneous recordings obtained in the rat subiculum using an 8 by 8 multielectrode, as outlined in the "Materials and Methods" section. We assumed h = 0.5 mm and Gaussian profile of H(z). Figure 7a–h compares the reconstructed current-sources using the iCSD 2D with D boundary conditions (marked 2D iCSD) with the interpolated averaged potentials in time from the stimulation. The interpolated data were superimposed on top of anatomical borders (Fig. 7i) according to the histology (Fig. 2). Laminar profile of network activity in rat subiculum. Panels a–h: interpolated data from 8 × 8 MEA recording grid (1.6 × 1.6 mm) in dorsal subiculum (iCSD at left, potential at right). Insets in the left panels show post-stimulus time for each frame. Black bar in a shows 1 mm scale. Anatomical borders are indicated in i with matched average potentials (red box) shown in j (scale bar: 1 mV/15 ms; w.m. white matter; data shown 15 ms post-stimulus). In a–d single-pulse alvear stimulation (* in i) produces a peak negative voltage (red) across the subicular cell layer (SUBp; panel a voltage) with bordering negativity (molecular layer (SUBm) and basal dendrites) that moves over time to the SUBp/SUBm border. There are also smaller positive–negative responses in presubiculum (PrS) and in the border region between PrS and Sub. This pattern is also reflected in the 2D iCSD patterns as sinks (red) and sources (blue). In c–h the main SUB pattern appears to 'split' around a central quiescent zone, fade (d–e) and then strengthen again (f–g) before fading slowly A paper centred on a large 2D data set will be submitted elsewhere, hence, we provide here a basic interpretation of the present responses. Alvear activation evokes an antidromic subicular population spike (SUBp, peaks at 1.7 ms) across the full extent of the subicular cell layer (Menendez de la Prida 2003). The main SUBp sink back-propagates across the cell layer towards the molecular layer (1.7–2.1 ms sequence), then 'splits' and finally fades away slowly (from 2.5–3.8 ms; along SUBp/SUBm). 
We interpret this 'split' as feed-forward activation from antidromically-activated subicular pyramids that project laterally (but perhaps not to the middle of SUB) with the likely recruitment of local inhibitory cells (Harris et al. 2001). A strengthening of the main split pattern and a reversal of the activity at SUB/PrS border (and PrS) likely reflects feed-back inhibition of SUBp, producing an 'inhibitory' source and 'passive' sink. After back-propagation the region lying along the 'split' has very little synaptic activity, suggesting that the proposed feedback mechanisms on either side of this region are not active here. Figure 8 shows the CSD reconstructed using different h in the iCSD method (0.05 mm, 0.5 mm, and 3.2 mm respectively in Fig. 8a–c) and using the traditional method (Fig. 8d). To obtain the values at the boundary layer in the traditional CSD reconstruction we used the 2D analog of the Vaknin procedure. The shapes recovered using these methods are very similar. If we use h = 0.5 mm as reference the errors e2 of the four plots are 3.3%, 0%, 0.8%, and 2.4%. The largest difference is in the amplitude of the CSD, which is 263, 133, 126, and 77 for Fig. 8a–d, respectively (arbitrary units). This example shows that in some cases traditional CSD and iCSD may lead to equivalent results. We expect this to happen especially when there is little activity close to the edges, or when the inter-electrode distance is large compared to the spatial scale of activity (here we have sinks and sources separated by just 0.2 mm, that is, one inter-contact distance). The latter claim is supported by tests similar to the ones shown in Fig. 3, but with longer and more narrow sources, resembling the experimental activity shown in Fig. 8 (test data not shown). In this case we also obtained reconstruction errors on the order of 9% for the spline iCSD variants (B and D) only mildly better than for the traditional method (e2 =10%) with the difference between the iCSD and traditional method around 6%. Reconstruction of CSD from experimental data (t = 1.7 ms after stimulation). Three reconstructions with the iCSD method (spline, Gaussian z-profile) with different assumed h (a–c) and the traditional method with Vaknin procedure (d) give similar shapes of the CSD. The predicted amplitudes are different (see text), each plot is rescaled using the amplitude of the CSD to facilitate comparison of the shape Among the many techniques available for the study of information processing in the brain, electrophysiology stands out when it comes to temporal resolution of the signal. Some obstacles in its use are (a) the technical problems of simultaneous signal recording at multiple sites and (b) the large spatial range of electric fields measurable with these electrodes. It is now feasible to implant around a hundred electrodes within a relatively small brain volume to record signals simultaneously, allowing for remarkable spatial and temporal resolution. The amount of information coming from such experiments, matched with adequate analytical tools, are used for precise identification of single units (Buzsáki 2004), brain machine interfacing (Nicolelis 2001) and in studies of LFPs for a precise description of population activity of neural structures and spatial localization of synaptic connections. 
Although a remarkable spatial resolution in recorded potentials is achievable with modern multielectrodes, this does not automatically afford a correspondingly high spatial resolution in the estimated neural activity due to the inherent long-range properties of LFPs. The Inverse Current Source Density method proposed in Pettersen et al. (2006) for laminar 1D recordings, developed by Łęski et al. (2007) for 3D data and here for 2D grids, allows more robust reconstruction of the sources and sinks generating the measured LFPs than previous methods. One specific advantage, particularly important in 2D (and in 3D), is that the iCSD method appears to recover the CSD close to the boundary of the electrode-contact grid more accurately than the traditional CSD method. With the increasing availability of multi-shank, multi-contact electrodes and microelectrode arrays there is now a growing need for data analysis methods to match the sophistication of the sensor hardware. In this regard, we believe that the 2D variant proposed here, with its freely available implementation, will find immediate use in such electrophysiological studies. Previous Studies of Two-dimensional CSD The links between CSD and LFPs were discussed in full three-dimensional generality previously by Nicholson and co-workers (Nicholson 1973; Nicholson and Freeman 1975; Nicholson and Llinás 1975). Their approach has been used most frequently, however, for analyses of laminar recordings in one dimension (e.g. Haberly and Shepherd 1973; Mitzdorf 1985; Schroeder et al. 1992; Ylinen et al. 1995; Lakatos et al. 2005; Lipton et al. 2006; Rajkai et al. 2008; de Solages et al. 2008). There are several studies that involved the estimation of CSD in two dimensions (Novak and Wheeler 1989; Shimono et al. 2000; Shimono et al. 2002; Lin et al. 2002; Phongphanphanee et al. 2008). However, all of these applied the traditional approach through estimation of the second numerical derivative, and most applied the specific technique (smoothing followed by differentiation) proposed by Shimono et al. (2000). The problem with boundary values was observed by Novak and Wheeler (1989), who refrained from the analysis of such edge data. On the other hand, in the papers by Shimono et al. (2000, 2002), Lin et al. (2002), and Phongphanphanee et al. (2008), there is no explicit discussion of boundary treatment. A notable difference between the standard (numerical second derivative) and the inverse CSD approaches is that in iCSD we can specify the parameter h, which translates to the assumed thickness of the region with sources. The standard CSD method has an implicit large-h assumption: it implies that no field escapes in the third dimension (the double derivative in the Poisson equation is assumed to be zero in the third dimension). Although the 2D iCSD methods require an explicit assumption of h which is not always easy to justify, it is better to have this assumption explicit than forcing a large-h assumption, which is done implicitly in the standard CSD method. We should stress that we do not want to imply here that the traditional CSD approach is invalid or should be abandoned, as it is simple to apply and in many cases may lead to reasonable estimates of CSD (see our experimental example). However, from the general viewpoint presented here, we believe 2D iCSD is better grounded. 
Indeed, we are convinced that the framework of 2D iCSD offers a viable and meaningful alternative, which can be further extended to incorporate additional physiological knowledge as it becomes available. In this article we have developed several variants of the iCSD 2D. The basic difference between them comes from the assumed structure of the CSD: either constant within boxes or interpolated (linear or spline) between the points of an electrode grid. To accommodate sources located outside the probed region one can expand the grid spanning the CSD (Fig. 1). The CSD at these extra points can be set to duplicate the values from neighboring points or alternatively set to zero. Our tests on several surrogate data sets, including Gaussian sources of different spatial structure (Fig. 3) and extracellular potentials generated from a model population of 1000 pyramidal cells (Fig. 5), indicate that the optimal approach is the spline interpolated iCSD on an extended grid with duplicated boundaries. The main free parameter is the assumed width of the sources perpendicular to the multielectrode grid, h. Tests on model data indicate that the performance is best when h is in the order of the actual size of the current sources. Thus, it is beneficial to estimate h from anatomical studies of the target region. However, reasonable deviations from the best choice of h deteriorate the profile of the reconstructed sources only very slightly. We also observed stabilization of the error values for the normalized CSD with growing h, which corresponds to assuming infinite extent of sources in the direction perpendicular to the grid. For the cortical pyramid model (Fig. 5) such stability was reached at about h = 1 mm (i.e., assumed width of 2 mm) for our multielectrode with a grid-spacing of 0.2 mm. This error stabilization reflects convergence of the 2D iCSD method in the large-h limit. This convergence is independent of the neural sources and only depends on the geometry of the grid-points of the multielectrode. Thus, for a given spatial extension of neural activity in the direction orthogonal to the 2D grid, the large-h limit can, in principle, be reached by reducing the distance between grid points in the 2D multielectrode. In many real biological applications, one may already be at this large-h limit. In such situations, one may simply choose a sufficiently large h when constructing the iCSD estimator. In case of very limited information on the possible extent and distribution of the sources, we recommend experimenting with several different values of h and looking for features appearing stably across the different iCSD reconstructions. To test our method in practice we applied it to a set of recordings from rat subiculum (Figs. 2, 7). Compared with the interpolated potentials, iCSD plots provide more precise localization of neural activity in subiculum following alvear stimulation. The observed activity pattern points to the heterogeneous structure of connections in the subiculum and supports the columnar anatomical model proposed by Harris et al. (2001). Graphical User Interface Tool for CSD Analysis As supplementary material we provide a MATLAB toolbox containing the scripts used in our analysis together with a simple GUI (Fig. 9), allowing easy calculation of different variants of iCSD from voltage data recorded on 2D regular grids. 
The viewer is bundled with three of the datasets used in the paper: two model datasets (apical and basal excitatory input to a population of pyramidal cells), and one experimental (data recorded in the rat subiculum). It is also possible to import and analyze user data provided as a workspace variable or an array stored in a MAT file. The viewer makes it straightforward to switch between traditional CSD and different variants of inverse CSD, to adjust boundary conditions, and to test different values of the h parameter in case of iCSD. The resulting figures can be exported as PNG files. Graphical tool for CSD analysis. The main window shows details of the dataset, the CSD method, two panels with voltage and CSD data, and controls for browsing through the dataset The software and the data can be used freely in research provided this paper is cited in any material using the results of their application. The toolbox is provided under GNU General Public License version 3 or later and is available from the INCF software repository, http://software.incf.org/, where a toolbox for iCSD analysis of 1D linear recordings, CSDplotter (Pettersen et al. 2006), can also be found. Inverse Current Source Density developed in one (Pettersen et al. 2006), two (here), and three dimensions (Łęski et al. 2007) is a flexible framework that allows us to incorporate different assumptions about the distribution of sources or the geometry of probed structures. So far, however, it has been developed only on regular grids with the assumption of uniform and homogeneous tissue conductivity. Conductivity in any structure is neither completely homogeneous nor uniform (see for example recent measurements reported in Logothetis et al. 2007 or a study of conductivity in the rat barrel cortex, Goto et al. 2010), therefore, one way of developing the method would be to consider inhomogeneous or anisotropic media. Equation 1a then becomes \( \nabla (\sigma \nabla \,\Phi ) = - C \), where σ is a tensor which can take different values at different spatial positions. The solution to this equation would then be used to construct the forward model matrix F. The generic situation can only be treated by solving the equation numerically, but in several important cases the solution can be given explicitly. For instance, if σ is constant in space then one can always rotate the Cartesian coordinate system so that σ becomes a diagonal matrix: σ = diag(σ x ,σ y ,σ z ). Then the solution Φ for the unit source located at the origin is $$ \Phi (x,y,z) = \frac{1}{{4\pi }}\frac{1}{{\sqrt {{{x^2}{\sigma_y}{\sigma_z} + {y^2}{\sigma_x}{\sigma_z} + {z^2}{\sigma_x}{\sigma_y}}} }}. $$ Also some important cases of inhomogeneity can be solved analytically. One such case is conductivity that is constant within planar layers but has different value in each layer (Gold et al. 2006; López-Aguado et al. 2001), then the method of images can be used to express the solution Φ as a series. Another way of generalizing the method would be to consider arbitrary distributions of recording points. This seems to be of immediate use, as it is becoming increasingly easy to insert large numbers of independent electrodes probing large volumes of the brain, but not necessarily on regular grids. This will be an object of our further study. Until now we have assumed that the grid spanning the CSD is the same as the electrode grid. 
This need not necessarily be so, and one could consider electrode arrays with differing geometries, e.g., electrode contacts not forming an exact straight line (Barthó et al. 2004). An important topic we have only briefly covered in the present work is the stability of the results with respect to noise (see the Appendix). Current-source density analysis requires us to calculate the spatial derivative of the potential, explicitly in 'classical' CSD and implicitly in iCSD. This leads to amplification of any noise present in the signals; see the detailed analysis by Freeman (1980) in the context of ECoG. A thorough study of the influence of different sources of noise (measurement noise, uncertainty in the electrodes' positions, etc.) is beyond the scope of this paper. However, the analysis of the condition number of the matrix F for different iCSD variants (step, linear, spline) suggests that there is a tradeoff between how well a given method resolves the shape of the sources and how stable it is with respect to noise. It would be useful to establish the general range of applicability of the proposed methods. Unfortunately, to make general statements regarding the optimality or stability of the iCSD method one would have to know the class of possible distributions of fields in the brain, which is not available. This is why we tested the quality of different methods used for CSD estimation on both artificial test data and simulated data from synaptically activated populations of biophysically detailed neuron models with reconstructed morphologies. As illustrated by Fig. 5b and d, the ground truth ("true CSD") is quite "noisy" in the sense that it varies from pixel to pixel. Nevertheless, for all situations encountered we found both the spline-iCSD and linear-iCSD methods to be stable in the sense that they gave results in good agreement with the known ground truth (see, e.g., Fig. 5c and e). In addition, we found no signs of instability when applying our method to the experimental test data. We only encountered instability in predictions in test cases where the current sources were positioned far away from the recording grid, which resembles the situation in EEG or ECoG source estimation (not shown). Such situations might require the use of more involved approaches, but they are not our goal: we are interested in estimating sources close to the recording grid, within the same plane. It would be interesting to see how more involved approaches from EEG or ECoG source localization would perform in this context (He and Lian 2005; Nunez and Srinivasan 2006). Also, it would be interesting to see what one could achieve in the context of CSD estimation using Bayesian techniques, which allow prior knowledge (e.g., anatomical) to be incorporated in a natural way (Baillet and Garnero 1997; Schmidt et al. 1999). Information Sharing Statement The toolbox for CSD analysis in 2D is publicly available from the INCF software repository, http://software.incf.org/ (project name iCSD 2D). The toolbox consists of MATLAB scripts, a GUI viewer, and three example data sets. The software and the data can be used freely in research provided this paper is cited in any material using the results of their application. The supplementary video is available at http://www.youtube.com/icsd2d and can be used freely in non-commercial education. For other usage, contact the authors. 
This work was partly financed from the Polish Ministry of Science and Higher Education research grants N401 146 31/3239, PBZ/MNiSW/07/2006/11, 46/N-COST/2007/0, and the eScience program of the Research Council of Norway. SŁ was supported by the Foundation for Polish Science and an INCF travel grant. JG is supported by the BBSRC (BB/D0111159/1) and the Royal Society (RSRG 24519). We used a MATLAB script rotate_image.m by Ohad Gal for the experimental movie in the supplementary material. We are grateful to Prof Menno Witter for a discussion about the interpretation of the subicular data. This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. Here we construct the F matrix for the linear and spline interpolation of CSD between the nodes of a rectangular two-dimensional grid. We assume that the CSD has a product structure, i.e. $$ C(x,y,z) = c(x,y)H(z), $$ where H(z) is a step function (a generalization to the Gaussian profile is straightforward and we omit it here). Consider a grid of points (i,j), where 1 ≤ i ≤ n x , 1 ≤ j ≤ n y . The spacing of the grid is Δx and Δy in the x and y directions, respectively. Let us number the points with a multi-index α = (i,j). The coordinates of node α are (x α , y α ). Let V be the set of 4 vectors (v 1 , v 2 ), each of v i being either 0 or 1. The grid has N = n x n y nodes and there are m = (n x −1)(n y −1) boxes spanned by these nodes. We index the boxes so that the corners of box number α are α + v for v∈V. Let B denote the set of all the allowed indices α numbering the boxes and G stand for all the grid points. Let C denote the vector of CSD values at the nodes, that is, C α = c(x α , y α ) for α∈G. Linear Interpolation Here we assume linear interpolation of the CSD between the box corners. Take a point (x,y) in box number α and let \( \delta x = \frac{x - x_{\alpha}}{\Delta x} \), \( \delta y = \frac{y - y_{\alpha}}{\Delta y} \). The value of the CSD at this point obtained with the linear interpolation is given by: $$ c(x,y) = \sum\limits_{v \in V} \left[ 1 - v_1 + (2v_1 - 1)\delta x \right] \left[ 1 - v_2 + (2v_2 - 1)\delta y \right] {\bf C}_{\alpha + v}. $$ Therefore, the distribution inside the box is a linear combination of 4 functions f l , l = 1…4: f 1 (δx, δy) = 1, f 2 (δx, δy) = δx, f 3 (δx, δy) = δy, f 4 (δx, δy) = δxδy, with coefficients depending linearly on the values of C at the nodes of the box: $$ c(x,y) = \sum\limits_{\beta \in G} \sum\limits_{l = 1}^4 E_{\alpha \beta}^l f_l {\bf C}_{\beta}. $$ The coefficients \( E_{\alpha \beta}^l \) are non-zero only for β−α∈V and follow from the above formula, e.g. \( E_{\alpha \beta}^1 = 1 \) for β−α = (0,0), otherwise \( E_{\alpha \beta}^1 = 0 \), etc.; cf. Łęski et al. 2007. 
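As a quick check of the bilinear formula above, the short sketch below evaluates c(x, y) inside a single box from its four corner values (an illustrative helper, not part of the toolbox).

```python
import numpy as np

def bilinear_csd(x, y, x_a, y_a, dx, dy, C_corners):
    """Bilinear interpolation of the CSD inside the box whose lower-left node is
    (x_a, y_a); C_corners[v1][v2] holds the node value C_{alpha+(v1,v2)}, v1, v2 in {0, 1}."""
    delx, dely = (x - x_a) / dx, (y - y_a) / dy
    c = 0.0
    for v1 in (0, 1):
        for v2 in (0, 1):
            # weight [1 - v1 + (2 v1 - 1) dx][1 - v2 + (2 v2 - 1) dy] from the formula above
            w = (1 - v1 + (2 * v1 - 1) * delx) * (1 - v2 + (2 * v2 - 1) * dely)
            c += w * C_corners[v1][v2]
    return c

# Example: value at the box center equals the mean of the four corner values.
print(bilinear_csd(0.1, 0.1, 0.0, 0.0, 0.2, 0.2, np.array([[1.0, 2.0], [3.0, 4.0]])))
```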
The potential generated by such a distribution of current-source density at some point \( (\tilde{x},\tilde{y}) \) is $$ \Phi (\tilde{x},\tilde{y}) = \sum\limits_{{\alpha \in B}} {\sum\limits_{{\beta \in G}} {\sum\limits_{{l = 1}}^4 {F_{\alpha }^l(\tilde{x},\tilde{y})E_{{\alpha \beta }}^l{{{\bf C}}_{\beta }}} } }, $$ $$ F_{\alpha }^l(\tilde{x},\tilde{y}) = \frac{1}{{4\pi \sigma }}\int\limits_{{ - h}}^h {dz} \int\limits_0^{{\Delta y}} {dy} \int\limits_0^{{\Delta x}} {dx\frac{{{f_l}\left( {\frac{x}{{\Delta x}},\frac{y}{{\Delta y}}} \right)}}{{\sqrt {{{{(\tilde{x} - {x_{\alpha }} - x)}^2} + {{(\tilde{y} - {y_{\alpha }} - y)}^2} + {{({z_{\alpha }} + z)}^2}}} }}} . $$ The integral over z can be explicitly calculated to give $$ F_{\alpha }^l(\tilde{x},\tilde{y}) = \frac{1}{{2\pi \sigma }}\int\limits_0^{{\Delta y}} {dy} \int\limits_0^{{\Delta x}} {dx\,{f_l}\left( {\frac{x}{{\Delta x}},\frac{y}{{\Delta y}}} \right){\hbox{ar}}\sinh } \frac{h}{L}, $$ L standing for \( \sqrt {{{{(\tilde{x} - {x_{\alpha }} - x)}^2} + {{(\tilde{y} - {y_{\alpha }} - y)}^2}}} \). If we now take as \( (\tilde{x},\tilde{y}) \) one of the grid points γ then \( {\Phi_{\gamma }} = \sum\limits_{{\beta \in G}} {{F_{{\gamma \beta }}}{{{\bf C}}_{\beta }}} \), where \( {F_{{\gamma \beta }}} = \sum\limits_{{\alpha \in B}} {\sum\limits_{{l = 1}}^4 {\sum {F_{\alpha }^l} } } ({x_{\gamma }},{y_{\gamma }})E_{{\alpha \beta }}^l. \) Thus F γβ represents the direct and indirect contributions to the total potential at point γ from the CSD associated with the grid point β. Spline interpolation The construction of the F matrix for the spline distribution is, in principle, very similar to the linear case. Now the interpolating function in each box is the two-dimensional cubic spline. This means there are 4×4 = 16 base functions. Therefore, there are 16 E l and F l matrices. It is sufficient to consider spline interpolation on a quadratic grid with unit spacing in both directions. The correct formulae for inverse CSD on a general rectangular grid are then easily obtained by changing the variables from (x,y) to (x/Δx, y/Δy). Let us first recall the construction of one-dimensional spline. Suppose we have values of a function f at points x = 1, 2, … n x . For x such that j≤x≤j + 1 define P 1(x) = j + 1-x, P 2(x) = x-j. The formula $$ f(x) = {P_{{1}}}(x)f(j) + {P_{{2}}}(x)f\left( {j + {1}} \right) $$ gives a linear interpolation between the grid points, that means interpolation with a continuous function. In case of cubic splines we need the first and second derivatives to be continuous. This is accomplished (Press et al. 1992) by replacing the right hand side of Eq. 16 with a third-degree polynomial: $$ f(x) = {P_{{1}}}(x)f(j) + {P_{{2}}}(x)f\left( {j + {1}} \right) + {P_{{3}}}(x)f\prime\prime(j) + {P_{{4}}}(x)f\prime\prime\left( {j + {1}} \right) $$ where \( {P_{{3}}}(x) = \left( {{P_{{1}}}{{(x)}^{{3}}} - {P_{{1}}}(x)} \right)/{6},{P_{{4}}}(x) = \left( {{P_{{2}}}{{(x)}^{{3}}} - {P_{{2}}}(x)} \right)/{6} \). This formula guarantees that both f and its second derivative are continuous. The condition that the first derivative is continuous allows us to obtain the values of f′′ at the nodes from f(j), j = 1, …, n x , by a linear operation which we call G: $$ f\prime\prime(i) = \sum\limits_{{j = 1}}^{{{n_x}}} {{G_{{ij}}}f(j)} . $$ The matrix G is different for "natural" and "not-a-knot" splines (see Łęski et al. 
The two-dimensional spline interpolation is obtained by simply performing two one-dimensional splines. The complication is that we do not want the values of the interpolating function at given points, but the coefficients multiplying the base functions. We found it convenient to choose base functions which are products of the polynomials P_1, P_2, P_3, P_4 of the variables x and y, that is, P_1(x)P_1(y), P_1(x)P_2(y), …, P_4(x)P_4(y). To extract the coefficients we start with the spline in the y direction: $$ f(x,y) = P_1(y)f(x,j) + P_2(y)f(x,j + 1) + P_3(y)f_{yy}(x,j) + P_4(y)f_{yy}(x,j + 1), $$ where f_yy stands for the second derivative of f with respect to y and is given by \( f_{yy}(x,j) = \sum_{i = 1}^{n_y} G_{ji}^y f(x,i) \). This reduces the problem to one-dimensional splines along the x axis. We continue with $$ f(x,j) = P_1(x)f(i,j) + P_2(x)f(i + 1,j) + P_3(x)f_{xx}(i,j) + P_4(x)f_{xx}(i + 1,j). $$ In this way we obtain the coefficients multiplying the base functions as combinations of f(i,j) (the values of f at the nodes) and the matrices G^x, G^y. From these we construct the matrices \( E_{\alpha \beta}^{pq} \), α ∈ B, β ∈ G, 1 ≤ p,q ≤ 4, where \( E_{\alpha \beta}^{pq} \) is the coefficient multiplying P_p(x)P_q(y) in box α resulting from a unit CSD at node β. The construction of the 16 \( F_{\gamma \alpha}^{pq} \) matrices (each of size N by m), where γ ∈ G and α ∈ B, is simple (note that the arguments of P_i are scaled by the grid constants because of the general rectangular geometry): $$ F_{\gamma \alpha}^{pq} = \frac{1}{4\pi \sigma}\int\limits_{-h}^{h} dz \int\limits_0^{\Delta y} dy \int\limits_0^{\Delta x} dx\, \frac{P_p\!\left( \frac{x}{\Delta x} \right) P_q\!\left( \frac{y}{\Delta y} \right)}{\sqrt{(\tilde{x} - x_{\alpha} - x)^2 + (\tilde{y} - y_{\alpha} - y)^2 + (z_{\alpha} + z)^2}}, $$ or, after the integral over z is done: $$ F_{\gamma \alpha}^{pq} = \frac{1}{2\pi \sigma}\int\limits_0^{\Delta y} dy \int\limits_0^{\Delta x} dx\, P_p\!\left( \frac{x}{\Delta x} \right) P_q\!\left( \frac{y}{\Delta y} \right) \operatorname{arsinh} \frac{h}{L}, $$ with L defined as for the linear interpolation. The full matrix F is then $$ F_{\gamma \beta} = \sum\limits_{p,q = 1}^4 \sum\limits_{\alpha \in B} F_{\gamma \alpha}^{pq} E_{\alpha \beta}^{pq}, $$ which is the spline analogue of the expression given above for the linear case.

2D iCSD Method in Large-h Limit

Each variant of the iCSD method is based on the inversion of a matrix F of size P×P, where P is the product of the number of rows and columns of the 2D electrode grid, P = M×N. The impact of the different recorded potentials on the CSD estimated at a given position, C_j, can be found from the j-th row of the inverted matrix F^−1. To visualize this impact, the elements of the row can be mapped back to their spatial positions and plotted. Figure 10 shows such mappings for two different iCSD methods and for three different positions: the linear and the spline iCSD methods for a central, an edge and a corner element of the 2D electrode grid. The numbers in these six matrices express the weights of the electrode-contact potentials in the sum providing the CSD estimate. In Fig. 10 the 2D iCSD weights in the large-h limit (h = 10 mm, Δx = Δy = 0.2 mm) are shown. The numbers are given as percentages of the estimation-site weight (impact), i.e., the weight given to the potential recorded at the grid point at which the CSD is estimated. The elements with gray background are studied in more detail in Fig. 11.
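To make the forward step concrete, a single element of F can be evaluated by numerical quadrature of the z-integrated kernel. The sketch below is a minimal Python illustration (the published implementation is in MATLAB; the conductivity value, the names and the choice of evaluation point are assumptions of ours, and the constant basis f_l = 1 is used for brevity):

```python
import numpy as np
from scipy.integrate import dblquad

sigma = 0.3          # extracellular conductivity (S/m), illustrative value
h = 10e-3            # half-thickness of the CSD slab (m), "large-h" regime
dx = dy = 0.2e-3     # grid spacing (m)

def forward_element(x_rel, y_rel, basis=lambda u, v: 1.0):
    """Potential at a point offset (x_rel, y_rel) from a box corner, generated by a
    unit-amplitude basis function on one (dx x dy) box, using the z-integrated
    kernel arsinh(h / L) / (2 pi sigma)."""
    def integrand(y, x):
        L = np.hypot(x_rel - x, y_rel - y)
        return basis(x / dx, y / dy) * np.arcsinh(h / L)
    val, _ = dblquad(integrand, 0.0, dx, lambda x: 0.0, lambda x: dy)
    return val / (2.0 * np.pi * sigma)

# Potential at a neighbouring grid node (no singularity inside the box here).
print(forward_element(1.5 * dx, 0.5 * dy))

# Given a fully assembled P x P forward matrix F (P = M*N grid points), the
# weight map of Fig. 10 for estimation point j is simply the j-th row of its
# inverse, reshaped onto the electrode grid, e.g.
#   W_j = np.linalg.inv(F)[j].reshape(M, N)
```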
Fig. 10 iCSD weight matrices. The six matrices show how the different potentials are weighted when estimating the CSD for given electrode contact positions. The electrode grid is assumed to have a planar 8×8 configuration with an inter-contact distance of 0.2 mm in both directions, and the matrices in the large-h limit are shown, i.e. the CSD is assumed to be homogeneous over large distances perpendicular to the electrode grid. The matrix elements are given in percent relative to the weight assigned to the potential at the estimation site, i.e., the grid point at which the CSD is estimated. Different iCSD methods are shown in the two rows: linear (row 1) and spline (row 2). Horizontally, different estimation points are shown: a central element (i = j = 4), an edge element (i = 4 and j = 1) and a corner element (i = j = 1). The depth parameter h has been set to 10 mm. Elements with gray background are studied in more detail in Fig. 11.

Fig. 11 Illustration of the h-dependence of the relative weight of elements in iCSD weight matrices of the type shown in Fig. 10. The curves are obtained by calculating the relative weight of the off-center elements compared to the weight at the estimation site (the site given the value 100 in Fig. 10) for each value of h. These sets of relative weights are then normalized to give the value 1 in the large-h limit. The electrode grid is assumed to consist of 8 × 8 electrode contacts with an inter-contact distance of 0.2 mm in both directions. Left column: deviations for a central element (i = j = 4). Middle column: deviations for an edge element (i = 4 and j = 1). Right column: deviations for a corner element (i = j = 1). The gray shade of the lines reflects the elements' impact in the large-h limit, see the scale bar to the right and the element impacts in Fig. 10. Note that due to symmetry some of the lines overlie one another. The horizontal dashed lines indicate ±5% deviation.

To compare the iCSD with the standard 2D double derivative formula, consider a plot of the respective weights, similar to the matrices in the left column of Fig. 10. If the central element is normalized to 100%, then only the nearest vertical and horizontal neighbors are non-zero (and equal to −25%). The standard 2D method would, therefore, be more compact than the iCSD methods, in the sense that most weight is placed on the central element and its nearest neighbors. Similarly, one can see that the linear iCSD method is more compact than the spline iCSD method (see further Appendix).
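The weight pattern of this standard double-derivative estimate can be written out explicitly. The sketch below is an illustrative Python snippet (the function name is ours, and simply dropping neighbours outside the grid is a simplifying assumption not discussed in the text); it builds the 100 / −25 pattern quoted above for an arbitrary grid point:

```python
import numpy as np

def double_derivative_weights(M, N, i, j):
    """Weight map of the standard 2D double-derivative (five-point Laplacian)
    CSD estimate at grid point (i, j), normalized so that the central element
    is 100 and the four nearest neighbours are -25.
    Neighbours falling outside the M x N grid are simply dropped here."""
    W = np.zeros((M, N))
    W[i, j] = 100.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ii, jj = i + di, j + dj
        if 0 <= ii < M and 0 <= jj < N:
            W[ii, jj] = -25.0
    return W

print(double_derivative_weights(8, 8, 3, 3))   # central element of an 8 x 8 grid
```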
Figure 11 shows the weights relative to their large-h value. Here, the matrix element at the estimation point is normalized to the same value as its large-h value. The darker lines in the plots correspond to the most important weight elements in the large-h limit, and for most of the plots, especially those concerning the central element, all weights deviate by less than 5% for h larger than about 0.5 mm. All plots show convergence as h becomes large, and for the majority of the plots the weights with the highest impact in the large-h limit converge faster than the less important weight elements. In Fig. 12, the weight for the estimation point is shown for the above-mentioned methods and positions. The central elements (black) show the fastest convergence, the edge elements (gray) converge almost as fast, and the corner elements (light gray) converge most slowly. However, for the central, edge and corner elements alike, 95% of the maximum value is reached for h less than about 0.4 mm. For h = 1 mm, both the central and the edge elements are indistinguishable from 1, while the corner elements are larger than 98% of their large-h value.

Fig. 12 The relative weight of the estimation sites for the three points (corner, edge, center) in Fig. 10, relative to the weight at the same estimation site in the large-h limit, as a function of the depth parameter h. The grid parameters and iCSD methods are the same as in Figs. 10 and 11. The horizontal dashed line indicates 95% of the maximum amplitude, which was reached for values of h between 0.22 mm and 0.41 mm.

The results presented in Figs. 10, 11 and 12 scale with the inter-contact distance. Here, an inter-contact distance of 0.2 mm is used. With a grid shrunken by a factor of two, i.e. an inter-contact distance of 0.1 mm, the transition to the large-h limit would occur at half the above value, and only half the activity depth would be needed to allow for the use of the large-h limit version of the iCSD method.

Compactness of Methods

Consider a weight matrix similar to the matrices shown in Fig. 10. For each element (electrode contact position) one can define an impact radius, ρ_ij, as $$ \rho_{ij} = \left( \sum\limits_{k = 1}^{N} \sum\limits_{l = 1}^{N} \left| w_{kl} r_{kl} \right| \right) / \left| w_{ij} \right|, $$ where w_kl are the elements of the weight matrix and the denominator normalizes the weights to the weight at the estimation site (shown as 100% in Fig. 10). Here r_kl is the distance from the estimation point to the corresponding electrode contact, \( r_{kl} = \sqrt{(\Delta x(k - i))^2 + (\Delta y(l - j))^2} \). The impact radius can be computed for all elements in the N×N matrix, and the mean impact radius of a method can be computed by averaging over all elements ρ_ij. Figure 13 shows plots of the mean impact radius for the linear and spline iCSD methods for N×N electrode grids of different sizes N and with an assumed inter-contact distance of 0.1 mm. For up to 16×16 electrode contacts, the mean impact radius is typically between 0.2 mm and 0.3 mm, while the standard 2D double derivative formula would give an impact radius of 0.1 mm for all central elements.

Fig. 13 Quantification of the mean impact radius of the linear and spline iCSD methods in the large-h limit. The mean impact radius is defined above and is found to be in the range 0.2–0.3 mm for differently sized electrode grids with inter-contact distances of 0.1 mm.
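The impact radius translates directly into code. The following Python sketch (function names are ours) computes ρ_ij for a single weight map and, as a check, reproduces the 0.1 mm value quoted above for the double-derivative weights of a central element at 0.1 mm contact spacing:

```python
import numpy as np

def impact_radius(W, i, j, dx, dy):
    """Impact radius rho_ij of the weight matrix W at estimation point (i, j):
    sum of |w_kl * r_kl| over all contacts, divided by |w_ij|, where r_kl is the
    distance from (i, j) to contact (k, l) on a grid with spacing dx, dy."""
    k, l = np.indices(W.shape)
    r = np.sqrt((dx * (k - i)) ** 2 + (dy * (l - j)) ** 2)
    return np.sum(np.abs(W * r)) / np.abs(W[i, j])

def mean_impact_radius(weight_maps, dx, dy):
    """Average rho_ij over all estimation points, given a dict mapping
    (i, j) -> weight matrix for that estimation point."""
    return np.mean([impact_radius(W, i, j, dx, dy) for (i, j), W in weight_maps.items()])

# Check with the five-point double-derivative weights: for a central element the
# impact radius is 4 * 25 * 0.1 / 100 = 0.1 mm at 0.1 mm contact spacing.
W = np.zeros((8, 8)); W[3, 3] = 100.0
for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    W[3 + di, 3 + dj] = -25.0
print(impact_radius(W, 3, 3, dx=0.1, dy=0.1))   # 0.1
```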
Condition Number of the F Matrix with Respect to Inversion

The inverse CSD method relies on an inversion of the forward solution. An important question in this context is how sensitive this system of equations is to changes in the data (for example, errors or noise). While a full analysis of all possible sources of noise (measurement noise, displaced electrode contacts, etc.) is outside the scope of this work, one simple measure of this property is the condition number of the matrix F, defined as the ratio of the largest singular value of F to the smallest. Figure 14 presents the condition number for a 10 × 10 electrode grid, Δx = Δy = 0.2 mm, as a function of h for the different iCSD variants (spline, linear, step). Two facts are noteworthy. First, the condition number increases with increasing h. This is compatible with the fact that small h means a more local estimation of the current-source density, whereas for large h the values of the potential at distant nodes also play a role in the estimation. Second, for a given h the spline method yields the largest condition number, the step iCSD the smallest, and the linear iCSD lies in between. This is because the CSD values at the nodes influence the CSD distribution differently for the step, linear and spline interpolations: for step iCSD only the immediate neighborhood is affected, for linear iCSD this area is larger, and the spline coefficients are global, i.e., a change in the CSD at a single node slightly deforms the whole distribution. This is also compatible with the calculations of the compactness of the methods: as shown above, the spline method has a larger impact radius than the linear iCSD.

Fig. 14 Condition number of the matrix F as a function of h for step, linear and spline iCSD. The grid used here is 10 by 10 electrodes, Δx = Δy = 0.2 mm.

Parameters of Gaussian Sources

The three-dimensional Gaussian sources used for tests of the iCSD method were given by: $$ C(x,y,z) = \sum A\exp \left[ - \left( \left( x - x_0 \right)^2 + \left( y - y_0 \right)^2 \right)/\sigma_{xy} \right]\frac{\exp \left[ - \left( z - z_0 \right)^2/\sigma_{z} \right]}{\exp \left[ - z_0^2/\sigma_{z} \right]}. $$ For some of the tests we used product sources c(x,y)H(z), with $$ c(x,y) = C(x,y,0). $$ The parameters of the sources are given in Table 1.

Table 1 Parameters of the Gaussian sources used for testing (columns: source number, σxy, σz).

References

Baillet, S., & Garnero, L. (1997). A Bayesian approach to introducing anatomo-functional priors in the EEG/MEG inverse problem. IEEE Transactions on Biomedical Engineering, 44, 374–385.
Barthó, P., Hirase, H., Monconduit, L., Zugaro, M., Harris, K. D., & Buzsáki, G. (2004). Characterization of neocortical principal cells and interneurons by network interactions and extracellular features. Journal of Neurophysiology, 92, 600–608.
Beaulieu, C. (1993). Numerical data on neocortical neurons in adult rat, with special reference to the GABA population. Brain Research, 609, 284–292.
Blanche, T. J., Spacek, M. A., Hetke, J. F., & Swindale, N. V. (2005). Polytrodes: high-density silicon electrode arrays for large-scale multiunit recording. Journal of Neurophysiology, 93, 2987–3000.
Buzsáki, G. (2004). Large-scale recording of neuronal ensembles. Nature Neuroscience, 7, 446–451.
Carnevale, T., & Hines, M. (2006). The NEURON Book. Cambridge: Cambridge University Press.
Commins, S., Gigg, J., Anderson, M., & O'Mara, S. M. (1998). The projection from hippocampal area CA1 to the subiculum sustains long-term potentiation. NeuroReport, 9, 847–950.
Csicsvari, J., Henze, D. A., Jamieson, B., Harris, K. D., Sirota, A., Barthó, P., et al. (2003). Massively parallel recording of unit and local field potentials with silicon-based electrodes. Journal of Neurophysiology, 90, 1314–1323.
de Solages, C., Szapiro, G., Brunel, N., Hakim, V., Isope, P., Buisseret, P., et al. (2008). High-frequency organization and synchrony of activity in the Purkinje cell layer of the cerebellum. Neuron, 58, 775–788.
Du, J., Riedel-Kruse, I. H., Nawroth, J. C., Roukes, M. L., Laurent, G., & Masmanidis, S. C. (2008). High-resolution three-dimensional extracellular recording of neuronal activity with microfabricated electrode arrays. Journal of Neurophysiology, 101, 1671–1678.
Einevoll, G. T., Pettersen, K. H., Devor, A., Ulbert, I., Halgren, E., & Dale, A. M. (2007). Laminar population analysis: estimating firing rates and evoked synaptic activity from multielectrode recordings in rat barrel cortex. Journal of Neurophysiology, 97(3), 2174–2190.
Feldmeyer, D., & Sakmann, B. (2000). Synaptic efficacy and reliability of excitatory connections between the principal neurones of the input (layer 4) and output layer (layer 5) of the neocortex. The Journal of Physiology, 525, 31–39.
Freeman, W. J. (1980). Use of spatial deconvolution to compensate for distortion of EEG by volume conduction. IEEE Transactions on Biomedical Engineering, 27, 421–429.
Freeman, J. A., & Nicholson, C. (1975). Experimental optimization of current source-density technique for anuran cerebellum. Journal of Neurophysiology, 38, 369–382.
Gigg, J., Finch, D. M., & O'Mara, S. M. (2000). Responses of rat subicular neurons to convergent stimulation of lateral entorhinal cortex and CA1 in vivo. Brain Research, 884, 35–50.
Gold, C., Henze, D. A., Koch, C., & Buzsáki, G. (2006). On the origin of the extracellular action potential waveform: a modeling study. Journal of Neurophysiology, 95, 3113–3128.
Goto, T., Hatanaka, R., Ogawa, T., Sumiyoshi, A., Riera, J. J., & Kawashima, R. (2010). An evaluation of the conductivity profile in the somatosensory barrel cortex of Wistar rats. Journal of Neurophysiology. doi: 10.1152/jn.00122.2010.
Gulrajani, R. M. (1998). Bioelectricity and biomagnetism. New York: Wiley.
Haberly, L. B., & Shepherd, G. M. (1973). Current-density analysis of summed evoked potentials in opossum prepyriform cortex. Journal of Neurophysiology, 36, 789–802.
Hamalainen, M., Hari, R., Ilmoniemi, R. J., Knuutila, J., & Lounasmaa, O. V. (1993). Magnetoencephalography: theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics, 65, 413–497.
Harris, E., Witter, M. P., Weinstein, G., & Stewart, M. (2001). Intrinsic connectivity of the rat subiculum: I. Dendritic morphology and patterns of axonal arborization by pyramidal neurons. The Journal of Comparative Neurology, 435, 490–505.
He, B., & Lian, J. (2005). Electrophysiological neuroimaging. In B. He (Ed.), Neural Engineering. New York: Kluwer.
Hines, M. L., Morse, T., Migliore, M., Carnevale, N. T., & Shepherd, G. M. (2004). ModelDB: a database to support computational neuroscience. Journal of Computational Neuroscience, 17, 7–11.
Holt, G. R., & Koch, C. (1999). Electrical interactions via the extracellular potential near cell bodies. Journal of Computational Neuroscience, 6, 169–184.
Katzner, S., Nauhaus, I., Benucci, A., Bonin, V., Ringach, D. L., & Carandini, M. (2009). Local origin of field potentials in visual cortex. Neuron, 61, 35–41.
Lakatos, P., Shah, A. S., Knuth, K. H., Ulbert, I., Karmos, G., & Schroeder, C. E. (2005). An oscillatory hierarchy controlling neuronal excitability and stimulus processing in the auditory cortex. Journal of Neurophysiology, 94, 1904–1911.
Łęski, S., Wójcik, D. K., Tereszczuk, J., Świejkowski, D. A., Kublik, E., & Wróbel, A. (2007). Inverse current-source density method in 3D: reconstruction fidelity, boundary effects, and influence of distant sources. Neuroinformatics, 5, 207–222.
Lin, B., Colgin, L. L., Brücher, F. A., Arai, A. C., & Lynch, G. (2002). Interactions between recording technique and AMPA receptor modulators. Brain Research, 955, 164–173.
Lindén, H., Pettersen, K. H., & Einevoll, G. T. (2010). Intrinsic dendritic filtering gives low-pass power spectra of local field potentials. Journal of Computational Neuroscience, 29, 423–444.
Lipton, M. L., Fu, K.-M. G., Branch, C. A., & Schroeder, C. E. (2006). Ipsilateral hand input to area 3b revealed by converging hemodynamic and electrophysiological analyses in macaque monkeys. The Journal of Neuroscience, 26, 180–185.
Logothetis, N. K., Kayser, C., & Oeltermann, A. (2007). In vivo measurement of cortical impedance spectrum in monkeys: implications for signal propagation. Neuron, 55, 809–823.
López-Aguado, L., Ibarz, J. M., & Herreras, O. (2001). Activity-dependent changes of tissue resistivity in the CA1 region in vivo are layer-specific: modulation of evoked potentials. Neuroscience, 108, 249–262.
Lorente de No, R. (1947). A study of nerve physiology. Studies from the Rockefeller Institute for Medical Research, 131, 1–496.
Mainen, Z. F., & Sejnowski, T. J. (1996). Influence of dendritic structure on firing pattern in model neocortical neurons. Nature, 382, 363–366.
Menendez de la Prida, L. (2003). Control of bursting by local inhibition in the rat subiculum in vitro. The Journal of Physiology, 549, 219–230.
Migliore, M., Morse, T. M., Davison, A. P., Marenco, L., Shepherd, G. M., & Hines, M. L. (2003). ModelDB: making models publicly accessible to support computational neuroscience. Neuroinformatics, 1, 135–139.
Mitzdorf, U. (1985). Current source-density method and application in cat cerebral cortex: investigation of evoked potentials and EEG phenomena. Physiological Reviews, 65, 37–100.
Nicholson, C. (1973). Theoretical analysis of field potentials in anisotropic ensembles of neuronal elements. IEEE Transactions on Biomedical Engineering, 20, 278–288.
Nicholson, C., & Freeman, J. A. (1975). Theory of current source-density analysis and determination of conductivity tensor for anuran cerebellum. Journal of Neurophysiology, 38, 356–368.
Nicholson, C., & Llinás, R. (1975). Real time current source-density analysis using multi-electrode array in cat cerebellum. Brain Research, 100, 418–424.
Nicolelis, M. (2001). Actions from thoughts. Nature, 409, 403–407.
Novak, J. L., & Wheeler, B. C. (1989). Two-dimensional current source density analysis of propagation delays for components of epileptiform bursts in rat hippocampal slices. Brain Research, 497, 223–230.
Nunez, P. L., & Srinivasan, R. (2006). Electric fields of the brain. Oxford: Oxford University Press.
Paxinos, G., & Watson, C. (1998). The rat brain in stereotaxic coordinates. Academic Press.
Pettersen, K. H., & Einevoll, G. T. (2008). Amplitude variability and extracellular low-pass filtering of neuronal spikes. Biophysical Journal, 94, 784–802.
Pettersen, K. H., Devor, A., Ulbert, I., Dale, A. M., & Einevoll, G. T. (2006). Current-source density estimation based on inversion of electrostatic forward solution: effects of finite extent of neuronal activity and conductivity discontinuities. Journal of Neuroscience Methods, 154, 116–133.
Pettersen, K. H., Hagen, E., & Einevoll, G. T. (2008). Estimation of population firing rates and current source densities from laminar electrode recordings. Journal of Computational Neuroscience, 24, 291–313.
Phongphanphanee, P., Kaneda, K., & Isa, T. (2008). Spatiotemporal profiles of field potentials in mouse superior colliculus analyzed by multichannel recording. The Journal of Neuroscience, 28, 9309–9318.
Pitts, W. H. (1952). Investigations on synaptic transmission. In H. von Foerster (Ed.), Cybernetics, Transactions of the 9th Conference, Josiah Macy Foundation (pp. 159–166). New York.
Plonsey, R. (1969). Bioelectric phenomena. McGraw-Hill Inc.
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (1992). Numerical recipes in C: the art of scientific computing. Cambridge: Cambridge University Press.
Rajkai, C., Lakatos, P., Chen, C.-M., Pincze, Z., Karmos, G., & Schroeder, C. E. (2008). Transient cortical excitation at the onset of visual fixation. Cerebral Cortex, 18, 200–209.
Schmidt, D. M., George, J. S., & Wood, C. C. (1999). Bayesian inference applied to the electromagnetic inverse problem. Human Brain Mapping, 7, 195–212.
Schroeder, C. E., Tenke, C. E., & Givre, S. J. (1992). Subcortical contributions to the surface-recorded flash-VEP in the awake macaque. Electroencephalography and Clinical Neurophysiology, 84, 219–231.
Shimono, K., Brucher, F., Granger, R., Lynch, G., & Taketani, M. (2000). Origins and distribution of cholinergically induced beta rhythms in hippocampal slices. The Journal of Neuroscience, 20, 8462–8473.
Shimono, K., Kubota, D., Brucher, F., Taketani, M., & Lynch, G. (2002). Asymmetrical distribution of the Schaffer projections within the apical dendrites of hippocampal field CA1. Brain Research, 950, 279–287.
Townsend, G., Peloquin, P., Kloosterman, F., Hetke, J. F., & Leung, L. S. (2002). Recording and marking with silicon multichannel electrodes. Brain Research Protocols, 9, 122–129.
Vaknin, G., DiScenna, P. G., & Teyler, T. J. (1988). A method for calculating current source density (CSD) analysis without resorting to recording sites outside the sampling volume. Journal of Neuroscience Methods, 24, 131–135.
Wójcik, D. K., & Łęski, S. (2009). Current source density reconstruction from incomplete data. Neural Computation, 22, 48–60.
Xing, D., Yeh, C.-I., & Shapley, R. M. (2009). Spatial spread of the local field potential and its laminar variation in visual cortex. Journal of Neuroscience, 29, 11540–11549.
Ylinen, A., Bragin, A., Nádasdy, Z., Jandó, G., Szabó, I., Sik, A., et al. (1995). Sharp wave-associated high-frequency oscillation (200 Hz) in the intact hippocampus: network and intracellular mechanisms. Journal of Neuroscience, 15, 30–46.
Zhang, Y., van Drongelen, W., Kohrman, M., & He, B. (2008). Three-dimensional brain current source reconstruction from intra-cranial ECoG recordings. NeuroImage, 42, 683–695.

© The Author(s) 2011. Open Access: This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Author affiliations: 1. Department of Neurophysiology, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland. 2. Department of Mathematical Sciences and Technology and Center for Integrative Genetics, Norwegian University of Life Sciences, Ås, Norway. 3. Faculty of Life Sciences, University of Manchester, Manchester, UK.

Łęski, S., Pettersen, K. H., Tunstall, B., et al. Neuroinformatics (2011) 9: 401. https://doi.org/10.1007/s12021-011-9111-4
Quantitative patterns of motor cortex proteinopathy across ALS genotypes Matthew Nolan1, Connor Scott1, Menuka Pallebage Gamarallage1, Daniel Lunn2, Kilda Carpenter1, Elizabeth McDonough3, Dan Meyer3, Sireesha Kaanumalle3, Alberto Santamaria-Pang3, Martin R. Turner1, Kevin Talbot1 & Olaf Ansorge ORCID: orcid.org/0000-0003-1825-54341 Acta Neuropathologica Communications volume 8, Article number: 98 (2020) Cite this article Degeneration of the primary motor cortex is a defining feature of amyotrophic lateral sclerosis (ALS), which is associated with the accumulation of microscopic protein aggregates in neurons and glia. However, little is known about the quantitative burden and pattern of motor cortex proteinopathies across ALS genotypes. We combined quantitative digital image analysis with multi-level generalized linear modelling in an independent cohort of 82 ALS cases to explore the relationship between genotype, total proteinopathy load and cellular vulnerability to aggregate formation. Primary motor cortex phosphorylated (p)TDP-43 burden and microglial activation were more severe in sporadic ALS-TDP disease than C9-ALS. Oligodendroglial pTDP-43 pathology was a defining feature of ALS-TDP in sporadic ALS, C9-ALS and ALS with OPTN, HNRNPA1 or TARDBP mutations. ALS-FUS and ALS-SOD1 showed less cortical proteinopathy in relation to spinal cord pathology than ALS-TDP, where pathology was more evenly spread across the motor cortex-spinal cord axis. Neuronal pTDP-43 aggregates were rare in GAD67+ and Parvalbumin+ inhibitory interneurons, consistent with predominant accumulation in excitatory neurons. Finally, we show that cortical microglia, but not astrocytes, contain pTDP-43. Our findings suggest divergent quantitative, genotype-specific vulnerability of the ALS primary motor cortex to proteinopathies, which may have implications for our understanding of disease pathogenesis and the development of genotype-specific therapies. The term 'selective vulnerability' describes the differential susceptibility of cells or anatomically defined systems to disease pathomechanisms. In the context of neurodegenerative disease, this can be further delineated as vulnerability to cellular proteinopathy or vulnerability to degeneration itself; a relationship which cannot be assumed to be linear, as microscopically visible aggregate formation may be an indicator of a successful cellular response to proteotoxicity, dependent on cell-specific concentration-dependent thresholds for protein precipitation [3, 16, 28, 73]. In amyotrophic lateral sclerosis (ALS), this selectivity is particularly apparent as functionally distinct motor neuron subtypes in close anatomical proximity are variably affected [13, 56], however whether this reflects cell-intrinsic vulnerabilities or connectivity, or both, remains unclear. Around 10% of ALS cases are caused by autosomal dominant mutations of varying penetrance to one or more known genes including C9ORF72 (C9-ALS) [17, 64], SOD1 (ALS-SOD1) [68], FUS (ALS-FUS) [43] and TARDBP [81], with some mutations appearing to predispose patients towards an upper (UMN) or lower (LMN) motor neuron predominant phenotype [29, 60, 65, 79]. The primary neuropathological observation in ~ 95% ALS cases is the cytoplasmic mislocalization and aggregation of hyper-phosphorylated TDP-43 (pTDP-43) within neurons and glia (ALS-TDP) [54]. 
While ALS patients commonly present with a combination of LMN and pyramidal signs, the nature of disease initiation and progression between the spinal cord and cortex remains unclear. In the 'dying forward' hypothesis, motor cortex dysfunction associated with anterograde glutamate-mediated excitotoxicity precedes LMN dysfunction [19, 24]. However, attempts to identify a histological correlate for this hypothesis using human tissue have produced conflicting results [30, 48, 55], compounded by neuropathological assessment often being confined to subjective readouts of disease burden which are a source of bias and potentially increase type I error. Digital image analysis algorithms have previously been used to accurately quantify TDP-43 pathology more objectively [35, 86], and the combination of quantitative disease indicators and genetic architecture can be used to produce neuropathological endophenotypes independent of relatively crude clinical readouts, which may not be up-to-date during the terminal phase of the illness [2, 18, 53]. Surprisingly, there have been no systematic studies attempting to define and quantify the total proteinopathy burden and its cellular pattern in the primary motor cortex across ALS genotypes since the discovery of the main ALS driver genes and their protein products. We hypothesized that the ratio of motor cortex to spinal cord proteinopathy burden is not uniform across genotypes and that not all cell types are equally affected by protein aggregates. For example, we therefore sought to clarify if inhibitory interneurons, excitatory pyramidal neurons, or oligo−/astroglia preferentially accumulate protein aggregates in the motor cortex. Here, we use quantitative digital image analysis in conjunction with multilevel statistical modelling to show that primary motor cortex phosphorylated (p)TDP-43 burden and microglial activation was more severe in sporadic ALS-TDP disease than C9-ALS, that oligodendroglial pTDP-43 pathology was a defining feature of all genetic subgroups of ALS-TDP, that ALS-FUS and ALS-SOD1 show less cortical proteinopathy in relation to spinal cord pathology than ALS-TDP, and that neuronal pTDP-43 aggregates are rare in GAD67+ inhibitory interneurons consistent with predominant accumulation in excitatory neurons. Our data from an independent ALS cohort thus contribute to our understanding of motor cortex neuropathology across diverse ALS genotypes. Cases were included if the primary clinical presentation was ALS as judged by an experienced neurologist with a special interest in ALS. Presentation with frontotemporal dementia (FTD) was an exclusion criterion. A further inclusion criterion was availability of definitive primary motor cortex (defined by the presence of Betz cells) and lumbar spinal cord blocks. Post-mortem human brain tissue in the form of 10 μm sections was obtained from the Oxford Brain Bank, the MRC London Neurodegenerative Diseases Brain Bank and the Sheffield Brain Tissue Bank. Consent and ethical approval for the use of tissue was provided by the generic REC approval of each research tissue bank (Oxford - 15/SC/0639, MRC London - 08/MRE09/38, Sheffield - 08/MRE00/13). All cases were diagnosed post-mortem by an experienced neuropathologist and confirmed genetically via Sanger or whole-exome sequencing. For the purposes of this study, we therefore use the term 'sporadic' to refer to cases with no high-penetrance mutations in known ALS genes and an absence of characteristic genetically-linked pathology. 
Whole-exome sequencing failed to find a causative mutation in one patient (case 8), but was neuropathologically confirmed as ALS-FUS through the presence of characteristic FUS pathology. Case 6 also exhibited a Y374X TARDBP variant which is predicted to be damaging, but displayed no TDP-43 pathology [40] and overall the case more closely resembles the juvenile-onset ALS associated with ALS-FUS. Cases caused by CHMP2B, HNRNPA1, OPTN and TARDBP mutations are present in < 1% ALS patients, and we were therefore unable to obtain a significant number of these genotypes, however they were included because of their potential for interesting comparisons and clarification of their pathotype. Demographic and clinical details (where available) of all cases are included in the supplementary material and summarised in Table 1. Table 1 Summary of all cases used in this study. See supplementary data for subgroup analyses Tissue sampling 10 μm sections containing underlying white matter were cut from archival formalin-fixed paraffin-embedded (FFPE) blocks from the primary motor cortex, corresponding to the 'hand knob' region of the homunculus where available. 5 μm sections from the same region were used in multiplexed-immunofluorescence experiments. 10 μm sections of FFPE lumbar spinal cord were also cut from each case. DAB-immunohistochemistry De-identified sections from sporadic (n = 19), C9ORF72 (n = 16), SOD1 (n = 11), FUS (n = 9), CHMP2B (n = 2), TARDBP (n = 1), HNRNPA1 (n = 1), OPTN (n = 1) and control (n = 11) cases ('s-IHC cohort') were immunohistochemically stained according to standard protocols. Primary antibody clones (Table 2) were chosen based on specificity demonstrated in protein expression databases (e.g Human Protein Atlas) as well as validation in previous studies. Targets were visualised using HRP-conjugated secondary antibodies (Dako Envision+ kit, Agilent, USA), which were incubated for 1 h at room temperature before detection using DAB substrate for 10 min. Extensive quality-control experiments were conducted with each antibody prior to batch staining. Specifically, we tested fixation-dependency of the reaction products and qualitatively screened for non-specific staining. The pTDP-43 antibody performed equally well in short- and long-fixed material, with no background staining. In contrast, FUS antibody showed only weak signal in long fixation cases and SOD1 antibody (SEDI) showed very strong non-specific astroglial staining in control tissue and missed some protein aggregates in SOD1 cortex that were clearly visible on HE sections (e.g. hyaline conglomerate inclusions). These antibodies therefore served to support the diagnosis of ALS-FUS and ALS-SOD1 cases respectively, but were not suitable for quantitative digital analysis. We did not have access to anti-dipeptide antibodies that performed robustly across the cohort. However, p62 proved an excellent generic stain for ALS protein aggregates (including dipeptides) and was used as a read-out for pan-aggregation load in the selected ROIs [1, 23, 42]. None of the primary motor cortices contained tau, alpha-synuclein or amyloid-beta aggregates. For chromogenic double IHC, pTDP-43 and either GAD67, Olig2 or NeuN primary antibodies (Table 2) were applied to a subsection of the cohort (control n = 3, ALS n = 7; Supplementary material). 
These were sequentially stained using pTDP-43 primary antibody with HRP-DAB detection, followed by either GAD67, Olig2 or NeuN primary antibody and goat anti-rabbit secondary antibody conjugated to alkaline phosphatase (1:1000, Dako, cat# D0487) with FastRed substrate detection. To assess the relative extent of upper versus lower motor neuron involvement, an additional three sections from the lumbar spinal cord region were cut at 10 μm from each case and stained with pTDP-43, CD68 and SMI-312. Positive and negative controls were included for each antibody. No staining was seen when the primary antibody was omitted. Counterstaining was performed using Cole's haematoxylin. Giant pyramidal cells of Betz define the human primary motor cortex. For all cases, therefore, an additional section was stained using haematoxylin and eosin (H&E) to confirm the presence of Betz cells in layer Vb. Stained sections were then dehydrated, cleared and mounted (Histomount, Thermo-Fisher Scientific, Germany) and viewed using a Zeiss Primo Star microscope (Zeiss, Oberkochen, Germany). Table 2 Primary antibody clones used in this study

Quantification of single-immunohistochemistry

Slides were digitally scanned using the Aperio ScanScope AT Turbo system (Leica Biosystems, Germany) and analysed using QuPath software [4]. For pathological quantification of the primary motor cortex, regions of interest (ROI) measuring 1000 μm × 3000 μm were defined and quantified using thresholding and algorithms optimized for each stain. Optimisation of the algorithms included adjustment for nuclear/cytoplasmic area, fragmentation of nuclei, pixel size and DAB thresholding. The algorithms themselves are included in the supplementary material. CD68, TPPP/p25 and Olig2 were quantified using a positive cell count (pcc; Fig. 1e-f). Because of the morphological heterogeneity of pTDP-43 and p62 pathology, these were quantified using a positive pixel count (ppc) tool (Fig. 1c,d). Positive pixel count approaches have been used and validated in other human post-mortem studies [35, 86]. Both methods measured the number of positive cells/pixels above a defined DAB optical density (OD) threshold, which was set by plotting mean DAB OD against count for all detections in a given ROI and selecting the lowest OD required to produce minimal false positives/negatives. After optimisation, the algorithms were applied consistently throughout. Five ROI were assessed per slide, placed so that the short edge of each ROI touched the pial surface while remaining parallel with the underlying white matter. ROI were placed evenly around a single gyrus on each slide, and measurements were averaged across the five ROI (Fig. 1a). For CD68, three additional ROI within the subcortical white matter of a single gyrus were also defined and measurements averaged. For lumbar spinal cord sections, one circular ROI measuring 1.75 mm2 and one square ROI measuring 2.25 mm2 were placed in the lateral corticospinal tract region and anterior horn region respectively, on each side of a single lumbar spinal cord slice (Fig. 1b), and measurements were averaged. CD68 and pTDP-43 were quantified in the spinal cord using the same algorithms as in the motor cortex. The area of 10 anterior horn neurons (5 from each half of the spinal column) from the lumbar region of each case was measured by drawing around the perimeter of each neuron at ×40 magnification. Measurements were then averaged and log transformed for statistical testing.
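The study used optimized QuPath detection algorithms (provided in its supplementary material). Purely as an illustration of the underlying idea, the following Python sketch counts DAB-positive pixels above an optical-density threshold in an ROI and averages across ROIs; the array names, grid sizes and threshold value are invented for the example and are not taken from the paper:

```python
import numpy as np

def positive_pixel_fraction(dab_od, threshold):
    """Fraction of pixels in an ROI whose DAB optical density exceeds a threshold.
    `dab_od` is a 2D array of per-pixel DAB OD values (e.g. after colour deconvolution)."""
    return float(np.count_nonzero(dab_od > threshold)) / dab_od.size

def mean_over_rois(rois, threshold):
    """Average the positive-pixel fraction over several ROIs from one section,
    mirroring the averaging over five cortical ROIs described above."""
    return float(np.mean([positive_pixel_fraction(roi, threshold) for roi in rois]))

# Toy example with random "OD maps" standing in for five ROIs.
rng = np.random.default_rng(0)
rois = [rng.random((100, 300)) for _ in range(5)]   # illustrative pixel grids
print(mean_over_rois(rois, threshold=0.8))          # ~0.2 for uniform noise
```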
Quantitative assessment of pathology in the ALS primary motor cortex and spinal cord using optimised digital image analysis. IHC staining was quantified in the motor cortex using DAB optical-density based algorithms on five ROI, each measuring 3mm2. ROI were spread evenly around a single gyrus where the top edge of the ROI was at the pial surface (a). CD68 and pTDP-43 pathology was also quantified in the anterior horn and corticospinal tract of the lumbar spinal cord (b, yellow and green box respectively), using guidelines bisecting the section in each direction across the central canal. An optimised positive pixel count (ppc) algorithm is able to distinguish variable pTDP-43 morphologies including glial (c, red arrows) and neuronal (d, red arrows). A positive cell count (pcc) algorithm counts the number of Olig2+ oligodendrocytes (e, black box = area of f), and defines positive cells by DAB threshold with high specificity (f, pink arrow highlights DAB-positive cell, green arrow highlights DAB-negative cell). Scale bars where not indicated (μm): c,d = 50; e = 250; f = 20

For oligodendrocyte analysis, the criteria for designation as an oligodendrocyte were the presence of a characteristic round nucleus, chromatin structure and nucleolus, a thin rim of perinuclear clear cytoplasm, and position as a grey matter satellite or subcortical white matter oligodendrocyte. Oligodendroglial TDP-43 inclusions have a characteristic 'comma' or 'wisp-like' appearance. Neuronal criteria were size (> 10 μm) and shape (triangular or oval), position in cortical layer, presence of peripheral Nissl substance, and a round nucleus with pale homogeneous nucleoplasm and a single dark nucleolus. All analyses were conducted blind to case details including genotype and other IHC results.

Multiplexed immunofluorescence

Multiplex experiments were conducted on a subset of cases, all with a short (< 48 h) fixation protocol, as most antibodies suitable for multiplex experiments did not work after long (> 4 weeks) fixation. Multiplexed immunofluorescence (MxIF) staining and imaging were performed at GE Research using previously described methodologies [25], optimized for short-fixed FFPE neurological specimens. In brief, antibodies (Table 2) were applied two or three at a time across nine sequential staining-and-dye-deactivation rounds to a subsection of our cohort (sporadic n = 18, C9ORF72 n = 3, controls n = 5; Supplementary material). Antigens were detected either by directly fluorophore-conjugated antibodies, by pre-bound fluorophore-labelled antibody binders, or using primary antibody followed by fluorophore-labelled secondary antibodies. The staining of each antibody was imaged on separate channels after each staining round, and DAPI imaging of nuclei was collected in all rounds. Images were collected on custom Olympus IX81 inverted microscopes (Olympus, Tokyo, Japan) using 20x objectives, with multi-round image acquisition driven by automated, custom software. Images from different staining rounds were co-registered using DAPI staining (US Patent no. US8369600B2, 2013) [15] and the autofluorescence signal was then removed by subtracting an earlier, unstained image from each corresponding stained image [84]. ROIs for MxIF acquisition were selected manually on the whole tissue sections to cover the region from the pial surface to the subcortical white matter of each case.
Images were reviewed as a single panel composed of ~ 15 stitched fields of view per case at the University of Oxford using a custom plugin for ImageJ [71] enabling review of the multi-channel MxIF image data. Statistical analysis and graphing were performed using a combination of GraphPad Prism (California, USA) and multi-level generalized linear modelling (GLM) [52, 83] in RStudio (Boston, USA). The procedure used is analogous to a 3-way ANOVA where each individual Case is a block, Protein is a grouping variable and Genotype is a treatment, but in order to avoid over-parameterisation, Case was incorporated as a random effect in a 2-level multilevel model; thus within-Case correlation is accounted for. A goodness-of-fit chi-squared test showed that the data followed a negative binomial distribution. If the mean Count is μ, a negative-binomial GLM fits the model: $$ \log \mu = \alpha + \sum\limits_i \beta_i\, Genotype_i + \sum\limits_j \beta_j\, Protein_j + \sum\limits_{i,j} \beta_{ij}\, Genotype_i{:}Protein_j $$ An unpaired Welch's t-test was used to test oligodendrocyte pathologies. The D'Agostino and Pearson method was used to assess normality prior to testing correlation coefficients. Scatterplots are formatted on a logarithmic base 2 scale and labelled as exponents of the base value. Levels for quoted confidence intervals are set at 95% and p-values in the figures are indicated as follows: * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.

The dataset(s) supporting the conclusions of this article are included within the article and its additional files.

Pathology of the ALS primary motor cortex is highly variable both within and across the spectrum of ALS genotypes

To assess the relative extent of pathology within the primary motor cortex across ALS genotypes, we performed immunohistochemical staining on our cohort ('s-IHC cohort'; see supplementary material) using antibodies for the common pathological markers pTDP-43, p62, and CD68 (Fig. 2; Table 3). Pathology of the ALS primary motor cortex is variable both within and across the genotypic spectrum of disease. Relatively little pTDP-43 aggregation was found in a single TARDBP mutation case (a), but this was severe in an OPTN mutation case (d; see also supp. Figure 2). Insets highlight the variance of pTDP-43 morphology between genotypes. The highest average pTDP-43 deposition was seen in sporadic cases, which was statistically higher than in C9-ALS (g). Quantification of p62 (b, e, h). Levels of p62 correlated with pTDP-43 in sporadic cases but less so in C9ORF72 disease, reflective of the existence of p62-positive dipeptide repeat protein species unique to C9-ALS (h and j). Cortical microglial activation was highly variable between genotypes (i), and in some cases there was evidence of severe nodular neuronophagia surrounding layer V neurons (f). Grey matter CD68 correlated with the extent of pTDP-43 deposition (k) in both sporadic and C9ORF72 cases, but this relationship was not recapitulated using p62 and CD68 in SOD1/FUS cases (l). Arrows highlight pathology, asterisks highlight Betz cells. r correlations = Pearson (j) or Spearman (k, l), results as on figure. Bars in (i) represent means and SEM. Best-fit lines are manually added for illustrative purposes. All scale bars = 50 μm. Table 3 Primary motor cortex and spinal cord single-IHC results

Morphology and severity of pTDP-43 were highly variable, even within cases of the same genotype. No pTDP-43 staining was detected in any of the SOD1, FUS or control cases.
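The negative-binomial GLM referred to throughout these results is the one specified in the statistical methods above. As a minimal, illustrative sketch of its fixed-effects part, the following Python/statsmodels example fits a negative-binomial GLM with a log link and a Genotype × Protein interaction on an invented toy table; the column names are hypothetical and the Case-level random effect of the published two-level model is omitted here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data standing in for per-case aggregate counts; column names are invented.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "count":    rng.poisson(20, size=60),
    "genotype": np.repeat(["sporadic", "C9orf72", "SOD1"], 20),
    "protein":  np.tile(["pTDP43", "p62"], 30),
})

# Fixed-effects negative-binomial GLM with a log link:
# log(mu) = alpha + genotype + protein + genotype:protein
model = smf.glm(
    "count ~ genotype * protein",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())
```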
A formal goodness-of-fit test of a negative-binomial GLM fit for pTDP-43 staining was satisfactory (χ2(32) = 39.577, p = 0.167). On average, pTDP-43 aggregate burden was moderately but significantly higher in sporadic cases than C9ORF72 disease (p = 0.043) (Fig. 2g). Notably, pTDP-43 pathology was relatively sparse across all layers in the single heterozygous TARDBP (M337V) mutation case analysed, but was widespread and severe in homozygous ALS-OPTN (R217X), particularly within layer II and V and at the grey/white matter border (Fig. 2a,d; Supp Figure 2). Microglia are the innate immune cells of the central nervous system and robustly express the transmembrane glycoprotein CD68 in response to inflammatory injury as part of an 'activated' phenotype, a process which is commonly implicated in the pathogenesis of ALS (for review, see: [49, 62]). CD68 staining was therefore performed to assess the relative levels of microglial activation in both the grey and white matter within the primary motor cortex across our cohort (Fig. 2c, f, i). The relative extent of CD68 staining between the subcortical white matter and grey matter correlated in all genotypes (Supp. Figure 3). However, microglial activation was higher within the subcortical white matter than cortical grey matter in nearly all genotypes analysed (Fig. 2i). Differences in grey matter CD68 staining were significant between genotypes; staining was higher in sporadic cases than C9ORF72 (p = 0.037) and SOD1 cases (p = < 0.0001). In the subcortical white matter, CD68 expression was significantly higher in sporadic ALS-TDP cases than SOD1 (p = < 0.0001) and FUS cases (p = 0.0006), and also significantly higher in C9ORF72 cases than SOD1 cases (p = 0.0001) (Fig. 2i). Finally, and in contrast to the findings of a previous semi-quantitative study [12], grey matter CD68 staining correlated closely with pTDP-43 severity in both sporadic and C9-ALS (Fig. 2k). p62 is a ubiquitin-binding scaffold protein involved in the degradation of marked cargoes via selective autophagy, and is a known component of pTDP-43-immunoreactive inclusions. p62 was highest in the OPTN case. Among other genotypes, p62 was highest in C9ORF72 cases, and lowest in HNRNPA1 (Fig. 2b, e, h). p62 was significantly higher in sporadic than FUS (p = 0.0004) and SOD1 (p = < 0.0001) cases but not C9ORF72 disease (p = 0.92). We found a strong positive correlation between the extent of p62 and pTDP-43 in sporadic ALS [Pearson r(16) = 0.61, p = 0.006](Fig. 2j), however this relationship was not observed in C9-ALS to the same degree [Pearson r(14) = 0.53, p = 0.03](Fig. 2j), reflecting the existence of p62-positive, pTDP-43-negative dipeptide repeat proteins produced as a consequence of the C9ORF72 expansion. In contrast to the pTDP-43-CD68 relationship identified in sporadic and C9-ALS, the extent of p62 aggregation did not correlate with CD68 in ALS-SOD1 and ALS-FUS (Fig. 2l). Taken together, these results show that on average sporadic ALS is associated with a more pronounced motor cortex pTDP-43 proteinopathy and neuroinflammatory pathological phenotype than C9-ALS, implicate TDP-43 as a driver of microglial activation, and broadly highlight the significant variability in the severity of motor cortical pathology across the spectrum of ALS genotypes. 
Spatial and morphological distribution of motor cortical proteinopathy TDP-43 proteinopathy is the major neuropathological finding in sporadic and C9-ALS cases, while ALS-FUS and ALS-SOD1 cases are marked by p62-positive FUS and SOD1 aggregates, respectively [14, 51, 54]. Previous studies have described distribution patterns of cortical pTDP-43 pathology in sporadic ALS [77] and FTLD [47], but this has not been examined comparatively between genotypes in ALS. Given the significant variation in the extent of identified pathology between genotypes, we next qualitatively assessed the relative laminar distribution and morphologies of pTDP-43 and p62 proteinopathy in our s-IHC cohort (Fig. 3; for cases assessed see supplementary material). Spatial and morphological distribution of proteinopathy across genotypes in the ALS primary motor cortex. Sporadic ALS exhibits NCI and oligodendroglial pathology across layers I-VI as well as the subcortical white matter (a-d). The distribution pattern of pTDP-43 pathology was not entirely dissimilar between sporadic and C9-ALS (e-f), although there was a slight preponderance towards oligodendrocyte inclusions in C9 cases (Fig. 6g and h). FUS mutation cases demonstrated infrequent, compact p62-positive NCI with occasional granular inclusions, as well as occasional oligodendroglial pathology (i-l). SOD1 mutation cases, by contrast, exhibited granular p62 staining confined to the middle cortical layers with infrequent NCI in layer V, without obvious glial pathology. Arrows highlight respective proteinopathy. Scale bar applicable to all panels = 50 μm While the extent was highly variable, the dominant pattern of cortical pTDP-43 pathology in sporadic cases consisted of granular, skein or compact neuronal aggregates, with occasional dot and short thread shaped dystrophic neurites, as well as 'comma' or 'wisp'-like perinuclear oligodendroglial inclusions. Staining was mainly present within layers II-VI (Fig. 3a-c) but all ALS-TDP cases assessed exhibited at least one obvious example of oligodendroglial pTDP-43 pathology within the cortical/subcortical white matter, even in cases with the lowest average grey matter pTDP-43 overall (Fig. 3d). There was occasional, diffuse cytoplasmic punctate pathology within the giant pyramidal cells of Betz (Fig. 3c); compacted aggregates were rare. A harmonized subtyping system exists to describe the heterogeneity of pTDP-43 pathology in FTLD-TDP, based on the relative prevalence and distribution of dystrophic neurites (DN) and neuronal cytoplasmic inclusions (NCI). Assessment of primary motor cortex is not included in this system and it has been mostly applied to patients presenting initially with FTD. It is, however, recognised that cases of FTLD-ALS-TDP are most commonly associated with FTLD-TDP Type B neuropathology, both in sporadic as well as C9-FTLD-TDP cases [47]. Although motor cortex pTDP-43 morphology does not appear to be significantly different between FTLD-TDP and FTLD-ALS-TDP cases [78], for clarity, only C9-ALS cases with no clinical history of concomitant FTD were assessed here. Staining was similar in distribution to sporadic cases, with a slight preponderance towards oligodendroglial inclusions (Fig. 3e-h, Fig. 6g and h). However, Betz cell pathology was less prominent. No intranuclear pTDP-43-ir inclusions were seen in any of the sporadic or C9-ALS cases examined. 
The cellular pattern of proteinopathy in our ALS-FUS and ALS-SOD1 cases was largely consistent with smaller case series described previously [31, 46]. ALS-FUS cases showed predominantly neuronal p62 aggregation within layers I-V in the form of sparse-moderate NCI and DN (Fig. 3i, j), although occasional oligodendroglial inclusions were seen. We found rare examples of p62-positive NCI within Betz cells in one of nine (case 8) ALS-FUS cases assessed (Fig. 3k). There were also occasional p62-positive aggregates within the subcortical white matter immediately below layer VI, but these did not reach the deeper white matter. By contrast, p62-positive aggregates in ALS-SOD1 cases were sparse, present almost exclusively within layers III-V, and consisted of variably sized granules (Fig. 3n) with occasional NCI (Fig. 3o). Notably, the majority of p62 aggregations appeared to be concentrated within the middle and deeper layers including layer V, which is the origin of corticospinal projections [50]. No obvious oligodendrocyte pathology was seen. This comparative assessment highlights the morphological variability in which genetic subtypes of ALS manifest as motor cortical disease. Sporadic ALS-TDP exhibits broad pathological homogeneity across the cortico-spinal neuraxis As ALS patients commonly present clinically with a combination of upper and lower motor neuron signs and concomitant pathology, we next performed additional immunohistochemical staining on sections from the lumbar spinal cord region of each case in our single-stain IHC cohort to assess relative pathological predominance in upper or lower motor neuron compartments in our cohort. There was no significant difference in spinal cord pTDP-43 deposition between sporadic and C9ORF72 cases (p = 0.65; Fig. 4g). The lateral corticospinal tract is composed of white matter axons that descend from the cortex and decussate at the level of the medullary pyramids, synapsing with lower motor neurons in the contralateral spinal cord. Although we found a trend for highest CD68 reactivity in the corticospinal tract of C9-ALS cases, this did not reach statistical significance compared with sporadic ALS-TDP cases (p = 0.91; Fig. 3h.) We also found no relationship between pTDP-43 and CD68 in the anterior horn (Fig. 4j). Sporadic ALS-TDP exhibits broad pathological homogeneity across the motor cortex-spinal cord neuraxis. CD68 and pTDP-43 expression was quantified within the anterior horn (a, red box) and corticospinal tract (a, green circle). The morphology of pTDP-43 pathology was varied in anterior horn lower motor neurons, including large discrete inclusions (b), thread-like skeins (c) and diffuse punctate (d). There was no significant difference in pTDP-43 severity between sporadic and C9ORF72 cases (g). CD68 expression (e, f) was variable across ALS genotypes but was not statistically significant (h, i), and levels of pTDP-43 did not correlate with CD68 expression in the anterior horn (Pearson r; j). pTDP-43 in the motor cortex correlated with pTDP-43 in the anterior horn in sporadic disease but not C9-ALS (Pearson r; k), and CD68 correlated between the corticospinal white matter tract and the subcortical white matter in both FUS and sporadic cases (Pearson r; l) but not other genotypes (p > 0.2). CD68 in the lower anterior horn neuron was plotted against grey matter CD68 in the motor cortex (m). 
This was used to create a predominance ratio for each case that could be used as a covariate in subsequent models, where a lower ratio represents a tendency towards a burden of CD68 in the anterior horn. The dotted blue line in (m) represents the approximate median ratio of our cohort. Therefore, the further each case is from this line, the more extreme the relative UMN/LMN burden of CD68 the case exhibits. Cases are coloured according to whether they are TDP-proteinopathies (red) or not (green), showing that non-TDP-proteinopathies are more likely to exhibit a LMN neuropathological predominance when assessed via levels of activated microglia (ALS-OPTN case demonstrated extreme UMN predominance, but is excluded from graph for the clarity of other cases). Representative images of this variation across the neuraxis between genotypes, with an OPTN and FUS mutation representing upper and lower motor neuron spectral extremes, respectively (n-u). Asterisk in (t) highlights a seemingly normal Betz cell. AH, anterior horn; CST, corticospinal tract; UMN, upper motor neuron; LMN, lower motor neuron. Scale bars where not indicated (μm): b,c,d = 30, e,f,r,s = 75, all others = 50 Staining from the lumbar spinal cord from each case was next compared to its equivalent stain within the primary motor cortex to assess pathological relationships across the corticospinal neuraxis. Similar to a previous semi-quantitative study [11], pTDP-43 severity in the anterior horn correlated with that in the primary motor cortex in sporadic ALS-TDP cases [Pearson r(17) = 0.62, p = 0.004], however this relationship was not present in C9-ALS [Pearson r(14) = − 0.03, p = 0.89](Fig. 4k). CD68 staining within the lateral corticospinal tract correlated with CD68 staining in the primary motor cortex in sporadic [Pearson r(17) = 0.62, p = 0.0037] and FUS cases [Pearson r(7) = 0.89, p = 0.0013], but not other genotypes (p > 0.2)(Fig. 4l). Note that, with 3 out of 4 correlations being highly significant, simultaneous inference corrections do not modify the conclusions. A previous semi-quantitative study found a correlation between microglial activation and neuronal loss in both the motor cortex and spinal cord [12]. We therefore next sought to assess the relative predominance of upper/lower motor neuron involvement in our cases by comparing the extent of pathology in the primary motor cortex to that in the anterior horn using CD68 as a surrogate marker of neurodegeneration. The extent of CD68 staining between motor cortex and spinal cord was expressed as a ratio, where smaller ratios represent a tendency towards a higher burden of CD68 in the anterior horn (Fig. 4m). Ratios were calculated as: $$ \frac{M. Cx\ grey\ matter\ CD68/{mm^2}_{\left(\log \right)}}{Anterior\ horn\ CD68/{mm^2}_{\left(\log \right)}} $$ Many ALS-FUS and ALS-SOD1 cases exhibited cortical CD68 comparable to that of controls but showed significant staining within the spinal cord anterior horn (Fig. 4t,u), and 80% of all non-TDP-43 proteinopathies assessed fell below the calculated median predominance ratio of our cohort (Fig. 4m; Supp Figure 5). The single ALS-OPTN case demonstrated the most significant UMN predominance (Fig. 4n,o; Supp Figure 5). 
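The UMN/LMN predominance ratio defined above reduces to a one-line computation per case. The following small Python sketch is purely illustrative (the values and variable names are invented, not taken from the study's data) and also shows the median split used to flag anterior-horn-predominant cases:

```python
import numpy as np

def predominance_ratio(cortex_cd68_per_mm2, anterior_horn_cd68_per_mm2):
    """Ratio of log-transformed motor cortex grey matter CD68 density to
    log-transformed anterior horn CD68 density; smaller values indicate a
    relatively heavier CD68 burden in the anterior horn (LMN predominance)."""
    return np.log(cortex_cd68_per_mm2) / np.log(anterior_horn_cd68_per_mm2)

# Invented counts/mm^2 for four hypothetical cases; flag those below the median.
cortex = np.array([120.0, 40.0, 300.0, 15.0])
horn = np.array([200.0, 350.0, 180.0, 400.0])
ratios = predominance_ratio(cortex, horn)
print(ratios)
print(ratios < np.median(ratios))   # True = anterior-horn (LMN) predominant
```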
Together, these results show quantitatively that sporadic ALS-TDP exhibits high intraindividual pathological homogeneity across the cortico-spinal neuraxis, and that non-TDP-43 proteinopathies are more likely to display significant lower motor neuron predominance than TDP-43 disease, which is in broad concordance with clinical presentations commonly seen for these genotypes. GABAergic, GAD67+ interneurons only rarely accumulate pTDP-43 aggregates Gamma-aminobutyric acid (GABA) is the main inhibitory neurotransmitter in the brain. It is produced by glutamic acid decarboxylases (GADs) GAD65 and GAD67, of which GAD67 is generally constitutively active and produces > 90% of basal GABA levels [44]. GAD67 protein expression is a robust generic marker for cell bodies of inhibitory interneurons in human and mouse cortex [69, 72]. By contrast, no comparatively reliable tool is available as a generic marker for labelling of the soma of human excitatory, glutamatergic neurons, which include extratelencephalic pyramidal tract projection neurons. Therefore, we used GAD67-pTDP-43 double labelling to explore the vulnerability of GABAergic interneurons of the primary motor cortex to the accumulation of pTDP-43 aggregates (ALS n = 7, control n = 3; supplementary material), with the reasoning that this would allow us to obtain a histological window into potential imbalances of inhibitory and excitatory neurotransmission in ALS motor cortex. In controls, GAD67+ interneurons were mainly present in layers II-V (Fig. 5a), and included large neurons with long apical dendrites (Fig. 5b, green arrow) as well as smaller tufted neurons (Fig. 5b, blue arrow) as well as their dendritic protrusions (Fig. 5c). We also found examples of GAD67+ interneurons directly synapsing on to Betz cells soma (Fig. 5d, green arrow). Using GAD67-pTDP-43 double labelling in seven ALS cases assessed (see supplementary material for individual cases), we found that GAD67 neurons were devoid of granular or compact pTDP-43 aggregates, even in areas that otherwise contained typical ALS-TDP neuropathology (Fig. 5e-g). We found only one instance of clear cytoplasmic colocalization of GAD67 and a well-formed pTDP-43 aggregate (Fig. 5h), suggesting that pTDP-43 microscopic aggregates only rarely accumulate within interneurons. Additionally, we found no examples of compact pTDP-43 accumulation within Parvalbumin+ interneurons (Fig. 5j), further supporting the idea that pTDP-43 aggregation is more common within excitatory than inhibitory neurons in the ALS primary motor cortex. Cortical inhibitory interneurons rarely accumulate pTDP-43. Motor cortex GAD67+ neurons were present predominantly within layers II-V (a), and exhibited varied morphologies including large neurons with long apical dendrites (b, green arrow), smaller tufted neurons (b, blue arrow). GAD67 highlights multiple primary and secondary dendrites (c). GAD67 interneurons were seen synapsing directly on to Betz cell somata (d, green arrow). GAD67 neurons were devoid of granular or compact pTDP-43 aggregates, even in areas that otherwise contained typical ALS-TDP neuropathology (e-g; orange arrows: pTDP-43 in pyramidal cells, green arrow: lack of pTDP-43 in a Betz cell (5e) and non-Betz pyramidal cell (5 g)). We found only one instance of clear cytoplasmic colocalization of GAD67 and a compact pTDP-43 aggregate (5 h, green arrow). 
Multiplexed IHC revealed no evidence of parvalbumin colocalising with pTDP-43 (i, green arrow indicates HuD+, parvalbumin- neuron containing discrete pTDP-43 inclusion). Scale bars (μm): a = 300; b = 30; c,g = 20; d,e,f = 25; h = 15; i = 50 Cortical oligodendroglial, but not astroglial, pTDP-43 pathology is a defining feature of ALS-TDP The main role of oligodendroglia is the support of neurons, either directly as cortical satellite glia or via the production of insulating myelin. As pTDP-43 accumulation is present within oligodendrocytes and their dysfunction has been implicated in pathogenesis of ALS [61], we sought to assess the numbers of oligodendrocytes within the primary motor cortex as well as the relative abundance of oligodendrocytes containing pTDP-43 inclusions in a subset of our cohort (ALS n = 7, control n = 3; see supplementary material for cases used). Olig2 is a transcription factor highly expressed in immature oligodendrocyte precursor cells (OPC)(Fig. 6a) while Tubulin polymerization-promoting protein (TPPP/p25) is highly expressed in mature oligodendrocytes (Fig. 6b) [10]. Similar to a previous study in the spinal cord [67], we found no significant difference in mature or immature oligodendrocyte numbers within the ALS-TDP motor cortex grey matter (Fig. 6e and f). We next assessed the prevalence of oligodendrocyte inclusions across our single-IHC cohort. We found at least one obvious instance of oligodendrocyte pTDP-43 inclusion in all ALS-TDP cases assessed (40/40), even in cases that exhibited very low levels of pTDP-43 pathology overall. We next blindly assessed the relative abundance of oligodendrocyte/neuronal pTDP-43 pathology in a randomised selection of ALS-TDP (n = 10) and C9-ALS cases (n = 10)(Supplementary material). Both the mean number of oligodendrocyte inclusions and the ratio of oligo/neuron inclusions was highest in C9-ALS but neither was significantly higher than in sporadic disease (Fig. 6g and h). We (OA) qualitatively reviewed all cases with respect to the topography and neuronal vs. oligodendroglial distribution of aggregates. We found that in human motor cortex FUS aggregates involve both neurons and oligodendroglia, but that SOD1 aggregates are restricted to neurons. The former assertion is in agreement with previous observations [5, 76], however, the latter is based on p62 staining, not staining with confirmation-specific antibody SEDI as this showed strong diffuse astroglial staining even in healthy controls (data not shown); nor did we have access to other conformation-specific SOD1 antibodies. Differential vulnerability of glial cells in the ALS primary motor cortex. Olig2+ OPC (a) and mature TPPP/p25+ oligodendrocytes (b) were present in all layers of the primary motor cortex. Red arrow in (a) indicates satellite oligodendrocyte surrounding the apical dendrite of a Betz cell (green asterisks). There was no difference in the numbers of mature or immature oligodendrocytes between ALS and control cases (e, f). pTDP-43 inclusions are present in oligodendrocytes (c, red arrows indicated by characteristic 'comma' shaped inclusions) and neurons (d, red arrows indicate neuronal pTDP). Both the mean numbers of oligodendroglial inclusions and the relative predominance of pTDP-43 oligodendroglial pathology was higher in C9-ALS, but was not statistically higher than in sporadic disease (g, h). 
This relative distribution was confirmed qualitatively using double staining for pTDP-NeuN (i, j) and pTDP-Olig2 (k-l, green arrow indicates Olig2-negative neuronal inclusion). Multiplexed immunofluorescence reveals that Iba1+ microglia (m-p, green arrow in m indicates Iba1+ microglia extending processes to surround large layer V neuron containing pTDP-43 aggregates) occasionally contain pTDP-43, but we could find no clear evidence of this in GFAP+ astrocytes (q-u), even in cases with severe pTDP-43 pathology overall. Bars on graphs (e-h) represent means and SD. Scale bars (μm): c, i = 25, j = 20; k, l, m = 10; all others = 50 Microglia may phagocytose inclusions or cells that have been suitably marked for destruction as part of the innate immune system. As microglial activation correlates closely with the extent of pTDP-43 deposition (Fig. 2), we next analysed whether pTDP-43 inclusions could be visualised within microglia themselves. Similar to a previous study [59], we found several examples (Fig. 6m-p) of Iba1+ microglia either actively phagocytosing or containing pTDP-43 in several of the TDP-43 proteinopathy cases analysed. Astrocytes are a diverse glial subtype whose roles include mediation of potassium ions across the synapse and the provision of neurotrophic support. However while cortical astrogliosis is a feature of ALS [41], consistent with a previous study [77] there was no obvious perivascular pTDP-43 pathology suggestive of astrogliopathy and using pTDP-43-GFAP double labelling we were unable to find any clear evidence of pTDP-43 aggregates within GFAP+ astrocytes themselves (Fig. 6q-u). Betz cells are giant pyramidal neurons unique to the primary motor cortex that reside within layer Vb, and can be distinguished histologically by their large size, accumulation of intracellular lipofuscin, and circumferential somatodendritic architecture. Similar to a previous study [9], we observed pTDP-43 accumulation within Betz cell perinuclear cytoplasm only rarely (supp. Figure 6a-d). However, in several cases we did find evidence of severe microgliosis surrounding or seemingly replacing large pyramidal neurons in layer V (supp. Figure 6e), suggesting that processes within these large neurons may induce a selective vulnerability to the microglial response, and indicating active neuronophagia even in the clinical end-stage of the disease (Fig. 2f), a phenomenon which has been previously reported in spinal motor neurons [58]. Together, these results highlight cortical pTDP-43 oligodendrocyte pathology as a defining feature of ALS-TDP across all studied genotypes, and suggest a clear neuronal/glial subtype specific vulnerability to proteoaggregation in the ALS primary motor cortex. We report, to our knowledge, the first large-scale, digital microscopy-guided neuropathological analysis of the burden and pattern of proteinopathies of the primary motor cortex in ALS since the discovery of the main disease defining genotypes. A strength of our study is the combined use of image-analysis algorithms with statistical modelling to allow a comprehensive, fully quantitative approach to pathological assessment. We consider these quantitative traits to be more proximally related to genetic variables and pathobiology than clinical data (e.g. UMN vs. LMN predominance), which were limited, not quantitative and unlikely to be reflective of neuraxis involvement at the end stage of the disease. 
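As a schematic illustration of the kind of per-cell tallies that underlie the double-labelling results above (for example, how many GAD67+ interneurons or Olig2+ oligodendrocytes contain a compact pTDP-43 aggregate), the short Python sketch below counts double-positive cells in a toy detection table. It is not the authors' code; the table, column names and values are invented, and a real workflow would read per-cell detections exported from the image-analysis software.

```python
# Toy per-cell tally for a double-labelling experiment (invented data, illustration only).
import pandas as pd

cells = pd.DataFrame({
    "cell_id":    range(8),
    "gad67_pos":  [True, True, False, False, True, False, False, True],   # GAD67+ interneuron?
    "ptdp43_pos": [False, False, True, True, False, True, False, True],   # compact pTDP-43 aggregate?
})

gad = cells["gad67_pos"]
tdp = cells["ptdp43_pos"]
print("GAD67+ cells containing an aggregate:", int((gad & tdp).sum()), "of", int(gad.sum()))
print("GAD67- cells containing an aggregate:", int((~gad & tdp).sum()), "of", int((~gad).sum()))
```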
Importantly, the genetic stratification of our cohort allowed us to begin to assess ALS neuropathological endophenotypes, the utility of which has been demonstrated in other settings, such as frontotemporal lobar degeneration and hippocampal sclerosis [2, 6, 26, 27, 53]. Our dataset stems from an independent, previously unpublished cohort of individuals with ALS stratified by genotype. It allows us to contribute several important findings to the field: First, in the group of ALS associated with phosphorylated (p)TDP-43 proteinopathy, sporadic disease was characterised by a higher pTDP burden in the motor cortex than C9-ALS, and all genotypes in this group demonstrated oligodendroglial pTDP-43 inclusion body pathology. Second, overall motor cortex proteinopathy burden of ALS-FUS and ALS-SOD1 is less severe than that of ALS-TDP. Third, inhibitory interneurons appear to be less prone to accumulation of microscopic pTDP-43 aggregates than excitatory neurons. A key point in discussions concerning selective vulnerability is the distinction between cellular vulnerability to proteinopathy, and vulnerability to degeneration itself. The toxicity of TDP-43 has been demonstrated in vitro through a range of mechanisms including its effects on endocytosis, autophagy and stress-granule formation [34, 37, 45]. However, the cellular effect of this toxicity in humans is less well understood, and the relative contribution of toxic gain of function and loss of physiological nuclear function of TDP-43 remains difficult to disentangle. We observed high interindividual variation in the extent of pTDP-43 across our cohort, including in patients of the same genotype. pTDP-43 aggregation was amongst the lowest in the single TARDBP mutation case assessed (Fig. 2a,g; supp Figure 2), however notably it has not been formally established that the pattern and/or severity of pTDP-43 pathology in TARDBP mutation cases is the same as in ALS-TDP or C9-ALS. Understanding the extent of this variation across genotypes is relevant because clearance of pathological cytoplasmic TDP-43 rescues motor deficits and extends lifespan in mice [82], and clinical trials involving immunotherapy are ongoing in humans. Unidentified genetic factors are likely an influence on the overall extent of pathology, however the majority of cases used in this study were sequenced using whole-exome or SNP-array based sequencing using blood-derived DNA, meaning any potential CNS-specific variation in the majority of the genome is unaccounted for. In our study, the main genotypes were sporadic disease and C9-ALS; we did not have data on ATXN2 intermediate expansions. In this context it is interesting to note that Yang et al. [85], using immunoblots for pTDP-43, found evidence of higher levels of pTDP-43 in motor cortex of C9-ALS and in spinal cord of ALS-TDP with intermediate ATXN2 expansions. To our knowledge, it has not been formally studied how increased levels of biochemically-determined insoluble pTDP-43 relate to unbiased quantification of light-microscopically visible pTDP-43 aggregates in the same anatomical compartment. Our data revealed a significant, but not dramatic difference in microscopic pTDP-43 burden in motor cortex of C9-ALS carriers versus sporadic ALS-TDP (p = 0.043); however, we observed a trend towards higher oligodendroglial pTDP-43 burden in C9-ALS. We detected a differential vulnerability of specific cell types to the aggregation of pTDP-43. 
This can be summarised as follows: excitatory neurons within the motor cortex are more prone to form pTDP-43 aggregates than inhibitory interneurons, and oligodendroglial, but not astroglial, pTDP-43 pathology should be considered a defining feature of ALS-TDP in cases with and without proven genetic aetiology. We found evidence of oligodendrocyte pathology in every ALS-TDP case assessed in this study, demonstrating cortical oligodendroglial proteinopathy as a defining feature of the disease. However, the mechanisms by which oligodendroglial aggregations potentially cause dysfunction are unclear. While the cellular origin of TDP-43 pathology in humans is not known, studies have suggested that TDP-43 can be transmitted between cells both in culture and in vivo [20, 63]. It is plausible therefore that pTDP-43 oligodendroglial pathology is preceded by cell-to-cell transmission of seed-competent soluble TDP-43, propagating its non-cell autonomous effects. Notably, and contrasting observations in the spinal cord [21], we found that this pattern of cortical neuronal and oligodendroglial proteinopathy is also a hallmark of ALS-FUS motor cortex, but not of ALS-SOD1. This is unexpected, as the concept of non-neuronal - including oligodendroglial - contributions to ALS neurodegeneration has been established in the SOD1G93A mouse model [36, 61]. Whether this indicates a differential vulnerability of oligodendroglia to RNA-binding protein homeostasis or of different oligodendroglial subtypes remains to be established. There is no doubt however that microglial activation occurs across all genetic subtypes of human ALS, although whether this response is in itself pathogenic or primarily a subsequent reactive event is debated. Our results show that the extent of this response in the primary motor cortex varies significantly between ALS genotypes, and confirms findings ante mortem concerning the difference in microglial burden between upper and lower motor neurons in sporadic ALS [74, 80]. Several ALS genes have been implicated in the innate immune response specifically; for example C9ORF72 is highly expressed in myeloid cells [57, 66], conditional myeloid deletion of OPTN results in dysmyelination and axonopathy [32], and removal of mutant SOD1 from microglia prolongs lifespan in transgenic mice [7]. However, comparably low numbers of cortical activated microglia were seen in SOD1 and FUS cases, which exhibited a relatively low extent of cortical proteinopathy, while the highest microglial response was seen in sporadic and C9ORF72 cases, which on average demonstrated a higher burden of concomitant pTDP-43. This suggests that TDP-43 proteoaggregation specifically determines the extent of the microglial response, rather than the expression of a mutant protein within microglia themselves. We found several examples of pTDP-43 aggregates within microglia (Fig. 6), though we cannot say whether these cells actively phagocytosed the aggregates, or the inclusions arose within the microglia as result of the disease process. We also found no evidence of pTDP-43 inclusions within GFAP+ astroglia, despite cortical astrogliosis being a feature in ALS [41]. Similar to a previous study, we also found only some evidence of compact pTDP-43 aggregates within Betz cell somata [9], although we did find evidence of severe microgliosis and neuronophagia surrounding large layer Vb neurons in several cases, which has also been previously reported [33]. 
It has been noted that some ALS Betz cells seem to lose expression of the native protein entirely [9]. This raises interesting questions regarding whether Betz cells are perhaps selectively vulnerable to the toxicity conferred by soluble TDP-43 and its effects on microglia, and, relative to other neurons, whether they lack the capacity to concentrate insoluble TDP-43 in microscopically visible compact inclusions, which may be neuroprotective. In this context it is interesting to note that our data suggest inhibitory interneurons of the primary motor cortex do not seem to form abundant microscopically visible compact pTDP-43 aggregates either. We may, therefore, infer that non-Betz excitatory neurons carry the vast majority of neuronal pTDP-43 aggregate burden in human motor cortex. Whether this reflects cell-intrinsic biochemical traits of GABAergic neurons or is an expression of an inhibitory-excitatory local network imbalance remains unknown. Experimental evidence appears to indicate that excitatory layer V pyramidal activity is directly associated with the accumulation of ubiquitinated aggregates in TDP-43 mutant mice [87], however it is likely that more complex genotype-dependent interactions between neurons are at play [38]. In any case, our inference that excitatory pyramidal neurons, but not inhibitory interneurons, are the primary neuronal cell type accumulating pTDP-43 aggregates is consistent with the hypothesis that pTDP-43 propagation through the human nervous system is initiated by them [8]. Our study contains several limitations. We analysed a region of the primary motor cortex corresponding to the hand region of the homunculus wherever possible, however we cannot guarantee that this region was analysed in every case, and it is possible that the expression of proteins described in this paper vary across the somatotopically defined regions of the primary motor cortex. Additionally, we have not stratified our cohort by hemisphere sampled, and it is possible that ALS pathology varies between the same region of different hemispheres, particularly as initial clinical presentation is often lateralised. We note that there is some evidence of asymmetrical TDP-43 pathology in FTLD (without ALS) and Alzheimer's disease [75], but such asymmetry was not found in two cases of ALS-FTLD [39]. The precise variation of ALS pathology across hemispheres is therefore of an unknown significance. We also used CD68 to create a predominance ratio for each case that reflected the comparable levels of involvement between the primary motor cortex and lumbar spinal cord. This was necessary because there is a degree of corticospinal motor neuron loss even in patients that display almost purely LMN signs [70]. Ideally, neuronal loss itself would have been used to create a predominance indicator. However, NeuN, the most commonly used pan-neuronal marker in mouse studies, displays a strong fixation dependency and is unsuitable for use with the kind of archival material used in the single-IHC component of this study, in which we aimed to achieve the highest possible numbers of cases for analysis. In summary, we used a quantitative immunohistochemical approach to comprehensively analyse pathology within the primary motor cortex in a large cohort of genetically-defined ALS cases. Our results demonstrate a clear genotype-specific vulnerability to ALS proteinopathy, which may provide targets for the design of future therapeutics through genotype-specific amelioration of cortical pathology. 
ALS: FTLD: Frontotemporal lobar degeneration TDP-43: Transactive Response DNA binding protein 43-kDa FUS : Fused in Sarcoma OPTN : Optineurin LMN: UMN: Upper motor neuron MxIF: SOD1 : Superoxide dismutase 1 C9ORF72 : Chromosome 9 open reading frame 72 Al-Sarraj S, King A, Troakes C, Smith B, Maekawa S, Bodi I, Rogelj B, Al-Chalabi A, Hortobagyi T, Shaw CE (2011) p62 positive, TDP-43 negative, neuronal cytoplasmic and intranuclear inclusions in the cerebellum and hippocampus define the pathology of C9orf72-linked FTLD and MND/ALS. Acta Neuropathol 122:691–702. https://doi.org/10.1007/s00401-011-0911-2 Allen M, Burgess JD, Ballard T, Serie D, Wang X, Younkin CS, Sun Z, Kouri N, Baheti S, Wang C et al (2016) Gene expression, methylation and neuropathology correlations at progressive supranuclear palsy risk loci. Acta Neuropathol 132:197–211. https://doi.org/10.1007/s00401-016-1576-7 Baloh RH (2011) TDP-43: the relationship between protein aggregation and neurodegeneration in amyotrophic lateral sclerosis and frontotemporal lobar degeneration. FEBS J 278:3539–3549. https://doi.org/10.1111/j.1742-4658.2011.08256.x Bankhead P, Loughrey MB, Fernandez JA, Dombrowski Y, McArt DG, Dunne PD, McQuaid S, Gray RT, Murray LJ, Coleman HG et al (2017) QuPath: open source software for digital pathology image analysis. Sci Rep 7:16878. https://doi.org/10.1038/s41598-017-17204-5 Baumer D, Hilton D, Paine SM, Turner MR, Lowe J, Talbot K, Ansorge O (2010) Juvenile ALS with basophilic inclusions is a FUS proteinopathy with FUS mutations. Neurology 75:611–618. https://doi.org/10.1212/WNL.0b013e3181ed9cde Bennett DA, De Jager PL, Leurgans SE, Schneider JA (2009) Neuropathologic intermediate phenotypes enhance association to Alzheimer susceptibility alleles. Neurology 72:1495–1503. https://doi.org/10.1212/WNL.0b013e3181a2e87d Boillee S, Yamanaka K, Lobsiger CS, Copeland NG, Jenkins NA, Kassiotis G, Kollias G, Cleveland DW (2006) Onset and progression in inherited ALS determined by motor neurons and microglia. Science 312:1389–1392. https://doi.org/10.1126/science.1123511 Braak H, Brettschneider J, Ludolph AC, Lee VM, Trojanowski JQ, Del Tredici K (2013) Amyotrophic lateral sclerosis--a model of corticofugal axonal spread. Nat Rev Neurol 9:708–714. https://doi.org/10.1038/nrneurol.2013.221 Braak H, Ludolph AC, Neumann M, Ravits J, Del Tredici K (2017) Pathological TDP-43 changes in Betz cells differ from those in bulbar and spinal alpha-motoneurons in sporadic amyotrophic lateral sclerosis. Acta Neuropathol 133:79–90. https://doi.org/10.1007/s00401-016-1633-2 Bradl M, Lassmann H (2010) Oligodendrocytes: biology and pathology. Acta Neuropathol 119:37–53. https://doi.org/10.1007/s00401-009-0601-5 Brettschneider J, Del Tredici K, Toledo JB, Robinson JL, Irwin DJ, Grossman M, Suh E, Van Deerlin VM, Wood EM, Baek Y et al (2013) Stages of pTDP-43 pathology in amyotrophic lateral sclerosis. Ann Neurol 74:20–38. https://doi.org/10.1002/ana.23937 Brettschneider J, Libon DJ, Toledo JB, Xie SX, McCluskey L, Elman L, Geser F, Lee VM, Grossman M, Trojanowski JQ (2012) Microglial activation and TDP-43 pathology correlate with executive dysfunction in amyotrophic lateral sclerosis. Acta Neuropathol 123:395–407. 
https://doi.org/10.1007/s00401-011-0932-x Brockington A, Ning K, Heath PR, Wood E, Kirby J, Fusi N, Lawrence N, Wharton SB, Ince PG, Shaw PJ (2013) Unravelling the enigma of selective vulnerability in neurodegeneration: motor neurons resistant to degeneration in ALS show distinct gene expression characteristics and decreased susceptibility to excitotoxicity. Acta Neuropathol 125:95–109. https://doi.org/10.1007/s00401-012-1058-5 Cairns NJ, Neumann M, Bigio EH, Holm IE, Troost D, Hatanpaa KJ, Foong C, White CL 3rd, Schneider JA, Kretzschmar HA et al (2007) TDP-43 in familial and sporadic frontotemporal lobar degeneration with ubiquitin inclusions. Am J Pathol 171:227–240. https://doi.org/10.2353/ajpath.2007.070182 Can A, Gerdes MJ, Tao X, Bello MO, Seel M (2013) Method and apparatus for detecting irregularities in tissue microarrays. General Electric Co, City Cowan CM, Mudher A (2013) Are tau aggregates toxic or protective in tauopathies? Front Neurol 4:114. https://doi.org/10.3389/fneur.2013.00114 DeJesus-Hernandez M, Mackenzie IR, Boeve BF, Boxer AL, Baker M, Rutherford NJ, Nicholson AM, Finch NA, Flynn H, Adamson J et al (2011) Expanded GGGGCC hexanucleotide repeat in noncoding region of C9ORF72 causes chromosome 9p-linked FTD and ALS. Neuron 72:245–256. https://doi.org/10.1016/j.neuron.2011.09.011 Deming Y, Li Z, Kapoor M, Harari O, Del-Aguila JL, Black K, Carrell D, Cai Y, Fernandez MV, Budde Jet al (2017) Genome-wide association study identifies four novel loci associated with Alzheimer's endophenotypes and disease modifiers. Acta Neuropathol 133: 839–856 Doi https://doi.org/10.1007/s00401-017-1685-y Eisen A, Kim S, Pant B (1992) Amyotrophic lateral sclerosis (ALS): a phylogenetic disease of the corticomotoneuron? Muscle Nerve 15:219–224. https://doi.org/10.1002/mus.880150215 Feiler MS, Strobel B, Freischmidt A, Helferich AM, Kappel J, Brewer BM, Li D, Thal DR, Walther P, Ludolph AC et al (2015) TDP-43 is intercellularly transmitted across axon terminals. J Cell Biol 211:897–911. https://doi.org/10.1083/jcb.201504057 Forsberg K, Andersen PM, Marklund SL, Brannstrom T (2011) Glial nuclear aggregates of superoxide dismutase-1 are regularly present in patients with amyotrophic lateral sclerosis. Acta Neuropathol 121:623–634. https://doi.org/10.1007/s00401-011-0805-3 Forsberg K, Graffmo K, Pakkenberg B, Weber M, Nielsen M, Marklund S, Brannstrom T, Andersen PM (2019) Misfolded SOD1 inclusions in patients with mutations in C9orf72 and other ALS/FTD-associated genes. J Neurol Neurosurg Psychiatry 90:861–869. https://doi.org/10.1136/jnnp-2018-319386 Gal J, Strom AL, Kilty R, Zhang F, Zhu H (2007) p62 accumulates and enhances aggregate formation in model systems of familial amyotrophic lateral sclerosis. J Biol Chem 282:11068–11077. https://doi.org/10.1074/jbc.M608787200 Geevasinga N, Menon P, Ozdinler PH, Kiernan MC, Vucic S (2016) Pathophysiological and diagnostic implications of cortical dysfunction in ALS. Nat Rev Neurol 12:651–661. https://doi.org/10.1038/nrneurol.2016.140 Gerdes MJ, Sevinsky CJ, Sood A, Adak S, Bello MO, Bordwell A, Can A, Corwin A, Dinn S, Filkins RJ et al (2013) Highly multiplexed single-cell analysis of formalin-fixed, paraffin-embedded cancer tissue. Proc Natl Acad Sci U S A 110:11982–11987. https://doi.org/10.1073/pnas.1300136110 Giannini LAA, Xie SX, Peterson C, Zhou C, Lee EB, Wolk DA, Grossman M, Trojanowski JQ, McMillan CT, Irwin DJ (2019) Empiric methods to account for pre-analytical variability in digital histopathology in Frontotemporal lobar degeneration. 
Front Neurosci 13:682. https://doi.org/10.3389/fnins.2019.00682 Grothe MJ, Sepulcre J, Gonzalez-Escamilla G, Jelistratova I, Scholl M, Hansson O, Teipel SJ, Alzheimer's Disease Neuroimaging I (2018) Molecular properties underlying regional vulnerability to Alzheimer's disease pathology. Brain 141:2755–2771. https://doi.org/10.1093/brain/awy189 Guo T, Noble W, Hanger DP (2017) Roles of tau protein in health and disease. Acta Neuropathol 133:665–704. https://doi.org/10.1007/s00401-017-1707-9 Hara M, Minami M, Kamei S, Suzuki N, Kato M, Aoki M (2012) Lower motor neuron disease caused by a novel FUS/TLS gene frameshift mutation. J Neurol 259:2237–2239. https://doi.org/10.1007/s00415-012-6542-2 Ince P, Stout N, Shaw P, Slade J, Hunziker W, Heizmann CW, Baimbridge KG (1993) Parvalbumin and calbindin D-28k in the human motor system and in motor neuron disease. Neuropathol Appl Neurobiol 19:291–299 Ince PG, Tomkins J, Slade JY, Thatcher NM, Shaw PJ (1998) Amyotrophic lateral sclerosis associated with genetic abnormalities in the gene encoding cu/Zn superoxide dismutase: molecular pathology of five new cases, and comparison with previous reports and 73 sporadic cases of ALS. J Neuropathol Exp Neurol 57:895–904. https://doi.org/10.1097/00005072-199810000-00002 Ito Y, Ofengeim D, Najafov A, Das S, Saberi S, Li Y, Hitomi J, Zhu H, Chen H, Mayo L et al (2016) RIPK1 mediates axonal degeneration by promoting inflammation and necroptosis in ALS. Science 353:603–608. https://doi.org/10.1126/science.aaf6803 Jara JH, Genc B, Stanford MJ, Pytel P, Roos RP, Weintraub S, Mesulam MM, Bigio EH, Miller RJ, Ozdinler PH (2017) Evidence for an early innate immune response in the motor cortex of ALS. J Neuroinflammation 14:129. https://doi.org/10.1186/s12974-017-0896-4 Johnson BS, Snead D, Lee JJ, McCaffery JM, Shorter J, Gitler AD (2009) TDP-43 is intrinsically aggregation-prone, and amyotrophic lateral sclerosis-linked mutations accelerate aggregation and increase toxicity. J Biol Chem 284:20329–20339. https://doi.org/10.1074/jbc.M109.010264 Josephs KA, Whitwell JL, Weigand SD, Murray ME, Tosakulwong N, Liesinger AM, Petrucelli L, Senjem ML, Knopman DS, Boeve BF et al (2014) TDP-43 is a key player in the clinical features associated with Alzheimer's disease. Acta Neuropathol 127:811–824. https://doi.org/10.1007/s00401-014-1269-z Kang SH, Li Y, Fukaya M, Lorenzini I, Cleveland DW, Ostrow LW, Rothstein JD, Bergles DE (2013) Degeneration and impaired regeneration of gray matter oligodendrocytes in amyotrophic lateral sclerosis. Nat Neurosci 16:571–579. https://doi.org/10.1038/nn.3357 Kim HJ, Raphael AR, LaDow ES, McGurk L, Weber RA, Trojanowski JQ, Lee VMY, Finkbeiner S, Gitler AD, Bonini NM (2014) Therapeutic modulation of eIF2 alpha phosphorylation rescues TDP-43 toxicity in amyotrophic lateral sclerosis disease models. Nat Genet 46:152. https://doi.org/10.1038/ng.2853 Kim J, Hughes EG, Shetty AS, Arlotta P, Goff LA, Bergles DE, Brown SP (2017) Changes in the excitability of neocortical neurons in a mouse model of amyotrophic lateral sclerosis are not specific to Corticospinal neurons and are modulated by advancing Disease. J Neurosci 37:9037–9053. https://doi.org/10.1523/JNEUROSCI.0811-17.2017 King A, Bodi I, Nolan M, Troakes C, Al-Sarraj S (2015) Assessment of the degree of asymmetry of pathological features in neurodegenerative diseases. What is the significance for brain banks? J Neural Transm (Vienna) 122:1499–1508. 
https://doi.org/10.1007/s00702-015-1410-8 King A, Troakes C, Smith B, Nolan M, Curran O, Vance C, Shaw CE, Al-Sarraj S (2015) ALS-FUS pathology revisited: singleton FUS mutations and an unusual case with both a FUS and TARDBP mutation. Acta Neuropathol Commun 3:62. https://doi.org/10.1186/s40478-015-0235-x Kushner PD, Stephenson DT, Wright S (1991) Reactive astrogliosis is widespread in the subcortical white matter of amyotrophic lateral sclerosis brain. J Neuropathol Exp Neurol 50:263–277 Kuusisto E, Kauppinen T, Alafuzoff I (2008) Use of p62/SQSTM1 antibodies for neuropathological diagnosis. Neuropathol Appl Neurobiol 34:169–180. https://doi.org/10.1111/j.1365-2990.2007.00884.x Kwiatkowski TJ Jr, Bosco DA, Leclerc AL, Tamrazian E, Vanderburg CR, Russ C, Davis A, Gilchrist J, Kasarskis EJ, Munsat T et al (2009) Mutations in the FUS/TLS gene on chromosome 16 cause familial amyotrophic lateral sclerosis. Science 323:1205–1208. https://doi.org/10.1126/science.1166066 Lee SE, Lee Y, Lee GH (2019) The regulation of glutamic acid decarboxylases in GABA neurotransmission in the brain. Arch Pharm Res 42:1031–1039. https://doi.org/10.1007/s12272-019-01196-z Liu G, Coyne AN, Pei F, Vaughan S, Chaung M, Zarnescu DC, Buchan JR (2017) Endocytosis regulates TDP-43 toxicity and turnover. Nat Commun 8:2092. https://doi.org/10.1038/s41467-017-02017-x Mackenzie IR, Ansorge O, Strong M, Bilbao J, Zinman L, Ang LC, Baker M, Stewart H, Eisen A, Rademakers R et al (2011) Pathological heterogeneity in amyotrophic lateral sclerosis with FUS mutations: two distinct patterns correlating with disease severity and mutation. Acta Neuropathol 122:87–98. https://doi.org/10.1007/s00401-011-0838-7 Mackenzie IR, Neumann M, Baborie A, Sampathu DM, Du Plessis D, Jaros E, Perry RH, Trojanowski JQ, Mann DM, Lee VM (2011) A harmonized classification system for FTLD-TDP pathology. Acta Neuropathol 122:111–113. https://doi.org/10.1007/s00401-011-0845-8 Maekawa S, Al-Sarraj S, Kibble M, Landau S, Parnavelas J, Cotter D, Everall I, Leigh PN (2004) Cortical selective vulnerability in motor neuron disease: a morphometric study. Brain 127:1237–1251. https://doi.org/10.1093/brain/awh132 McCauley ME, Baloh RH (2019) Inflammation in ALS/FTD pathogenesis. Acta Neuropathol 137:715–730. https://doi.org/10.1007/s00401-018-1933-9 Molyneaux BJ, Arlotta P, Menezes JR, Macklis JD (2007) Neuronal subtype specification in the cerebral cortex. Nat Rev Neurosci 8:427–437. https://doi.org/10.1038/nrn2151 Munoz DG, Neumann M, Kusaka H, Yokota O, Ishihara K, Terada S, Kuroda S, Mackenzie IR (2009) FUS pathology in basophilic inclusion body disease. Acta Neuropathol 118:617–627. https://doi.org/10.1007/s00401-009-0598-9 Nelder JA, Wedderburn WM (1972) Generalized linear models. J R Stat Soc A 135:15. https://doi.org/10.2307/2344614 Nelson PT, Estus S, Abner EL, Parikh I, Malik M, Neltner JH, Ighodaro E, Wang WX, Wilfred BR, Wang LS et al (2014) ABCC9 gene polymorphism is associated with hippocampal sclerosis of aging pathology. Acta Neuropathol 127:825–843. https://doi.org/10.1007/s00401-014-1282-2 Neumann M, Sampathu DM, Kwong LK, Truax AC, Micsenyi MC, Chou TT, Bruce J, Schuck T, Grossman M, Clark CM et al (2006) Ubiquitinated TDP-43 in frontotemporal lobar degeneration and amyotrophic lateral sclerosis. Science 314:130–133. https://doi.org/10.1126/science.1134108 Nihei K, McKee AC, Kowall NW (1993) Patterns of neuronal degeneration in the motor cortex of amyotrophic lateral sclerosis patients. 
Acta Neuropathol 86:55–64 Nijssen J, Comley LH, Hedlund E (2017) Motor neuron vulnerability and resistance in amyotrophic lateral sclerosis. Acta Neuropathol 133:863–885. https://doi.org/10.1007/s00401-017-1708-8 O'Rourke JG, Bogdanik L, Yanez A, Lall D, Wolf AJ, Muhammad AK, Ho R, Carmona S, Vit JP, Zarrow J et al (2016) C9orf72 is required for proper macrophage and microglial function in mice. Science 351:1324–1329. https://doi.org/10.1126/science.aaf1064 Pamphlett R, Kum Jew S (2008) TDP-43 inclusions do not protect motor neurons from sporadic ALS. Acta Neuropathol 116:221–222. https://doi.org/10.1007/s00401-008-0392-0 Paolicelli RC, Jawaid A, Henstridge CM, Valeri A, Merlini M, Robinson JL, Lee EB, Rose J, Appel S, Lee VM et al (2017) TDP-43 depletion in microglia promotes amyloid clearance but also induces synapse loss. Neuron 95(297–308):e296. https://doi.org/10.1016/j.neuron.2017.05.037 Parkinson N, Ince PG, Smith MO, Highley R, Skibinski G, Andersen PM, Morrison KE, Pall HS, Hardiman O, Collinge J et al (2006) ALS phenotypes with mutations in CHMP2B (charged multivesicular body protein 2B). Neurology 67:1074–1077. https://doi.org/10.1212/01.wnl.0000231510.89311.8b Philips T, Bento-Abreu A, Nonneman A, Haeck W, Staats K, Geelen V, Hersmus N, Kusters B, Van Den Bosch L, Van Damme P et al (2013) Oligodendrocyte dysfunction in the pathogenesis of amyotrophic lateral sclerosis. Brain 136:471–482. https://doi.org/10.1093/brain/aws339 Philips T, Robberecht W (2011) Neuroinflammation in amyotrophic lateral sclerosis: role of glial activation in motor neuron disease. Lancet Neurol 10:253–263. https://doi.org/10.1016/S1474-4422(11)70015-1 Porta S, Xu Y, Restrepo CR, Kwong LK, Zhang B, Brown HJ, Lee EB, Trojanowski JQ, Lee VM (2018) Patient-derived frontotemporal lobar degeneration brain extracts induce formation and spreading of TDP-43 pathology in vivo. Nat Commun 9:4220. https://doi.org/10.1038/s41467-018-06548-9 Renton AE, Majounie E, Waite A, Simon-Sanchez J, Rollinson S, Gibbs JR, Schymick JC, Laaksovirta H, van Swieten JC, Myllykangas L et al (2011) A hexanucleotide repeat expansion in C9ORF72 is the cause of chromosome 9p21-linked ALS-FTD. Neuron 72:257–268. https://doi.org/10.1016/j.neuron.2011.09.010 Restagno G, Lombardo F, Sbaiz L, Mari C, Gellera C, Alimonti D, Calvo A, Tarenzi L, Chio A (2008) The rare G93D mutation causes a slowly progressing lower motor neuron disease. Amyotroph Lateral Scler 9:35–39. https://doi.org/10.1080/17482960701788198 Rizzu P, Blauwendraat C, Heetveld S, Lynes EM, Castillo-Lizardo M, Dhingra A, Pyz E, Hobert M, Synofzik M, Simon-Sanchez J et al (2016) C9orf72 is differentially expressed in the central nervous system and myeloid cells and consistently reduced in C9orf72, MAPT and GRN mutation carriers. Acta Neuropathol Commun 4:37. https://doi.org/10.1186/s40478-016-0306-7 Rohan Z, Matej R, Rusina R, Kovacs GG (2014) Oligodendroglial response in the spinal cord in TDP-43 proteinopathy with motor neuron involvement. Neurodegener Dis 14:117–124. https://doi.org/10.1159/000362929 Rosen DR, Siddique T, Patterson D, Figlewicz DA, Sapp P, Hentati A, Donaldson D, Goto J, O'Regan JP, Deng HX et al (1993) Mutations in cu/Zn superoxide dismutase gene are associated with familial amyotrophic lateral sclerosis. Nature 362:59–62. https://doi.org/10.1038/362059a0 Rudy B, Fishell G, Lee S, Hjerling-Leffler J (2011) Three groups of interneurons account for nearly 100% of neocortical GABAergic neurons. Dev Neurobiol 71:45–61. 
https://doi.org/10.1002/dneu.20853 Sasaki S, Iwata M (2000) Immunocytochemical and ultrastructural study of the motor cortex in patients with lower motor neuron disease. Neurosci Lett 281:45–48 Schneider CA, Rasband WS, Eliceiri KW (2012) NIH image to ImageJ: 25 years of image analysis. Nat Methods 9:671–675 Schwab C, Yu S, Wong W, McGeer EG, McGeer PL (2013) GAD65, GAD67, and GABAT immunostaining in human brain and apparent GAD65 loss in Alzheimer's disease. J Alzheimers Dis 33:1073–1088. https://doi.org/10.3233/JAD-2012-121330 Sidhu A, Wersinger C, Moussa CE, Vernier P (2004) The role of alpha-synuclein in both neuroprotection and neurodegeneration. Ann N Y Acad Sci 1035:250–270. https://doi.org/10.1196/annals.1332.016 Sitte HH, Wanschitz J, Budka H, Berger ML (2001) Autoradiography with [3H]PK11195 of spinal tract degeneration in amyotrophic lateral sclerosis. Acta Neuropathol 101:75–78 Stefanits H, Budka H, Kovacs GG (2012) Asymmetry of neurodegenerative disease-related pathologies: a cautionary note. Acta Neuropathol 123:449–452. https://doi.org/10.1007/s00401-011-0936-6 Suzuki N, Kato S, Kato M, Warita H, Mizuno H, Kato M, Shimakura N, Akiyama H, Kobayashi Z, Konno H et al (2012) FUS/TLS-immunoreactive neuronal and glial cell inclusions increase with disease duration in familial amyotrophic lateral sclerosis with an R521C FUS/TLS mutation. J Neuropathol Exp Neurol 71:779–788. https://doi.org/10.1097/NEN.0b013e318264f164 Takeuchi R, Tada M, Shiga A, Toyoshima Y, Konno T, Sato T, Nozaki H, Kato T, Horie M, Shimizu H et al (2016) Heterogeneity of cerebral TDP-43 pathology in sporadic amyotrophic lateral sclerosis: evidence for clinico-pathologic subtypes. Acta Neuropathol Commun 4:61. https://doi.org/10.1186/s40478-016-0335-2 Tan RH, Yang Y, Kim WS, Dobson-Stone C, Kwok JB, Kiernan MC, Halliday GM (2017) Distinct TDP-43 inclusion morphologies in frontotemporal lobar degeneration with and without amyotrophic lateral sclerosis. Acta Neuropathol Commun 5:76. https://doi.org/10.1186/s40478-017-0480-2 Tumer Z, Bertelsen B, Gredal O, Magyari M, Nielsen KC, Lucamp GK, Brondum-Nielsen K (2012) Novel heterozygous nonsense mutation of the OPTN gene segregating in a Danish family with ALS. Neurobiol Aging 33:208 e201–208 e205. https://doi.org/10.1016/j.neurobiolaging.2011.07.001 Turner MR, Cagnin A, Turkheimer FE, Miller CC, Shaw CE, Brooks DJ, Leigh PN, Banati RB (2004) Evidence of widespread cerebral microglial activation in amyotrophic lateral sclerosis: an [11C](R)-PK11195 positron emission tomography study. Neurobiol Dis 15:601–609. https://doi.org/10.1016/j.nbd.2003.12.012 Van Deerlin VM, Leverenz JB, Bekris LM, Bird TD, Yuan W, Elman LB, Clay D, Wood EM, Chen-Plotkin AS, Martinez-Lage M et al (2008) TARDBP mutations in amyotrophic lateral sclerosis with TDP-43 neuropathology: a genetic and histopathological analysis. Lancet Neurol 7:409–416. https://doi.org/10.1016/S1474-4422(08)70071-1 Walker AK, Spiller KJ, Ge G, Zheng A, Xu Y, Zhou M, Tripathy K, Kwong LK, Trojanowski JQ, Lee VM (2015) Functional recovery in new mouse models of ALS/FTLD after clearance of pathological cytoplasmic TDP-43. Acta Neuropathol 130:643–660. https://doi.org/10.1007/s00401-015-1460-x Wolfinger R, O'Connell M (1993) Generalized linear mixed models: a pseudo-likelihood approach. J Stat Comput Simul 48:10 Woolfe F, Gerdes M, Bello M, Tao X, Can A (2011) Autofluorescence removal by non-negative matrix factorization. IEEE Trans Image Process 20:1085–1093. 
https://doi.org/10.1109/TIP.2010.2079810 Yang Y, Halliday GM, Kiernan MC, Tan RH (2019) TDP-43 levels in the brain tissue of ALS cases with and without C9ORF72 or ATXN2 gene expansions. Neurology 93:e1748–e1755. https://doi.org/10.1212/WNL.0000000000008439 Yousef A, Robinson JL, Irwin DJ, Byrne MD, Kwong LK, Lee EB, Xu Y, Xie SX, Rennert L, Suh E et al (2017) Neuron loss and degeneration in the progression of TDP-43 in frontotemporal lobar degeneration. Acta Neuropathol Commun 5:68. https://doi.org/10.1186/s40478-017-0471-3 Zhang W, Zhang L, Liang B, Schroeder D, Zhang ZW, Cox GA, Li Y, Lin DT (2016) Hyperactive somatostatin interneurons contribute to excitotoxicity in neurodegenerative disorders. Nat Neurosci 19:557–559. https://doi.org/10.1038/nn.4257 We are grateful to the Oxford Brain Bank, Sheffield Brain and Tissue Bank and the MRC London Neurodegenerative Diseases Brain Bank for providing the tissue used in this study. Tissue was obtained from the Sheffield Teaching Hospitals NHS Foundation Trust and the Kings College Hospital NHS Foundation Trust as part of the UK Brain Archive Information Network (BRAIN UK) which is funded by the Medical Research Council and Brain Tumour Research. MN was funded by a PhD studentship from the Motor Neurone Disease Association (grant # Ansorge/Oct14/977-792). KT receives funding from the Motor Neurone Disease Association, SMA Trust and Medical Research Council. We gratefully acknowledge support by the Motor Neurone Disease Association, the Medical Research Council (MRC), Brains for Dementia Research (BDR) (Alzheimer Society and Alzheimer Research UK) and the NIHR Oxford Biomedical Research Centre. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health. This work uses data provided by patients and collected by the NHS as part of their care and support and would not have been possible without access to this data. The NIHR recognises and values the role of patient data, securely accessed and stored, both in underpinning and leading to improvements in research and care. We thank the laboratory staff within the Academic Unit of Neuropathology, Oxford, as well as the donors and their families, without whom our research would not be possible. Nuffield Department of Clinical Neurosciences, University of Oxford, Level 1, West Wing, John Radcliffe Hospital, Oxford, OX3 9DU, UK Matthew Nolan, Connor Scott, Menuka Pallebage Gamarallage, Kilda Carpenter, Martin R. Turner, Kevin Talbot & Olaf Ansorge Department of Statistics, University of Oxford, Oxford, UK Daniel Lunn GE Research, Niskayuna, NY, USA Elizabeth McDonough, Dan Meyer, Sireesha Kaanumalle & Alberto Santamaria-Pang Matthew Nolan Connor Scott Menuka Pallebage Gamarallage Kilda Carpenter Elizabeth McDonough Dan Meyer Sireesha Kaanumalle Alberto Santamaria-Pang Martin R. Turner Kevin Talbot Olaf Ansorge MN implemented the study, performed experiments and data analysis and wrote the manuscript. OA conceived the study, analysed data and wrote the manuscript. KC quantified the oligodendrocyte pTDP-43 pathology and SK provided technical expertise and performed the MxIF component. DL provided assistance with statistical analysis, and is now sadly deceased. All authors read, contributed to and approved the final manuscript. Correspondence to Olaf Ansorge. DM, AS-P, EM and SK are employed by General Electric Company, USA. General Electric did not receive or provide any form of payment for their contribution to the study. 
The other authors declare no competing interests. Additional file 1 Supp Fig. 1: Additional staining of pathology in the motor cortex and spinal cord. Staining for OPTN was highly positive in all neurons in controls (A), but absent in an OPTN mutation case (D), reflecting the truncation of the protein upstream of the antibody epitope. Asterisks in (B) mark the presence of Betz cells. Arrows in (E) mark mislocalized and aggregated FUS inclusions. Arrows in (F) mark the presence of aggregated SOD1 protein in a case with the SOD1 D101G mutation (case 31). As discussed in the text, antibody SOD1 SPC-206 did highlight solid compact and skein aggregates but had strong background staining (C), making it unsuitable for quantitative automated image analysis. P62 was used instead as a marker for compact SOD-1 associated protein aggregates; granular aggregation of misfolded wild-type SOD1, which has been suggested to be present in all genotypes of ALS, was not revealed by p62 immunohistochemistry [22]. Scale bar applicable to all panels = 50 μm. Supp Fig. 2: Significant variation in the extent of pTDP-43 pathology between TARDBP and OPTN ALS mutation cases. pTDP-43 pathology in the TARDBP case was sparse, and consisted almost exclusively of compact NCI (a,b, red arrows highlight pathology). In contrast, pathology was severe and widespread in the homozygous ALS-OPTN case (c,d), with NCI, oligo inclusions and dystrophic neurites in all layers, including the subcortical white matter (e). Supp Fig. 3: CD68 staining between the primary motor cortex (a) and lumbar spinal cord (b) white and grey matter is positively correlated in all the genotypes tested; Pearson r, results as on figure. Best fit lines are manually added for illustrative purposes. Supp Fig. 4: Assessment of anterior horn neuron size. Anterior horn degeneration and shrinkage was most prominent in FUS (b) and SOD1 cases, and noticeably less severe in the single ALS-OPTN case (c), however there was significant intraindividual differences within genotypes (d). pTDP-43 aggregation in the anterior horn did not correlate with anterior horn neuron shrinkage/loss in either sporadic (blue dots) or C9ORF72 disease (red squares) (e). Supp Fig. 5: Calculated predominance ratios for single-IHC cohort using CD68 as a surrogate marker of neurodegeneration. Ratios were calculated by dividing the log expression of motor cortex CD68/mm2 by the log expression of anterior horn CD68/mm2. Lower ratios therefore represent a higher LMN burden of activated microglia. Supp Fig. 6: Betz cells occasionally display pTDP-43 aggregation but also occasionally nodular microgliosis and neuronophagia. MxIF analysis did not reveal significant evidence of pTDP-43 within Betz cells (a,b, green lined arrows indicate unaffected Betz cells), but it can occasionally be seen in some cases (c,d). However we did also find evidence of nodular microgliosis surrounding large layer V neurons in some cases (e). UL = upper layers, DL = deeper layers. Scale bars where not indicated (μm): a = 40, b,e = 50. Supp Table 3: Numerical results for Olig and TPPP/p25 quantification. 40478_2020_961_MOESM2_ESM.xlsx Nolan, M., Scott, C., Gamarallage, M.P. et al. Quantitative patterns of motor cortex proteinopathy across ALS genotypes. acta neuropathol commun 8, 98 (2020). https://doi.org/10.1186/s40478-020-00961-2 FTD Selective vulnerability TDP-43 C9ORF72
Solvability of the matrix equation $ AX^{2} = B $ with semi-tensor product Tori can't collapse to an interval Classification of finite irreducible conformal modules over Lie conformal algebra $ \mathcal{W}(a, b, r) $ Wenjun Liu , Yukun Xiao and Xiaoqing Yue , School of Mathematical Sciences, Tongji University, Shanghai 200092, China * Corresponding author: Xiaoqing Yue Received May 2020 Revised October 2020 Published November 2020 Fund Project: Supported by the NSF grant Nos. 11971350, 11431010 of China and the CSC grant No. 202006260122 We study a family of non-simple Lie conformal algebras $ \mathcal{W}(a,b,r) $ ($ a,b,r\in {\mathbb{C}} $) of rank three with free $ {\mathbb{C}}[{\partial}] $-basis $ \{L, W,Y\} $ and relations $ [L_{\lambda} L] = ({\partial}+2{\lambda})L,\ [L_{\lambda} W] = ({\partial}+ a{\lambda} +b)W,\ [L_{\lambda} Y] = ({\partial}+{\lambda})Y,\ [Y_{\lambda} W] = rW $ and $ [Y_{\lambda} Y] = [W_{\lambda} W] = 0 $. In this paper, we investigate the irreducibility of all free nontrivial $ \mathcal{W}(a,b,r) $-modules of rank one over $ {\mathbb{C}}[{\partial}] $ and classify all finite irreducible conformal modules over $ \mathcal{W}(a,b,r) $. Keywords: Lie conformal algebras, finite conformal modules, irreducible. Mathematics Subject Classification: 17B10, 17B65, 17B68. Citation: Wenjun Liu, Yukun Xiao, Xiaoqing Yue. Classification of finite irreducible conformal modules over Lie conformal algebra $ \mathcal{W}(a, b, r) $. Electronic Research Archive, doi: 10.3934/era.2020123 B. Bakalov, V. G. Kac and A. A. Voronov, Cohomology of conformal algebras, Comm. Math. Phys., 200 (1999), 561-598. doi: 10.1007/s002200050541. Google Scholar A. Barakat, A. De Sole and V. G. Kac, Poisson vertex algebras in the theory of Hamiltonian equations, Jpn. J. Math., 4 (2009), 141-252. doi: 10.1007/s11537-009-0932-y. Google Scholar A. A. Belavin, A. M. Polyakov and A. B. Zamolodchikov, Infinite conformal symmetry in two-dimensional quantum field theory, Nuclear Phys. B, 241 (1984), 333-380. doi: 10.1016/0550-3213(84)90052-X. Google Scholar R. E. Borcherds, Vertex algebras, Kac-Moody algebras, and the Monster, Proc. Nat. Acad. Sci. USA, 83 (1986), 3068-3071. doi: 10.1073/pnas.83.10.3068. Google Scholar S.-J. Cheng and V. G. Kac, Conformal modules, Asian J. Math., 1 (1997), 181-193. doi: 10.4310/AJM.1997.v1.n1.a6. Google Scholar S.-J. Cheng, V. G. Kac and M. Wakimoto, Extensions of conformal modules, in Topological Field Theory, Primitive Forms and Related Topics, Kyoto, (1996), 79–129. Google Scholar A. D'Andrea and V. G. Kac, Structure theory of finite conformal algebras, Selecta Math. (N.S.), 4 (1998), 377-418. doi: 10.1007/s000290050036. Google Scholar A. De Sole and V. G. Kac, Lie conformal algebra cohomology and the variational complex, Comm. Math. Phys., 292 (2009), 667-719. doi: 10.1007/s00220-009-0886-1. Google Scholar V. Kac, Vertex Algebras for Beginners, University Lecture Series, 10. American Mathematical Society, Providence, RI, 1997. doi: 10.1090/ulect/010. Google Scholar V. G. Kac, The idea of locality, in Physical Application and Mathematical Aspects of Geometry, Groups and Algebras, eds. H.-D. Doebner et al., World Scienctific, Singapore, (1997), 16–32, arXiv: q-alg/9709008v1. Google Scholar V. G. Kac, Formal distribution algebras and conformal algebras, in Proc. 12th International Congress Mathematical Physics (ICMP'97)(Brisbane), International Press, Cambridge, (1999), 80–97. Google Scholar K. Ling and L. 
Yuan, Extensions of modules over a class of Lie conformal algebras $\mathcal{W}(b)$, J. Alg. Appl., 18 (2019), 1950164, 13 pp. doi: 10.1142/S0219498819501640. Google Scholar K. Ling and L. Yuan, Extensions of modules over the Heisenberg-Virasoro conformal algebra, Int. J. Math., 28 (2017), 1750036, 13 pp. doi: 10.1142/S0129167X17500367. Google Scholar D. Liu, Y. Hong, H. Zhou and N. Zhang, Classification of compatible left-symmetric conformal algebraic structures on the Lie conformal algebra $\mathcal{W}(a, b)$, Comm. Alg., 46 (2018), 5381-5398. doi: 10.1080/00927872.2018.1468903. Google Scholar L. Luo, Y. Hong and Z. Wu, Finite irreducible modules of Lie conformal algebras $\mathcal{W}(a, b)$ and some Schrödinger-Virasoro type Lie conformal algebras, Int. J. Math., 30 (2019), 1950026, 17 pp. doi: 10.1142/S0129167X19500265. Google Scholar H. Wu and L. Yuan, Classification of finite irreducible conformal modules over some Lie conformal algebras related to the Virasoro conformal algebra, J. Math. Phys., 58 (2017), 041701, 10 pp. doi: 10.1063/1.4979619. Google Scholar Y. Xu and X. Yue, $W(a, b)$ Lie conformal algebra and its conformal module of rank one, Alg. Colloq., 22 (2015), 405-412. doi: 10.1142/S1005386715000358. Google Scholar L. Yuan and H. Wu, Cohomology of the Heisenberg-Virasoro conformal algebra, J. Lie Theory, 26 (2016), 1187-1197. Google Scholar L. Yuan and H. Wu, Structures of $W(2, 2)$ Lie conformal algebra, Open Math., 14 (2016), 629-640. doi: 10.1515/math-2016-0054. Google Scholar Hongliang Chang, Yin Chen, Runxuan Zhang. A generalization on derivations of Lie algebras. Electronic Research Archive, , () : -. doi: 10.3934/era.2020124 Ville Salo, Ilkka Törmä. Recoding Lie algebraic subshifts. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 1005-1021. doi: 10.3934/dcds.2020307 Yongjie Wang, Nan Gao. Some properties for almost cellular algebras. Electronic Research Archive, 2021, 29 (1) : 1681-1689. doi: 10.3934/era.2020086 Wen Li, Wei-Hui Liu, Seak Weng Vong. Perron vector analysis for irreducible nonnegative tensors and its applications. Journal of Industrial & Management Optimization, 2021, 17 (1) : 29-50. doi: 10.3934/jimo.2019097 Bing Sun, Liangyun Chen, Yan Cao. On the universal $ \alpha $-central extensions of the semi-direct product of Hom-preLie algebras. Electronic Research Archive, , () : -. doi: 10.3934/era.2021004 Kengo Matsumoto. $ C^* $-algebras associated with asymptotic equivalence relations defined by hyperbolic toral automorphisms. Electronic Research Archive, , () : -. doi: 10.3934/era.2021006 Xuefei He, Kun Wang, Liwei Xu. Efficient finite difference methods for the nonlinear Helmholtz equation in Kerr medium. Electronic Research Archive, 2020, 28 (4) : 1503-1528. doi: 10.3934/era.2020079 Anton A. Kutsenko. Isomorphism between one-dimensional and multidimensional finite difference operators. Communications on Pure & Applied Analysis, 2021, 20 (1) : 359-368. doi: 10.3934/cpaa.2020270 Laura Aquilanti, Simone Cacace, Fabio Camilli, Raul De Maio. A Mean Field Games model for finite mixtures of Bernoulli and categorical distributions. Journal of Dynamics & Games, 2020 doi: 10.3934/jdg.2020033 Shudi Yang, Xiangli Kong, Xueying Shi. Complete weight enumerators of a class of linear codes over finite fields. Advances in Mathematics of Communications, 2021, 15 (1) : 99-112. doi: 10.3934/amc.2020045 Junkee Jeon. Finite horizon portfolio selection problems with stochastic borrowing constraints. 
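As background to the $ \mathcal{W}(a,b,r) $ abstract above, and not a result of that paper: by the Cheng-Kac classification, every nontrivial finite irreducible conformal module over the Virasoro conformal algebra $ \mathbb{C}[{\partial}]L $ (which sits inside $ \mathcal{W}(a,b,r) $ as the subalgebra generated by $ L $) is a free rank-one module of the form
$$ M_{\Delta,\alpha} = \mathbb{C}[{\partial}]v, \qquad L_{\lambda}\, v = ({\partial} + \alpha + \Delta\lambda)\, v, \qquad \Delta,\alpha\in\mathbb{C},\ \Delta\neq 0. $$
The free rank-one $ \mathcal{W}(a,b,r) $-modules studied in the abstract carry an $ L $-action together with compatible actions of $ W $ and $ Y $; the Virasoro case above indicates the general shape such an $ L $-action can take.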
Homogenization of high-contrast and non symmetric conductivities for non periodic columnar structures NHM Home Convergence of vanishing capillarity approximations for scalar conservation laws with discontinuous fluxes December 2013, 8(4): 943-968. doi: 10.3934/nhm.2013.8.943 Global existence and asymptotic behavior of measure valued solutions to the kinetic Kuramoto--Daido model with inertia Young-Pil Choi 1, , Seung-Yeal Ha 2, and Seok-Bae Yun 3, Department of Mathematics, Imperial College London, London SW7 2AZ, United Kingdom Department of Mathematical Sciences and Research Institute of Mathematics, Seoul National University, Seoul 151-747 Department of Mathematical Sciences, Seoul National University, Seoul 151-747, South Korea Received December 2012 Revised June 2013 Published November 2013 We present the global existence and long-time behavior of measure-valued solutions to the kinetic Kuramoto--Daido model with inertia. For the global existence of measure-valued solutions, we employ a Neunzert's mean-field approach for the Vlasov equation to construct approximate solutions. The approximate solutions are empirical measures generated by the solution to the Kuramoto--Daido model with inertia, and we also provide an a priori local-in-time stability estimate for measure-valued solutions in terms of a bounded Lipschitz distance. For the asymptotic frequency synchronization, we adopt two frameworks depending on the relative strength of inertia and show that the diameter of the projected frequency support of the measure-valued solutions exponentially converge to zero. Keywords: large time behavior., Vlasov equation, measure-valued solutions, Kuramoto-Dadio model, well-posedness, Neunzert's mean-field approach. Mathematics Subject Classification: Primary: 92D25, 76N10; Secondary: 74A2. Citation: Young-Pil Choi, Seung-Yeal Ha, Seok-Bae Yun. Global existence and asymptotic behavior of measure valued solutions to the kinetic Kuramoto--Daido model with inertia. Networks & Heterogeneous Media, 2013, 8 (4) : 943-968. doi: 10.3934/nhm.2013.8.943 J. A. Acebron, L. L. Bonilla, C. J. Perez Vicente, F. Ritort and R. Spigler, The Kuramoto model: A simple paradigm for synchronization phenomena,, Rev. Mod. Phys., 77 (2005), 137. doi: 10.1103/RevModPhys.77.137. Google Scholar J. A. Acebron, L. L. Bonilla and R. Spigler, Synchronization in populations of globally coupled oscillators with inertial effect,, Phys. Rev. E., 62 (2000), 3437. doi: 10.1103/PhysRevE.62.3437. Google Scholar J. A. Acebron and R. Spigler, Adaptive frequency model for phase-frequency synchronization in large populations of globally coupled nonlinear oscillators,, Phys. Rev. Lett., 81 (1998), 2229. doi: 10.1103/PhysRevLett.81.2229. Google Scholar J. Buck and E. Buck, Biology of synchronous flashing of fireflies,, Nature, 211 (1966), 562. doi: 10.1038/211562a0. Google Scholar N. J. Balmforth and R. Sassi, A shocking display of synchrony,, Physica D., 143 (2000), 21. doi: 10.1016/S0167-2789(00)00095-6. Google Scholar J. A. Carrillo, Y.-P. Choi, S.-Y. Ha, M.-J. Kang and Y. Kim, Contractivity of the Wasserstein metric for the kinetic Kuramoto equation,, preprint, (). Google Scholar H. Chiba, Continuous limit of the moments system for the globally coupled phase oscillator,, Discrete Contin. Dyn. Syst., 33 (2013), 1891. doi: 10.3934/dcds.2013.33.1891. Google Scholar Y.-P. Choi, S.-Y. Ha and S. E. Noh, Remarks on the nonlinear stability of the Kuramoto model with inertia,, to appear in Quart. Appl. Math., (). Google Scholar Y.-P. Choi, S.-Y. 
BoVW model based on adaptive local and global visual words modeling and log-based relevance feedback for semantic retrieval of the images
Ruqia Bibi1, Zahid Mehmood2, Rehan Mehmood Yousaf1, Muhammad Tahir3, Amjad Rehman4, Muhammad Sardaraz3 & Muhammad Rashid5
The accuracy of a content-based image retrieval (CBIR) system ultimately rests on how effectively it understands the visual contents of images. One of the most prominent issues affecting the performance of a CBIR system is the semantic gap, i.e., the discrepancy between the low-level patterns of an image and the high-level abstractions perceived by humans. A robust visual representation of the image combined with relevance feedback (RF) can bridge this gap by extracting distinctive local and global features from the image and by incorporating the valuable information stored as feedback. To handle this issue, this article presents a novel adaptive complementary visual word integration method for a robust representation of the salient objects of the image using local and global features based on the bag-of-visual-words (BoVW) model. To analyze the performance of the proposed method, three integration methods based on the BoVW model are proposed in this article: (a) integration of complementary features before clustering (called non-adaptive complementary feature integration), (b) integration of non-adaptive complementary features after clustering (called non-adaptive complementary visual words integration), and (c) integration of adaptive complementary feature weighting after clustering based on self-paced learning (the proposed method, called adaptive complementary visual words integration). The performance of the proposed method is further enhanced by incorporating a log-based RF (LRF) method into the proposed model. The qualitative and quantitative analysis of the proposed method is carried out on four image datasets, which shows that the proposed adaptive complementary visual words integration method outperforms the non-adaptive complementary feature integration, the non-adaptive complementary visual words integration, and state-of-the-art CBIR methods in terms of performance evaluation metrics.
Due to a staggering increase in globalization, communication, and advancement in technology, the world has become a global village in its true sense. Digital image libraries are expanding exponentially because of the proliferation of social media and other information-sharing mediums. Extracting meaningful information from such huge repositories requires techniques that can perform retrieval effectively and at minimum computational cost. Traditional text-based approaches retrieve images based on manually annotated information, which has become impractical for such huge image repositories [1]. Another reason for opting for content-based image retrieval (CBIR) is the language dependency of textual annotations. CBIR has been a rapidly progressing area since the 1990s, and it retrieves images having similar contents/features, i.e., colors, shapes, and textures. It is organized into two stages: (1) feature extraction and (2) feature matching. The purpose of the first stage is to obtain a feature vector that can effectively represent the visual contents of images. Features are categorized as global or local features. Global features encapsulate the characteristics of an entire image as a single vector.
Even though they are robust and computationally efficient, they may overlook the pixel's spatial relationship and local details [2]. On the contrary, local features preserve local characteristics of an image as they are extracted from patches of an image and are consider scale and rotation-invariant. Research has been done in the recent past to explore CBIR [3,4,5,6,7] and its applicability in different fields such as artificial intelligence (AI), human–computer interaction (HCI), and medical imaging. With the advent of deep learning approaches, research concern has now shifted towards deep features that can be learned by algorithms on their own. The ability of artificial neural networks to classify images either through supervised or unsupervised learning explored by Krizhevsky et al. [8] has taken the research inclination to a new dimension with their breakthrough results. Several feature descriptors are being developed for CBIR, but a selection of appropriate image representation is still challenging due to different issues such as illumination changes, viewing angles, and variation in image scale. As shown in Fig. 1, visual similarity between semantically different objects is also an intriguing issue that results in misclassification of an object, which affects the overall performance of the CBIR system. Another barrier for the retrieval system is an accurate feature matching. Most CBIR systems use similarity measures, whose performance highly depends on the selected feature descriptor and distance measure being used [10,11,12]. The research concern of today is to lessen the semantic gap concerning the images' low-level visual features and user high-level semantics to improve the accuracy of image retrieval systems. An image having similar visual contents [9] This study presents an innovative method for CBIR to advance the performance of image retrieval. The proposed methodology is categorized into two sections, namely training and testing sections. In the training section, complementary features are extracted using BFGF-HOG and GSURF feature descriptors from the images of the training group. To get optimized feature vectors, latent semantic analysis (LSA) followed by an adaptive feature weighting (AFW) method based on self-paced learning (SPL) is applied to each feature vector. Afterward, a visual vocabulary is constructed by applying adaptive fuzzy k-means (AFKM) clustering on each optimized feature vector, which represents the contents of the images in a more compact form. These two visual vocabularies are concatenated to get a resultant visual vocabulary that contains complementary features of both descriptors, which is termed as an adaptive complementary visual word integration in the proposed method of CBIR. In the next step, a histogram is formed using visual words of each image from the resultant complementary visual vocabulary. These histograms along with training labels are used as an input to quadratic kernel-based support vector machine (QSVM) for classification. In the testing section, the aforementioned steps are carried out on a query image taken from the testing group of images, which outputs a histogram-based visual representation of a query image. Afterward, a relevance score is computed between images residing in datasets and query image by applying Euclidean distance. For further improving the performance of image retrieval, the proposed method also uses a log-based relevance feedback mechanism. 
The major contributions of this study are as follows: An innovative image representation method by integrating adaptive local and global visual words along with log-based relevance feedback based on the BoVW model. Non-adaptive complementary visual words integration for the principal objects of the images based on the BoVW model. Non-adaptive complementary feature integration for the principal objects of the image based on the BoVW model. The remaining sections of this paper are structured as follows: Section 2 provides a detailed review of the relevant CBIR methods. Section 3 presents a detailed methodology of the proposed method. Section 4 provides detail of the experimental parameters for performance evaluations of the proposed method along with experimental results and discussion. Section 5 presents the conclusion and future directions of the research work. Numerous techniques have been developed to efficiently and effectively retrieve images from repositories having an immense and diverse collection of images from users around the globe, hence uncovering a field that makes computers understand or learn, enabling them to compete with the human brain, in short working towards AI and going deep down to imitate the working of neurons. CBIR has gained immense recognition in the recent past and motivates researchers to innovate new techniques to recognize objects or areas under consideration with the highest possible accuracy. Singh et al. [2] presented a novel low-dimensional color texture descriptor named as a local binary pattern for color images (LBPC). A plane is used for a 3-dimensional RGB color space. LBP of color pixels is selected across a circularly symmetric neighbor lying within radius 2. Pixels having values above the plane are termed as 1 and below the plane as 0. A combination of hue component of HIS color space with LBP, i.e., LBPH and its fusion with color histogram (CH), is also analyzed to improve the discerning capability of the descriptor. To further reduce the dimension of the proposed descriptor, a uniform pattern with 59 bins is also calculated. In terms of performance, a fusion of LBPC, LBPH, and CH achieves better retrieval accuracy when an intra-class variation is highest. Meanwhile, uniform patterns of the proposed descriptor have achieved somewhat similar retrieval accuracy with a lower computational cost. LBP's for multi-channel color images are mostly calculated individually for each channel, thus results in loss of cross-channel information and higher computational cost. Misale et al. [10] presented an efficient CBIR system based on local tetra pattern (LTrP) features and bag-of-words (BoW) model. Initially, interest points are detected through SURF, and features are extracted locally through LTrP. Dataset images are classified in a 33:33:34 ratio for training, validation, and testing, respectively. In the testing phase, a trained neural network is employed to classify images according to semantic categories. The performance of the proposed approach highlights better retrieval accuracy and reduced computational expense. A novel feature descriptor called as multi-trend binary code descriptor (MTBCD) is proposed by Yu et al. [13], which addresses some of the common issues faced by local feature descriptors in CBIR such as the change in pixel patterns, semantic gap, and lack of spatial information. 
The MTBCD descriptor works on the intensity component of the HSV model and identifies a change in trend among pixels along with four symmetrical directions (0°, 45°, 90°, 135°). The change in trend is classified as parallel, if the values of pixels within an assigned radius are in increasing or decreasing order, and as non-parallel, if values are equal or greater/smaller than the center pixels. To preserve the spatial relation among pixels, a co-occurrence matrix is also constructed. Experimental analysis depicts robustness of this framework against competitive methods. Mistry et al. [14] designed and developed a robust CBIR system by integrating various spatial and frequency-based features. This method uses color moments, auto-correlogram and HSV histogram as spatial features and stationery, and Gabor wavelet transforms as frequency domain features. Apart from these, the approach also combines features extracted through color and edge directivity descriptor (CEDD) and binarized statistical image features (BSIF) descriptor. The feature vectors of 6-D and 64-D are extracted in case of color moments and color auto-correlogram, respectively. For the CEDD-BSIF feature set, 144-D CEDD and 256-D BSIF feature vectors are generated. Frequency domain features lead to better accuracy than spatial domain features when city block and Euclidean distance are utilized for measuring similarity while CEDD and BSIF features achieve the highest precision among all. However, this method is computationally expensive because of the high-dimensional feature vector. An innovative technique based on spatial histograms (spatiograms) is presented by Zeng et al. [15] to address issues faced by generalized histograms in CBIR, i.e., loss of spatial information, high dimensionality, and semantic gap. It quantizes the color space by using the Gaussian mixture model (GMM) learned through the expectation maximization-Bayesian information criterion (EM-BIC) algorithm, which automatically identifies the number of Gaussians (color bins) and associate pixels to multiple bins based on probability. Spatiograms are computed and incorporated with GMM. For determining a distance between spatiograms, a new measure based on Jensen–Shannon (JS) divergence is also proposed in this method. The experimental analysis highlights the robustness of the method for image retrieval. Roy et al. [16] presented a novel and highly discriminative rotation invariant texture descriptor named as a local directional zigzag pattern (LDZP). The proposed framework first reduces the noise of textured images by generating a local directional edge map (LDEM) through Kirsch compass mask along 6 directions from 0 to 150° with a 30° interval. Zigzag patterns and corresponding uniform histograms are extracted from each LDEM and concatenated to obtain rotation invariance. In terms of performance, LDZP efficiently encodes recurrent changes in local texture patterns and has better texture classification accuracy because of its zigzag sampling structure as compared to LBP which suffers from unreliable texture information because of its circular sampling structure. Amato et al. [17] investigated the application of aggregation methods to binary local features and presented a CBIR method based on Fisher kernels, Bernoulli mixture models, and CNN. The method is two times faster in extracting binary features as compared to the traditional SIFT method and can be used as an alternative to direct matching in CBIR. 
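Several of the texture descriptors reviewed in this section (LBPC, LTrP, LDZP) extend the basic local binary pattern idea of thresholding a pixel's neighbours against the centre pixel. The sketch below is a minimal grayscale illustration of that idea only; it is our own toy code, not an implementation of any of the cited descriptors, and the function names are hypothetical.

```python
import numpy as np

def lbp_8(gray):
    """Minimal 8-neighbour local binary pattern for a 2-D grayscale array.

    Each neighbour is compared with the centre pixel; neighbours greater than
    or equal to the centre contribute a '1' bit, giving an 8-bit code per
    pixel (borders are left as zero for simplicity).
    """
    gray = np.asarray(gray, dtype=np.float64)
    codes = np.zeros(gray.shape, dtype=np.uint8)
    # (row offset, col offset) pairs, enumerated clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = gray[1:-1, 1:-1]
    for bit, (dr, dc) in enumerate(offsets):
        neighbour = gray[1 + dr:gray.shape[0] - 1 + dr,
                         1 + dc:gray.shape[1] - 1 + dc]
        codes[1:-1, 1:-1] |= ((neighbour >= centre).astype(np.uint8) << bit)
    return codes

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes, usable as a simple texture feature."""
    hist, _ = np.histogram(lbp_8(gray), bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64))   # stand-in for a real grayscale image
    print(lbp_histogram(image)[:10])
```

The descriptors above differ mainly in how this thresholding step is defined (a plane in RGB space for LBPC, tetra directions for LTrP, zigzag sampling for LDZP), while the histogram-of-codes representation stays essentially the same.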
The information that we get from images may be insufficient to build a feature vector so Li et al. [18] suggested a re-ranking mechanism called discriminative multi-view interactive image re-ranking (DMINTIR) that integrates relevance feedback with complementary features. The feature set is encoded by utilizing neural code, VLAD+, and triangulation embedding. The proposed mechanism shuffles the images based on updated scores obtained through learned weight vector. To maximize precision, a new similarity learning method named maximum top precision similarity (MTPS) for the CBIR system is proposed [19]. The precision achieved after initial retrievals can be maximized by tuning parameters of similarity function. For that, similarity function is exhibited by hinge loss and designed as a linear function; squared Frobenius norm for each query is minimized to prevent overfitting problems. The experimental evaluation highlighted a shorter running time. Similarity measures have been evaluated in detail in [20]. The study concluded by suggesting a new matching measure by integrating relevance feedback and sequential forward selector. Retrieving images based on regions usually results in the repetitive matching of similar regions and loss of spatial information. To overcome this issue, Meng et al. [21] presented a novel method for extracting and matching regions. Firstly, segments are identified and merged using statistical region merging and affinity propagation (SRM-AP). Instead of incorporating local descriptors, the method utilizes a CNN-based feature extraction method named as regional convolution mapping feature (RCMF) to preserve the spatial layout of the key objects of the image. Layer 5 of VGGNet19 is used as a feature layer, which outputs a 256-dimensional feature vector. For effective image representation, a number of regions and their locations are also incorporated with the RCMF method. Images are matched based on integrated category matching (ICM), which utilizes centroids rather than area or center-based methods. The method exhibits superior performance against benchmark methods but suffers from higher dimensionality of the feature vector. Another retrieval method based on the region is presented by Song et al. [22]. In this method of CBIR, the foreground and background parts of the HSV color space image are segmented by applying the Otsu algorithm. For extracting color, the hue component is quantized into 3 bins and the saturation component is quantized into 2 bins. The intensity component (V) of HSV space is utilized to generate diagonal texture structure descriptor (DTSD), which efficiently describes the edges and preserves spatial resolution and finer details of an image. The DTSD treats an image as a 4 × 4 grid and computes the difference between the center and neighboring pixels. Afterward, diagonal pixels are multiplied and evaluated based on a threshold. The resultant matrix is weighted, and values are accumulated to represent diagonal texture structure. The histograms of three components of both regions combinedly form a feature vector. In terms of performance, this method surpassed many competitive methods. A hybrid method for region-based image retrieval is presented by Ahmed et al. [23], which integrates local and global features for effective image representation. In this method, interest points of the image are assembled using connected stable regions method and described using the histogram of oriented gradients. 
For extracting texture, uniform local binary patterns are used. The resultant higher-dimensional features are transformed into compact vectors by applying the principal component analysis (PCA) method. Experimental analysis shows improved accuracy as compared with competitive CBIR methods. Other than the semantic gap, one of the major setbacks for CBIR is edge-based object identification, which only uses edges to differentiate objects having visually similar content and spatial invariance problem, which arises because of the varied spatial position of objects within images. Pradhan et al. [24] addressed these problems by incorporating a color edge map for extracting color and shape features simultaneously and a novel image block re-ordering method based on texture direction. Initially, foreground and background regions are extracted through saliency maps. Edges from the foreground part are first extracted through a combined edge map (canny edge, fuzzy edge) and later through color edge map by accumulating the pixels into 9 groups based on orientations. For texture, the Y component of the YCbCr color space is divided into 24 non-overlapping blocks and rearranged using principal texture direction, which is based on the largest eigenvalue of the intensity covariance matrix. In terms of performance, this rearrangement scheme resulted in better retrieval accuracy because the objects within images became more comparable to each other irrespective of their position. The compact detail of the competitive methods of CBIR is presented in Table 1. Table 1 Compact detail of competitive methods of CBIR In this section, the methodology of the proposed method is presented in detail. The proposed method adopted the BoVW model that has been one of the most dominant and frequently used methods for classifying and retrieve images. The BoVW model (as shown in Fig. 2) inherited its basic concept from the bag-of-features (BoF) model, which is particularly developed for retrieving similar documents. To get representations of visual contents of the images based on the BoVW model, images undergo following transformations: (1) firstly, local features are extracted by detecting keypoints and their corresponding descriptors are computed; (2) the extracted features are then organized into clusters by applying clustering algorithms, each cluster head then termed as a visual word which accumulates into visual vocabulary or codebook; (3) for each image, a signature is formed by representing visual words in terms of a histogram, (4) histograms are normalized to retain fine details, and (5) these signatures are then fed into the classifier for training purposes. Apart from exhibiting remarkable performance in several image retrieval applications [36,37,38], the BoVW model still has certain limitations that need to be addressed, i.e., lack of spatial information, extraction of redundant, and insignificant features (background regions), and most importantly, it lacks from effective, efficient feature representation and feature weighting method as some features are of greater importance than others. The proposed method of image retrieval addresses the aforementioned issues of the BoVW model to improve the performance of image retrieval. The BoVW model-based visual representation of the images The detail of each module of the training and testing sections of the proposed method is discussed in the following subsequent sections and its complete framework is shown in Fig. 3. 
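The generic BoVW transformation described in steps (1) to (5) can be summarized in a few lines of code. The sketch below is a simplified illustration: random local descriptors stand in for real keypoint descriptors, and ordinary k-means stands in for the clustering step. It is not the proposed adaptive pipeline, whose individual modules (BFGF-HOG, GSURF, LSA, SPL-based weighting, AFKM, QSVM) are detailed in the following subsections.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, n_words=50, seed=0):
    """Step (2): cluster all local descriptors; the cluster centres are the visual words."""
    stacked = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(stacked)

def bovw_histogram(descriptors, vocabulary):
    """Steps (3)-(4): map each descriptor to its nearest visual word and
    build an L1-normalized histogram (the image signature)."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Step (1) stand-in: each "image" yields a variable number of 64-D local descriptors.
    images = [rng.normal(size=(rng.integers(80, 120), 64)) for _ in range(20)]
    vocab = build_vocabulary(images, n_words=50)
    signatures = np.array([bovw_histogram(d, vocab) for d in images])
    # Step (5): the signatures (plus class labels) would then be fed to a classifier.
    print(signatures.shape)   # (20, 50)
```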
Framework of the proposed adaptive complementary visual words integration based on the BoVW methodology The training section of the methodology This section presents the detail of the different modules of the proposed method, which are complementary feature extraction, adaptive feature weighting, clustering, histogram formation, and image classification. The detail of these modules is presented in the following subsequent sections. Feature extraction using BFGF-HOG descriptor This step comprises extracting features from each image by using the BFGF-HOG descriptor, which is a variant of the HOG descriptor. The HOG descriptor [39] has been used widely in machine vision tasks for detecting objects within images, humans, etc. It is a window-based descriptor and works by capturing the edge directions or local intensity gradients. A window is focused on interest points and partitioned into n × n cells. For each pixel in a cell, gradient direction θ(x, y) and magnitude M(x, y) are mathematically calculated as follows: $$ M\left(x,y\right)=\sqrt{{\left(\raisebox{1ex}{$\partial I$}\!\left/ \!\raisebox{-1ex}{$\partial x$}\right.\right)}^2+{\left(\raisebox{1ex}{$\partial I$}\!\left/ \!\raisebox{-1ex}{$\partial y$}\right.\right)}^2} $$ $$ \theta \left(x,y\right)={\tan}^{-1}\frac{\raisebox{1ex}{$\partial I$}\!\left/ \!\raisebox{-1ex}{$\partial y$}\right.}{\raisebox{1ex}{$\partial I$}\!\left/ \!\raisebox{-1ex}{$\partial x$}\right.} $$ The computed gradient directions for each pixel are then quantized into 9 bin histogram of 45°, and the corresponding magnitudes are accumulated. The contrast of the resultant histogram is normalized to achieve illumination invariance. Given an image I(x, y), a non-iterative bilateral field (BF), which efficiently preserves edges, is applied. The bilateral filter is an alternative to low-pass filters, which reduces noise but fade edges too. To overcome this, BF computes weighted averages like low-pass filters but utilizes geometric closeness (spatial) as well as photometric information/similarity between a center pixel c and its neighboring pixels (k − c) to calculate weights. Mathematically, it is expressed as follows: $$ h(c)={N}^{-1}{\int}_{-\infty}^{\infty }{\int}_{-\infty}^{\infty }I(k)g\left(k,c\right)\left(I(k)-I(c)\right) dk $$ $$ N={\int}_{-\infty}^{\infty }{\int}_{-\infty}^{\infty }g\left(k,c\right)\left(I(k)-I(c)\right) dk $$ where N is a normalization constant, g(k, c) = k − c represents geometric closeness, and (I(k) − I(c)) measures the similarity between the center pixel and its neighbors. After that, feature vector of the BF-based GF-HOG feature descriptor is computed, which represents image structure as dense gradient field (GF), interpolated by neighboring sparse edge pixels. Begin with binary canny edge map Ie, edge orientations and magnitudes are calculated. Pixels having smaller magnitudes are discarded to obtain a set of sparse orientation edge pixels S = {θ(x, y)M > t} against a certain threshold t. The gradient field \( {G}_{R^2} \) is dense orientation field interpolated from sparse set S. Issue of smoothness of dense gradient field is solved by the Poisson equation with Dirichlet boundary conditions. The Poisson approximates ∆G = 0 by using a 3 × 3 Laplacian window, which results in a linear equation (Eq. (5)) with Dirichlet boundary conditions (Eq. 6). 
$$ G\left(x,y\right)=G\left(x-1,y\right)+G\left(x+1,y\right)+G\left(x,y-1\right)+G\left(x,y+1\right) $$ $$ G\left(x,y\right)=\left\{\begin{array}{cc}\theta \left(x,y\right)& f\left(x,y\right)\in S\\ {}0& f\left(x,y\right)\ \mathrm{is}\ \mathrm{located}\ \mathrm{on}\ \mathrm{image}\ \mathrm{boundaries}\end{array}\right. $$ After detecting keypoints by applying a Hessian detector on each image, a histogram of gradients (detail mentioned earlier) is then calculated over the density gradient field G and the range of orientations is quantized into m bins. The resultant vector is mn2-dimensional vector for the entire window. A resultant feature vector of the BFGF-HOG descriptor is 64 × J dimensional, where J represents a number of interest points of the features, which are automatically selected by the descriptor depending upon the contents of the image, and it is mathematically expressed as follows: $$ {F}_a=\left({a}_{1d},{a}_{2d},{a}_{3d},\dots, {a}_{nd}\right) $$ where a1d to and are image descriptors of the BFGF-HOG feature vector. Feature extraction using Gauge SURF descriptor This step comprises extracting features by applying the Gauge SURF (GSURF) descriptor to each image. To locally adapt the blur within a region and to retain fine details or edges, GSURF [40] feature descriptor utilizes gauge coordinates. Instead of using first-order derivatives, GSURF detects keypoints from multiscale images using the determinant of the Hessian matrix. Hessian matrix is a result of convolving an integral image with second-order partial derivative Gaussian to obtain a maximum gradient. Give an image I(x, y), Hessian matrix H(z, σ) at point z(x, y) and scale parameter σ are mathematically defined as follows: $$ H\left(z,\sigma \right)=\left[\begin{array}{cc}{L}_{xx}\left(z,\sigma \right)& {L}_{xy}\left(z,\sigma \right)\\ {}{L}_{xy}\left(z,\sigma \right)& {L}_{yy}\left(z,\sigma \right)\end{array}\right] $$ where Lxx is a convolution of second-order gauge derivative with image I at point z and is calculated as follows: $$ {L}_{xx}\left(z,\sigma \right)=I(z)\ast \frac{\partial^2g\left(\sigma \right)}{\partial {x}^2} $$ and similarly \( {L}_{yy}\left(z,\sigma \right)=I(z)\ast \frac{\partial^2g\left(\sigma \right)}{\partial {y}^2} \) and \( {L}_{xy}\left(z,\sigma \right)=I(z)\ast \frac{\partial^2g\left(\sigma \right)}{\partial x\ \partial y} \). The motivation behind using gauge coordinates is their ability to describe each pixel in an image by its 2D local structure. Even if an image is rotated, the structure will remain the same. Gauge coordinates comprise of a gradient vector \( \overrightarrow{w} \) and its perpendicular vector\( \overrightarrow{v} \), which are mathematically defined as follows: $$ \overrightarrow{w}=\left(\frac{\partial L}{\partial x},\frac{\partial L}{\partial y}\right)=\frac{1}{\sqrt{L_x^2+{L}_y^2}}\cdot \left({L}_x,{L}_y\right) $$ $$ \overrightarrow{v}=\left(\frac{\partial L}{\partial y},-\frac{\partial L}{\partial x}\right)=\frac{1}{\sqrt{L_x^2+{L}_y^2}}\bullet \left({L}_y,-{L}_x\right) $$ where L denotes convolution of image I with Gaussian kernel having σ as scale parameter, i.e., L(x, y, σ) = I(x, y) ∗ g(x, y, σ). Derivatives of any scale and order can be obtained using these coordinates. Second-order derivatives of these coordinates are of special interest and can be calculated by taking a product of 2 × 2 Hessian matrix with gradients in \( \overrightarrow{w} \)and \( \overrightarrow{v} \)directions. 
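To make the two complementary descriptors more concrete, the sketch below illustrates their core building blocks under simplifying assumptions: OpenCV's built-in bilateral filter stands in for the edge-preserving smoothing of Eqs. (3)-(4), Sobel operators approximate the image derivatives, orientations are pooled into magnitude-weighted 9-bin histograms over 0-180 degrees (the common HOG convention), and the second-order gauge derivatives Lww and Lvv are obtained by projecting the Hessian onto the gradient and isophote directions, as just described. The dense gradient-field interpolation of Eqs. (5)-(6), the Haar-wavelet pooling of GSURF, and the interest-point detection are not reproduced here.

```python
import cv2
import numpy as np

def bf_orientation_histograms(gray, cell=8, n_bins=9):
    """BFGF-HOG-style building block: bilateral smoothing, then one
    magnitude-weighted orientation histogram per cell."""
    smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=50, sigmaSpace=50)
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)                       # Eq. (1)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # Eq. (2), unsigned orientation

    rows, cols = gray.shape[0] // cell, gray.shape[1] // cell
    hists = np.zeros((rows, cols, n_bins))
    bin_width = 180.0 / n_bins
    for r in range(rows):
        for c in range(cols):
            block = np.s_[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            bins = np.minimum((angle[block] / bin_width).astype(int), n_bins - 1)
            np.add.at(hists[r, c], bins.ravel(), magnitude[block].ravel())
    norms = np.linalg.norm(hists, axis=2, keepdims=True)  # per-cell contrast normalization
    return hists / np.maximum(norms, 1e-12)

def gauge_derivatives(gray, sigma=1.6):
    """GSURF-style building block: second-order gauge derivatives Lww and Lvv."""
    L = cv2.GaussianBlur(gray.astype(np.float64), (0, 0), sigma)
    Lx = cv2.Sobel(L, cv2.CV_64F, 1, 0, ksize=3)
    Ly = cv2.Sobel(L, cv2.CV_64F, 0, 1, ksize=3)
    Lxx = cv2.Sobel(L, cv2.CV_64F, 2, 0, ksize=3)
    Lyy = cv2.Sobel(L, cv2.CV_64F, 0, 2, ksize=3)
    Lxy = cv2.Sobel(L, cv2.CV_64F, 1, 1, ksize=3)
    grad_sq = np.maximum(Lx ** 2 + Ly ** 2, 1e-12)
    Lww = (Lx ** 2 * Lxx + 2 * Lx * Ly * Lxy + Ly ** 2 * Lyy) / grad_sq  # along gradient w
    Lvv = (Ly ** 2 * Lxx - 2 * Lx * Ly * Lxy + Lx ** 2 * Lyy) / grad_sq  # along isophote v
    return Lww, Lvv

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(bf_orientation_histograms(img).shape)        # (8, 8, 9)
    Lww, Lvv = gauge_derivatives(img)
    print(Lww.shape, Lvv.shape)                        # (64, 64) (64, 64)
```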
For building a descriptor of 64 × J dimensions, first- and second-order Haar wavelet responses in a horizontal and vertical direction are calculated over a 20 × 20 region, i.e., Lx, Ly, Lxx, Lyy, Lxy. The 20 × 20 window is further subdivided into 4×4 sub-blocks without any overlap and Haar wavelet of size 2σ is calculated. After fixing the gauge coordinates for each of these pixels, gauge invariants |Lww|, |Lvv| are computed. The parameters of the GSURF descriptor are mathematically defined as follows: $$ {L}_{ww}=\frac{1}{L_x^2+{L}_y^2}\left({L}_x\;{L}_y\right)\left(\begin{array}{cc}{L}_{xx}& {L}_{xy}\\ {}{L}_{yx}& {L}_{yy}\end{array}\right)\left(\begin{array}{l}{L}_x\\ {}{L}_y\end{array}\right) $$ $$ {L}_{vv}=\frac{1}{L_x^2+{L}_y^2}\left({L}_y-{L}_x\right)\left(\begin{array}{cc}{L}_{xx}& {L}_{xy}\\ {}{L}_{yx}& {L}_{yy}\end{array}\right)\left(\begin{array}{l}{L}_y\\ {}-{L}_x\end{array}\right) $$ A resultant feature descriptor for each sub-region will be four-dimensional vector Vd = (∑Lww, ∑Lvv, ∑|Lww|, ∑|Lvv|). Resultant feature vector will be 64 × J dimensional, where J represents a number of the interest points of the features that are chosen automatically by the descriptor depending upon the contents of the image, mathematically, it can be expressed as follows: $$ {F}_b=\left({b}_{1d},{b}_{2d},{b}_{3d},\dots, {b}_{nd}\right) $$ where b1d to bnd are feature descriptors of the GSURF descriptor. To detect objects within images, their location and spatial orientation of edges are of high significance. Using the HOG descriptor to extract such information results in poor performance because of difficulty in the selection of appropriate window size, as the window captures either too much or too less of local edge structure. Similarly, the standard SURF descriptor utilizes the Gaussian scale space, which incorporates blurring as a pre-processing step to remove noise. However, this step resulted in the removal of structure details such as edges. Therefore, a fusion of adaptive complementary visual words obtained through a bilateral filter (BF)-based gradient field HOG [25] and gauge SURF descriptors is proposed in this article to overcome said issues. In the next two steps, features from both the descriptors are weighted for optimal feature selection, which can reduce training time (computational cost) and improve the performance of the proposed method. Latent semantic analysis as a dimension reduction mechanism The feature vectors extracted in the previous steps exhibit high dimensionality, which generates issues in constructing compact feature interpretation of the image as there exist redundancy and multiple correlations among certain feature points. To get robust and discriminative features, a latent semantic analysis (LSA) method is applied to each feature vector to easily perceive and preserve data, while reducing storage and computational cost. Deerwester et al. [41] applied this method for document retrieval systems, which is based on a singular value decomposition (SVD) mechanism. The proposed method uses LSA to construct a term-context matrix A of dimension r × q for each extracted feature vector, which highlights the hidden relationship among semantically similar images. In the case of the proposed method of CBIR, each column A represents a resultant feature vector (i.e., refers to Fa (defined in Eq. (7)) in case of BFGF-HOG resultant feature vector, while it refers to Fb (defined in Eq. (13)) in case of GSURF resultant feature vector), while rows are distinct features. 
Ar × q indicates the association between rth term and qth context. The key step of LSA is SVD, which decomposes the high-dimensional term-context matrix A into three matrices U, Z,and V of smaller dimensions d, represented mathematically as follows: $$ \mathrm{A}=> UZ{V}^T $$ where U, V are orthogonal matrices and Z is the diagonal matrix. The columns of U and V contain orthonormal eigenvectors of AAT and ATA,respectively, while the diagonal matrix contains singular values, which are square roots of eigenvalues from U or V. The values of the diagonal matrix Z are sorted in descending order, so the significant information can be retained by considering higher values while eliminating the lower values/noise. For dimension d, the reduced matrix can then be represented as follows: $$ {A}_d={U}_d\;{Z}_d\;{V}_d^T $$ In the next step, reduced features from both descriptors are weighted for optimal feature selection. Adaptive feature weighting based on self-paced learning In computer vision-based applications, some features of the image are more significant than the others. The proposed method applies an adaptive feature weighting method to each reduced size feature vector to classify features as significant or insignificant based on the self-paced learning (SPL) method [42]. The SPL dynamically pick features and learn in an easy to hard learning fashion. Given a matrix of extracted LSA features X = [X1, X2, X3, …, Xn] (where X = > Ad) and y as the corresponding class label, the objective function of SPL can be defined mathematically as follows: $$ \underset{\alpha }{\min}\;p(t)=\left\Vert y- Xt\right\Vert +\left\Vert \lambda (t)\right\Vert $$ where t and λ(t) denote the representation coefficient and regularization parameter, respectively. A weight variable w is added in Eq. (16) to assign a higher or lower value of weights to each feature categorize as easy or hard. Equation (16) can then be mathematically transformed as follows: $$ \underset{w}{\min}\;p\left(t,w\right)={\sum}_{i=1}^n{\left({\left({w}^i\right)}^{\frac{1}{2}}\left({y}^i-{X}^it\right)\right)}^2-\frac{1}{\gamma }{\sum}_{i=1}^n{w}^i+\lambda (t) $$ where γ, Xi, yi are the learning parameters, which controls the selection of learning sample, vector of the ith training feature, and ith feature of a test sample, respectively. The value of l is higher for the initial learning sample, which yields smaller losses and decreases gradually when hard samples are selected. The process continues until all the samples are selected. The features are selected by setting a threshold which is mathematically described as: $$ {w}^i\left({f}^i,l\right)=\left\{\begin{array}{l}1, if\;{f}^i\le \frac{1}{l}\\ {}0, if\;{f}^i>\frac{1}{l}\end{array}\right. $$ where fi = (yi − Xiα)2. In the next step, feature vectors of the adaptive feature weighting are clustered separately using an adaptive fuzzy k-means clustering algorithm, whose details are provided in the following section. The framework of the first competitive method of non-adaptive complementary features integration method is shown in Fig. 4. While in the case of the non-adaptive complementary visual words integration method (second competitive method), all the framework is the same as shown in Fig. 3, except that it does not use the adaptive feature weighting (AFW) to analyze its image retrieval performance. 
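A toy illustration of the two steps just described, dimensionality reduction via truncated SVD and an easy-to-hard sample weighting in the spirit of self-paced learning, is given below. It is a sketch only: the pace schedule and the ridge-regression loss are our own simplifications, not the exact objective of Eq. (17), and the helper names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge

def reduce_features(X, n_components=32, seed=0):
    """LSA-style reduction: keep the n_components largest singular directions, cf. Eq. (15)."""
    return TruncatedSVD(n_components=n_components, random_state=seed).fit_transform(X)

def self_paced_weights(X, y, pace=2.0, growth=1.5, n_rounds=5):
    """Binary easy/hard weights in the spirit of self-paced learning.

    Samples whose squared loss under the current model is below 1/pace are
    selected (weight 1); the pace is then relaxed so that harder samples are
    admitted in later rounds, mirroring the threshold rule of Eq. (18).
    """
    w = np.ones(len(y))
    model = Ridge(alpha=1.0)
    for _ in range(n_rounds):
        selected = w > 0
        if selected.any():
            model.fit(X[selected], y[selected])
        losses = (y - model.predict(X)) ** 2
        w = (losses <= 1.0 / pace).astype(float)
        pace /= growth                     # admit harder samples in the next round
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 128))                  # stand-in for a raw feature matrix
    Xr = reduce_features(X, n_components=32)
    y = Xr[:, 0] + 0.1 * rng.normal(size=200)        # toy supervision signal
    weights = self_paced_weights(Xr, y)
    print(Xr.shape, int(weights.sum()), "samples currently marked easy")
```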
Framework of the competitor method of non-adaptive complementary feature integration based on the BoVW methodology Adaptive fuzzy k-means clustering for complementary visual vocabulary formation In this step, the visual vocabulary is built by applying adaptive fuzzy k-means (AFKM) clustering on the optimized adaptive features of BFGF-HOG and GSURF descriptors of the whole data of the training images. The AFKM clustering is an improved version of the k-means clustering algorithm. It is one of the frequently used unsupervised, non-deterministic, and iterative clustering algorithms. However, initialization of the cluster center, the number of clusters, sensitivity to noise, and outliers are some of the shortcomings of the standard k-means algorithm. To overcome these issues, the proposed method of CBIR uses the AFKM clustering algorithm [43]. It is a combination of moving k-means (MKM) [44] and fuzzy c-means (FCM) [45] clustering algorithms. The MKM clustering contributes to an assignment of data to its closest center and FCM allows data to belong to two or more clusters. For a point x and cluster center c, the objective function of AFKM clustering is calculated as follows: $$ F=\sum \limits_{i=1}^n\sum \limits_{j=1}^n\left({E}_{ij}^m\right){\left({x}_j-{c}_i\right)}^2 $$ where \( {E}_{ij}^m \) represent a fuzzy membership function and m represent a fuzziness exponent. The level of being in a specific group is inverse of the distance to clusters. The new position for each centroid is calculated as follows: $$ {C}_{new}=\frac{\sum_{j=1}^n\left({E}_{ij}^m\right){x}_j}{\sum_{j=1}^n\left({E}_{ij}^m\right)} $$ In AFKM clustering, the concept of belongingness is introduced to improve clustering. The belongingness estimates the relationship between the cluster center and its members. The degree of belonging is calculated using the following mathematical equation: $$ {B}_i=\frac{C_i}{E_{ij}^m} $$ The proposed method of CBIR minimizes the AFKM's objective function, defined in Eq. (19). In AFKM, the clustering is iteratively performed until the center is converged and all data can be considered. In the AFKM clustering, cluster heads of the formed clusters are then termed as visual words, which are grouped to form a visual vocabulary. The proposed method of image retrieval formulates two visual vocabularies, which are represented by WA = {a1, a2, a3, ⋯, ai}, where a1 to ai represent the visual words of BFGF-HOG feature vector and WB = {b1, b2, b3, ⋯, bj}, where b1 to bj represent the visual words of the GSURF feature vector. After that, both visual vocabularies are concatenated vertically to form combined visual vocabulary denoted by WAB = WA + WB = {WA; WB} of size i + j visual words to achieve complementary features by integrating visual words of both descriptors in the proposed method. Image representation as a histogram In this phase, salient objects of an image are transformed into a histogram, which is formed using fused visual words from the complementary visual vocabulary. Assume that the total no. of visual words in the complementary visual vocabulary (termed as WAB in the previous step) are denoted by T. 
Consider Dj denote the number of descriptors, which are mapped to the jth visual word abj, then the cardinality of Dj is the jth bin of the histogram of visual word abj, which is mathematically denoted as follows: $$ a{b}_j=\mathrm{Cardinality}\ \left({D}_j\right),{D}_j=\left\{{D}_j, j\epsilon \left(1,\dots, T\right)\ \right|\ ab\left({D}_j\right)=a{b}_j\Big\} $$ The obtained histograms are then forward to a classifier for learning a model that can classify images semantically. In this step, the proposed method uses quadratic kernel-based SVM (QSVM) to perform image classification. The histograms of the training images along with labels of each class act as inputs to the QSVM for image classification in the proposed method. To improve retrieval efficiency and accuracy of any CBIR system, image classification is regarded as one of the vital steps. The SVM [46] is one of the frequently used classifiers and has been applied in various computer vision-based applications because of its outstanding generalization ability. Given a linear training set {(x1, y1), (x2, y2), (x3, y3), …. (xn, yn)}, where y1, 2, …,n = {1, −1} are the corresponding class labels, SVM classifies linear data as defined in Eq. (23). It defines decision boundaries known as hyperplanes by focusing on data points that lie at the edges of classified class distributions, which are also known as support vectors. Mathematically, it is defined as: $$ f(x)={w}^T{x}_i+b $$ where w, x, and b represent weight vector, sample point of the training set, and bias, respectively. For hyperplane to be optimal, SVM tries to (i) maximize the margin between support vectors and (ii) reduces misclassification by introducing slack variable ξas defined in Eq. (24): $$ \operatorname{Minimize}\ \frac{1}{2}{\left\Vert w\right\Vert}^2+R\sum \limits_{i=1}^n{\xi}_i $$ $$ \mathrm{Subject}\ \mathrm{to}\ y\left({w}^T{x}_i+b\right)\ge 1-{\xi}_i, $$ $$ {\xi}_i\ge 0\kern2em i=1,\cdots, n, $$ where ξ represents a misclassified sample of corresponding hyperplane and R represents a tradeoff between margin maximization and misclassification error. The higher the value of R, error reduction will be predominant, and for lower values of R, margin maximization will be emphasized. For non-linear data points, the traditional SVM algorithm fails to converge hence consumes more processing time and it also affects image retrieval accuracy. As a solution, SVM utilizes kernel functions k to map data points into new feature space also known as kernel space. The transformed equation for hyperplane is then represented as: $$ f\left({\mathrm{x}}_i\right)=\sum \limits_{m=1}^n{y}_i{\alpha}_i\mathrm{k}\left({x}_m,{\mathrm{x}}_i\right)+\mathrm{b} $$ where k(xm, xi) = ϕ(xm). ϕ(xi) is a kernel function that uses a non-linear mapping ϕ, which maps the data points to kernel space and αi is the Lagrange multiplier. Among several available kernels, the proposed method uses the polynomial kernel of degree 2, also known as the quadratic kernel. It has low running costs as compared to RBF, sigmoid, and other higher-order kernels, and it also produces a robust performance of the image classification. It is mathematically represented as follows: $$ k\left({x}_m,{x}_i\right)={\left({x}_m.{x}_i+1\right)}^2 $$ Performance testing section of the methodology As mentioned earlier, a query image from the test group of the images is selected that undergoes all the steps mentioned in the training section. The similarity between a query image and dataset images is computed using Euclidean distance. 
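In scikit-learn terms, the quadratic-kernel SVM of Eqs. (23)-(27) corresponds to an SVC with a degree-2 polynomial kernel trained on the visual-word histograms. The sketch below is a minimal illustration on synthetic histograms, not the tuned classifier used in the experiments; the block structure of the synthetic data is an assumption made only so the toy example is separable.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_qsvm(histograms, labels, C=1.0):
    """Quadratic-kernel SVM: k(x, z) = (gamma * x.z + coef0)^2, cf. Eq. (27) with gamma = coef0 = 1."""
    clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=C)
    clf.fit(histograms, labels)
    return clf

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n_images, n_words, n_classes = 300, 50, 3
    # Synthetic BoVW signatures: each class favours a different block of visual words.
    labels = rng.integers(0, n_classes, size=n_images)
    hists = rng.random((n_images, n_words))
    for c in range(n_classes):
        hists[labels == c, c * 15:(c + 1) * 15] += 2.0
    hists /= hists.sum(axis=1, keepdims=True)        # L1-normalized histograms

    X_tr, X_te, y_tr, y_te = train_test_split(hists, labels, test_size=0.3, random_state=0)
    clf = train_qsvm(X_tr, y_tr)
    print("held-out accuracy: %.2f" % clf.score(X_te, y_te))
```

A degree-2 polynomial kernel is a reasonable middle ground here: it captures pairwise interactions between visual-word frequencies while staying cheaper to evaluate than RBF or higher-order kernels, which is the trade-off the authors cite.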
The retrieval accuracy of the proposed method is further improved by incorporating log-based RF. The details of these two modules are presented in the following subsections.
Retrieval of the images based on the similarity measure
Given a query image q, a set of similar images is retrieved by computing a relevance score between the query image and the images in the dataset, denoted as IDB. For this purpose, the Euclidean distance is utilized as the relevance score. Mathematically, it is defined as follows: $$ \mathrm{D}\left(q,{\mathrm{I}}_{DB}\right)=\sqrt{\sum \limits_{k=1}^n{\left({I}_{D{B}_k}-{q}_k\right)}^2} $$
Log-based relevance feedback
To improve the performance of image retrieval, the proposed method uses the log-based relevance feedback (LRF) method for CBIR. It integrates user feedback along with low-level features to further improve the learning process of a CBIR system. Traditional relevance feedback (RF) methods for CBIR require several iterations to return satisfactory results, which is time-consuming and tedious from a user perspective. In [47], an active learning approach is proposed that requires a user to label extra images retrieved by the system as most informative. The CBIR methods based on RF have been studied immensely, and one based on RF logs is presented in [48]. The proposed method uses the LRF method, which starts with a query image (represented by q) and its corresponding retrieved images (represented by N), which are marked by a user as relevant or irrelevant. The user judgment is then saved in a history log, and a relevance matrix R is created from all log sessions. In the case of relevant, irrelevant, and non-judged images in log sessions, a cell in R is marked as +1, −1, and 0, respectively. The LRF method aims to find a function fq that maps images to a relevance degree between 0 and 1. $$ {f}_q:{I}_{DB}\to \left[0,1\right] $$ As the LRF method utilizes low-level features (i.e., BFGF-HOG and GSURF features) as well as log sessions, the overall function fq can be defined mathematically as follows: $$ {f}_q\left({I}_{DB}\right)=\frac{1}{2}\left({f}_R\left({I}_{DB}\right)+{f}_x\left({I}_{DB}\right)\right) $$ where fR and fx are relevance functions based on the log data and the low-level features of the images, respectively. To find the relevance between two images Ii and Ij, the correlation between their log data li and lj is calculated, which is mathematically defined as: $$ {corF}_{i,j}={\sum}_k{\upsilon}_{k,i,j}\cdot {l}_{k,i}\cdot {l}_{k,j} $$ $$ {\upsilon}_{k,i,j}=\left\{\begin{array}{l}1, if\;{l}_{k,i}+{l}_{k,j}\ge 0\\ {}0, if\;{l}_{k,i}+{l}_{k,j}<0\end{array}\right. $$ The fR for a log session k can be calculated using the following mathematical equation: $$ {f}_R\left({I}_i\right)={\max}_{k\in {\mathrm{\mathcal{L}}}^{+}}\left\{\frac{corF_{k,i}}{\max\;{corF}_{k,j}}\right\}-{\max}_{k\in {\mathrm{\mathcal{L}}}^{-}}\left\{\frac{corF_{k,i}}{\max\;{corF}_{k,j}}\right\} $$ where corFk, i is a correlation function, and \( {\mathcal{L}}^{+} \) and \( {\mathcal{L}}^{-} \) denote the sets of relevant and irrelevant images, respectively. (An illustrative sketch of this relevance scoring is given below.)
Evaluation metrics, results of the experiments, and discussions
This section describes the chosen datasets, evaluation metrics, experimental results, and discussions of the proposed method. The experimental results of the proposed method are reported by performing each experiment 5 times to ensure consistent performance.
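The following sketch makes the two retrieval ingredients described above concrete: Euclidean-distance ranking of signature histograms (Eq. (28)) and the log-correlation score of Eqs. (31)-(33). The toy log matrix, the row-wise normalizer, and the feature-based score f_x used in the combination of Eq. (30) are our own illustrative assumptions, not the data or parameters of the paper.

```python
import numpy as np

def euclidean_ranking(query_hist, db_hists):
    """Rank database signatures by Euclidean distance to the query, Eq. (28)."""
    dists = np.linalg.norm(db_hists - query_hist, axis=1)
    return np.argsort(dists), dists

def log_correlation(R):
    """Pairwise correlation of feedback logs, Eqs. (31)-(32).

    R is a (sessions x images) matrix with entries +1 (relevant),
    -1 (irrelevant), and 0 (not judged).
    """
    n_images = R.shape[1]
    corF = np.zeros((n_images, n_images))
    for i in range(n_images):
        for j in range(n_images):
            upsilon = (R[:, i] + R[:, j]) >= 0            # Eq. (32)
            corF[i, j] = np.sum(upsilon * R[:, i] * R[:, j])
    return corF

def log_relevance(corF, relevant, irrelevant):
    """f_R of Eq. (33); the row-wise normalizer below is our reading of 'max corF_{k,j}'."""
    norm = np.maximum(np.abs(corF).max(axis=1, keepdims=True), 1e-12)
    scores = corF / norm
    return scores[relevant].max(axis=0) - scores[irrelevant].max(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    db = rng.random((10, 20))                    # 10 database signatures, 20 visual words
    query = rng.random(20)
    order, dists = euclidean_ranking(query, db)

    R = rng.choice([-1, 0, 1], size=(6, 10))     # 6 toy feedback sessions for 10 images
    f_R = log_relevance(log_correlation(R), relevant=[order[0]], irrelevant=[order[-1]])
    f_x = 1.0 - dists / dists.max()              # toy feature-based relevance in [0, 1]
    combined = 0.5 * (f_R + f_x)                 # combination of Eq. (30)
    print("top-5 after feedback:", np.argsort(-combined)[:5])
```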
To assess the performance of the proposed method, the following evaluation metrics are used; each is described in detail in the subsequent sections.
Precision
The accuracy of a CBIR system in retrieving relevant images (images that belong to the same semantic class of the dataset) according to the visual contents of a query image is evaluated by precision (P), which is the ratio of images retrieved as relevant over the total retrieved images. Mathematically, it is defined as follows: $$ \mathrm{Precision}=P=\frac{\mathrm{No}.\mathrm{of}\ \mathrm{retrieved}\ \mathrm{relevant}\ \mathrm{images}}{\mathrm{No}.\mathrm{of}\ \mathrm{retrieved}\ \mathrm{images}} $$
Average precision
Average precision (Pavg) computes an average of the precision scores (P) of all relevant retrieved images. Mathematically, it is described as: $$ {P}_{avg}=\frac{1}{n}{\sum}_{j=1}^nP(j) $$ where P(j) represents the precision value of the jth iteration.
Mean average precision
The mean average precision (mAP) computes the average of the Pavg values. Mathematically, it is expressed as follows: $$ \mathrm{mAP}=\frac{1}{k}\sum \limits_{i=1}^k{P}_{\mathrm{avg}}(i) $$ where k represents the number of query images.
Recall
The ratio of images retrieved as relevant over the number of relevant images available in the dataset is known as recall. It is defined as follows: $$ \mathrm{Recall}=R=\frac{\mathrm{Number}\ \mathrm{of}\ \mathrm{retrieved}\ \mathrm{relevant}\ \mathrm{images}}{\mathrm{Number}\ \mathrm{of}\ \mathrm{relevant}\ \mathrm{images}\ \mathrm{in}\ \mathrm{the}\ \mathrm{dataset}} $$
F-measure
The overall success of an image retrieval system and its efficiency can also be assessed by the F-measure, which combines precision and recall as mentioned in the equation below: $$ \mathrm{F}-\mathrm{measure}=F=2\times \frac{\left(P\times R\right)}{\left(P+R\right)} $$ (A small illustrative computation of these metrics is given below.)
Datasets, experimental parameters, results, and discussions
The performance assessment of the proposed method and its competitor methods is accomplished on four standard image datasets of CBIR, which are Corel 1000, Corel 1500, Scene 15, and Holidays. The details of these image datasets along with the experimental results are presented in Section 4.2.1 to Section 4.2.4. Table 2 presents the detail of the different experimental parameters, which are used to analyze the performance of the proposed method.
Table 2 Detail of experimental parameters of the proposed method
Comparative performance analysis on the Corel 1000 image dataset
The Corel 1000 [49] image dataset comprises a total of 1000 images, which are divided among 10 semantic categories, each having 100 images with a resolution of 256 × 384 or 384 × 256 pixels. The categories of the images included in this image dataset are buses, flowers, buildings, mountains, dinosaurs, human beings, food, landscape, elephants, and horses. Figure 5 presents sample images taken from each semantic category of the Corel 1000 image dataset.
Sample images (one per category) of the Corel 1000 image dataset
The experimental results of the non-adaptive complementary feature integration (first competitor method), non-adaptive complementary visual words integration (second competitor method), and the proposed adaptive complementary visual words integration methods using different sizes of the visual vocabulary are presented in Figs. 6, 10, 14, and 16.
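Before examining the per-dataset results, here is a small toy computation of the metrics defined above. The retrieval lists and ground-truth labels are made up for illustration, and the average precision follows the standard rank-based definition, which is one reading of Eq. (35).

```python
import numpy as np

def precision_recall(retrieved, relevant):
    """Precision and recall for one query, per the definitions above."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def f_measure(precision, recall):
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

def mean_average_precision(all_retrieved, all_relevant):
    """mAP over a set of queries: the mean of per-query average precision,
    where average precision is accumulated at the rank of each relevant hit."""
    aps = []
    for retrieved, relevant in zip(all_retrieved, all_relevant):
        relevant = set(relevant)
        hits, precisions = 0, []
        for rank, item in enumerate(retrieved, start=1):
            if item in relevant:
                hits += 1
                precisions.append(hits / rank)
        aps.append(np.mean(precisions) if precisions else 0.0)
    return float(np.mean(aps))

if __name__ == "__main__":
    # Two toy queries: retrieved image ids (in rank order) and ground-truth relevant ids.
    retrieved = [[1, 2, 3, 4, 5], [10, 11, 12, 13, 14]]
    relevant = [[1, 3, 7], [10, 11, 12, 13, 14]]
    p, r = precision_recall(retrieved[0], relevant[0])
    print("P=%.2f R=%.2f F=%.2f mAP=%.2f" %
          (p, r, f_measure(p, r), mean_average_precision(retrieved, relevant)))
```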
After the analysis of experimental facts presented in these figures, it can be deduced that the proposed system that is based upon adaptive complementary visual word integration produces robust performance in contrast to its competitor methods of CBIR for all the specified datasets. The size of the visual vocabulary, which produces the best performance of the proposed method, is 600 visual words and achieved mAP performance on this visual vocabulary size is 89.91% for the Corel 1000 dataset. Tables 3, 4, 5, and 6 present the performance comparison of the proposed method with its state-of-the-art image retrieval methods. It can be concluded from experimental results that the proposed method gives promising results as compared to its competitor CBIR methods due to the following reasons: (a) firstly, it uses complementary visual feature representation for salient contents of the images; (b) it uses adaptive feature weighting method based on self-paced learning to select optimized features for each image; (c) it uses twice size complementary visual words to represent salient contents of each image; (d) it uses quadratic kernel-based SVM (QSVM) to achieve robust image classification results, which ultimately improve the similarity measure process in the proposed method of CBIR; and (e) lastly, the proposed method uses log-based relevance feedback (LRF) mechanism for CBIR, which integrates user feedback along with low-level complementary features to further improve the learning process of a CBIR system. Comparison of mAP of the proposed method with competitive CBIR methods on the Corel 1000 image dataset Table 3 Comparative analysis of competitive methods with a proposed method on the Corel 1000 dataset Table 5 Comparative analysis of competitive methods with the proposed method on the Scene 15 dataset Table 6 Comparative analysis of competitive methods with a proposed method on the Holidays dataset Figures 7 and 8 show the results of the image retrieval according to the salient objects of the query images. The query image (first row) of Figs. 7 and 8 are taken from the "Dinosaurs" and "Horses" categories of the Corel 1000 dataset, respectively. Furthermore, Fig. 7 shows the result of LRF-0 image retrieval. The integer value with LRF shows the iteration of the feedback. The images shown in Fig. 8 are the result of the image retrieval after applying LRF-1, which are semantically more relevant to the query image as compared to the LRF-0 retrieval result of the query image. Semantic category "Dinosaurs" of the Corel 1000 dataset shows retrieved images as a response of the query image (first row) (LRF-0) Semantic category "Horses" of the Corel 1000 dataset shows retrieved images as a response of the query image (first row) (LRF-1) The Corel 1500 [49] image dataset is a subset of the WANG image dataset. The images in the Corel 1500 image dataset are ordered into 15 semantic categories, and each contains a total of 100 images. The image resolution in this image dataset is either 256 × 384 pixels or 384 × 256 pixels. The sample images from each semantic category of the Corel 1500 image dataset are shown in Fig. 9. Sample images (one per category) of the Corel 1500 image dataset [49] By varying different sizes of the visual vocabulary, the mAP performance of the proposed method, and its comparison with competitor methods, is presented in Fig. 10. 
After analyzing experimental details, it can be deduced that the proposed method outperforms as compared to its competitor methods on the Corel 1500 image dataset. The best mAP performance of the proposed method is obtained on a visual vocabulary of size 1000 visual words, which is 83.99%. Table 4 presents the performance comparison of the proposed method against competitive methods in terms of performance evaluation metrics of the CBIR. Based on the experimental details shown in Table 4, it can also be concluded that the proposed method also outperforms its comparative methods due to the factors mentioned in Section 4.2.1. The results of the image retrieval using the proposed method according to the salient objects of the query images of the Corel 1500 image dataset are shown in Figs. 11 and 12 for the semantic categories "Sunset" and "Postcard," respectively. Semantic category "Sunset" of the Corel 1500 dataset shows retrieved images as a response of the query image (first row) (LRF-0) Semantic category "Postcard" of the Corel 1500 dataset shows retrieved images as a response of the query image (first row) (LRF-0) Comparative performance analysis on the Scene 15 image dataset The Scene 15 dataset [51] comprises of 4485 gray-scale images, divided into 15 scene categories. This dataset contains images of indoor as well as outdoor scenes. There are 200 to 400 images in each semantic class of this dataset, and the resolution of each image is 300 × 250 pixels. Figure 13 shows different sample images from each semantic class of the Scene 15 image dataset. Indoor and outdoor scenes of the sample images taken from the Scene 15 image dataset Figure 14 shows the performance comparison of the proposed method with its competitor methods in terms of the mAP performance on different sizes of the visual vocabulary. On the Scene 15 image dataset, the best mAP performance of the proposed method against its competitor CBIR methods is attained on a visual vocabulary of size 1000 visual words, which is 83.11%. To further analyze the robustness of the proposed method, its performance comparison is performed with state-of-the-art CBIR methods in terms of standard performance evaluation metrics, whose details are presented in Table 5 for the Scene 15 image dataset. Different factors of the proposed method such as robust complementary image representation, efficient and effective adaptive feature weighting of visual words, twice size visual words for key objects of the image result in the robust performance of the proposed method as compared to its competitor CBIR methods. Comparison of mAP of the proposed method with competitive CBIR methods on the Scene 15 image dataset Comparative performance analysis on the Holidays image dataset The Holidays image dataset [52] contains 1491 images, out of which, 500 images are the query images and the remaining 991 are corresponding relevant images that are classified into distinct semantic groups. Each semantic group of the images represents a distinct scene having various transformations on the images such as rotation, blurring, and viewpoint. The resolution of the image in this dataset is 2448 × 3204 pixels. The different sample images of the Holidays image dataset are shown in Fig. 15. Eight sample images (one per category) of the Holidays image dataset The experimental details and comparative analysis of the effect of varying different sizes of visual vocabulary on mAP performance of the proposed method with its competitor methods are presented in Fig. 
16 for the Holidays image dataset. The proposed method produces its best mAP performance of 72.85% on a visual vocabulary of size 800 visual words against its competitive CBIR methods. The second competitor method, non-adaptive complementary visual words integration, produces its best mAP performance of 62.53% on a visual vocabulary of size 600 visual words as compared to its other reported visual vocabulary sizes. Similarly, the best mAP performance produced by the first competitor method, the non-adaptive complementary features integration method, is 57.14%, which is attained on a visual vocabulary size of 600 visual words as compared to its other reported visual vocabulary sizes on the Holidays image dataset. The performance comparison of the proposed method with state-of-the-art CBIR methods is presented in Table 6, which shows that the proposed method produces robust performance as compared to recent CBIR methods in terms of the performance evaluation metrics. Comparison of mAP of the proposed method with competitive CBIR methods on the Holidays image dataset

Required hardware/software resources and computational cost

The performance of the proposed method in terms of computational cost is measured using a desktop PC with the following hardware and software: Intel(R) Core(TM)-i3 CPU (frequency 2.1 GHz, series 2310M), 8 GB of RAM, 120 GB SSD, Windows 7 Professional (64-bit), and MATLAB (2015b, x64). The computational cost of the proposed method based on adaptive complementary visual words integration, and its comparison with other competitive CBIR methods, is presented in Table 7 for the Corel 1000 image dataset. Table 7 Computational time (in seconds) of the proposed method as compared to competitive CBIR methods

Conclusion and future work

In this article, we explored the effect of adaptive feature weighting and adaptive fuzzy k-means clustering on the robust representation of the principal objects of the images by integrating complementary visual words of the local and global features based on the BoVW methodology. Latent semantic analysis is applied to the adaptive feature weighting to reduce the computational complexity of the proposed method, which is slightly increased due to the integration of the complementary visual words. The classification accuracy of the proposed method is improved using the quadratic kernel-based SVM, which ultimately improves the similarity measure process of the CBIR. The log-based relevance feedback mechanism is also introduced in the proposed method to further improve the performance of the CBIR. The performance comparison of the proposed adaptive complementary visual words integration method is carried out with a non-adaptive complementary feature integration method and a non-adaptive complementary visual words integration method using the same local and global features, as well as with state-of-the-art CBIR methods. It can be concluded that the integration of adaptive complementary visual words significantly improves the performance of the CBIR as compared with the integration of non-adaptive complementary features and non-adaptive complementary visual words due to the assignment of twice-size visual words to the salient objects of the images.
In the future work, due to the radically increasing volume of the image and video databases, the performance of the proposed method can be analyzed using normalized discriminative deep learning-based compressed domain methods like JPEG-2000 and HEVC to improve the accuracy and efficiency of content-based video and image retrieval systems. Data sharing is not applicable to this article as the authors have used publicly available datasets, whose details are included in the "experimental results and discussions" section of this article. Please contact the authors for further requests. CBIR: Content-based image retrieval BoVW: Bag-of-visual-words LRF: QSVM: Quadratic kernel-based support vector machine AFKM: Adaptive fuzzy k-means SPL: BoF: Bag-of-features LSA: Histogram of oriented gradient LBPC: Local binary pattern for color images BF: Bilateral field GF: Gradient field LTrP: Local tetra pattern GSURF: Gauge speeded-up robust features CEDD: Color and edge directivity descriptor BSIF: Binarized statistical image features LDEM: Local directional edge map DBSCAN: Density-based spatial clustering of applications with noise DCD: Dominant color descriptor PCA: FCM: Vector of locally aggregated descriptors SVD: Singular value decomposition AI: HCI: Human–computer interaction M. Alkhawlani, M. Elmogy, H. El Bakry, Text-based, content-based, and semantic-based image retrievals: a survey. Int. J. Comput. Inf. Technol 4(01), 58–66 (2015) C. Singh, E. Walia, K.P. Kaur, Color texture description with novel local binary patterns for effective image retrieval. Pattern Recogn. 76, 50–68 (2018) A. Talib, M. Mahmuddin, H. Husni, L.E. George, A weighted dominant color descriptor for content-based image retrieval. J. Vis. Commun. Image Represent. 24(3), 345–360 (2013) A.T. Da Silva, A.X. Falcão, L.P. Magalhães, Active learning paradigms for CBIR systems based on optimum-path forest classification. Pattern Recogn. 44(12), 2971–2978 (2011) S. Murala, Q.J. Wu, Expert content-based image retrieval system using robust local patterns. J. Vis. Commun. Image Represent. 25(6), 1324–1334 (2014) M. Subrahmanyam, R. Maheshwari, R. Balasubramanian, Local maximum edge binary patterns: a new descriptor for image retrieval and object tracking. Signal Process. 92(6), 1467–1479 (2012) R.S. Dubey, R. Choubey, J. Bhattacharjee, Multi feature content based image retrieval. Int. J. Comput. Sci. Eng. 2(6), 2145–2149 (2010) Krizhevsky, A., I. Sutskever, and G.E. Hinton, Imagenet classification with deep convolutional neural networks. ed. Advances in neural information processing systems, 2012, p. 1097-1105. Z. Mehmood, N. Gul, M. Altaf, T. Mahmood, T. Saba, A. Rehman, M.T. Mahmood, Scene search based on the adapted triangular regions and soft clustering to improve the effectiveness of the visual-bag-of-words model. EURASIP Journal on Image and Video Processing 2018(1), 48 (2018) Misale, S. and A. Mulla, Learning visual words for content based image retrieval. ed. 2018 2nd International Conference on Inventive Systems and Control (ICISC), 2018, p. 580-585. Z. Mehmood, S. Anwar, M. Altaf, N. Ali, A novel image retrieval based on rectangular spatial histograms of visual words. Kuwait Journal of Science 45(1), 54–69 (2018) Z. Mehmood, S.M. Anwar, N. Ali, H.A. Habib, M. Rashid, A novel image retrieval based on a combination of local and global histograms of visual words. Math. Probl. Eng. 2016, 1–12 (2016) L. Yu, L. Feng, H. Wang, L. Li, Y. Liu, S. 
Liu, Multi-trend binary code descriptor: a novel local texture feature descriptor for image retrieval. SIViP 12(2), 247–254 (2018) Mistry, Y., D. Ingole, and M. Ingole, Content based image retrieval using hybrid features and various distance metric. Journal of Electrical Systems and Information Technology, 2017. S. Zeng, R. Huang, H. Wang, Z. Kang, Image retrieval using spatiograms of colors quantized by Gaussian Mixture Models. Neurocomputing 171, 673–684 (2016) S.K. Roy, B. Chanda, B.B. Chaudhuri, S. Banerjee, D.K. Ghosh, S.R. Dubey, Local directional ZigZag pattern: a rotation invariant descriptor for texture classification. Pattern Recogn. Lett. 108, 23–30 (2018) G. Amato, F. Falchi, L. Vadicamo, Aggregating binary local descriptors for image retrieval. Multimed. Tools Appl. 77(5), 5385–5415 (2018) J. Li, C. Xu, W. Yang, C. Sun, D. Tao, Discriminative multi-view interactive image re-ranking. IEEE Trans. Image Process. 26(7), 3113–3127 (2017) MathSciNet MATH Article Google Scholar Liang, R.-Z., L. Shi, H. Wang, J. Meng, J.J.-Y. Wang, Q. Sun, and Y. Gu, Optimizing top precision performance measure of content-based image retrieval by learning similarity function. ed. Pattern Recognition (ICPR), 2016 23rd International Conference on, 2016, p. 2954-2958. M. Mosbah, B. Boucheham, Distance selection based on relevance feedback in the context of CBIR using the SFS meta-heuristic with one round. Egyptian Informatics Journal 18(1), 1–9 (2017) F. Meng, D. Shan, R. Shi, Y. Song, B. Guo, W. Cai, Merged region based image retrieval. J. Vis. Commun. Image Represent. 55, 572–585 (2018) W. Song, Y. Zhang, F. Liu, Z. Chai, F. Ding, X. Qian, S.C. Park, Taking advantage of multi-regions-based diagonal texture structure descriptor for image retrieval. Expert Syst. Appl. 96, 347–357 (2018) K.T. Ahmed, M.A. Iqbal, Region and texture based effective image extraction. Clust. Comput. 21(1), 493–502 (2018) J. Pradhan, A.K. Pal, H. Banka, Principal texture direction based block level image reordering and use of color edge features for application of object based image retrieval. Multimed. Tools Appl. 78(2), 1685–1717 (2019) Hu, R., M. Barnard, and J. Collomosse, Gradient field descriptor for sketch based retrieval and localization. ed. 2010 IEEE International Conference on Image Processing, 2010, p. 1025-1028. F. Dornaika, Y. El Traboulsi, Proposals for local basis selection for the sparse representation-based classifier. SIViP 12(8), 1595–1601 (2018) N. Passalis, A. Tefas, Information clustering using manifold-based optimization of the bag-of-features representation. IEEE transactions on cybernetics 48(1), 52–63 (2016) X. Tian, L. Jiao, X. Liu, X. Zhang, Feature integration of EODH and Color-SIFT: application to image retrieval based on codebook. Signal Process. Image Commun. 29(4), 530–545 (2014) Delhumeau, J., P.-H. Gosselin, H. Jégou, and P. Pérez, Revisiting the VLAD image representation. ed. Proceedings of the 21st ACM international conference on Multimedia, 2013, p. 653-656. Douze, M., A. Ramisa, and C. Schmid, Combining attributes and fisher vectors for efficient image retrieval. ed. CVPR 2011, 2011, p. 745-752. Perronnin, F., Y. Liu, J. Sánchez, and H. Poirier, Large-scale image retrieval with compressed fisher vectors. ed. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010, p. 3384-3391. N. Ali, K.B. Bajwa, R. Sablatnig, Z. Mehmood, Image retrieval by addition of spatial information based on histograms of triangular regions. Comput. Electr. Eng. 
54, 539–550 (2016) S.R. Dubey, S.K. Singh, R.K. Singh, Rotation and scale invariant hybrid image descriptor and retrieval. Comput. Electr. Eng. 46, 288–302 (2015) Z. Mehmood, T. Mahmood, M.A. Javid, Content-based image retrieval and semantic automatic image annotation based on the weighted average of triangular histograms using support vector machine. Appl. Intell. 48(1), 166–181 (2018) Xiao, J., J. Hays, K.A. Ehinger, A. Oliva, and A. Torralba, Sun database: Large-scale scene recognition from abbey to zoo. ed. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010, p. 3485-3492. Q. Zhu, Y. Zhong, B. Zhao, G.-S. Xia, L. Zhang, Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery. IEEE Geosci. Remote Sens. Lett. 13(6), 747–751 (2016) S. Zhang, Q. Tian, G. Hua, Q. Huang, W. Gao, Generating descriptive visual words and visual phrases for large-scale image applications. IEEE Trans. Image Process. 20(9), 2664–2677 (2011) S. Xu, T. Fang, D. Li, S. Wang, Object classification of aerial images with bag-of-visual words. IEEE Geosci. Remote Sens. Lett. 7(2), 366–370 (2009) Dalal, N. and B. Triggs, Histograms of oriented gradients for human detection. ed., 2005, p. P.F. Alcantarilla, L.M. Bergasa, A.J. Davison, Gauge-SURF descriptors. Image Vis. Comput. 31(1), 103–116 (2013) S. Deerwester, S.T. Dumais, G.W. Furnas, T.K. Landauer, R. Harshman, Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. 41(6), 391–407 (1990) Kumar, M.P., B. Packer, and D. Koller, Self-paced learning for latent variable models. ed. Advances in Neural Information Processing Systems, 2010, p. 1189-1197. S.N. Sulaiman, N.A.M. Isa, Adaptive fuzzy-K-means clustering algorithm for image segmentation. IEEE Trans. Consum. Electron. 56(4), 2661–2668 (2010) M.Y. Mashor, Hybrid training algorithm for RBF network. International Journal of the computer, the Internet and Management 8(2), 50–65 (2000) Dunn, J.C., A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. 1973. Boser, B.E., I.M. Guyon, and V.N. Vapnik, A training algorithm for optimal margin classifiers. ed. Proceedings of the fifth annual workshop on Computational learning theory, 1992, p. 144-152. M. Wang, X.-S. Hua, Active learning in multimedia annotation and retrieval: A survey. ACM Transactions on Intelligent Systems and Technology (TIST) 2(2), 10 (2011) S.C. Hoi, M.R. Lyu, R. Jin, A unified log-based relevance feedback scheme for image retrieval. IEEE Trans. Knowl. Data Eng. 18(4), 509–524 (2006) J. Li, J.Z. Wang, Real-time computerized annotation of pictures. IEEE Trans. Pattern Anal. Mach. Intell. 30(6), 985–1002 (2008) Afifi, A.J. and W.M. Ashour, Content-based image retrieval using invariant color and texture features. ed. 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA), 2012, p. 1-6. 15 Scene. 2019 [cited 2019 19 June]; Available from: https://figshare.com/articles/15-Scene_Image_Dataset/7007177. Holidays. 2019 [cited 2019 19 June]; Available from: http://lear.inrialpes.fr/people/jegou/data.php#holidays. 
Department of Software Engineering, University of Engineering and Technology, Taxila, 47050, Pakistan Ruqia Bibi & Rehan Mehmood Yousaf Department of Computer Engineering, University of Engineering and Technology, Taxila, 47050, Pakistan Zahid Mehmood Department of Computer Science, COMSATS University Islamabad, Attock Campus, Attock, 43600, Pakistan Muhammad Tahir & Muhammad Sardaraz AIDA Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia Amjad Rehman Department of Computer Engineering, Umm Al-Qura University, Makkah, 21421, Saudi Arabia Muhammad Rashid Ruqia Bibi Rehan Mehmood Yousaf Muhammad Tahir Muhammad Sardaraz All the authors contributed equally. The authors read and approved the final manuscript. Correspondence to Zahid Mehmood. Bibi, R., Mehmood, Z., Yousaf, R.M. et al. BoVW model based on adaptive local and global visual words modeling and log-based relevance feedback for semantic retrieval of the images. J Image Video Proc. 2020, 27 (2020). https://doi.org/10.1186/s13640-020-00516-4 Query-by-image Visual feature integration Adaptive weighting features Robust learning Relevance feedback
Assessing the adequacy of lymph node yield for different tumor stages of colon cancer by nodal staging scores Zhenyu Wu1,2, Guoyou Qin1,2, Naiqing Zhao1, Huixun Jia3 & Xueying Zheng1,2 BMC Cancer volume 17, Article number: 498 (2017) Cite this article According to the current official guidelines, at least 12 lymph nodes (LNs) are qualified as an adequate sampling for colon cancer patients. However, patients evaluated with less nodes were still common in the United States, and the prevalence of positive nodal disease may be under-estimated because of the false-negative assessment. In this study, we present a statistical model that allows preoperative determination of the minimum number of lymph nodes needed to confirm a node-negative disease with certain confidence. Adenocarcinoma colon cancer patients with stage T1-T3, diagnosed between 2004 and 2013, who did not receive neoadjuvant therapies and had at least one lymph node pathologically examined, were extracted from the Surveillance, Epidemiology and End Results (SEER) database. A beta binomial distribution was used to estimate the probability of an occult nodal disease is truly node-negative as a function of total number of LNs examined and T stage. A total of 125,306 patients met study criteria; and 47,788 of those were node-positive. The probability of falsely identifying a patient as node-negative decreased with an increasing number of nodes examined for each stage, and was estimated to be 72% for T1 and T2 patients with a single node examined and 57% for T3 patients with a single node examined. To confirm an occult nodal disease with 90% confidence, 3, 8, and 24 nodes need to be examined for patients from stage T1, T2, and T3, respectively. The false-negative rate of diagnosed node negative, together with the minimum number of examined nodes for adequate staging, depend preoperatively on the clinical T stage. Predictive tools can recommend a threshold on the minimum number of examined nodes regarding to the favored level of confidence for each T stage. Colon cancer is the most common digestive system malignant tumor, accounting for approximately one thirds of the estimated new cases, in the United States in 2016 [1]. Although the incidence rate of colon cancer declines dramatically, decreased by more than 4% per year in both men and women from 2008 to 2012 [2], it is estimated that 95,270 cases were newly developed in 2016 [1]. Given the fact that about 49,000 Americans died of this disease in 2016 [1], improving the medical and clinical care of colon cancer remains a great challenge. Accurate evaluation of loco-regional lymph nodes (LNs) status is essential for assessing the stage of disease, planning the effective systematic therapies, and predicting the prognosis of these patients [3,4,5,6,7,8]. Therefore, the detection of positive LNs is critical and a great deal of efforts have been made on determination of the threshold of LNs need to be retrieved. Apparently, if there was too few LNs examined during the surgery, there would be a great chance of being "under-staging" or falsely identifying a node-positive patient as node-negative. Recommendations on lymph node sampling varied from 6 to 21 [9,10,11,12,13,14], however, most of the studies have suggested that an examination of at least 12 regional lymph nodes is reasonable for nodal evaluation for colon cancer patients [15,16,17,18]. 
Official guidelines, such as those announced by the American Joint Committee on Cancer, the American Society of Clinical Oncology, American College of Surgeons, the National Quality Forum, and the National Comprehensive Cancer Network also accepted a minimum of 12 LNs as a standard retrieved from a patient with colon cancer [19,20,21]. Despite these guidelines, false-negative nodal staging caused by inadequacy of lymph node retrieval exists on a broad scale. Previous studies showed certain interests in developing tools which can help physicians and pathologists predict the probability of missing nodal disease [12, 22]. In the context of tumor-node-metastasis staging, T stage was considered as the only stratified covariate in those tools. However, some other key factors, such as therapies and characteristics of patients, were not involved. Patients received neoadjuvant therapy had significantly fewer nodes assessed than patients who underwent surgery alone [23]. The aims of this study were to present a new statistical model to calculate the false-negative probability of occult nodal disease as a function of the number of examined LNs and the T stage, using the first primary colon patients without neoadjuvant therapy from a nationwide database. A larger value of the improved nodal staging score (NSS) indicates greater certainty on the node-negative status of a patient. Data for the current study were extracted from the Surveillance, Epidemiology, and End Results (SEER)-Medicare linked database. The SEER program of National Cancer Institute collects demographics, tumor characteristics, and survival data from 17 population-based cancer registries throughout the United States, covering approximately 28% of the US population [24]. The SEER-Medicare database has been described in detail elsewhere [25]. Only first primary (i.e., only primary cancer or first of two or more primary cancers) colon cancer patients diagnosed between 2004 and 2013 were included. Patients were excluded if they 1) have been treated with neoadjuvant therapy; 2) have histology type other than adenocarcinoma; 3) have no lymph node examined or the number of lymph nodes examined was not available; 4) T stage equals 0 or 4. A study flowchart is presented in Fig. 1. Flow diagram of colon cancer patients enrolled from the Surveillance, Epidemiology, and End Results (SEER)-Medicare linked database The probability that a node-negative patient has nodal disease can be computed using the following algorithm: Compute the probability of missing a positive node as a function of the number of examined nodes, which depends on the number of examined nodes and on T stage. Compute the corrected prevalence of nodal disease as a function of T stage, using the probability of missing a positive node. Compute the NSS. This is the probability that a pathologically node-negative patient is actually free of nodal disease, which is calculated from the prevalence and the probability of missing a positive node. Probability of missing a positive node We adapted a beta binomial distribution to estimate the probability of missing a positive node as a function of total number of examined nodes, only using node-positive patients. Two key assumptions underlie this step: (1) There are no false-positives, and (2) sensitivity is the same for node-positive and node-negative patients. 
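One way to carry out this estimation step is sketched below. The authors fit the beta-binomial by maximum likelihood (using the VGAM package in R, as the next paragraph notes); the Python sketch here is our own illustration and assumes a zero-truncated beta-binomial likelihood for the number of positive nodes among the examined nodes of node-positive patients. The optimizer choice and variable names are assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def neg_log_likelihood(log_params, m, k):
    """Negative log-likelihood of a zero-truncated beta-binomial model.
    m: examined nodes per node-positive patient, k: positive nodes (k >= 1).
    Terms that do not depend on (alpha, beta) are dropped."""
    alpha, beta = np.exp(log_params)                 # keep parameters positive
    ll = betaln(k + alpha, m - k + beta) - betaln(alpha, beta)
    log_p0 = betaln(alpha, beta + m) - betaln(alpha, beta)   # P(0 positives | m)
    ll -= np.log1p(-np.exp(log_p0))                  # zero-truncation correction
    return -np.sum(ll)

def fit_beta_binomial(m, k):
    """Fit (alpha, beta) by maximum likelihood for one T-stage stratum."""
    result = minimize(neg_log_likelihood, x0=np.log([1.0, 3.0]),
                      args=(np.asarray(m, float), np.asarray(k, float)),
                      method="Nelder-Mead")
    return np.exp(result.x)

# Hypothetical usage: m and k would come from the SEER extract, one entry per
# node-positive patient in the stratum (e.g., stage T3).
# alpha_t3, beta_t3 = fit_beta_binomial(m_t3, k_t3)
```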
The probability of a false negative depends on the number of examined nodes and on T stage: $$ P\left({FN}_{m,T}\right)=\frac{Beta\left({\alpha}_T,{\beta}_T+m\right)}{Beta\left({\alpha}_T,{\beta}_T\right)}, $$ where m denotes the number of nodes examined (from 1 to 89), T denotes the tumor stage (T1–T3), and Beta() represents the beta function. For each tumor stage, $\alpha_T$ and $\beta_T$ are parameters that characterize the underlying intensity of nodal disease and are estimated from the individual patient data using a maximum likelihood approach via the VGAM package in R version 3.2.4.

Estimation of prevalence of nodal disease

The observed prevalence (OP) is an underestimate and needs to be adjusted for false negatives. This was done in two steps. The first step estimates the number of false negatives $\#{FN}_{m,T}$ as a function of the number of examined nodes (m) and stage (T): $$ \#{FN}_{m,T}=\frac{P\left({FN}_{m,T}\right)\cdot \#{TP}_{m,T}}{1-P\left({FN}_{m,T}\right)}, $$ where $\#{TP}_{m,T}$ is the number of true positives for a given number of examined nodes (m) and stage (T). The second step obtains the corrected prevalence (CP) for each stage by summing over all numbers of examined nodes (m): $$ {CP}_T=\frac{\sum_m\left(\#{TP}_{m,T}+\#{FN}_{m,T}\right)}{\sum_m\left(\#{TP}_{m,T}+\#{TN}_{m,T}+\#{FN}_{m,T}\right)}=\frac{\sum_m\left(\#{TP}_{m,T}+\#{FN}_{m,T}\right)}{\text{All Patients}}. $$

Nodal staging score

We assessed adequate staging by computing the NSS, the probability that a pathologically LN-negative patient is indeed free of nodal metastasis: $$ {NSS}_{m,T}=\frac{1-{CP}_T}{1-{CP}_T+{CP}_T\cdot P\left({FN}_{m,T}\right)}. $$ The precision of the reported estimates was assessed by creating 1000 bootstrap samples from the entire data set and replicating the estimation process. The 2.5th and 97.5th percentiles were used as the lower and upper 95% confidence intervals for the corresponding estimates, respectively.

A total of 125,306 qualified patients were involved in our analyses. The proportions of patients with stage T1, T2, and T3 primary tumors were 14.51%, 17.04%, and 68.45%, respectively. The median number of LNs increased gradually with T stage, from 13 to 16. In addition, the proportion of patients with ≥12 LNs examined and the rate of node positivity were compared. Most of the enrolled patients had more than 12 nodes examined; however, the highest node-positive rate was observed in patients with stage T3. As expected, the rate of patients with positive nodes was lowest in stage T1 (11.12%) and highest in stage T3 (48.54%). The detailed summaries of patients and LNs are shown in Table 1. Table 1 Descriptions of enrolled patients and lymph nodes examined

The distribution of the percentage of positive metastatic LNs among all patients with at least one positive node (n = 47,788) was fit using a beta-binomial distribution, with resulting model parameter estimates of $\alpha$ = 1.130 (95% CI, 1.111 to 1.149) and $\beta$ = 3.201 (95% CI, 3.128 to 3.288) (Table 2). Stratified by tumor stage (T stage), the resulting parameters were $\alpha_1$ = $\alpha_2$ = 1.960 (95% CI, 1.813 to 2.119) and $\beta_1$ = $\beta_2$ = 10.453 (95% CI, 9.335 to 11.549) for stages T1 and T2 (estimated from the n = 6153 patients in stages T1 and T2 with at least one positive node). For stage T3, $\alpha_3$ = 1.117 (95% CI, 1.100 to 1.136) and $\beta_3$ = 2.957 (95% CI, 2.886 to 3.046) were estimated from all the patients in stage T3 with at least one positive node (n = 41,635).
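The quantities defined in the methods above (the false-negative probability, the corrected prevalence, and the NSS) can be coded almost verbatim from the equations. The sketch below is illustrative; the per-m input format and the function names are our own assumptions.

```python
import numpy as np
from scipy.special import betaln

def p_false_negative(m, alpha, beta):
    """P(FN_{m,T}) = Beta(alpha, beta + m) / Beta(alpha, beta)."""
    return np.exp(betaln(alpha, beta + m) - betaln(alpha, beta))

def corrected_prevalence(m_values, n_obs_pos, n_obs_neg, alpha, beta):
    """CP_T: prevalence corrected for false negatives, summed over m.
    n_obs_pos[i] / n_obs_neg[i] are the observed node-positive / node-negative
    counts among patients with m_values[i] examined nodes (the observed
    negatives already contain the estimated false negatives)."""
    p_fn = p_false_negative(np.asarray(m_values, float), alpha, beta)
    n_tp = np.asarray(n_obs_pos, float)
    n_fn = p_fn * n_tp / (1.0 - p_fn)           # estimated #FN_{m,T}
    all_patients = np.sum(n_tp + np.asarray(n_obs_neg, float))
    return np.sum(n_tp + n_fn) / all_patients

def nodal_staging_score(m, cp, alpha, beta):
    """NSS_{m,T}: probability that a pathologically node-negative patient
    is truly free of nodal disease."""
    p_fn = p_false_negative(m, alpha, beta)
    return (1.0 - cp) / (1.0 - cp + cp * p_fn)
```

Plugging in stage-specific estimates such as those reported for T3 below (α ≈ 1.117, β ≈ 2.957) reproduces the shape of the false-negative curves shown in Fig. 2.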
Table 2 Estimated parameters across different stages

The set of parameters was then used to estimate the probability of false-negative disease as a function of the number of examined nodes and tumor stage, which differs from the studies of Joseph et al. [13] and Gönen et al. [12]. In stages T1 and T2, the probability of a false-negative node dissection was estimated at 72%, 54%, 26%, 12%, and less than 10% for 1, 3, 10, 20, and greater than 26 nodes examined, respectively (Fig. 2 and Additional file 1: Table S1 in the supporting information). In stage T3, the probability of a false-negative node dissection was estimated at 57%, 39%, 18%, and less than 10% for 1, 3, 10, and greater than 20 nodes examined, respectively. Figure 2 shows that the overall α and β were prone to underestimating the false-negative probability in stages T1 and T2 and overestimating it in stage T3. The differences among the false-negative probabilities of the three stages are less than 3% when more than 20 nodes are examined. Probability of a false-negative as a function of number of nodes examined in a colon cancer patient with truly node-positive disease

The observed prevalence of nodal disease is 38.1%, but accounting for false-negative patients, the corrected prevalence is 45.4% (Table 3). Underestimation of prevalence due to the existence of false negatives is observed for all T stages, but its extent increases with T stage. As many as 57.0% of T3 colon cancer patients are estimated to have nodal disease, up from an observed rate of 48.5%. Table 3 Observed and Corrected Prevalence

Nodal staging scores are presented in Fig. 3 and Additional file 1: Table S2 in the supporting information. Patients with stages T1 and T2 will have more than a 90% chance of a correct pathologic diagnosis with three and eight examined nodes, respectively. The same level of accuracy requires twenty-four examined nodes in T3 patients. To achieve an 80% chance of a correct pathologic diagnosis, one, one, and ten nodes are required to be examined for T1, T2, and T3 patients, respectively. Nodal staging scores as a function of number of nodes examined in a colon cancer patient

An adequate number of examined nodes is required for proper staging of colon cancer, and the number of LNs examined is associated with colon cancer survival [15]. When patients have too few nodes examined, clinicians face challenging decisions because of possible under-staging: there is a chance that such a patient is incorrectly classified as node-negative. By maximizing the prognostic discrimination between the grouped patients, many studies have sought a threshold for the minimum number of examined nodes [26,27,28,29]; most of these suggestions have been made with regard to the number of examined nodes needed to accurately determine that a patient has occult node-negative cancer. Recent studies subjected nodal staging to statistical modeling by computing the false-negative rate and calculating the negative predictive value to define an NSS that characterizes the adequacy of node-negative classification [12, 22, 30, 31]. However, most of these studies lose sight of the effect of tumor stage on the false-negative rate in colon cancer surgery. To the best of our knowledge, this study is the first to formulate the false-negative rate of occult nodal disease as a function of the number of examined nodes together with the T stage, and to find a significant difference in the false-negative rate among different T stages.
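As a small usage example, the published stage-T3 point estimates (α ≈ 1.117, β ≈ 2.957) together with the corrected prevalence of about 57.0% reported above can be plugged into the score to read off the smallest number of examined nodes that reaches a desired confidence level. This is illustrative only; minor deviations from the thresholds quoted in the text (e.g., 24 nodes for 90% confidence in T3) can arise because the published estimates are rounded.

```python
import numpy as np
from scipy.special import betaln

def nss(m, cp, alpha, beta):
    # NSS_{m,T} using P(FN_{m,T}) = Beta(alpha, beta + m) / Beta(alpha, beta)
    p_fn = np.exp(betaln(alpha, beta + m) - betaln(alpha, beta))
    return (1.0 - cp) / (1.0 - cp + cp * p_fn)

def min_nodes_for_confidence(target, cp, alpha, beta, max_nodes=89):
    """Smallest number of examined nodes m with NSS_{m,T} >= target."""
    for m in range(1, max_nodes + 1):
        if nss(m, cp, alpha, beta) >= target:
            return m
    return None

# Stage T3, using the rounded point estimates reported in this paper.
print(min_nodes_for_confidence(0.80, cp=0.570, alpha=1.117, beta=2.957))
print(min_nodes_for_confidence(0.90, cp=0.570, alpha=1.117, beta=2.957))
```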
Combining the number of examined nodes with the T stage, our approach establishes an individualized prognostication of the true nodal stage. Our results suggest an evidently higher false-negative rate for T1 and T2 patients compared with T3 patients when the number of examined nodes is less than 15, and a small but statistically significant difference of 3% when the number of examined nodes is between 15 and 20. In addition, in order to minimize the bias caused by important confounders, we restricted our study population to first primary colon cancer patients without neoadjuvant therapies. To facilitate the planning of the optimal individual treatment, we also evaluated whether other patient variables, such as patient sex and age, could lead to different false-negative rates. However, the current data do not support that either patient sex or age results in significantly different false-negative rates. Although we found that not all clinicopathological features are highly correlated with the false-negative rate in colon cancer, whether these features influence the false-negative rates in other categories of cancer is still an open question. As a convenient tool to evaluate whether a node-negative colon cancer patient is adequately staged, a higher value of the calculated NSS implies a greater likelihood of node-negative status for each tumor stage. Because the NSS calculates the probability of occult nodal disease as a function of the number of examined nodes and the T stage, this tool may estimate the likelihood of nodal metastasis more accurately than a simple cutoff on the number of examined nodes, and help clinicians judge the adequacy of nodal staging. Current guidelines recommend that at least 12 nodes be examined as a quality indicator, based on a series of studies correlating the number of examined LNs with progression or survival [15,16,17]. However, we found that the number of nodes that need to be removed varies considerably among patients with different T stages [32]. For example, insisting on 12 nodes for patients with stages T1 and T2 seems unjustified, because the examination of 3 nodes for a T1 patient maintains the same 90% level of confidence as the examination of 8 nodes for a T2 patient. Consequently, our findings encourage the development of techniques to improve LN harvest in colon cancer, especially for T3 patients. Given the retrospective nature of the study and the few key assumptions required for the calculation of the NSS, there are several limitations that warrant mention. First, although the assumptions of no false positives and a beta-binomial model are conservative and reasonable [12], the assumption that all nodes within a patient have the same probability of being involved is unlikely to hold in practice. We recognize, however, that the absence of the positions of the examined nodes limits the justification of this assumption. The location of the examined nodes is important because nodes from an area with a low likelihood of cancer may be less valuable than nodes that are more likely to be involved with malignancy [31]. Consequently, prospective validation of this key assumption is required for the statistical model used to estimate the NSS in the future. Secondly, the data from node-positive patients were used to interpret the data for the node-negative patients.
We applied a bootstrap method to generate node-negative patients from observed node-positive patients by removing one node, with each node having an equal probability of being selected. The estimates of the false-negative rate from the bootstrap samples are in line with the estimates obtained only from node-positive patients, which justifies the rationality of the extension. Finally, as mentioned in many studies, the external validation of the use of the NSS relies on outcomes such as recurrence or death, to ensure that the NSS can distinguish patients who are at high risk of having omitted occult nodal disease [12]. In conclusion, our study has several key distinctions. Strengths of our analysis include its novel incorporation of tumor-stage-based false-negative rates into the calculation of the NSS. The formulas for the prevalence and the NSS differ from the equations described in previous research. Our results allow clinicians to better understand the likelihood of missing nodal disease and assist the planning of optimal therapies. In conclusion, this study found that the false-negative rate of the examined lymph nodes in colon cancer surgery depends preoperatively on the clinical T stage. A more accurate nodal staging score was developed to recommend a threshold on the minimum number of examined nodes according to the desired level of confidence for each T stage.
CP: corrected prevalence
LN: lymph node
NSS: nodal staging score
OP: observed prevalence
SEER: surveillance, epidemiology and end results
American Cancer Society. Cancer Facts & Figures 2016. Atlanta: American Cancer Society; 2016. American Cancer Society. Colorectal Cancer Facts & Figures 2014–2016. Atlanta: American Cancer Society; 2014. Akagi Y, Adachi Y, Kinugasa T, Oka Y, Mizobe T, Shirouzu K. Lymph node evaluation and survival in colorectal cancer: review of population-based, prospective studies. Anticancer Res. 2013;33(7):2839–47. Chang GJ, Rodriguez-Bigas MA, Skibber JM, Moyer VA. Lymph node evaluation and survival after curative resection of colon cancer: systematic review. J Natl Cancer Inst. 2007;99(6):433–41. Berger AC, Sigurdson ER, LeVoyer T, Hanlon A, Mayer RJ, Macdonald JS, Catalano PJ, Haller DG. Colon Cancer survival is associated with decreasing ratio of metastatic to examined lymph nodes. J Clin Oncol. 2005;23(34):8706–12. Markl B. Stage migration vs immunology: the lymph node count story in colon cancer. World J Gastroenterol. 2015;21(43):12218–33. Hogan NM, Winter DC. A nodal positivity constant: new perspectives in lymph node evaluation and colorectal cancer. World J Surg. 2013;37(4):878–82. Costi R, Beggi F, Reggiani V, Ricco M, Crafa P, Bersanelli M, Tartamella F, Violi V, Roncoroni L, Sarli L. Lymph node ratio improves TNM and Astler-Coller's assessment of colorectal cancer prognosis: an analysis of 761 node positive cases. J Gastrointest Surg. 2014;18(10):1824–36. Shanmugam C, Hines RB, Jhala NC, Katkoori VR, Zhang B, Posey JJ, Bumpers HL, Grizzle WE, Eltoum IE, Siegal GP, Manne U. Evaluation of lymph node numbers for adequate staging of Stage II and III colon cancer. J Hematol Oncol. 2011;4:25. Vather R, Sammour T, Kahokehr A, Connolly AB, Hill AG. Lymph node evaluation and long-term survival in stage II and stage III colon cancer: a national study. Ann Surg Oncol. 2009;16(3):585–93. Wong JH, Severino R, Honnebier MB, Tom P, Namiki TS. Number of nodes examined and staging accuracy in colorectal carcinoma. J Clin Oncol. 1999;17(9):2896–900. Gonen M, Schrag D, Weiser MR. Nodal staging score: a tool to assess adequate staging of node-negative colon cancer.
J Clin Oncol. 2009;27(36):6166–71. Joseph NE, Sigurdson ER, Hanlon AL, Wang H, Mayer RJ, MacDonald JS, Catalano PJ, Haller DG. Accuracy of determining nodal negativity in colorectal cancer on the basis of the number of nodes retrieved on resection. Ann Surg Oncol. 2003;10(3):213–8. Iachetta F, Reggiani BL, Marcheselli L, Di Gregorio C, Cirilli C, Messinese S, Cervo GL, Postiglione R, Di Emidio K, Pedroni M, Longinotti E, Federico M, Ponz de Leon M. Lymph node evaluation in stage IIA colorectal cancer and its impact on patient prognosis: a population-based study. Acta Oncol. 2013;52(8):1682–90. Baxter NN, Virnig DJ, Rothenberger DA, Morris AM, Jessurun J, Virnig BA. Lymph node evaluation in colorectal cancer patients: a population-based study. J Natl Cancer Inst. 2005;97(3):219–25. Shia J, Wang H, Nash GM, Klimstra DS. Lymph node staging in colorectal cancer: revisiting the benchmark of at least 12 lymph nodes in R0 resection. J Am Coll Surg. 2012;214(3):348–55. Chen HH, Chakravarty KD, Wang JY, Changchien CR, Tang R. Pathological examination of 12 regional lymph nodes and long-term survival in stages I-III colon cancer patients: an analysis of 2,056 consecutive patients in NE.Reftwo branches of same institution. Int J Color Dis. 2010;25(11):1333–41. Tsai HL, Huang CW, Yeh YS, Ma CJ, Chen CW, Lu CY, Huang MY, Yang IP, Wang JY. Factors affecting number of lymph nodes harvested and the impact of examining a minimum of 12 lymph nodes in stage I-III colorectal cancer patients: a retrospective single institution cohort study of 1167 consecutive patients. BMC Surg. 2016;16:17. American Joint Committee on Cancer. Cancer staging manual. 5th ed. Chicago, IL: Springer; 1997. Bilimoria KY, Bentrem DJ, Stewart AK, Talamonti MS, Winchester DP, Russell TR, Ko CY. Lymph node evaluation as a colon cancer quality measure: a national hospital report card. J Natl Cancer Inst. 2008;100(18):1310–7. Nelson H, Petrelli N, Carlin A, Couture J, Fleshman J, Guillem J, Miedema B, Ota D, Sargent D. Guidelines 2000 for colon and rectal cancer surgery. J Natl Cancer Inst. 2001;93(8):583–96. Robinson TJ, Thomas S, Dinan MA, Roman S, Sosa JA, Hyslop T. How many lymph nodes are enough? Assessing the adequacy of lymph node yield for papillary thyroid cancer. J Clin Oncol. 2016;34(28):3434–9. Govindarajan A, Gonen M, Weiser MR, Shia J, Temple LK, Guillem JG, Paty PB, Nash GM. Challenging the feasibility and clinical significance of current guidelines on lymph node examination in rectal cancer in the era of neoadjuvant therapy. J Clin Oncol. 2011;29(34):4568–73. National Cancer Institute. About the SEER program. http://seer.cancer.gov/about/. Accessed 28 Feb 2017. Warren JL, Klabunde CN, Schrag D, Bach PB, Riley GF. Overview of the SEER-Medicare data: content, research applications, and generalizability to the United States elderly population. Med Care. 2002;40(8 Suppl):IV-3-18. Jessup JM, McGinnis LS, Steele GJ, Menck HR, Winchester DP. The National Cancer Data Base. Report on colon cancer. Cancer. 1996;78(4):918–26. Goldstein NS. Lymph node recovery from colorectal resection specimens. Dis Colon rectum. 1999;42(8):1107–8. Le Voyer TE, Sigurdson ER, Hanlon AL, Mayer RJ, Macdonald JS, Catalano PJ, Haller DG. Colon Cancer survival is associated with increasing number of lymph nodes analyzed: a secondary survey of intergroup trial INT-0089. J Clin Oncol. 2003;21(15):2912–9. Chen SL, Bilchik AJ. More extensive nodal dissection improves survival for stages I to III of colon cancer: a population-based study. Ann Surg. 
2006;244(4):602–10. Kluth LA, Abdollah F, Xylinas E, Rieken M, Fajkovic H, Seitz C, Sun M, Karakiewicz PI, Schramek P, Herman MP, et al. Clinical nodal staging scores for prostate cancer: a proposal for preoperative risk assessment. Br J Cancer. 2014;111(2):213–9. Shariat SF, Ehdaie B, Rink M, Cha EK, Svatek RS, Chromecki TF, Fajkovic H, Novara G, David SG, Daneshmand S, et al. Clinical nodal staging scores for bladder cancer: a proposal for preoperative risk assessment. Eur Urol. 2012;61(2):237–42. Markl B, Olbrich G, Schenkirsch G, Kretsinger H, Kriening B, Anthuber M. Clinical significance of international union against cancer pN staging and lymph node ratio in node-positive colorectal cancer after advanced lymph node dissection. Dis Colon rectum. 2016;59(5):386–95. The authors acknowledge the efforts of the Surveillance, Epidemiology, and End Results (SEER) Program tumor registries in the creation of the SEER database. Consent to publication This study was supported by the National Science Foundation of China (No. 11371100; 11,501,124). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. Any request of data and material may be sent to the corresponding author. Department of Biostatistics and Key Laboratory of Public Health Safety, School of Public Health, Fudan University, Shanghai, 200032, China Zhenyu Wu, Guoyou Qin, Naiqing Zhao & Xueying Zheng Collaborative Innovation Center of Social Risks Governance in Health, Fudan University, 130 Dongan Road, Shanghai, 200032, China Zhenyu Wu, Guoyou Qin & Xueying Zheng Center for Biomedical Statistics, Fudan University Shanghai Cancer Center, Shanghai, 200032, China Huixun Jia Zhenyu Wu Guoyou Qin Naiqing Zhao Xueying Zheng All authors made substantial contributions to one or more of the following: the study conception and design (ZW, XZ); acquisition of data or analysis (ZW, GQ, NZ, HJ); and interpretation of data (ZW, GQ, NZ, HJ, XZ). ZW and XZ drafted the article and all other authors contributed to revising the article critically for important intellectual content. All authors read and approved the final manuscript. Correspondence to Xueying Zheng. This study was partly based on the publicly available SEER database and we have got the permission to access the database on purpose of research only (Reference number: 14,120-Nov2015). It did not include interaction with humans or use personal identifying information. The informed consent was not required for this research. Additional file 1: Table S1. Probability of missing nodal disease (false negative, %) for selected values of the number of nodes examined. Table S2. Nodal staging score for selected values of the number of nodes examined. (DOCX 54 kb) Wu, Z., Qin, G., Zhao, N. et al. Assessing the adequacy of lymph node yield for different tumor stages of colon cancer by nodal staging scores. BMC Cancer 17, 498 (2017). https://doi.org/10.1186/s12885-017-3491-2 Accepted: 19 July 2017 False-negative rate Tumor stage Medical and radiation oncology
On a final value problem for a class of nonlinear hyperbolic equations with damping term
Nguyen Huu Can 1, Nguyen Huy Tuan 1, Donal O'Regan 2, and Vo Van Au 3,*
1 Department of Mathematics and Computer Science, University of Science, Ho Chi Minh City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietnam
2 School of Mathematics, Statistics and Applied Mathematics, National University of Ireland, Galway, Ireland
3 Institute of Fundamental and Applied Sciences, Duy Tan University, Ho Chi Minh City 700000, Vietnam; Faculty of Natural Sciences, Duy Tan University, Da Nang, 550000, Vietnam
* Corresponding author: [email protected] (Vo Van Au)
Evolution Equations & Control Theory, March 2021, 10(1): 103-127. doi: 10.3934/eect.2020053
Received December 2019; Revised February 2020; Early access May 2020; Published March 2021.
Fund Project: The second author is supported by Vietnam National University Ho Chi Minh City (VNU-HCM) under grant number B2020-18-03.
This paper deals with the problem of finding the function $ u(x,t) $, $ (x,t)\in \Omega \times [0,T] $, from the final data $ u(x,T) = g(x) $, $ u_t(x,T) = h(x) $, where $ u $ satisfies $ u_{tt} + a \Delta^2 u_t + b \Delta^2 u = \mathcal{R}(u) $. This problem is known as the inverse initial problem for the nonlinear hyperbolic equation with damping term, and it is ill-posed in the sense of Hadamard. In order to stabilize the solution, we propose the filter regularization method to regularize the solution. We establish appropriate filtering functions in cases where the nonlinear source $ \mathcal{R} $ satisfies the global Lipschitz condition and in the specific case $ \mathcal{R}(u) = u|u|^{p-1} $, $ p>1 $, which satisfies the local Lipschitz condition. In addition, we show that regularized solutions converge to the sought solution under a priori assumptions in Gevrey spaces.
Keywords: Inverse problems, nonlinear damped hyperbolic equation, nonlinear beam equation, regularization method, error estimate.
Mathematics Subject Classification: 35R30, 35L35, 47J06, 47H10, 47A52.
Citation: Nguyen Huu Can, Nguyen Huy Tuan, Donal O'Regan, Vo Van Au. On a final value problem for a class of nonlinear hyperbolic equations with damping term. Evolution Equations & Control Theory, 2021, 10 (1) : 103-127. doi: 10.3934/eect.2020053
https://en.wikipedia.org/wiki/Medley_swimming
In individual medley events, the swimmer covers the four swimming styles in the following order: butterfly, backstroke, breaststroke and freestyle.
In medley relay events, swimmers cover the four swimming styles in the following order: backstroke, breaststroke, butterfly and freestyle.
Each section must be finished in accordance with the rule which applies to the style concerned.
http://www.feelforthewater.com/2014/09/clearing-up-confusion-about-front.html
You might have heard of something called Front Quadrant Swimming, which has to do with the timing of your freestyle stroke. It is widely recognised as an efficient way to swim and something you should use in your own stroke technique, but there is a lot of confusion about what it actually means.
If you drew two lines, one through the swimmer's head and one at water level, you would create four quadrants.
Front quadrant swimming simply means that one of your hands is always in one of the front quadrants (1 and 2) at any point in time. Or, put even more simply, when your hands pass above and below the water, that should happen in front of your head, not behind it.
http://matureadage3125.wikidot.com/blog:4
There are four basic swimming strokes: crawl (also known as freestyle), backstroke, breaststroke, and butterfly.
Swimming Stroke #1 – The Crawl, or Freestyle
The technique involved in this swimming stroke is pretty simple. You float on your belly in the water, propel yourself by rotating your arms in a windmill motion, and kick your legs in a fluttering motion. The hardest part of this swimming technique is coordinating the breathing while performing the strokes, since the face remains in the water almost all the time.
Swimming Stroke #2 – The Backstroke
The backstroke is akin to the crawl, except that you float on your back in the water. The arms are moved in a similar alternating windmill motion, and the legs are kicked in a similarly fluttering motion. The two basic techniques of a correct backstroke are: one, that the arms are moved with equal force, or else you will find yourself swimming off towards one side; two, that the body should be rolled from one side to the other, so that the arms extend to their utmost reach, to propel you by catching enough water.
Swimming Stroke #3 – The Breaststroke
This swimming technique involves a pattern wherein the body bobs upwards and downwards as you propel yourself forward in the water. The breaststroke is a difficult swimming technique, and should not be chosen if you are just beginning to learn swimming. Basically, this swimming stroke involves pulling your arms through the water as you bob up and breathe, and then kicking with your legs as you bob down and glide forward. The arm pulling and the leg kicking are done alternately.
Swimming Stroke #4 – The Butterfly Stroke
Similar to the breaststroke, the butterfly is also a difficult swimming technique, and not advocated for beginning learners, since it involves a fair amount of strength as well as precise timing.
While performing this stroke, the legs should be moved together akin to the movements of a dolphin's tail, the arms should also be moved together, pushing the water downwards and then backwards, while the torso moves forward in an undulating manner.\nThe fourth is always different\nhttps://en.wikipedia.org/wiki/Synchronised_swimming\nThe first Olympic demonstration was at the 1952 Olympic Games, where the Helsinki officials welcomed Kay Curtis and lit a torch in her honor. Curtis died in 1980, but synchronised swimming did not become an official Olympic sport until the 1984 Summer Olympic Games.[8] It was not until 1968 that synchronised swimming became officially recognized by FINA as the fourth water sport next to swimming, platform diving and water polo.\nhttps://en.wikipedia.org/wiki/Rubik%27s_Revenge\nThe Rubik's Revenge (also known as the Master Cube) is a 4×4×4 version of Rubik's Cube. It was released in 1981. Invented by Péter Sebestény, the Rubik's Revenge was nearly called the Sebestény Cube until a somewhat last-minute decision changed the puzzle's name to attract fans of the original Rubik's Cube.[1] Unlike the original puzzle (and the 5×5×5 cube), it has no fixed facets: the centre facets (four per face) are free to move to different positions.\nMethods for solving the 3×3×3 cube work for the edges and corners of the 4×4×4 cube, as long as one has correctly identified the relative positions of the colours — since the centre facets can no longer be used for identification.\nhttps://en.wikipedia.org/wiki/Rubik%27s_Cube\nMade of quadrants\nOriginally the rubiks cube was a two by two cube (the quadrant)\nRubik's Cube is a 3-D combination puzzle invented in 1974[1][2] by Hungarian sculptor and professor of architecture Ernő Rubik. Originally called the Magic Cube,[3] the puzzle was licensed by Rubik to be sold by Ideal Toy Corp. in 1980[4] via businessman Tibor Laczi and Seven Towns founder Tom Kremer,[5] and won the German Game of the Year special award for Best Puzzle that year. As of January 2009, 350 million cubes had been sold worldwide[6][7] making it the world's top-selling puzzle game.[8][9] It is widely considered to be the world's best-selling toy.[10]\nIn a classic Rubik's Cube, each of the six faces is covered by nine stickers, each of one of six solid colours: white, red, blue, orange, green, and yellow. In currently sold models, white is opposite yellow, blue is opposite green, and orange is opposite red, and the red, white and blue are arranged in that order in a clockwise arrangement.[11] On early cubes, the position of the colours varied from cube to cube.[12] An internal pivot mechanism enables each face to turn independently, thus mixing up the colours. For the puzzle to be solved, each face must be returned to have only one colour. Similar puzzles have now been produced with various numbers of sides, dimensions, and stickers, not all of them by Rubik.\nIn March 1970, Larry Nichols invented a 2×2×2 \"Puzzle with Pieces Rotatable in Groups\" and filed a Canadian patent application for it. Nichols's cube was held together with magnets. Nichols was granted U.S. Patent 3,655,201 on April 11, 1972, two years before Rubik invented his Cube.\nOn April 9, 1970, Frank Fox applied to patent his \"Spherical 3×3×3\". 
He received his UK patent (1344259) on January 16, 1974.[13]\nIt is made of quadrants\nhttps://en.wikipedia.org/wiki/Sudoku\nAlphabetical variations have emerged, sometimes called Wordoku; there is no functional difference in the puzzle unless the letters spell something. Some variants, such as in the TV Guide, include a word reading along a main diagonal, row, or column once solved; determining the word in advance can be viewed as a solving aid. A Wordoku might contain words other than the main word.\n\"Quadratum latinum\" is a Sudoku variation with Latin numbers (I, II, III, IV, ..., IX) proposed by Hebdomada aenigmatum, a monthly magazine of Latin puzzles and crosswords. Like the \"Wordoku\", the \"Quadratum latinum\" presents no functional difference with a normal Sudoku but adds the visual difficulty of using Latin numbers.\nhttps://en.wikipedia.org/wiki/Killer_sudoku\nKiller sudoku (also killer su doku, sumdoku, sum doku, sumoku, addoku, or samunamupure) is a puzzle that combines elements of sudoku and kakuro. Despite the name, the simpler killer sudokus can be easier to solve than regular sudokus, depending on the solver's skill at mental arithmetic; the hardest ones, however, can take hours to crack.\nhttps://en.wikipedia.org/wiki/Kakuro\nMade of quadrants and 16 is the squares of the quadrant model\nKakuro or Kakkuro (Japanese: カックロ) is a kind of logic puzzle that is often referred to as a mathematical transliteration of the crossword\nThe canonical Kakuro puzzle is played in a grid of filled and barred cells, \"black\" and \"white\" respectively. Puzzles are usually 16×16 in size, although these dimensions can vary widely.\nhttps://en.wikipedia.org/wiki/Crossword#Crossnumbers\nA crossnumber (also known as a cross-figure) is the numerical analogy of a crossword, in which the solutions to the clues are numbers instead of words. Clues are usually arithmetical expressions, but can also be general knowledge clues to which the answer is a number or year. There are also numerical fill-in crosswords.\nThe Daily Mail Weekend magazine used to feature crossnumbers under the misnomer Number Word. This kind of puzzle should not be confused with a different puzzle that the Daily Mail refers to as Cross Number.\nfour by four is the squares of the quadrant model\nhttps://en.wikipedia.org/wiki/Crossword\nOne of the smallest crosswords in general distribution is a 4×4 crossword compiled daily by John Wilmes, distributed online by USA Today as \"QuickCross\" and by Universal Uclick as \"PlayFour.\"\nA four by four grid is the quadrant model\nA lot of people do crossword puzzles. I know my Dad did every morning\nA crossword is a word puzzle that normally takes the form of a square or a rectangular grid of white and black shaded squares. The goal is to fill the white squares with letters, forming words or phrases, by solving clues which lead to the answers. In languages that are written left-to-right, the answer words and phrases are placed in the grid from left to right and from top to bottom. The shaded squares are used to separate the words or phrases.\nCrossword puzzles are made of quadrants\nMatisse is known for painting squares. Squares are quadrants\nhttps://en.wikipedia.org/wiki/Henri_Matisse\nMatisse was known in his later life for spearheading art that was extremely simple. One of his later paintings was simply squares, like the squares of the quadrant model of reality. The quadrant model itself is extremely simple. 
Just think, 16 squares, four quadrants, but it can explain all of reality, and it is not just that it can explain all of reality, it does. The last olympic logo had four parts to it, inspired by Matisse's later cut out works. In some of Matisses' later works he merely cut out squares of different colors. At the end of Matisse's life he became religious. His whole life he was an atheist but he had surgery and at the end of his life he made a sort of cathedral where he tried to represent color in its purest form just through light, and he has an image of Jesus in the Cathedral and a cross.\n16 is the squares of the quadrant model\nChess was originally called \"four parts of the army\", coming from India's four divisions of its army.\nThere are 16 pawns in a chess set and each player in a chess game starts with sixteen pieces\nhttp://wine.themerex.net/curabitur-auctor-adipiscing/\nhttps://en.wikipedia.org/wiki/Wine_tasting\nWine tasting is a huge thing for a lot of people\nThere are four recognized stages to wine tasting:\n\"in glass\" the aroma of the wine\n\"in mouth\" sensations\n\"finish\" (aftertaste)\nThe results of the four recognized stages to wine tasting: appearance, \"in glass\" the aroma of the wine, \"in mouth\" sensations, \"finish\" (aftertaste) – are combined in order to establish the following properties of a wine: complexity and character, potential (suitability for aging or drinking), possible faults.\nhttps://en.wikipedia.org/wiki/Russian_four_square\nRussian four square was started in the Soviet Union. Russian four square is a variation of the Russian game Квадрат (square).\nEach square that was divided is a position for the players:\n1st: Peasants square 2nd: Duke's square 3rd: Prince's square 4th: King's square\nhttps://en.wikipedia.org/wiki/File:Kvadrat.gif\nhttps://en.wikipedia.org/wiki/Four_square\nFour square, also known as handball, downball, squareball, blockball, boxball, champ or king's square, is a ball game played among four players on a square court divided into quadrants. It is a popular game at elementary schools with little required equipment, almost no setup, and short rounds of play that can be ended at any time.\nFour square is usually played with a rubber playground ball, on a square court with four maximum players. The objectives of four square are to eliminate other players to achieve the highest rank.\nhttps://en.wikipedia.org/wiki/List_of_Pokémon_characters#Members_of_the_Elite_Four\nMembers of the Elite Four[edit]\n\"Elite Four\" redirects here. For the video game, see Elite 4.\nThe Elite Four (四天王 Shitennō, lit. \"Four Heavenly Kings\") is an order of exceptionally skilled Pokémon trainers consisting of four member trainers of ascending rank. Like the Gym Leaders, they also specialize on a type of Pokémon but are far stronger. Most different regions possess their own organizations. The player must first defeat them all so that they may gain the right to challenge the 'Pokémon Champion'. The player must obtain all eight badges from each respective region's gym leaders.\nKanto Elite Four[edit]\nThe Kanto Elite Four act as the Elite Four in the original series of Pokémon games consisting of Pokémon Red, Blue, Green, and Yellow versions as well as in Pokémon FireRed and LeafGreen versions which act as remakes of the original games. Within the timeline of the game series they are eventually also given the status as the 'Johto Elite Four', as Johto shares its Pokémon league with Kanto. 
Specifically, this Elite Four is located on the Indigo Plateau, shared by both Kanto and Johto.\nLorelei (Kanna (カンナ)): Lorelei is a specialist of Ice-type Pokémon. She is originally from the Sevii Islands and she collects Pokémon Dolls. She appears in the Orange Islands series of the anime, where she is known as Prima in the English version. She is a villain in Pokémon Adventures, who attempts to take over the world with the other Elite Four. She later allies with Red and Blue to save her home.\nBruno (Shiba (シバ)): Bruno is an expert on Fighting-types, and a friend and training partner of Brawly. He constantly trains his own body along with his Pokémon, and he wishes to fight the best trainers in the world, which is why he is part of the group. He regularly trains on the Sevii Islands and utilizes the spa for his Pokémon. He appears in the first episode of the anime as a combatant on television, and he later meets Ash when he seeks out Bruno to learn of his \"secret\" to become a great trainer. Bruno is an unwilling villain in Pokémon Adventures, where he is forced by Agatha to fight for her. He later forms the Johto Elite Four with Will, Karen, and Koga. Bruno attaches his Poké Balls to the ends of a set of nunchaku, and unleashes his Pokémon at high speeds to give him an advantage.\nAgatha (Kikuko (キクコ)): Agatha is an elderly woman who specializes in Ghost-type Pokémon. In the anime, she appears in the episode \"The Scheme Team\" where she is acting Gym Leader for the Viridian City Gym, defeating Ash in a battle. She is one the main antagonists of the Yellow chapter of Pokémon Adventures, along with Lance. She attempts to destroy most of humanity from their base on Cerise Island. She controls Bruno against his will by utilizing the mind-controlling powers of her ghost Pokémon, and she is a former rival of Professor Oak, though their relationship eventually grew very bitter when he decided to pursue his own research career rather than stick with their group, which according to Oak, was only interested in finding new ways to control Pokémon, which the professor found unethical.\nLance (Wataru (ワタル)): Lance, known as one of the best Pokémon trainers in the world, specializes in dragon Pokémon. He is Clair's cousin, having previously trained with her in Blackthorn City. He helps the protagonist in the second generation games in the fight against Team Rocket. He appears in the anime, where he helps Ash's group defeat Team Rocket, catching a red Gyarados that is part of their experiments, and later helps to stop the battle between Groudon and Kyogre. He is the main antagonist of the Yellow chapter of Pokémon Adventures who wishes to destroy humanity due to all of the pollution and their hurting of Pokémon. He later becomes an ally of Silver, who he sends on various missions. He is promoted to a Pokémon League champion of the Indigo Plateau in the sequel games.\nJohto Elite Four[edit]\nThe Johto Elite Four act as the Elite Four in the original series of Pokémon games consisting of Pokémon Gold, Silver, and Crystal versions as well as in Pokémon HeartGold and SoulSilver versions which act as remakes of the original games. Within the timeline of the games series, they become the successors of the Kanto Elite Four. Only Bruno from the previous games returns, while the others are replaced by new ones.\nWill (Itsuki (イツキ)): Will is a Psychic-type specialist, who wears formal clothes and a mask. 
In the Pokémon Adventures manga, he was kidnapped by the Mask of Ice as a child and raised to be his servant. He is initially one of the leaders of Neo Team Rocket, but he eventually goes on to form the new Elite Four with Karen, Koga, and Bruno. He takes over Lorelei's place.\nKoga, Fuchsia City Gym Leader in the Kanto-based versions of the games, is promoted to the Elite Four in Johto-based versions.\nBruno, member of the Elite Four in the Kanto-based games, retains his membership in the Elite Four in the Johto-based editions.\nKaren (Karin (カリン)): Karen is a Dark-type specialist; she likes Dark-types because she finds their wild and tough nature appealing. In the Pokémon Adventures manga, she was kidnapped as a child and raised by Mask of Ice to be his servant. Like Will, she is initially a leader of Neo Team Rocket until she joins the others to form the new Elite Four. She takes over Agatha's place.\nHoenn Elite Four[edit]\nThe Hoenn Elite Four act as the Elite Four in the original series of Pokémon games consisting of Pokémon Ruby, Sapphire, Emerald, Omega Ruby, and Alpha Sapphire.\nSidney (Kagetsu (カゲツ)): Dark-type specialist, who believes that the dark-side is beautiful, and that \"might is right.\" He is always upbeat, and congratulates those who defeat him.\nPhoebe (Fuyō (フヨウ)): is a Ghost-type specialist, whose grandparents are responsible for guarding the Blue, Red, and Green Orbs at Mt. Pyre. She takes control of Regice, together with Glacia, in the Pokémon Adventures manga.\nGlacia (Prim (プリム Purimu)): Ice-type specialist, who came to Hoenn while looking for a warmer climate that, as she claims, help her Pokémon grow strong.\nDrake (Genji (ゲンジ)): Dragon-type specialist. He battles Ash in the anime, and he wins overwhelmingly due to Ash's overconfidence.\nSinnoh Elite Four[edit]\nAaron (Ryō (リョウ)): Aaron uses Bug Pokémon, calling them beautiful and perfect. He appears in the anime preparing for a championship battle against Cynthia. When he meets Ash, who tells him about his experience with Cynthia, Aaron tells Ash about how he abandoned his Wurmple during his youth. He does his best to train and understand Bug-types out of regret for his mistake. He is later shown to have lost his match.\nBertha (Kikuno (キクノ)): Bertha is an elderly Ground-type specialist. She appears in the anime along with Cynthia.\nFlint (Ōba (オーバ)): Flint is a Fire-type specialist, who meets the protagonist in Sunyshore City. He is a friend of Volkner and he has a younger brother named Buck. Flint's also seen on TV battling Cynthia in the final episode of Pokémon Diamond and Pearl.\nLucian (Goyō (ゴヨウ)): Lucian is a Psychic-type trainer, who is an avid reader. He battles with Dawn in the anime, and he is shown on television battling Cynthia.\nUnova Elite Four[edit]\nShauntal (Shikimi (シキミ)): Shauntal is a Ghost-Type Pokémon Trainer. Her hobby is writing books. She can also be seen at Cynthia's holiday home in Undella Town on occasion. According to one of her stories, she once battled Volkner.\nGrimsley (Gīma (ギーマ)): Grimsley is a Dark-Type Pokémon Trainer. The son of a distinguished family that fell into ruin, he has since become an expert gambler.\nCaitlin (Cattleya (カトレア Katorea)): Caitlin is a Psychic-Type Pokémon Trainer; she is described as having psychic powers which she had trouble controlling in the past due to her explosive temper. She travels to the region of Unova to learn how to control them and become a better trainer. 
She previously appeared in the Generation IV games' Battle Frontier and was in charge of running the Battle Castle but was unable to battle, with her valet taking that responsibility in her place.\nMarshal (Renbu (レンブ)): Marshal is a Fighting-Type Pokémon Trainer. He is one of Alder's apprentices.\nKalos Elite Four[edit]\nMalva (Pachira (パキラ Pakira)): Malva is a Fire-Type Pokémon Trainer. A hot-headed news reporter and a self-proclaimed star of the Holo Caster, she is also a former member of Team Flare and expresses animosity towards the player for the team's defeat. Looker later blackmails her into helping the player stop Xerosic's plans.\nSiebold (Zumi (ズミ)): Siebold is a Water-Type Pokémon Trainer. He is a chef, whose customers notably include Valerie and Grant, and compares the art of cooking to the art of Pokémon battles.\nWikstrom (Gampi (ガンピ Ganpi)): Wikstrom is a Steel-Type Pokémon Trainer. He wears a suit of armor and is eager to battle challengers.\nDrasna (Dracaena (ドラセナ Dorasena)): Drasna is a Dragon-Type Pokémon Trainer, inspired to train Dragon-types after her grandparents from Sinnoh told her about the region's mythology surrounding Dialga and Palkia. She is just happy to battle and enjoys it when trainers and their Pokémon like each other.\nAlola Elite Four[edit]\nHala: Due to his position as Kahuna of Melemele Island, Hala, was invited by Kukui to become one of the Elite Four. He focuses on Fighting-type Pokémon.\nOlivia: Due to her position as Kahuna of Akala Island, Olivia, was invited by Kukui to become one of the Elite Four. She focuses on Rock-type Pokémon.\nAcerola: As she had completed the Island Challenge and become a Trial Captain, Acerola was invited to be one of the Elite Four. She focuses on Ghost-type Pokémon.\nKahili: A friend of Kukui and known as one of the most pre-eminent golfers on Alola, Kahili was invited to be one of the Elite Four. She focuses on Flying-type Pokémon.\nhttps://en.wikipedia.org/wiki/List_of_Pokémon_characters\nIn Pokémon X and Y, the Battle Maison is introduced as a new system where the bosses are the sister Battle Chatelaines (バトルシャトレーヌ Batoru Shatorēnu). Each serves as a leader of a different type of battle style and are faced after winning a series of battles against other trainers in succession.\nNita (Lanuit (ラニュイ Ranyui)) is the Battle Chatelaine for Single Battles.\nEvelyn (Lesoir (ルスワール Rusuwāru)) is the Battle Chatelaine for Double Battles.\nDana (Lajournée (ラジュルネ Rajurune)) is the Battle Chatelaine for Triple Battles.\nMorgan (Lematin (ルミタン Rumitan)) is the Battle Chatelaine for Rotation Battles.\nWhen challenging the Multi Battle system, the sisters pair up amongst each other.\nGo-Rock Squad[edit]\nGo-Rock Squad (GoGo Gang (スナッチ団 GoGo-dan)) is a villainous team in Pokémon Ranger. Their plot consists of replacing the rangers and becoming the new heroes of Fiore. The Squad begins this by stealing a Capture Stylus from Professor Hastings. Reverse engineering the design, the Go Rock Squad mass-produces a great many styluses. Following this, the Squad captures a multitude of Pokémon for their own use. In the endgame of their plans, Gordor attempts to summon legendary Pokémon Entei, Raikou, and Suicune, who would terrorize the land with their power. In theory the Squad would then stop the legendaries with the Pokémon they already had, but the Squad broke up after the Rangers foiled the plot.\nGordor (ラゴウ (Ragō)): The Go Rock Squad's leader. Gordor was the mastermind of the Squad. 
A former professor, Gordor was jealous of Hastings receiving all the attention for projects they both contributed to, so Gordor went Rogue.\nThe Go-Rock Quads (Four GoGo Siblings (ゴーゴー4兄弟)) are Gordor's four children, and were the admins of the Squad as well as a musical quartet. They consist of:\nTiffany, who played the violin.\nGarrett, who played the electric guitar.\nBilly, the leader of the Quads, who also played an electric guitar.\nClyde, who played the bongos.\nhttps://en.wikipedia.org/wiki/Tetromino\nA tetromino is a geometric shape composed of four squares, connected orthogonally. This, like dominoes and pentominoes, is a particular type of polyomino. The corresponding polycube, called a tetracube, is a geometric shape composed of four cubes connected orthogonally.\nA popular use of tetrominoes is in the video game Tetris, where they have been called Tetriminos (spelled with an \"i\" as opposed to the \"o\" in \"tetromino\") since 2001.\nTetris is one of the most popular games of all time. It is no coincidence it is related to the number four.\nhttps://en.wikipedia.org/wiki/Formation_skydiving\nhttps://en.wikipedia.org/wiki/File:Skydiving_4_way.jpg\nFormation skydiving is a skydiving event where multiple skydivers attach themselves to one another by grabbing each other's limbs or by the use of \"grippers\" on their jumpsuit while free falling through the sky. The goal of this skydiving program is to build a formation of multiple divers arranged in a geometric pattern.\nFormation skydiving can be further divided into several sub-categories, so named for the number of members in a team:\n4-way sequential\n4-way vertical sequential (VFS, Vertical Formation Skydiving)\n16-way sequential\n10-way speed\nLarge formations (Big-ways)\nhttps://en.wikipedia.org/wiki/Vertical_formation_skydiving\nThere is only one category of official VFS competition, that being VFS 4-way, which is part of the United States Parachute Association Skydiving Nationals. The first official VFS 4-Way US Nationals Competition was held on October 27, 2006, in Eloy, Arizona. Nine teams (45 skydivers) competed.\nVFS 4-way has been adopted as an addition to future FAI world competitions (as VFS 4-way), the first being the FAI World Cup in Eloy, AZ, in October 2008.\nhttps://en.wikipedia.org/wiki/Surfboard\nQuad[edit]\nA \"Quad\" four fins, typically arranged as two pairs of thrusters in wing formation, which are quick down the line but tend to lose energy through turns. The energy is lost as the board goes up the wave because the fins are now vectoring energy from the oncoming water toward the back of the board, bleeding speed.\nThe fourth is transcendent- fifth is questionable\nPBA Tour major championships[edit]\nThe PBA Tour has four events that are considered major tournaments over the history of the organization:\nThe USBC Masters\nThe PBA World Championship\nThe Tournament of Champions\nThe U.S. 
Open\nThe PBA Players Championship (formerly Touring Players Championship) has been held off and on since the 1980s, and is considered fifth major tournament.\nhttps://en.wikipedia.org/wiki/PBA_Tour\nDon Carter is also noted for having won all four possible \"majors\" during his career (PBA National Championship, BPAA All-Star, World Invitational and ABC Masters), however some of these were not PBA events.\nhttps://en.wikipedia.org/wiki/List_of_WWE_pay-per-view_events\nThis is a list of WWE pay-per-view events, detailing all professional wrestling cards promoted on pay-per-view by WWE.\nWWE has broadcast pay-per-views since the 1980s, when its classic \"Big Four\" events (Royal Rumble, WrestleMania, SummerSlam, and Survivor Series) were first established. The company's PPV lineup expanded to a monthly basis in the mid-1990s, and reached its peak of sixteen shows a year in 2006 before returning to twelve in 2012. Pay-per-view shows are typically three hours in length, though budget priced events (e.g., In Your House) were shorter, and premium events such as WrestleMania can approach five hours. Since 2008, all WWE pay-per-views have been broadcast in high definition. Pay-per-view events are a significant part of the revenue stream for WWE.[1][2]\nhttps://en.wikipedia.org/wiki/File:Fatal_4_Way_(2010).jpg\nhttps://en.wikipedia.org/wiki/WWE_Fatal_4-Way\nFatal 4-Way was a professional wrestling pay-per-view (PPV) event produced by World Wrestling Entertainment (WWE), which took place on June 20, 2010, at the Nassau Veterans Memorial Coliseum in Uniondale, New York.[3] The show was based on certain matches on the card that are contested as fatal four-way matches. The event received 143,000 pay-per-view buys, down on The Bash's figure of 178,000 buys. This was the final WWE pay-per-view event to be held in Nassau Coliseum after the coliseum will have a renovation. Also, this was the first and only Fatal 4-Way event produced by WWE.\nhttps://en.wikipedia.org/wiki/The_Four_Horsemen_(professional_wrestling)\nThe Four Horsemen was a professional wrestling stable in the National Wrestling Alliance and later World Championship Wrestling. The original group consisted of Ric Flair, Arn Anderson, Ole Anderson and Tully Blanchard. Flair and Arn Anderson have been constant members in each incarnation of the group except once following Anderson's neck injury, when Curt Hennig was given his spot in the Horsemen.\nThe original Four Horsemen (1985–1987)[edit]\nThe Four Horsemen formed in November 1985 with Ric Flair and his storyline cousins Ole Anderson and Arn Anderson (the latter brought in from Continental Championship Wrestling), and Tully Blanchard from Southwest Championship Wrestling, with James J. Dillon as their manager. They feuded with Dusty Rhodes (breaking his ankle and hand), Magnum TA, Barry Windham, The Rock 'n' Roll Express (breaking Ricky Morton's nose), Nikita Koloff (injuring his neck), and The Road Warriors. Animal, Hawk, Ronnie Garvin and many others fought Ric Flair for the NWA World Heavyweight Title during that time period. They usually had most of the titles in the NWA, and they often bragged about their success (in the ring and with women) in their interviews.\nThe Four Horsemen moniker was not planned from the start. 
Due to time constraints at a television taping, production threw together an impromptu tag team interview of Flair, the Andersons, Tully Blanchard and Dillon; all were now united after Ole Anderson returned and, along with Flair and Arn, tried to break Dusty's leg during a wrestling event at the Omni in Atlanta during the fall of 1985. It was during this interview that Arn said something to the effect of \"The only time this much havoc had been wreaked by this few a number of people, you need to go all the way back to the Four Horsemen of the Apocalypse!\"[6] The comparison and the name stuck. Nevertheless, Arn has said in an RF Video shoot interview that he, Flair and Blanchard were as close as anybody could be away from the ring while they were together. They lived the gimmick outside of the arena, as they took limos and jets to the cities in which they wrestled. Baby Doll was Flair's valet for a couple of months in 1986, after previously managing Tully Blanchard during 1985.\nThe Xtreme Horsemen[edit]\nThe Xtreme Horsemen was a professional wrestling stable in Turnbuckle Championship Wrestling, and later Major League Wrestling, and also appeared across Japan, that disbanded in 2004. The groups name was in homage to the Four Horsemen, who in the 1980s were one of professional wrestling's top draws worldwide. The group came together in Dusty Rhodes' Turnbuckle Championship Wrestling promotion, but the group later left Rhodes' promotion to join Major League Wrestling where Steve Corino and \"The Enforcer\" C.W. Anderson were joined by former ECW superstars Justin Credible and Simon Diamond. This incarnation was briefly managed by former Four Horsemen manager J.J. Dillon before Major League Wrestling ceased operations. Barry Windham also joined the group for a War Games match for one time only.\nAt WXW-C4's Sportsfest 2009, Steve Corino reformed the Xtreme Horsemen in the form of Corino, NYWC's Papadon, WXW-C4's A.C. Anderson, and Corino's student Alex Anthony. They are managed by Corino's personal manager, Rob Dimension.\nAs of 2016, Corino and Anderson have retained the Extreme Horsemen faction and added independent wrestler John Skyler to the group.\nEvolution[edit]\nMain article: Evolution\nIn 2003, rumors began circulating that Ric Flair (at the time working for the World Wrestling Entertainment) was going to reform the Four Horsemen with Triple H, Randy Orton, and Batista. This group was eventually formed, but under the name Evolution instead of the Four Horsemen, and with Triple H as the leader instead of Flair. They served much the same function as the original heel Horsemen had, dominating the titles on Raw and feuding with that brand's top faces. The group slowly died between August 2004 and October 2005. Orton was kicked out of the group after he won the World Heavyweight Championship, which Triple H coveted. In February 2005, Batista left the group after winning the Royal Rumble, in a storyline where Triple H tried to protect his title from Batista. During a Triple H hiatus, Flair turned face, and at Raw Homecoming, Triple H returned as a face, but turned heel by the end of the night, hitting Flair in the face with a sledgehammer and officially ending Evolution. At Raw 15th Anniversary, an Evolution reunion as faces took place, though then-heel Randy Orton refused to participate and instead challenged the face versions of Flair, Batista, and Triple H to a match in which he partnered with then-heel, Edge and Umaga, and at the same time reforming Rated-RKO for one night. 
On the March 31, 2008 episode of Raw, Flair delivered his farewell address. Afterward, Triple H brought out many current and retired superstars to thank Flair for all he has done, including Four Horsemen members, Arn Anderson, Tully Blanchard, Barry Windham, J.J. Dillon, and Dean Malenko. Also, it was the night in which Evolution got back together in the ring, except for Randy Orton (who was outside the ring). This would mark the last time both groups would be in the ring together.\nOn the April 14, 2014 episode of Raw, Triple H, Orton, and Batista reunited Evolution full-time, once again heels, to feud with The Shield. However, on the April 28, 2014 episode of Raw, Flair showed his endorsement for The Shield, effectively turning his back on his old teammates, thus not turning heel.\nFortune[edit]\nMain article: Fortune\nFortune was a professional wrestling stable in Total Nonstop Action Wrestling, announced by Ric Flair on June 17, 2010 as a \"reformed\" version of the Four Horsemen. Flair had been loosely associated with A.J. Styles, Desmond Wolfe, Beer Money, Inc. (James Storm and Robert Roode) and Kazarian since April 5, 2010, and announced that each of them and anyone else who wanted to join Fortune (originally spelled Fourtune) would have to earn their place in the stable.[9] On July 11 at Victory Road, Styles and Kazarian became the first official members of Fortune by defeating Samoa Joe and Rob Terry in a tag team match.[10] On the July 29 edition of Impact!, Flair announced that James Storm and Robert Roode had earned the right to become the final two members of Fortune.[11] However, on the August 12 edition of Impact! Douglas Williams, who had helped Flair defeat his nemesis Jay Lethal the previous week, and Matt Morgan were added to Fortune as the stable assaulted EV 2.0, a stable consisting of former Extreme Championship Wrestling performers.[12] Fortune had since merged with Hulk Hogan and Eric Bischoff's Immortal stable, but turned on them months later, splitting them into two feuding factions. Ric Flair would turn on Fortune and remain associated with Immortal.\nThe Four Horsewomen[edit]\nThe stable was invoked by mixed martial artists Ronda Rousey, Shayna Baszler, Jessamyn Duke and Marina Shafir (Invicta Fighter), who named themselves \"The Four Horsewomen\" in 2013, with the blessing of Anderson and Flair.[13] After Bethe Correia defeated Duke, she held up four fingers and symbolically put one down. She did this again after beating Baszler. As Shafir is not in the UFC, these two wins set the stage for a bantamweight title fight between her and Rousey (the \"Ric Flair of the Four Horsewomen\") at UFC 190.[14] Rousey knocked Correia out in 34 seconds.[15]\nThe group was shown at ringside during WrestleMania 31, where Rousey was later involved in a major in-ring segment with The Rock, Triple H and Stephanie McMahon.[16]\nThe NXT wrestlers Charlotte (Ric Flair's daughter), Bayley, Becky Lynch and Sasha Banks have referred to themselves as \"The Four Horsewomen\", and posed in ring at NXT TakeOver: Brooklyn each with four fingers held up.[17]\nhttps://en.wikipedia.org/wiki/The_Radicalz\nThe Radicalz (sometimes titled The Radicals) were a professional wrestling stable in the World Wrestling Federation (WWF). The members were former World Championship Wrestling (WCW) wrestlers Chris Benoit, Eddie Guerrero, Perry Saturn and Dean Malenko.[1] Terri Runnels later joined the group by proxy after becoming Saturn's on-screen girlfriend. 
Benoit, Malenko, and Saturn all had previously been a part of a similar small stable of younger talent while in WCW, The Revolution, which was dismantled by their defection.\nWorld Wrestling Federation[edit]\nThe four first made their appearance on the January 31, 2000 episode of Raw Is War as audience members and backstage guests of Mick Foley. They interfered in a match consisting of Al Snow and Steve Blackman and The New Age Outlaws. While the group was sitting in the front row, Road Dogg took a cheap shot at Benoit, which prompted all four to severely beat both of The New Age Outlaws inside and out of the ring. The attack ended after Guerrero performed a frog splash on Billy Gunn and Benoit performed a diving headbutt on Dogg, with Jim Ross dubbing them The Radicalz. The four were offered a chance to \"win\" contracts by beating the members of D-Generation X in a series of three matches. Malenko lost to X-Pac after an illegal groin attack, while Saturn and Guerrero ended up losing against The New Age Outlaws, since Dogg had pulled the referee out of the ring when Guerrero was covering Gunn for the pin after a frog splash, thereby illegally breaking up the cover. Benoit then lost to Triple H, but not before making him tap out to the Crippler Crossface while the referee was unconscious. Soon afterwards, the four wrestlers were \"given\" contracts with the WWF by Triple H, in exchange for them turning on Mick Foley. The group became known as The Radicalz (sometimes spelled The Radicals in on-screen graphics), and they attained some measure of success. At first tightly knit, all four of the wrestlers in the group eventually drifted apart as all of them sought stardom as singles wrestlers in the WWF.\nStarted with four members- X is the quadrant\nhttps://en.wikipedia.org/wiki/D-Generation_X\nD-Generation X (also known as DX) was a professional wrestling stable (and later tag team) best known for their appearances in the World Wrestling Federation/Entertainment/WWE. The group originated in the midst of the WWF's \"Attitude Era\" in 1997 as a foil to another prominent faction, The Hart Foundation.[1]\nAfter its original run with members Shawn Michaels, Hunter Hearst Helmsley (later known simply as Triple H), Chyna, and Rick Rude, the group expanded to become a mainstay of the Attitude Era with new additions like X-Pac, The New Age Outlaws (Road Dogg and Billy Gunn) and Tori until it disbanded in August 2000. After a teased reunion in 2002, DX reformed in June 2006 as the duo of Triple H and Shawn Michaels for the remainder of the year[2] and again in August 2009 until March 2010, shortly before Michaels' retirement. This incarnation was voted the greatest WWE Tag Team Champions of all time in a 2013 WWE viewer poll.[3]\nhttps://en.wikipedia.org/wiki/The_Fabulous_Freebirds\nThe Fabulous Freebirds were a professional wrestling tag team that attained fame in the 1980s, performing into the 1990s. The team usually consisted of three wrestlers, although in different situations and points in its history, just two performed under the Freebirds name.\nMain Members\nMichael Hayes was the leader of the group. Nicknamed \"P.S.\" (Purely Sexy), he was known to get the crowd going with his antics.\nTerry Gordy was the powerhouse of the group. 
Nicknamed \"Bam Bam\", he loved to fight and beat his opponents down.\nBuddy Roberts, nicknamed \"Jack\" for his love of Jack Daniel's whiskey, was the speed of the group, who would often frustrate other wrestlers into chasing him, until Hayes and/or Gordy surprised them with a move. Buddy was also acknowledged as the best ring technician of the group.\nJimmy Garvin's association with the Freebirds began in 1983, as he had often teamed with Hayes, Gordy, and Roberts in WCCW and AWA. In 1988, he teamed with Steven Dane while Hayes was injured as a watered-down version of the Freebirds, and with Hayes during a reignited WCW run between June 1989 and July 1992. He was always considered the fourth Freebird by Hayes, Gordy and Roberts, although no one really believed it until 1989, when Hayes and Garvin (nicknamed \"Jam\") teamed up for the NWA World Tag Team Championship tournament.\nhttps://en.wikipedia.org/wiki/The_Diamond_Exchange\nDiamond Exchange (1988–1989)- four members\nBadd Company (Paul Diamond and Pat Tanaka)[5]\nCol. DeBeers[5]\nCurt Hennig[6]\nMadusa Miceli[6]\nDiamond Mine (1991–1992)- four\nThe Fabulous Freebirds (Michael Hayes, Jimmy Garvin, and Badstreet)\nScotty Flamingo[22]\nDiamond Studd[5]\nVinnie Vegas[5]\nThe Diamond Exchange was a professional wrestling stable led by Diamond Dallas Page in the American Wrestling Association from 1988 to 1989. Page led a spiritual success known as The Diamond Mine in World Championship Wrestling from 1991 to 1992.\nhttps://en.wikipedia.org/wiki/The_Dangerous_Alliance\nThe Dangerous Alliance was a heel professional wrestling stable that made a name for itself in World Championship Wrestling (WCW) in the early 1990s and the American Wrestling Association (AWA) in 1987, with Adrian Adonis, Randy Rose, and Dennis Condrey making up the AWA incarnation of the group.\nAWA members[edit]- Four\nAdrian Adonis\nDennis Condrey – he was one half of the Original Midnight Express\nPaul E. Dangerously – leader and manager of the Alliance\nRandy Rose – he was one half of the Original Midnight Express\nhttps://en.wikipedia.org/wiki/Professional_wrestling_tag_team_match_types#Four-way_tag_team_elimination_match\nElimination tag team matches[edit]\nElimination tag team matches are the same as a normal tag team match except that a wrestler who suffers a loss is eliminated from participation. The match continues until all members of one team are eliminated. WWE uses the term \"Survivor Series match\" to denote an eight or ten person match held during their yearly Survivor Series pay-per-view. Lucha libre wrestling promotions use the term Torneo cibernetico (cybernetic tournament) for multi-man elimination matches. Sometimes in these matches, there can be only one winner, so after the other team has been eliminated former teammates face each other in an elimination match. A further variation is where teams of four or more are composed of tag teams, and once a member of a team is eliminated their partner is also eliminated.\nThree-way tag team elimination match[edit]\nIn a three-way tag team elimination match, three teams compete as tag teams with two or more members per team. One member of two teams start. Anyone could be tagged in by anyone else and can be subject to immediate disqualification for failure to accept a tag. 
When a wrestler is pinned, the entire team is eliminated and the last team left of the three wins.\nFour-way tag team elimination match[edit]\nMuch like in a three-way tag team elimination match (see above), a four-way tag team elimination match (also known as a Fatal 4-way tag team elimination match, and at times has also been called the Raw Bowl and the Superstars Bowl), four teams compete. Anyone could be tagged in by anyone else and can be subject to immediate disqualification for failure to accept a tag. When a wrestler is pinned, the entire team is eliminated and the last team of the four wins.\nTag team turmoil[edit]\nTag team turmoil is another version of an elimination tag team match. The match has a team in each of the four corners to start the match, but as each team is eliminated another team takes its place, similar to a gauntlet match. Another variation of tag team turmoil took place at SummerSlam in 1999, Night of Champions in 2010, Night of Champions Kickoff Show in 2013, and Elimination Chamber in 2017. Two teams start, when one is eliminated a new team comes to the ring until all teams have competed, the remaining team is the winner. This was used on the May 31, 2011 episode of NXT, with a team consisting of a WWE Pro and his NXT Rookie. The winning team earned 3 Redemption Points for the Rookie in this version.\nTables and Stables[edit]\nTables and Stables are similar to table matches, however, in an elimination styled-manner. Two teams consisting of four compete, and one wrestler can be eliminated either getting dropped by his opponent through a table, or accidentally falling by themselves. The match is a no disqualification and a no countout match.\nhttps://en.wikipedia.org/wiki/The_West_Texas_Rednecks\nThe West Texas Rednecks was a short-lived professional wrestling stable and country music band in World Championship Wrestling (WCW) in 1999. They are famous for the recording of two songs, \"Rap is Crap (I Hate Rap)\" and \"Good Ol' Boys.\"\nThe West Texas Rednecks formed in June 1999 in WCW. The group developed from four wrestlers who fit the mold of a southern gimmick and had teamed with one another in the recent months. They were to be a heel group to feud with The No Limit Soldiers led by Master P.\nTheir main feuds were with Master P's No Limit Soldiers (Swoll, 4X4, Chase and BA)\nThe group was made up of leader Curt Hennig, brothers Barry and Kendall Windham, and Bobby Duncum, Jr..\nhttps://en.wikipedia.org/wiki/Teddy_Reade\nTeddy Reade[1] is an American professional wrestler who is known for his short-lived stint in World Championship Wrestling. As of 2013 Reade was working on the independent circuit.[citation needed]\n1 World Championship Wrestling\n2 In wrestling\n3 Championships and accomplishments\nWorld Championship Wrestling[edit]\nIn 1999 Reade went under the ring 4x4 and debuted in World Championship Wrestling as a member of Master P's No Limit Soldiers along with BA, Chase Tatum, Konnan, Rey Mysterio, Jr. and Swoll.[2] They later feuded with The West Texas Rednecks due to the Rednecks hatred of rap music. After the soldiers broke up 4x4 changed his name to Cassius by joining a heel stable called Harlem Heat 2000 and acted as a bodyguard, the group consist of the leader Stevie Ray, Big T and manager J. Biggs then began feuding with Booker T. 
although the feud didn't last long and Harlem Heat 2000 began to split up.[3][4] Reade's presence would draw attention of the audience simply due to his enormous physical size.\nhttps://en.wikipedia.org/wiki/New_World_Order_(professional_wrestling)\nThe New World Order (commonly abbreviated NWO, in logo stylized as nWo) was a professional wrestling stable that originally consisted of \"Hollywood\" Hulk Hogan, Kevin Nash, and Scott Hall, best known for their appearances in World Championship Wrestling (WCW) from the mid to late 1990s.[1]\nAs WCW's annual pay-per-view Fall Brawl was drawing closer, WCW was preparing for another battle against the nWo. On the September 9 episode of Nitro, the nWo tricked fans and wrestlers into thinking that Sting had joined the nWo by putting wrestler Jeff Farmer into the group as a Sting clone, complete with Sting attire and face paint. This point was furthered when Farmer, as the fake Sting, attacked Luger, who had been lured into an attack by referee Nick Patrick. This led Luger, his longtime ally and tag team partner, to publicly question Sting. At Fall Brawl, as Team WCW was being interviewed, Sting told his teammates that he had nothing to do with the attack, but Luger did not believe him. Going into the match, only three wrestlers on each side had been officially named: Hogan and The Outsiders for the nWo, with Luger, Arn Anderson, and Ric Flair for Team WCW. Sting had originally been named the fourth man for WCW, but his participation was in doubt. The fourth man for the nWo was indeed the fake Sting, who convinced everyone (including the broadcast team) that the real Sting was nWo. The real Sting showed up moments later as the last man for Team WCW and took apart the nWo by himself. After assaulting Hogan, Hall, Nash and the fake Sting, Sting left the ring and Team WCW, yelling at an apologetic looking Luger \"Now do you believe me?\" as he did so. Team WCW, now fighting a 4-on-3 handicap match, lost when the nWo Sting locked Luger in the Scorpion Death Lock.\nFour members\nhttps://en.wikipedia.org/wiki/Bullet_Club\nThe group was formed in May 2013, when Irish wrestler Prince Devitt turned on his partner Ryusuke Taguchi and came together with American wrestler Karl Anderson and Tongan wrestlers Bad Luck Fale and Tama Tonga to form a villainous stable of foreigners, which they subsequently named \"Bullet Club\"\nThe four members of Bullet Club wrestled their first match together on May 22, when they defeated Captain New Japan, Hiroshi Tanahashi, Manabu Nakanishi and Ryusuke Taguchi in an eight-man tag team match.\nTama Tonga, one of the four founding members of Bullet Club\nhttps://en.wikipedia.org/wiki/File:Tama_Tonga_2015.JPG\nhttps://en.wikipedia.org/wiki/File:Figure_four_toe_hold.jpg\nhttps://en.wikipedia.org/wiki/Figure-four_(grappling_hold)\nA figure-four is a Catch wrestling term for a joint-lock that resembles the number \"4\". A keylock or toe hold can be referred to as a figure-four hold, when it involves a figure-four formation with the legs or arms. If the figure-four involves grabbing the wrists with both hands, it is called a double wrist lock; known as kimura in MMA circles . A figure-four hold done with the legs around the neck and (usually) arm of an opponent is called figure-four (leg-)choke, better known as a triangle choke these days, and is a common submission in modern mixed martial arts, Submission wrestling and Brazilian jiu jitsu, and of course Catch wrestling from where it originates. 
The leg figure-four choke is also part of Japanese martial arts, where it is known as Sankaku-Jime.\nThe wrestling move figure 4 leg lock was made famous by WWE Hall of Famer Ric \"The Nature Boy\" Flair.\nhttps://www.youtube.com/watch?v=9V65qnHBC6I\nFigure four\nhttps://en.wikipedia.org/wiki/Four-in-hand_(carriage)\nA four-in-hand is a carriage drawn by a team of four horses having the reins rigged in such a way that it can be driven by a single driver. The stagecoach and the tally-ho are usually four-in-hand coaches.\nBefore the four-in-hand rigging was developed, two drivers were needed to handle four horses. However, with a four-in-hand, the solo driver could handle all four horses by holding all the reins in one hand, thus the name.\nThe four-in-hand knot used to tie neckwear may have developed from a knot used in the rigging of the reins.\nToday Four-in-hand driving is the top discipline of combined driving in sports. One of its major events is the FEI World Cup Driving series.\nhttps://en.wikipedia.org/wiki/Cutting_(sport)\nCutting is a western-style equestrian competition in which a horse and rider work as a team before a judge or panel of judges to demonstrate the horse's athleticism and ability to handle cattle during a 2 1⁄2 minute performance, called a \"run.\" Each contestant is assisted by four helpers: two are designated as turnback help to keep cattle from running off to the back of the arena, and the other two are designated as herd holders to keep the cattle bunched together and prevent potential strays from escaping into the work area. Cutting cattle are typically young steers and heifers that customarily range in size from 400 to 650 lb (180 to 290 kg). They are of Angus or Hereford lineage or possibly a mix of crossbred beef cattle with Charolais or Brahman lineage.\nhttps://en.wikipedia.org/wiki/Chilean_rodeo\nhttps://en.wikipedia.org/wiki/File:Puntaje_rodeo_chileno.svg\nFour parts of the animals body, each accrue different numbers of points. The fourth is different.\nRodeo is a traditional sport in Chile. It was declared the national sport in 1962.\nhttps://en.wikipedia.org/wiki/Gymkhana_(equestrian)\nThe fourth is different.\nO-Mok-See or omoksee is the most common term used in the Western United States for events in the sport of pattern horse racing. Most events are run with contestants simultaneously running in 4 separate lanes (3 for small arenas), with each contestant riding in a 30 foot wide lane.\nhttps://www.youtube.com/watch?v=quGsV_IqfoI\nFour flag mounted game\nFour scoring possibilities. The fourth is different, outside of the circle around the thirty and fourty\nhttps://en.wikipedia.org/wiki/File:Skee_Ball.JPG\nhttps://en.wikipedia.org/wiki/Skee_ball\nMore traditional skee ball machines like this one do not include the two additional \"100 points\" holes, located on the uppermost corners of the machine, on either side of the \"50 points\" hole.\nMany course themes recur throughout the series. Most are based on an existing area in the Mario franchise (Bowser's Castle being among the most prominent), but there are a number of courses that have not appeared elsewhere, but still belong in the Mushroom Kingdom, such as Rainbow Road.[3] Each game in the series includes at least 16 original courses\nhttps://en.wikipedia.org/wiki/Mario_Kart\nEach game's tracks are divided into four \"cups\", or groups in which the player has to have the highest overall placing to win. Most courses can be done in three laps. 
The first game to feature courses from previous games was Mario Kart: Super Circuit, which contained all of the tracks from the original Super NES game. Starting with Mario Kart DS, each entry in the series has featured 16 \"nitro\" (courses belonging to its own game) and 16 \"retro\" tracks (courses from previous Mario Kart titles), spread across four cups each with four races. In Mario Kart 8, 16 additional tracks are available across two downloadable packages\nhttps://en.wikipedia.org/wiki/Mario_Kart:_Double_Dash\nAgain 16 courses and 16 players can play simultaneously\nDouble Dash!! supports LAN play using the Nintendo GameCube Broadband Adapter, allowing up to 16 players to compete simultaneously. There are 20 characters to select from in total, each of which with a special item, and with eleven characters being new to the series.\nGame modes[edit]\nThere are four game modes in Double Dash: Grand Prix, Time Trial, Versus, and Battle. Most of the modes can be played cooperatively, while some can only be played by themselves in single-player races.\nGrand Prix – This mode has the player race against 7 (or 6) teams, which are controlled by the computer, in a series of predetermined courses. The player can choose to race using 3 different engine size classes: 50cc, 100cc and 150cc. A fourth unlockable class, Mirror Mode, allows the player to race through a mirrored version of the tracks using the 150cc engine size.[4] Since all karts go faster when using higher engine sizes, the 4 classes serve as difficulty levels. There are 16 tracks, divided into 4 cups: Mushroom, Flower, Star and Special. A 5th cup has the player race in every track called the All-Cup Tour. The tour always starts with Luigi Circuit and ends with Rainbow Road, but the remaining tracks show up in random order. Every race is three laps long except for Baby Park and Wario Colosseum, which have 7 and 2, respectively. After all the human players cross the finish line, the positions of the computer-controlled teams are immediately locked in and they are given points based on those eight positions, ranging from 0 to 10. At the end of the cup, there will be an award ceremony for the 3 teams, where they will get a trophy ranging from bronze to gold. No matter which position they earned after each race, everyone will move on because of these new rules.\nTime Trial – This single-player mode has the player to finish any of the 16 courses in the fastest time possible, with the best time being saved as a ghost, a carbon copy of the player's performance that they can race against in later runs. Each character will receive a mushroom, which can be used at any time during the run. (1P only)\nVersus – In this mode, players can choose any course and race against up to 3 (or 15 with LAN) human opponents with customized rules such as changing the item frequency or the number of laps in each race. (2P-16P only)\nBattle – In battle mode, the player fights against up to 3 (or up to 15 with LAN) human-controlled opponents using items scattered throughout a battle arena. There is the traditional balloon-popping battle game, in which the player must use items to pop an opponent's three balloons while defending their own. Players can also steal items from one another by speeding towards them with a mushroom or star. In Co-op battles, the player in the back of the kart can perform a slide-attack on another driver, which can also steal balloons. 
Additionally, two new games have been implemented: the first involves capturing a Shine Sprite and maintaining possession of it for a certain amount of time, usually starting out with 55 to 60 seconds. Each time the Shine Sprite is lost, the counter will somewhat reset the time. For instance, if a player is able to keep possession of the Shine Sprite for only 30 seconds, the counter would reset to 40 instead of 60. The other mode involves throwing Bob-ombs at each other to collect points. With two players, 3 points are needed to win, but when playing with 3 or 4, 4 points are required to win. If two or more players throw a bomb at each other in unison, no points will be awarded to anybody. In a way, it's similar to a tie. As in previous installments, the battle arenas are enclosed (the exception being Tilt-A-Kart), with a varying layout and a replenishing arsenal of items. (2P-16P only)\nLAN play – Double Dash also features LAN play using the Nintendo GameCube Broadband Adapter. Up to 8 GameCube consoles can be connected, allowing for 16-player multiplayer races, with 2 players controlling each kart.[5]\nhttps://en.wikipedia.org/wiki/Mario_Kart_DS\nUnlike previous Mario Kart games, which featured 4 playable cups, Mario Kart DS features a total of 8 cups: Mushroom, Flower, Star, Special, Shell, Banana, Leaf and Lightning, with the latter 4 cups consisting entirely of tracks drawn from previous entries in the Mario Kart series. Each cup has four tracks for a grand total of 32.\nhttps://en.wikipedia.org/wiki/Mario_Kart_Wii\nMario Kart Wii (マリオカートWii Mario Kāto Wī?) is a racing video game developed and published by Nintendo for the Wii video game console. It is the sixth installment in the Mario Kart series, and was released worldwide in April 2008.\nMario Kart Wii supports four different control schemes. The primary control scheme is the Wii Remote by itself, optionally used in conjunction with the plastic Wii Wheel accessory, which uses the controller's motion sensing to simulate operating a steering wheel. The other supported control schemes are the Wii Remote with the Nunchuk attachment; the Classic Controller; and the Nintendo GameCube controller.[3]\nMario Kart Wii features multiple game modes: Grand Prix, Time Trials, Versus, and Battle. All modes support single-player gameplay; Versus and Battle support local multiplayer for up to four players, with or without computer-controlled players.\nhttps://en.wikipedia.org/wiki/Mario_Kart_7\nMario Kart 7 offers 32 different tracks, which consist of 16 tracks unique to the game and 16 \"classic\" tracks, remakes of tracks featured in the previous six installments.\nMario Kart 7 features four single-player game modes: Grand Prix, Time Trial, Balloon Battle, and Coin Runners. Some modes feature multiplayer options. In Grand Prix, the player races against seven computer-controlled opponents in one of eight different cups, each featuring four tracks. The player receives points based on his or her finishing position in each race. After all four races, there will be a trophy.\nThe game features 32 tracks, with an additional 16 later released as downloadable content (DLC).\nThe game continues the traditional gameplay of the Mario Kart series, in which characters from the Mario universe race against each other in go-karts, attempting to hinder their opponents or improve their racing performance using various tools found in item boxes. 
In addition, the game includes four different difficulties, which can be selected before beginning the race.
https://en.wikipedia.org/wiki/Pac-Man_World_Rally
Pac-Man World Rally, known in Europe as Pac-Man Rally, is a kart racing game in the Pac-Man series. It was developed by Bandai Namco Games and released in August 2006 for the PlayStation 2, Nintendo GameCube, PlayStation Portable, and Microsoft Windows. An Xbox version of the game was cancelled, even though a preview of it is included in Pac-Man World 3.
The game has 15 race tracks and a battle mode similar to other kart racing games. In addition, there are four battle arenas for multiplayer action. Players can also collect power-ups to attack opponents or gain an edge in the race, as well as Pac-Man's signature fruit pickups, which in Pac-Man World Rally unlock secret shortcuts. There are 16 characters for the player to choose from.
https://en.wikipedia.org/wiki/TurboGrafx-16
The TurboGrafx-16 Entertainment SuperSystem, known in Japan and France as the PC Engine (PCエンジン Pī Shī Enjin?), is a home video game console jointly developed by Hudson Soft and NEC Home Electronics, released in Japan on October 30, 1987, in the United States on August 29, 1989, and in France on November 22, 1989. It was the first console released in the 16-bit era, albeit still utilizing an 8-bit CPU. Originally intended to compete with the Nintendo Entertainment System (NES), it ended up competing with the Sega Genesis, and later on the Super Nintendo Entertainment System (SNES).
https://en.wikipedia.org/wiki/Fourth_generation_of_video_game_consoles
16 bit era. 16 squares in the quadrant model
Fourth generation of video game consoles
In the history of computer and video games, the fourth generation (more commonly referred to as the 16-bit era) of games consoles began on October 30, 1987 with the Japanese release of NEC Home Electronics' PC Engine (known as the TurboGrafx-16 in North America). This generation saw strong console wars. Although NEC released the first fourth generation console, and was second to the Super Famicom in Japan, this era's sales were mostly dominated by the rivalry between Nintendo and Sega's consoles in North America: the Super Nintendo Entertainment System (the Super Famicom in Japan) and the Mega Drive (named the Genesis in North America due to trademark issues). Nintendo was able to capitalize on its previous success in the third generation and managed to win the largest worldwide market share in the fourth generation as well. Sega was extremely successful in this generation and began a new franchise, Sonic the Hedgehog, to compete with Nintendo's Mario series of games. Several other companies released consoles in this generation, but none of them were widely successful. Nevertheless, several other companies started to take notice of the maturing video game industry and began making plans to release consoles of their own in the future.
This generation ended with the discontinuation of the Neo Geo in 2004.
Some features that distinguished fourth generation consoles from third generation consoles include:
More powerful 16-bit microprocessors
Multi-button game controllers (3 to 8 buttons)
Complex parallax scrolling, multi-layer tilemap backgrounds, with pseudo-3D scaling & rotation
Large sprites (up to 64×64 or 16×512 pixels), 80–380 sprites on screen, scalable on-the-fly, with pseudo-3D scaling & rotation
Elaborate colour, 64 to 4096 colours on screen, from palettes of 512 (9-bit) to 65,536 (16-bit) colours
Flat-shaded 3D polygon graphics
CD-ROM support via add-ons, allowing larger storage space and full motion video playback
Stereo audio, with multiple channels and digital audio playback (PCM, ADPCM, streaming CD-DA audio)
Advanced music synthesis (FM synthesis and 'wavetable' sample-based synthesis)
This article describes how Japanese stories, even novels, follow the four-part structure.
http://www.manga-audition.com/the-four-part-construction-k…/
http://blog.tewaters.com/…/on-narrative-structure-kishotenk…
The FOUR Part construction "Ki-Sho-Ten-Ketsu" – Japanese Manga 101 #049
Posted on 26/08/2016 by Sayuri Kimizuka in Japanese Manga 101
Today we will talk about one subject that the great manga god Tezuka Osamu, as well as senseis like Tsukasa Hojo and Tetsuo Hara, have ALL been teaching over and over and over again.
While this is known as THE MOST BASIC principle of Japanese manga creation, many of you outside Japan may never have heard of it. If this is the first time you are hearing it, please pay close attention!
The secret art:
MANGA is "Ki-Sho-Ten-Ketsu" – "Introduction / Development / Turn / Conclusion"
In Japan, not only manga but almost any story or novel is constructed in four parts. Pretty much everything here is written, drawn or presented this way!
Internationally, the "three-act structure" is more widely adopted in education and production. For example, any English teacher would tell you to write using the basic paragraph structure:
Topic sentence
Supporting sentences
Concluding sentence
In film making, the "three-act structure" is widely regarded as the standard, used in comics, TV drama, documentary and even computer games.
So why do the Japanese love "Ki-Sho-Ten-Ketsu", the FOUR-part structure?
We believe it is all thanks to a Chinese poet whose works became a national hit, influencing many poets and novelists in Japan around fourteen hundred years ago.
Spring Dawn by Meng Haoran
In Spring one sleeps, unaware of dawn;
everywhere one hears crowing birds.
In the night came the sound of wind and rain;
who knows how many flowers fell?
This is the very famous poem "Spring Dawn", by Meng Haoran. What does each of the poem's four lines tell us?
<Meaning>
I slept too long this lovely spring morning; the sun is already up.
From everywhere I hear the birds chirping happily.
Last night I heard the loud sound of wind and rain;
I hope the flowers are okay, but who knows how many petals have fallen?
This four-line poem is the classic example of the four-part structure, "Ki-Sho-Ten-Ketsu":
1. Introduction
2. Development
3. Turn
4. Conclusion
Introduction – the intro
Development – develops further on the intro
Turn – looks at the events from a completely different point of view
Conclusion – brings both points of view to a unified ending
To be frank, this four-part structure is a bit illogical. It often doesn't make
immediate sense, especially compared with the three-act structure. And "being illogical" is often treated as bad, or perhaps a little immature.
BUT! Japanese readers as well as creators absolutely LOVE this four-part structure.
Japanese manga creators use "Ki-Sho-Ten-Ketsu" NOT ONLY in story writing, but ALSO in how they lay out the panels on each and every page.
Sounds interesting, doesn't it? We'll talk more about this mysterious four-part structure "Ki-Sho-Ten-Ketsu" next week.
https://en.wikipedia.org/wiki/Abhinaya
Abhinaya (Sanskrit abhi- 'towards' + nii- 'leading/guide') is the art of expression in Indian aesthetics. More accurately it means "leading an audience towards" the experience (bhava) of a sentiment (rasa). The concept, derived from Bharata Muni's Natya Shastra, is used as an integral part of all Indian classical dance styles.
Types of Abhinaya are four in number according to the Natya Shastra, and they are: Angika abhinaya, Vachika abhinaya, Aharya abhinaya and Sattvika abhinaya.
The Natya Shastra (Sanskrit: नाट्य शास्त्र, Nāṭyaśāstra) is an ancient Indian treatise on the performing arts.
Angika Abhinaya
This relates to body movement. How the thing is to be expressed is portrayed by movement of the anga, or limbs, which include facial expressions. Abhinaya has different schools, with expressions ranging from the grotesque to the understated, from the crude to the refined. Angika abhinaya takes the form of either Padartha abhinaya or Vaakyartha abhinaya. Padartha abhinaya is when the artiste delineates each word of the lyrics with gestures and expressions. Vaakyartha abhinaya is where the dancer acts out an entire stanza or a sentence.
Vachika Abhinaya
This relates to how expression is carried out through speech. It is used more overtly in drama, and it is also employed in music. Traces of it are preserved in dance forms such as Kuchipudi and the Melattur style of Bharatanatyam, where the dancers often mouth the words of the songs to support Padartha abhinaya. Some art forms in Kerala, such as Koodiyattam, Nangyar Koothu, Ottan, Seetangan and Parayan, still keep it on stage.
Aharya Abhinaya
The costumes and physical decorations of the actors and the theatre are other means of representation of the play. The decoration of the stage, including the lights and accessories related to the scene, also falls under this category and enhances the rasa between the audience and the artists.
This abhinaya is very prominent in Kathakali, where there are different dress and makeup for different characters.
Sattvika Abhinaya
Sattvika abhinaya is often confused with the facial expressions that belong to Angika abhinaya. This abhinaya is the mental message, emotion or image communicated to the spectators through the eyes. The dancer has to bring their own authentic experiences in order to capture the attention of the audience.
https://en.wikipedia.org/wiki/Four-cross
Four-cross (4X), also called mountain-cross, not to be confused with fourcross, is a relatively new style of mountain bike racing where four bikers race downhill on a prepared, BMX-like track, simply trying to get down first. These bikes are generally either full suspension with 3 to 4 inches of travel, or hardtails, and typically have relatively strong frames. They run a chainguide on the front and gears on the back. They have slack head angles, short chainstays and low bottom brackets for good cornering and acceleration.
In recent years the tracks raced on have been rougher and less like those used in BMX.\nhttps://en.wikipedia.org/wiki/File:QuartoSpiel.JPG\nhttps://en.wikipedia.org/wiki/Quarto_(board_game)\nQuarto is a board game for two players invented by Swiss mathematician Blaise Müller in 1991.[1][2]\nIt is played on a 4×4 board. There are 16 unique pieces, each of which is either:\ntall or short;\nred or blue (or a different pair of colors, e.g. light- or dark-stained wood);\nsquare or circular; and\nhollow-top or solid-top.\nPlayers take turns choosing a piece which the other player must then place on the board. A player wins by placing a piece on the board which forms a horizontal, vertical, or diagonal row of four pieces, all of which have a common attribute (all short, all circular, etc.). A variant rule included in many editions gives a second way to win by placing four matching pieces in a 2x2 square.\nQuarto is distinctive in that there is only one set of common pieces, rather than a set for one player and a different set for the other. It is therefore an impartial game.\nhttps://en.wikipedia.org/wiki/Score_Four\nScore Four is a 3-D version of the abstract strategy game Connect Four. It was first sold under the name \"Score Four\" by Funtastic in 1968. Lakeside issued 4 different versions in the 1970s. Later Hasbro sold the game as \"Connect Four Advanced\" in the UK. .\nThe object of Score Four is to position four beads of the same color in a straight line on any level or any angle. As in Tic Tac Toe, Score Four strategy centers around forcing a win by making multiple threats simultaneously, while preventing the opponent from doing so.\nhttps://en.wikipedia.org/wiki/3D_tic-tac-toe\n3D tic-tac-toe, also known by the trade name Qubic, is an abstract strategy board game, generally for two players. It is similar in concept to traditional tic-tac-toe but is played in a cubical array of cells, usually 4x4x4. Players take turns placing their markers in blank cells in the array. The first player to achieve four of their own markers in a row wins. The winning row can be horizontal, vertical, or diagonal on a single board as in regular tic-tac-toe, or vertically in a column, or a diagonal line through four boards.\nPencil and paper[edit]\n3-D Tic-Tac-Toe for the Atari 2600\nLike traditional 3x3 tic-tac-toe, the game may be played with pencil and paper. A game board can easily be drawn by hand, with players using the usual \"naughts and crosses\" to mark their moves.\nIn the 1970s 3M Games (a division of 3M Corporation) sold a series of \"Paper Games\", including \"3 Dimensional Tic Tac Toe\". Buyers received a pad of 50 sheets with preprinted game boards.[1]\n\"Qubic\"[edit]\n\"Qubic\" is the brand name of equipment for the 4x4x4 game that was manufactured and marketed by Parker Brothers, starting in 1964.[2] It was reissued in 1972 with a more modern design. Both versions described the game as \"Parker Brothers 3D Tic Tac Toe Game\".\nIn the original issue the bottom level board was opaque plastic, and the upper three clear, all of simple square design. The 1972 reissue used four clear plastic boards with rounded corners. Whereas pencil and paper play almost always involves just two players, Parker Brothers' rules said that up to three players could play. The circular playing pieces resembled small poker chips in red, blue, and yellow.\nThe game is no longer manufactured.\nGame play and analysis[edit]\nThe 3x3x3 version of the game cannot end in a draw, and is easily won by the first player. 
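Returning briefly to Quarto, described a few paragraphs above: its win condition (four placed pieces sharing at least one of the four binary attributes) is easy to test if each of the 16 pieces is encoded as a 4-bit number, one bit per attribute. Below is a minimal sketch of that check in Python; the encoding and function name are illustrative assumptions, not taken from any official implementation.

```python
# Each Quarto piece is a 4-bit code: bit 0 = tall/short, bit 1 = red/blue,
# bit 2 = square/circular, bit 3 = hollow/solid. The 16 pieces are 0..15.

def shares_attribute(line):
    """Return True if all four pieces in `line` share at least one attribute.

    A shared '1' attribute survives AND-ing all four codes; a shared '0'
    attribute survives AND-ing all four complements (masked to 4 bits).
    """
    all_ones = 0b1111
    common_set = line[0] & line[1] & line[2] & line[3]
    common_clear = (~line[0] & ~line[1] & ~line[2] & ~line[3]) & all_ones
    return (common_set | common_clear) != 0

# Hypothetical example: these four pieces all have bit 0 set (all "tall"),
# so a row containing them would be a winning line.
print(shares_attribute([0b0001, 0b0101, 0b1001, 0b0011]))  # True
print(shares_attribute([0b0000, 0b0001, 0b0010, 0b1100]))  # False
```

A full checker would apply this to the 10 lines of the 4×4 board (4 rows, 4 columns, 2 diagonals), plus the nine 2×2 squares under the variant rule. Returning now to Qubic and 3D tic-tac-toe: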
The following applies to the 4x4x4 version of the game.\nThere are 76 winning lines. On each of the four 4x4 boards, or horizontal planes, there are four columns, four rows, and two diagonals, accounting for 40 lines. There are 16 vertical lines, each ascending from a cell on the bottom board through the corresponding cells on the other boards. There are eight vertically-oriented planes parallel to the sides of the boards, each of these adding two more diagonals (the horizontal and vertical lines of these planes have already been counted). Finally, there are two vertically-oriented planes that include the diagonal lines of the 4x4 boards, and each of these contributes two more diagonal lines—each of these including two corners and two internal cells.\nThe 16 cells lying on these latter four lines (that is, the eight corner cells and eight internal cells) are each involved in seven winning lines; the other 48 cells (24 face cells and 24 edge cells) are each involved in four winning lines.\nThe corner cells and the internal cells are actually equivalent via an automorphism; likewise for face and edge cells. The group of automorphisms of the game contains 192 automorphisms. It is made up of combinations of the usual rotations and reflections that reorient or reflect the cube, plus two that scramble the order of cells on each line. If a line comprises cells A, B, C and D in that order, one of these exchanges inner cells for outer ones (such as B, A, D, C) for all lines of the cube, and the other exchanges cells of either the inner or the outer cells ( A, C, B, D or equivalently D, B, C, A) for all lines of the cube. Combinations of these basic automorphisms generate the entire group of 192 as shown by R. Silver in 1967.[3]\n3D tic-tac-toe was weakly solved, meaning that the existence of a winning strategy was proven but without actually presenting such a strategy, by Eugene Mahalko in 1976.[4] He proved that in two-person play, the first player will win if there are two optimal players.\nA more complete analysis, including the announcement of a complete first-player-win strategy, was published by Oren Patashnik in 1980.[5] Patashnik used a computer-assisted proof that consumed 1500 hours of computer time. The strategy comprised move choices for 2929 difficult \"strategic\" positions, plus assurances that all other positions that could arise could be easily won with a sequence entirely made up of forcing moves. It was further asserted that the strategy had been independently verified. As computer storage became cheaper and the internet made it possible, these positions and moves were made available online.[6]\nThe game was solved again by Victor Allis using proof-number search.[7]\nA more general analysis of tic-tac-toe-like games, including Qubic, appears in Combinatorial Games: Tic-Tac-Toe Theory by József Beck.[8]\nAll of the analyses described above are for the two-player version of the game.\nComputer implementations[edit]\nSeveral computer programs that play the game against a human opponent have been written. The earliest used text or similar interaction: the human player would enter moves numerically (such as \"4 2 3\" for fourth level, second row, third column) on a console typewriter or time-sharing terminal and the program would respond similarly, as graphics displays were uncommon.\n3-D Tic-Tac-Toe\n3dtictactoe.png\nDeveloper(s) Atari, Inc\nPublisher(s) Atari Inc.\nDesigner(s) Carol Shaw\nPlatform(s) Atari 2600\nAtari 8-bit family\nRelease date(s) 1978\nWilliam Daly Jr. 
wrote and described a Qubic-playing program as part of his Master's program at the Massachusetts Institute of Technology. The program was written in assembler language for the TX-0 computer. It included lookahead to 12 moves and kept a history of previous games with each opponent, modifying its strategy according to their past behavior.[9]\nAn implementation in Fortran was written by Robert K. Louden and presented, with extensive description of its design, in his book Programming the IBM 1130 and 1800. Its strategy involved looking for combinations of one or two free cells shared among two or three rows with particular contents.[10]\nA Qubic program in a DEC dialect of BASIC appeared in 101 BASIC Computer Games by David H. Ahi.[11] Ahi said the program \"showed up,\" author unknown, on a G.E. timesharing system in 1968.\nAtari released a graphical version of the game for the Atari 2600 console and Atari 8-bit computers in 1978.[12][13] The program was written by Carol Shaw, who went on to greater fame as the creator of Activision's River Raid.[14] It uses the standard joystick controller. It can be played by two players against each other, or one player can play against the program on one of eight different difficulty settings.[15] The product code for the Atari game was CX-2618.[16]\nThree-dimensional tic-tac-toe on a 4x4x4 board (optionally 3x3x3) was included in the Microsoft Windows Entertainment Pack in the 1990s under the name TicTactics. In 2010 Microsoft made the game available on its Game Room service for its Xbox 360 console.\nA program library named Qubist, and front-end for the GTK 2 window library are a project on SourceForge.[17]\nSimilar and related games[edit]\nBesides the related tic tac toe, a popular variant is a commercial product called \"Score Four\". In Score Four the markers are small spheres with a hole drilled all the way through. The base of the game board provides 16 vertical spikes. To make a move, a player places a sphere on one of the spikes. Thus a move can only be made in a cell wherein all of the cells below it are already occupied.\nhttps://en.wikipedia.org/wiki/File:Gomoku-game-3.svg\nhttps://en.wikipedia.org/wiki/Gomoku\nGomoku is an abstract strategy board game. Also called Gobang or Five in a Row, it is traditionally played with Go pieces (black and white stones) on a go board with 19x19 (15x15) intersections;[1] however, because pieces are not moved or removed from the board, gomoku may also be played as a paper and pencil game. This game is known in several countries under different names.\nhttps://en.wikipedia.org/wiki/File:Twixtboard185132.jpg\nhttps://en.wikipedia.org/wiki/TwixT\nTwixt is played on a board comprising a 24×24 square grid of holes (minus four corner holes).\nTwixT is a two-player strategy board game, an early entrant in the 1960s 3M bookshelf game series. It became one of the most popular and enduring games in the series. It is a connection game where players alternate turns placing pegs and links on a pegboard in an attempt to link their opposite sides. The rules are simple but the strategy complex, so young children can play it, but it also appeals to adults. The game has been discontinued except in Germany.\nhttps://en.wikipedia.org/wiki/File:Crosstrack_box.jpg\nhttps://en.wikipedia.org/wiki/Crosstrack\nCrosstrack, the \"unique track switching game\", is an abstract strategy game created by Shoptaugh Games in 1994. 
Players place special track pieces onto an irregular octagon board, winning by being the first to create an unbroken path between two opposite sides.\nFour-player game[edit]\nPlayers choose one color each as well as a partner, and play as two opposing teams. Partners sit opposite each other, with play passing between teams every turn. Players have the power to rotate or relocate a team member's piece if it is already on the board, but do not have the ability to play unplayed pieces from their partners' stocks.\nhttps://en.wikipedia.org/wiki/File:1-1-4-game-example.gif\nhttps://en.wikipedia.org/wiki/Dots_(game)\nhttps://en.wikipedia.org/wiki/File:Dots-initial-fourcrosses.png\nInitial positions that are commonly used (from left to right): cross, double cross, four crosses (only center of the board is shown)\nDots (Czech: Židi, Polish: Kropki, Russian: Точки) is an abstract strategy game, played by two or more people on a sheet of squared paper. The game is superficially similar to Go, except that pieces are not taken, and the primary target of dots is capturing enemy dots by surrounding them with a continuous line of one's own dots. Once surrounded, dots are not playable.\nDots is played on a grid of some finite size, usually 39x32 (this is the size of the grid that is often encountered on a page of squared copybook in Russia) but arbitrary sizes can be used. Players take turns by placing a dot of their own color (usually red and blue) on empty intersections of the grid.\nExample game of Dots and Boxes on a 2 square × 2 square board.\nhttps://en.wikipedia.org/wiki/File:Dots-and-boxes.svg\nDots and Boxes is a pencil-and-paper game for two players (sometimes more). It was first published in the 19th century by Édouard Lucas, who called it la pipopipette.[1] It has gone by many other names,[2] including the game of dots,[3] boxes,[4] dot to dot grid,[5] and pigs in a pen.[6]\nStarting with an empty grid of dots, two players take turns adding a single horizontal or vertical line between two unjoined adjacent dots. The player who completes the fourth side of a 1×1 box earns one point and takes another turn. (A point is typically recorded by placing a mark that identifies the player in the box, such as an initial). The game ends when no more lines can be placed. The winner is the player with the most points.[2][7] The board may be of any size. When short on time, a 2×2 board (a square of 9 dots) is good for beginners.[8] A 5×5 is good for experts.[9]\nThe diagram on the right shows a game being played on the 2×2 board. The second player (B) plays the mirror image of the first player's move, hoping to divide the board into two pieces and tie the game. But the first player (A) makes a sacrifice at move 7 and B accepts the sacrifice, getting one box. However, B must now add another line, and connects the center dot to the center-right dot, causing the remaining boxes to be joined together in a chain (shown at the end of move 8). With A's next move, player A gets them all and wins 3–1.\nThe double-cross strategy: faced with position 1, a novice player would create position 2 and lose. 
An experienced player would create position 3 and win.\nhttps://en.wikipedia.org/wiki/File:Dots-and-boxes-chains.png\nDots and Boxes need not be played on a rectangular grid – it can be played on a triangular grid or a hexagonal grid.[2] There is also a variant in Bolivia where it is played in a Chakana or Inca Cross grid, which adds more complications to the game.[citation needed]\nDots and Boxes has a dual graph form called \"Strings-and-Coins\". This game is played on a network of coins (vertices) joined by strings (edges). Players take turns cutting a string. When a cut leaves a coin with no strings, the player \"pockets\" the coin and takes another turn. The winner is the player who pockets the most coins. Strings-and-Coins can be played on an arbitrary graph.[2]\nA variant played in Poland allows a player to claim a region of several squares as soon as its boundary is completed.[citation needed] In the Netherlands, it is called \"kamertje verhuren\" (\"Rent-a-Room\") and the outer border already has lines.[citation needed] In analyses of Dots and Boxes, starting with outer lines is called a Swedish board while the standard version is called an American board. An intermediate version with the outer left and bottom sides starting with lines is called an Icelandic board.[11]\nhttps://en.wikipedia.org/wiki/Onyx_(game)\nOnyx is a two-player abstract strategy board game invented by Larry Back in 1995. The game features a rule for performing captures, making Onyx unique among connection games.\nThe initial setup has four black pieces and four white pieces pre-placed (see illustration). Black moves first by placing a black piece on any empty point of the board. White follows suit.[note 1] Turns continue to alternate. A piece can be place on the midpoint of a square only if all four corners of that square are currently unoccupied. Once placed, pieces do not move. Captured pieces are immediately removed from the game.\nhttps://en.wikipedia.org/wiki/File:QT3_animated_opening.gif\nhttps://en.wikipedia.org/wiki/Quantum_tic-tac-toe\nQuantum tic-tac-toe is a \"quantum generalization\" of tic-tac-toe in which the players' moves are \"superpositions\" of plays in the classical game. The game was invented by Allan Goff of Novatia Labs, who describes it as \"a way of introducing quantum physics without mathematics\", and offering \"a conceptual foundation for understanding the meaning of quantum mechanics\".[1]\nhttps://en.wikipedia.org/wiki/Ultimate_tic-tac-toe\nUltimate tic-tac-toe also known as super tic-tac-toe or meta tic-tac-toe is a board game composed of nine tic-tac-toe boards arranged in a 3-by-3 grid.[1][2] Players take turns playing in the smaller tic-tac-toe boards until one of them wins in the larger tic-tac-toe board. Strategy in this game is much more conceptually difficult, and has proven more challenging for computers.[3]\nhttps://en.wikipedia.org/wiki/Wild_tic-tac-toe\nWild tic-tac-toe is a game similar to Tic-tac-toe. However, in this game players can choose to play as X or O.[1][2] This game can also be played in its misere form where if a player creates a three-in-a-row of marks, that player loses the game.[3]\nhttps://en.wikipedia.org/wiki/Tic-tac-toe_variants\n3-dimensional tic-tac-toe on a 3×3×3 board. In this game, the first player has an easy win by playing in the centre if 2 people are playing.\nOne can play on a board of 4x4 squares, winning in several ways. Winning can include: 4 in a straight line, 4 in a diagonal line, 4 in a diamond, or 4 to make a square. 
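The 4x4 variant just described is vague about what counts as "a diamond" or "a square"; one common reading is that a diamond is four cells arranged as a rotated square around a centre point and a square is any 2x2 block. The Python sketch below enumerates every winning quadruple under that assumption (the indexing and shape definitions are mine, not from a published rule set).

```python
# 4x4 board cells are indexed (row, col) with row, col in 0..3.

def winning_quads():
    """Return every set of four cells that wins under one reading of the rules:
    straight lines, diagonals, 2x2 squares, and 'diamonds' (rotated squares)."""
    quads = []
    # 4 rows and 4 columns
    for i in range(4):
        quads.append([(i, c) for c in range(4)])
        quads.append([(r, i) for r in range(4)])
    # 2 main diagonals
    quads.append([(i, i) for i in range(4)])
    quads.append([(i, 3 - i) for i in range(4)])
    # 9 two-by-two squares
    for r in range(3):
        for c in range(3):
            quads.append([(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)])
    # 4 diamonds: the cells above, left of, right of and below a centre point
    for r in range(1, 3):
        for c in range(1, 3):
            quads.append([(r - 1, c), (r, c - 1), (r, c + 1), (r + 1, c)])
    return quads

def is_win(board, mark):
    """board maps (row, col) to 'X', 'O' or None; True if mark owns a winning quad."""
    return any(all(board.get(cell) == mark for cell in quad)
               for quad in winning_quads())

print(len(winning_quads()))  # 23 winning quadruples under this interpretation
```

If "a square" is also taken to include larger squares formed by four corner cells, the count grows; the description above leaves that open.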
Another variant, Qubic, is played on a 4×4×4 board; it was solved by Oren Patashnik in 1980 (the first player can force a win).[9] Higher dimensional variations are also possible.[10]
Tic-tac-toe variants
A complete game of Notakto, a misère variant of the game
Tic-tac-toe is an instance of an m,n,k-game, where two players alternate taking turns on an m×n board until one of them gets k in a row.[1] Harary's generalized tic-tac-toe is an even broader generalization. The game can also be generalized as an n^d game.[2]
Many board games share the element of trying to be the first to get n-in-a-row, including Three Men's Morris, Nine Men's Morris, pente, gomoku, Qubic, Connect Four, Quarto, Gobblet, Order and Chaos, Toss Across, and Mojo.
Variants of tic-tac-toe date back several millennia.[3]
Historic
An early variation of tic-tac-toe was played in the Roman Empire, around the first century BC.[4] It was called Terni Lapilli and, instead of having any number of pieces, each player only had three, so they had to move them around to empty spaces to keep playing. The game's grid markings have been found chalked all over Rome.[5] However, according to Claudia Zaslavsky's book Tic Tac Toe: And Other Three-In-A Row Games from Ancient Egypt to the Modern Computer, tic-tac-toe could be traced back to ancient Egypt.[6][7] Another closely related ancient game is Three Men's Morris, which is also played on a simple grid and requires three pieces in a row to finish.[8]
Misère games
In misère tic-tac-toe, the player wins if the opponent gets n in a row.[11][12][13][14] This game is also known as avoidance tic tac toe,[12] toe-tac-tic,[12] inverse tic tac toe,[13] or reverse tic tac toe.[14] A 3×3 game is a draw. More generally, the first player can draw or win on any board (of any dimension) whose side length is odd, by playing first in the central cell and then mirroring the opponent's moves.[10][13]
Notakto is a misère and impartial form of tic tac toe. This means that, unlike in misère tic tac toe, in Notakto both players play as the same symbol, X.[15] It can also be played on one or multiple boards.[16]
Variants with bigger boards
The game Quixo is played on a 5 by 5 board of cubes with two players or teams.[17] On a player's turn, they select a blank cube or a cube with their symbol on it that is at the edge of the board. If a blank cube was selected, the cube is turned to be the player's symbol (either an X or an O). The game ends when one player gets 5 in a row.[17][18][19][20]
Isomorphic games
There is a game that is isomorphic to tic-tac-toe, but on the surface appears completely different. It is called Pick15[21] or Number Scrabble.[22] Two players in turn say a number between one and nine. A particular number may not be repeated.
The game is won by the player who has said three numbers whose sum is 15.[21][23] If all the numbers are used and no one gets three numbers that add up to 15 then the game is a draw.[21] Plotting these numbers on a 3×3 magic square shows that the game exactly corresponds with tic-tac-toe, since three numbers will be arranged in a straight line if and only if they total 15.[24]\neat bee less →e\nair bits lip →i\nsoda book lot →o\n↙\n↘\nAnother isomorphic game uses a list of nine carefully chosen words, for instance \"eat\", \"bee\", \"less\", \"air\", \"bits\", \"lip\", \"soda\", \"book\", and \"lot\". Each player picks one word in turn and to win, a player must select three words with the same letter. The words may be plotted on a tic-tac-toe grid in such a way that a three in a row line wins.[25]\nNumerical Tic Tac Toe is a variation invented by the mathematician Ronald Graham.[26] The numbers 1 to 9 are used in this game. The first player plays with the odd numbers, the second player plays with the even numbers. All numbers can be used only once. The player who puts down 15 points in a line wins (sum of 3 numbers).[27] This game can be generalized to a n by n board.[27]\nOther variants[edit]\nA complete game of Wild tic-tac-toe.\nIn the 1970s, there was a two player game made by Tri-ang Toys & Games called Check Lines, in which the board consisted of eleven holes arranged in a geometrical pattern of twelve straight lines each containing three of the holes. Each player had exactly five tokens and played in turn placing one token in any of the holes. The winner was the first player whose tokens were arranged in two lines of three (which by definition were intersecting lines). If neither player had won by the tenth turn, subsequent turns consisted of moving one of one's own tokens to the remaining empty hole, with the constraint that this move could only be from an adjacent hole.[28]\nQuantum tic tac toe allows players to place a quantum superposition of numbers on the board, i.e. the players' moves are \"superpositions\" of plays in the original classical game. This variation was invented by Allan Goff of Novatia Labs.[29]\nIn wild tic-tac-toe, players can choose to place either X or O on each move.[7][30][31][32] It can be played as a normal game where the player who makes three in a row wins or a misere game where they would lose.[7] This game is also called your choice tic-tac-toe.[33]\nIn the game SOS, the players on each turn choose to play a \"S\" or an \"O\" in an empty square.[34] If a player creates the sequence, SOS vertically, horizontally or diagonally they get a point and also take another turn.[35] The player with the most points (SOSs) is the winner.[34][35]\nA completed game of Treblecross\nIn Treblecross, both players play with the same symbol(a X[13] or black chip[36]). The game is played on a 1 by n board with k equal to 3.[13] The player who creates a three in a row of X's (or black chips) wins the game.[13][36]\nIn revenge n-in a row the player who creates a n-in a row wins unless the opponent can create a n-in a row in the next move where they lose.[37][13]\nIn the game random turn tic-tac-toe, a coin flip determines who's turn it is.[7]\nIn quick-tac-toe, the players can play their mark in any squares they want provided that all the marks are in the same vertical or horizontal row. 
The winner is the player who places the last mark.[38]\nhttps://en.wikipedia.org/wiki/Notakto\nNotakto is a tic-tac-toe variant, also known as neutral or impartial tic-tac-toe.[1][2] The game is a combination of the games tic-tac-toe and Nim,[1][3] played across one or several boards with both of the players playing the same piece (an \"X\" or cross). The game ends when all the boards contain a three-in-a-row of Xs,[4][5] at which point the player to have made the last move loses the game.[6] However, in this game, unlike in the game tic-tac-toe, there will always be a player who wins any game of Notakto.[7]\nhttps://en.wikipedia.org/wiki/File:Renju.jpg\nhttps://en.wikipedia.org/wiki/Renju\nPlayed on quadrants.\nRenju (Japanese: 連珠) is the professional variant of Gomoku. It was named Renju by Japanese journalist Ruikou Kuroiwa (黒岩涙香) on December 6, 1899 in a Japanese newspaper Yorozu chouhou (萬朝報). The game is played with black and white stones on a 15×15 gridded Go board.\nhttps://en.wikipedia.org/wiki/Irensei\nQuadrant board.\nIrensei (Japanese: 囲連星) is an abstract strategy board game. It is traditionally played with Go pieces (black and white stones) on a Go board (19x19 intersections), but any equipment with which Go can be played is also suitable for Irensei.\nhttps://en.wikipedia.org/wiki/Reversi\nReversi is a strategy board game for two players, played on an 8×8 uncheckered board. There are sixty-four identical game pieces called disks (often spelled \"discs\"), which are light on one side and dark on the other. Players take turns placing disks on the board with their assigned color facing up. During a play, any disks of the opponent's color that are in a straight line and bounded by the disk just placed and another disk of the current player's color are turned over to the current player's color.\nhttps://en.wikipedia.org/…/File:Nintendo-TV-Game-Computer.j…\nOthello was one of Nintendo's first arcade games, and was later ported to a dedicated home game console in 1980.\nThe historical version of Reversi starts with an empty board, and the first two moves by each player are in the four central squares of the board. The players place their disks alternately with their color facing up and no captures are made. A player may choose to not play both pieces on the same diagonal, different from the standard Othello opening. It is also possible to play variants of Reversi and Othello wherein the second player's second move may or must flip one of the opposite-colored disks (as variants closest to the normal games).\nFor the specific game of Othello (as technically differing from the historical Reversi), the rules state that the game begins with four disks placed in a square in the middle of the grid, two facing white side up, two pieces with the dark side up, with same-colored disks on a diagonal with each other. Convention has initial board position such that the disks with dark side up are to the north-east and south-west (from both players' perspectives), though this is only marginally meaningful to play (where opening memorization is an issue, some players may benefit from consistency on this). If the disks with dark side up are to the north-west and south-east, the board may be rotated by 90° clockwise or counterclockwise. The dark player moves first.\nhttps://en.wikipedia.org/wiki/File:Reversi_d44.png\nQuadrants\nhttps://en.wikipedia.org/wiki/Pegity\nPegity is a board game similar to Gomoku and tic-tac-toe, and is intended for two to four players. 
Parker Brothers introduced the game in 1925,[1] and continued to produce it through the 1960s. The box includes wooden pegs in four colors and a cardboard game board divided into a 16 by 16 grid. The object of the game is for a player to place five pegs of a single color in a row, vertically, horizontally, or diagonally. Its instructions also included patterns for creating designs on the game board as an alternative to playing the game.
16 is the number of squares of the quadrant model – four colors
four power stones
quadrant board
https://en.wikipedia.org/wiki/Pente
Pente is a strategy board game for two or more players, created in 1977 by Gary Gabrel, a dishwasher at Hideaway Pizza, in Stillwater, Oklahoma.[1] Customers played Pente at Hideaway Pizza on checkerboard tablecloths while waiting for their orders to arrive. Thirty years later, patrons are still playing Pente at Hideaway Pizza, although now with roll-up Pente boards.[citation needed] Pente is based on the Japanese game ninuki-renju, a variant of renju or gomoku that is played on a Go board of 19x19 intersections with white and black stones. Like ninuki-renju, Pente allows captures, but Pente added a new opening rule. In the nineteenth century, gomoku was introduced to Britain where it was known as "Go Bang" (borrowed from Japanese "goban" 碁盤, meaning "go board").[2]
Pente is a registered trademark of Hasbro for strategy game equipment. Pente (πέντε) is the number five in Greek.
Hasbro ceased distribution of Pente in 1993. It later licensed the name to Winning Moves, a classic games publisher that resurrected the game in 2004. The 2004 version includes 4 extra stones, called power stones, that can be played in the Pente Plus version.
https://en.wikipedia.org/wiki/Connect_Four
Connect Four (also known as Captain's Mistress, Four Up, Plot Four, Find Four, Fourplay[citation needed], Four in a Row, Four in a Line and Gravitrips (in the Soviet Union)) is a two-player connection game in which the players first choose a color and then take turns dropping colored discs from the top into a seven-column, six-row vertically suspended grid. The pieces fall straight down, occupying the next available space within the column. The objective of the game is to be the first to form a horizontal, vertical, or diagonal line of four of one's own discs. Connect Four is a solved game: the first player can always win by playing the right moves. It was designed by Howard Wexler[1] and Ned Strongin,[2] is published by Milton Bradley / Hasbro, and is an abstract strategy game for two players aged 6 and up, with a playing time of roughly 1 to 10 minutes.
The game was first sold under the famous Connect Four trademark by Milton Bradley in February 1974.
Gameplay
Object: Connect four of your checkers in a row while preventing your opponent from doing the same.
— Milton Bradley, Connect Four "Pretty Sneaky, Sis" television commercial, 1977
The animation demonstrates Connect Four gameplay where the first player begins by dropping his/her yellow disc into the center column of the game board. The two players then alternate turns dropping one of their discs at a time into an unfilled column, until the second player, with red discs, achieves four discs in a row, diagonally, and wins.
If the game board fills before either player achieves four in a row, then the game is a draw.\nMathematical solution[edit]\nConnect Four is a two-player game with \"perfect information\". This term describes games where one player at a time plays, players have all the information about moves that have taken place, and all moves that can take place, for a given game state. Connect Four also belongs to the classification of an adversarial, zero-sum game, since a player's advantage is an opponent's disadvantage.\nOne measure of complexity of the Connect Four game is the number of possible games board positions. For classic Connect Four played on 6 high, 7 wide grid, there are 4,531,985,219,092 positions[3] for all game boards populated with 0 to 42 pieces.\nThe game was first solved by James Dow Allen (October 1, 1988), and independently by Victor Allis (October 16, 1988).[4] Allis describes a knowledge based approach,[5] with nine strategies, as a solution for Connect Four. Allen also describes winning strategies[6][7] in his analysis of the game. At the time of the initial solutions for Connect Four, brute force analysis was not deemed feasible given the game's complexity and the computer technology available at the time.\nConnect Four has since been solved with brute force methods beginning with John Tromp's work in compiling an 8-ply database[4][8] (Feb 4, 1995). The artificial intelligence algorithms able to strongly solve Connect Four are minimax or negamax, with optimizations that include alpha-beta pruning, move ordering, and transposition tables. The code for solving Connect Four with these methods is also the basis for the Fhourstones[9] integer performance benchmark.\nThe solved conclusion for Connect Four is first player win. With perfect play, the first player can force a win,[4][5][6] on or before the 41st move[10] (ply) by starting in the middle column. The game is a theoretical draw when the first player starts in the columns adjacent to the center. For the edges of the game board, column 1 and 2 on left (or column 7 and 6 on right), the exact move-value score for first player start is loss on the 40th move,[10] and loss on the 42nd move,[10] respectively. In other words, by starting with the four outer columns, the first player allows the second player to force a win.\nRule variations[edit]\nThere are many variations of Connect Four with differing game board sizes, board arrangements, game pieces, and/or gameplay rules. Many variations are popular with game theory and artificial intelligence research, rather than with physical game boards and gameplay by persons.\nThe most commonly used Connect Four board size is 7 columns × 6 rows. Size variations include 8×7, 9×7, 10×7, 8×8, and Infinite Connect-Four.[11] Alternate board arrangements include Cylinder-Infinite Connect-Four.[12] One board variation available as a physical game is Hasbro's Connect 4x4.\nA travel version of the Milton Bradley game.\nSeveral versions of Hasbro's Connect Four physical gameboard make it easy to remove game pieces from the bottom one at a time. Along with traditional gameplay, this feature allows for variations of the game.[13]\nPop Out[edit]\nPop Out starts the same as traditional gameplay, with an empty board and players alternating turns placing their own colored discs into the board. During each turn, a player can either add another disc from the top or, if one has any discs of his or her own color on the bottom row, remove (or \"pop out\") a disc of one's own color from the bottom. 
Popping a disc out from the bottom drops every disc above it down one space, changing their relationship with the rest of the board and changing the possibilities for a connection. The first player to connect four of their discs horizontally, vertically, or diagonally wins the game.\nPop 10[edit]\nBefore play begins, Pop 10 is set up differently from the traditional game. Taking turns, each player places their opponent's color discs into the slots filling up only the bottom row, then moving on to the next row until it is filled and so forth until all rows have been filled.\nGameplay works by players taking turns removing a disc of one's own color through the bottom of the board. If the disc that was removed was part of a four-disc connection at the time of its removal, the player sets it aside out of play and immediately takes another turn. If it was not part of a \"connect four\", then it must be placed back on the board through a slot at the top into any open space and the turn ends, switching to the other player. The first player to set aside ten discs of his or her color wins the game.\n5-in-a-Row[edit]\nThe 5-in-a-Row variation for Connect Four is a game played on a 6 high, 9 wide, grid. Hasbro adds two additional board columns, already filled with player pieces in an alternating pattern, to the left and right sides of their standard 6 by 7 game board. The game plays similarly to the original Connect Four, except players must now get five pieces in a row to win. Notice this is still a 42-ply game, since the two new columns added to the game represent twelve game pieces already played, before the start of a game.\nPower Up[edit]\nIn this variation of Connect Four, players begin a game with one or more specially marked, \"Power Checkers\" game pieces, which each player may choose to play once per game. When playing a piece marked with an anvil icon, for example, the player may immediately pop out all pieces below it, leaving the anvil piece at the bottom row of the game board. Other marked game pieces include one with a wall icon, allowing a player to play a second consecutive non winning turn with an unmarked piece, a \"×2\" icon, allowing for an unrestricted second turn with an unmarked piece, and a bomb icon, allowing a player to immediately pop out an opponent's piece.\nOther versions[edit]\nHasbro also produce various sizes of Giant Connect Four, suitable for outdoor use. The largest is built from weather-resistant wood, and measures 120 cm in both width and height. Connect Four was released for the Microvision video game console in 1979, developed by Robert Hoffberg. It was also released for the Texas Instruments 99/4 computer the same year.\nWith the proliferation of mobile devices, Connect Four has regained popularity as a game that can be played quickly and against another person over an Internet connection.\nIn 2015 Winning Moves published Connect 4 Twist & Turn. This game variant features a game tower instead of the flat game grid. The tower has 5 rings that twist independently. Game play is similar to standard Connect 4 where players try to get 4 in a row of their own colored discs. However, with Twist & Turn, players have the choice to twist a ring after they have played a piece. 
It adds a subtle layer of strategy to game play.\nPopular culture[edit]\nBroadcaster and writer Stuart Maconie—while working at the NME—started a rumour that Connect 4 was invented by David Bowie, which became an urban myth.[14]\nOn The Hub's game show Family Game Night, there is a game under the name \"Connect 4 Basketball\" in which teams use colored balls.\nDuring the second season of the History Channel competition series Top Shot, one challenge required teams to throw tomahawks at a square grid of 36 targets. The first team to hit four targets in a continuous line won the challenge.\nhttps://en.wikipedia.org/wiki/Family_Game_Night_(TV_series)#Connect_4_Basketball\nConnect 4 Basketball[edit]\nIn this variation on the vertical checkers game Connect Four, the checkers are replaced with red and yellow balls. Family members take turns in family order throwing those balls into baskets on a 7x6 board, in order to get 4 in a row in any direction.\nIn an early episode, players from both teams shot their red and yellow balls at the same time. The first team to make 4 in a row won one round; the first to win two rounds won the game.\nhttps://en.wikipedia.org/wiki/Connect6\nConnect6 (Chinese: 六子棋; Pinyin: liùzǐqí; Chinese: 連六棋;Japanese: 六目並べ; Korean: 육목) introduced in 2003 by Professor I-Chen Wu at Department of Computer Science and Information Engineering, National Chiao Tung University, is a two-player strategy game similar to Gomoku.[1]\nTwo players, Black and White, alternately place two stones of their own colour, black and white respectively, on empty intersections of a Go-like board, except that Black (the first player) places one stone only for the first move. The one who gets six or more stones in a row (horizontally, vertically or diagonally) first wins the game.\nIt is a two dimensional double tetrahedron Merkaba\nhttp://star-of-david.blogspot.com/2006/05/synonyms.html\nChinese checkers is a board game for two to six people. Each player tries to jump his marbles from one point of a six-pointed star shaped board to the opposite point. It is not a Chinese game – it got its name in the United States to make it sound more exotic.\nhttps://en.wikipedia.org/wiki/Four_go_houses\nThe Go Board is featured in the cult classic Pi as the microcosm of existence\nIn the history of Go in Japan, the Four Go houses were the four academies of Go instituted, supported, and controlled by the state, at the beginning of the Tokugawa shogunate. At roughly the same time shogi was organised into three houses. Here \"house\" implies institution run on the recognised lines of the iemoto system common in all Japanese traditional arts. In particular the house head had, in three of the four cases, a name handed down: Inoue Inseki, Yasui Senkaku, Hayashi Monnyu. References to these names therefore mean to the contemporary head of house.\nThe four academies were the Honinbo Go house, Hayashi Go house, Inoue Go house and Yasui house. Theoretically these were on a par, and competed in the official castle games called oshirogo.\nhttps://en.wikipedia.org/wiki/Go_(game)\nhttps://en.wikipedia.org/wiki/File:Golibs.png\nGo (traditional Chinese: 圍棋; simplified Chinese: 围棋; pinyin: wéiqí; Japanese: 囲碁; rōmaji: igo[nb 2]; Korean: 바둑; romaja: baduk[nb 3]; literally: \"encircling game\") is a board game involving two players, that originated in ancient China more than 2,500 years ago. It was considered one of the four essential arts of a cultured Chinese scholar in antiquity. 
The earliest written reference to the game is generally recognized as the historical annal Zuo Zhuan[2][3] (c. 4th century BC).[4]
The Go board was seen by the Japanese as a microcosm of the cosmos.
The game board is made up of quadrants.
Players from the four schools (Honinbo, Yasui, Inoue and Hayashi) competed in the annual castle games, played in the presence of the shogun.
The four points around a piece in Go are called its "four liberties"; when one is filled the stone is left with three liberties, and so on.
The four liberties (adjacent empty points) of a single black stone (A), as White reduces those liberties by one (B, C, and D). When Black has only one liberty left (D), that stone is "in atari".[13] White may capture that stone (remove from board) with a play on its last liberty (at D-1).
Checkers is played on a quadrant board
https://en.wikipedia.org/wiki/Draughts
Draughts (UK /ˈdrɑːfts/) or checkers[1] (American English) is a group of strategy board games for two players which involve diagonal moves of uniform game pieces and mandatory captures by jumping over opponent pieces. Draughts developed from alquerque.[2] The name derives from the verb to draw or to move.
Checkers boards are made up of different color quadrants.
A similar game has been played for thousands of years.[3] A board resembling a draughts board was found in Ur dating from 3000 BC.[9] In the British Museum are specimens of ancient Egyptian checkerboards, found with their pieces in burial chambers, and the game was played by Queen Hatasu.[3][10] Plato mentioned a game, πεττεία or petteia, as being of Egyptian origin,[10] and Homer also mentions it.[10] The method of capture was placing two pieces on either side of the opponent's piece. It was said to have been played during the Trojan War.[11][12] The Romans played a derivation of petteia called latrunculi, or the game of the Little Soldiers.[10][13]
Alquerque board and setup
Alquerque
An Arabic game called Quirkat or al-qirq, with similar play to modern draughts, was played on a 5×5 board. It is mentioned in the 10th century work Kitab al-Aghani.[9] Al qirq was also the name for the game that is now called Nine Men's Morris.[14] Al qirq was brought to Spain by the Moors,[15] where it became known as Alquerque, the Spanish derivation of the Arabic name. The rules are given in the 13th century book Libro de los juegos.[9] In about 1100, probably in the south of France, the game of Alquerque was adapted using backgammon pieces on a chessboard.[16] Each piece was called a "fers", the same name as the chess queen, as the move of the two pieces was the same at the time.[citation needed]
It is a quadrant model with 16 squares
https://en.wikipedia.org/wiki/File:Alquerque_board_at_starting_position_2.svg
Turkish draughts (also known as Dama) is a variant of draughts (checkers) played in Turkey, Egypt, Kuwait, Lebanon, Syria, Jordan and several other locations in the Middle East.
On an 8×8 board, 16 men are lined up on each side, in two rows. The back rows are vacant. A traditional gameboard is mono-coloured. White moves first.
https://en.wikipedia.org/wiki/File:TurkishDraughts_(trad).png
https://en.wikipedia.org/wiki/Turkish_draughts
16 squares of quadrant model
Backgammon is one of the most popular games of all time. The game consists of four quadrants.
I have watched numerous lectures of different Backgammon players talk about strategies for each of the quadrants\nhttps://www.youtube.com/watch?v=NSI214-cqM8\nQuadrant backgammon video by professional\nBackgammon Rules and Instructions : The Last Quadrant in Backgammon\nhttp://www.bkgm.com/gloss/pics/quadrant.gif\nhttp://www.bkgm.com/gloss/lookup.cgi?quadrant\nOne quarter of the playing area on a backgammon board. The first quadrant comprises a player's points 1 to 6, the second quadrant points 7 to 12, the third quadrant points 13 to 18, and the fourth quadrant points 19 to 24.\n64 dice. 64 is four 16s, four quadrant models\nhttp://www.bkgm.com/rules.html\nBackgammon is a game for two players, played on a board consisting of twenty-four narrow triangles called points. The triangles alternate in color and are grouped into four quadrants of six triangles each. The quadrants are referred to as a player's home board and outer board, and the opponent's home board and outer board. The home and outer boards are separated from each other by a ridge down the center of the board called the bar.\nFigure 1. A board with the checkers in their initial position.\nAn alternate arrangement is the reverse of the one shown here, with the home board on the left and the outer board on the right.\nThe points are numbered for either player starting in that player's home board. The outermost point is the twenty-four point, which is also the opponent's one point. Each player has fifteen checkers of his own color. The initial arrangement of checkers is: two on each player's twenty-four point, five on each player's thirteen point, three on each player's eight point, and five on each player's six point.\nBoth players have their own pair of dice and a dice cup used for shaking. A doubling cube, with the numerals 2, 4, 8, 16, 32, and 64 on its faces, is used to keep track of the current stake of the game.\nChess comes from the Indian game chatarang- which means \"four arms of the army\"\nhttp://www.houseofchess.com/chess_history_rules.php\nThe earliest evidence of chess is found in the nearby Sassanid Persia around 600, where the game came to be known by the name chatrang. Chatrang is evoked in three epic romances written in Pahlavi (Middle Persian). Chatrang was taken up by the Muslim world after the Islamic conquest of Persia (633–44), where it was then named shatranj, with the pieces largely retaining their Persian names. In Spanish \"shatranj\" was rendered as ajedrez (\"al-shatranj\"), in Portuguese as xadrez, and in Greek as ζατρίκιον (zatrikion, which comes directly from the Persian chatrang),[33] but in the rest of Europe it was replaced by versions of the Persian shāh (\"king\"), which was familiar as an exclamation and became the English words \"check\" and \"chess\"\nThe Arabic word shatranj is derived from the Sanskrit chaturanga (catuḥ: \"four\"; anga: \"arm\").\nThe rules of chaturanga seen in India today have enormous variation, but all involve four branches (angas) of the army: the horse, the elephant (bishop), the chariot (rook) and the foot soldier (pawn), played on an 8×8 board.\nThe game was played within quadrants.\nhttps://www.chess.com/forum/view/general/chess-is-an-alien-creation-no-human-claimed-the-invention\nChess is believed to have originated in Eastern India, c. 
280 – 550,[28] in the Gupta Empire,[29][30][31][32] where its early form in the 6th century was known as chaturaṅga (Sanskrit: चतुरङ्ग), literally four divisions [of the military] – infantry, cavalry, elephants, and chariotry, represented by the pieces that would evolve into the modern pawn, knight, bishop, and rook, respectively.\nhttps://en.wikipedia.org/wiki/Arimaa\nArimaa is played on an 8×8 board with four trap squares\nhttps://en.wikipedia.org/wiki/File:Arimaa_csb74.png\nPlayed on quadrant grid\nhttps://en.wikipedia.org/wiki/Powerlifting\nA powerlifting competition takes place as follows:\nEach competitor is allowed three to four attempts on each of the squat, bench press, and deadlift, depending on their standing and the organization they are lifting in. The lifter's best valid attempt on each lift counts toward the competition total. For each weightclass, the lifter with the highest total wins. If two or more lifters achieve the same total, the lighter lifter ranks above the heavier lifter.\nhttps://en.wikipedia.org/wiki/Finswimming\nFinswimming is an underwater sport consisting of four techniques involving swimming with the use of fins either on the water's surface using a snorkel with either monofins or bifins (i.e. one fin for each foot) or underwater with monofin either by holding one's breath or using open circuit scuba diving equipment.\nhttps://en.wikipedia.org/wiki/Surf_kayaking\nThere are a number of speciality surf kayak designs available. They are often equipped with up to four fins with a three fin thruster set up being the most common.\nhttps://en.wikipedia.org/wiki/Butterfly_stroke\nThere are four different strokes in swim racing. Butterfly is one of them.\nThere are four styles of butterfly stroke.\nTwo main styles of butterfly stroke seen today are: \"arm pull up simultaneous with dolphin kick\" and \"arm pull down simultaneous with dolphin kick\".[14]\n\"Arm pull up simultaneous with dolphin kick\": After head goes underwater, both arms go underwater but still higher than head. After first dolphin kick, pull both arms immediately with downward motion. While pulling arms, legs are relaxed, both knees and waist are slightly bent to prepare dolphin kick. After arms push water backward, pull arms up simultaneous with dolphin kick. In this style, turning point from drowning to floating is at the time of downward arm motion.\n\"Arm pull down simultaneous with dolphin kick\": After head goes underwater, both arms go underwater until lower than head. After first dolphin kick, raise both arms with relax. While rising arms, bend both knees and waist to send body back to the surface and prepare dolphin kick. Pull both arms downward while executing dolphin kick. After this sequence, immediately push the water backward. In this style, turning point from drowning to floating is at the time of waist bend.\nTwo additional styles of butterfly stroke is similar with two styles above, but without \"second\" dolphin kick [15] in order to save energy and be more relaxed.\nhttps://en.wikipedia.org/wiki/Wallball\nWallball is a type of school yard game similar to butts up, aces-kings-queens, Chinese handball, and American handball (American handball is sometimes actually referred to as wallball). The sport was played by a few schools in the San Francisco Bay Area and New York City, then began gaining much popularity, resulting in a popular worldwide sport. 
Wallball is now played globally with the international federation, Wall Ball International, promoting the game.[1] The game requires the ball to be hit to the floor before hitting the wall, but in other respects is similar to squash. It can be played as a singles, doubles or elimination game.\nWallball is derived from many New York City street games played by young people, often involving the Spalding hi-bounce balls created in 1949.\nThere are four main types of wallball: regular, teams, line-up, and random. Regular is where the players line up outside of the court on a bench (which is usually present) or standing if there is no bench, and two players enter the court. They play, and the losing player goes to the end of the line, and the next person comes up. Teams is the same as regular, except that there are four (or more, but even) players that come up (grouped in equal teams).\nThe game has four stages. The fourth is different\nhttps://en.wikipedia.org/wiki/Donkey_Kong_(video_game)\nFollowing 1980's Space Panic, Donkey Kong is one of the earliest examples of the platform game genre[10]:94[11] even prior to the term being coined; the US gaming press used climbing game for titles with platforms and ladders.[12] As the first platform game to feature jumping, Donkey Kong requires the player to jump between gaps and over obstacles or approaching enemies, setting the template for the future of the platform genre.[13] With its four unique stages, Donkey Kong was the most complex arcade game at the time of its release, and one of the first arcade games to feature multiple stages, following 1980's Phoenix and 1981's Gorf and Scramble:66[14]\nCompetitive video gamers and referees stress the game's high level of difficulty compared to other classic arcade games. Winning the game requires patience and the ability to accurately time Mario's ascent.[15]:82 In addition to presenting the goal of saving Pauline, the game also gives the player a score. Points are awarded for the following: leaping over obstacles; destroying objects with a hammer power-up; collecting items such as hats, parasols, and purses (presumably belonging to Pauline); removing rivets from platforms; and completing each stage (determined by a steadily decreasing bonus counter). The player typically receives three lives with a bonus awarded for the first 7,000 points, although this can be modified via the game's built in DIP switches. One life is lost whenever Mario touches Donkey Kong or any enemy object, falls too far through a gap or off the end of a platform, or lets the bonus counter reach zero.\nThe game is divided into four different single-screen stages. Each represents 25 meters of the structure Donkey Kong has climbed, one stage being 25 meters higher than the previous. The final stage occurs at 100 meters. Stage one involves Mario scaling a construction site made of crooked girders and ladders while jumping over or hammering barrels and oil drums tossed by Donkey Kong. Stage two involves climbing a five-story structure of conveyor belts, each of which transport cement pans. The third stage involves the player riding elevators while avoiding bouncing springs. The final stage involves Mario removing eight rivets which support Donkey Kong. Removing the final rivet causes Donkey Kong to fall and the hero to be reunited with Pauline.[16] These four stages combine to form a level.\nUpon completion of the fourth stage, the level then increments, and the game repeats the stages with progressive difficulty. 
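The stage structure just described is essentially a nested counter: four single-screen stages at 25 m, 50 m, 75 m and 100 m make one level, and clearing the 100 m stage increments the level and restarts the cycle (the per-level difficulty tuning is described next). A toy sketch of that loop, with the stage labels and function name chosen only for illustration:

```python
STAGES = ["girders (25 m)", "conveyors (50 m)", "elevators (75 m)", "rivets (100 m)"]

def playthrough(levels: int):
    """Yield (level, stage) pairs in the order described above:
    four stages per level, then the level increments and the cycle repeats."""
    for level in range(1, levels + 1):
        for stage in STAGES:
            yield level, stage

for level, stage in playthrough(2):
    print(level, stage)
# 1 girders (25 m) ... 1 rivets (100 m), then 2 girders (25 m) ... and so on
```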
For example, Donkey Kong begins to hurl barrels faster and sometimes diagonally, and fireballs get speedier. The victory music alternates between levels 1 and 2. The 22nd level is colloquially known as the kill screen, due to an error in the game's programming that kills Mario after a few seconds, effectively ending the game.[16]\nIn January 1983, the 1982 Arcade Awards gave it the Best Single-player video game award and the Certificate of Merit as runner-up for Coin-Op Game of the Year.[39] In September 1982, Arcade Express reviewed the ColecoVision port and scored it 9 out of 10.[40] Computer and Video Games reviewed the ColecoVision port in its September 1984 issue and scored it 4 out of 4 in all four categories of Action, Graphics, Addiction and Theme.[41]\nA complete remake of the original arcade game on the Game Boy, named Donkey Kong or Donkey Kong '94 contains levels from both the original Donkey Kong and Donkey Kong Jr. arcades. It starts with the same damsel-in-distress premise and four basic locations as the arcade game and then progresses to 97 additional puzzle-based levels. It is the first game to have built-in enhancement for the Super Game Boy accessory. The arcade version makes an appearance in Donkey Kong 64 in the Frantic Factory level.\nhttps://www.mariowiki.com/Reznor\nhttps://www.mariowiki.com/File:MKBBR116.png\nReznors on the rotating platforms.\nhttps://www.mariowiki.com/File:3DS_NewMario2_2_scrn05_E3.png\nA group of Reznor, as they appear in New Super Mario Bros. 2.\nhttps://www.mariowiki.com/File:Reznor_SuperMarioKun_4.jpg\nSuper Mario-Kun\nSuper Mario World[edit]\nReznor in Super Mario World.\nIn Super Mario World, Reznor spit fireballs at the player. Mario must hit their platforms from below or shoot fireballs at them to defeat them. The player has a limited amount of time available to knock off the first Reznor and then jump to one of their platforms as the floor begins breaking away exposing the molten lava below at the start of the battle. If the player does not defeat the rest of the Reznor promptly, they will have to jump onto one of the abandoned platforms and fight from there. It is, however, possible to defeat all four without doing so.\nThey are the bosses of four fortresses scattered through the areas of Dinosaur Land: the Vanilla Fortress, the Forest Fortress, the Chocolate Fortress, and the Valley Fortress. The four sets of Reznor all behave the same.\nNew Super Mario Bros. 2[edit]\nAfter a 22-year absence, Reznor reappear in New Super Mario Bros. 2 as the game's tower bosses, and their battle theme is a cover of the Super Mario World boss battle theme. They now roar before the battle starts. Their appearance has changed slightly, with their lower jaws and belly having a lighter tone, and their heads redesigned to resemble those of triceratops. The platforms they stand on are now coin-giving Rectangular Coin Blocks, and the word \"REZNOR\" on the boards behind the wheel is gone. Unlike in Super Mario World, where four of them always appear on a single four-plaform wheel, Reznor appear either in groups of two or four, and their wheel setups can also vary. They don't attack as frequently as they did in their debut game, and the bridge under them doesn't collapse as quickly. After defeating half of them, the remaining Reznor roar and stomp their blocks, causing the bridge below them to collapse, much like in Super Mario World. 
Reznor can be defeated when Mario hits the block underneath them, hits them with six fireballs or one gold fireball, or simply touches them while under the effects of the Invincibility Leaf. If defeated by a gold fireball, they will give thirty coins each.\nLevel Appearances[edit]\nWorld 1-Tower: Two Reznor on one wheel with four Rectangular Coin Blocks.\nWorld 2-Tower: Four Reznor on one wheel with four Rectangular Coin Blocks.\nWorld 3-Tower: Same as World 2-Tower.\nWorld 4-Tower: Four Reznor on two wheels; two Reznor per wheel, with each wheel having four Rectangular Coin Blocks. One bridge is positioned to the left of one wheel, while another bridge is positioned to the right of the other wheel.\nWorld 5-Tower: Four Reznor on two wheels; two Reznor per wheel, with each wheel having four Rectangular Coin Blocks. One bridge is positioned between the two wheels.\nWorld 6-Tower: Four Reznor on one giant wheel with eight Rectangular Coin Blocks.\nhttps://www.mariowiki.com/Super_Mario_Bros.\nSuper Mario Bros. is divided into eight worlds, each of them containing four levels. Mario (or, in the case of a second player, his brother Luigi) has to get to the end of the level by jumping over various gaps and avoiding the enemies on his way. Mario can use several platforms (some of them collapse when Mario lands on them), stairs in the level, as well as Jumping Boards. There are also pipes along the way, some of which Mario can enter to visit various secret coin rooms before returning to the level, a bit further ahead than when he left.\nThe fourth level of each world plays inside a castle. They are usually filled with Fire Bars and Podoboos. At the end of a castle level, Mario is confronted with a Bowser Impostor in Worlds 1 through 7 and the actual Bowser in World 8. Mario and Luigi ordinarily have no way to hurt the Bowser Impostors or the actual Bowser, and have to either use the axe to destroy the bridge, causing either the false or real one to fall into the lava, or pelt him with a number of fireballs, which produces the same result and reveals the true forms of the fakes. After defeating an impostor, Mario frees one of the seven remaining mushroom retainers from the castle, at which point they say their iconic phrase: \"Thank you, Mario! But our princess is in another castle!\" At the end of the castle in World 8, Mario frees the grateful Princess Toadstool and completes his adventure, having the choice to continue playing in a \"new quest.\" In this second quest, the player gets to choose a world, and replay some levels. However, all Goombas are replaced by Buzzy Beetles, all ground enemies are also considerably faster, some platforms and Elevators are shortened in length, and the level design is slightly changed for some levels (see below at \"Hard mode\").\nAll of the sprites and tiles in the game have at least four color schemes, one for each setting: either brown, beige, and black, or green, yellow, and white for overworld environments, blue, cyan, and black or teal, brown, and pink for underground environments, black, gray, and yellow or gray, yellow, and white for underwater environments, black, gray, and white for castle environments, and red, yellow, and white for all four. 
Bowser and Bowser Impostors are exceptions, as they are found in castles while having the overworld color scheme.\nhttps://en.wikipedia.org/wiki/Four-wheel_drive\nFour-wheel drive, 4×4 (\"four by four\"), 4WD, and AWD is a form of drivetrain most commonly capable of providing power to all wheel ends of a two-axled vehicle simultaneously. It may be full-time or on-demand, and may be linked via a transfer case to provide multiple gear ranges.\nA four-wheeled vehicle with power supplied to both axles may be described as \"all-wheel drive\". However, not all \"four-wheel drive\" vehicles are \"all-wheel drive\", as vehicles with more than two axles may also be described as \"four-wheel drive\" regardless of how many axles, so long as two axles (of two wheel ends apiece) are powered.[\n4WD/AWD systems were developed in many different markets and used in many different vehicle platforms. There is no universally accepted set of terminology to describe the various architectures and functions.[2] The terms used by various manufactures often reflect marketing rather than engineering considerations or significant technical differences between systems.[3][4]\n4×4[edit]\nFour-by-four (4×4) refers to the general class of vehicles. The first figure is normally the total wheels (more precisely, axle ends, which may have multiple wheels), and the second, the number that are powered. Syntactically, 4×2 means a four-wheel vehicle that transmits engine power to only two axle-ends: the front two in front-wheel drive or the rear two in rear-wheel drive.[5] Alternatively, a 6x4 vehicle has three axles, any two of which provide power to two wheel ends each. The number of wheels may be greater than six, as on ubiquitous ten-wheel tractor units, but the designation stays the same.[1]\n4WD[edit]\nFour wheel drive (4WD) refers to vehicles with two or more axles providing power to four wheel ends.[1] In the North American market the term generally refers a system that is optimized for severe off-road driving situations.[6] Four-wheel drive vehicles typically have a transfer case, which locks the front and rear axles, meaning that the front and rear drive shafts will be locked together when engaged. This provides maximum torque transfer to the axle with the most traction, but can cause binding in high traction turning situations.[7]\nAWD[edit]\nMain article: AWD (vehicle)\nAll wheel drive (AWD) historically was synonymous with \"four-wheel drive\" on four-wheeled vehicles, and six-wheel drive on 6x6s, and so on, being used in that fashion at least as early as the 1920s.[8][9] Today in the United States the term is applied to both heavy vehicles as well as light passenger vehicles. When referring to heavy vehicles the term is increasingly applied to mean \"permanent multiple-wheel drive\" on 2×2, 4×4, 6×6 or 8×8 drive train systems that include a differential between the front and rear drive shafts.[10] This is often coupled with some sort of anti-slip technology, increasingly hydraulic-based, that allows differentials to spin at different speeds but still be capable of transferring torque from a wheel with poor traction to one with better. 
Typical AWD systems work well on all surfaces, but are not intended for all consumers.[10] When used to describe AWD systems in light passenger vehicles it describes a system that applies power to all four wheels and targeted as improving on road traction and performance, particularly in inclement conditions, rather than for off road applications.[6]\nSome all wheel drive electric vehicles solve this challenge using one motor for each axle, thereby eliminating a mechanical differential between the front and rear axles. An example of this is the dual motor variant of the Tesla Model S, which on a millisecond scale can control the power distribution electronically between its two motors.[11]\nIWD[edit]\nIndividual-wheel drive (IWD) was coined to identify those electric vehicles whereby each wheel is driven by its own individual electric motor. This system essentially has inherent characteristics that would be generally attributed to four-wheel drive systems like the distribution of the available power to the wheels. However, because of the inherent characteristics of electric motors, torque can be negative, as seen in the Rimac Concept One and SLS AMG Electric. This can have drastic effects, as in better handling in tight corners.[12]\nThe term IWD can refer to a vehicle with any number of wheels. For example, the mars rovers are 6-wheel IWD.\nMario Bros. (マリオブラザーズ Mario Burazāzu?) is a platform game published and developed for arcades by Nintendo in 1983. It was created by Shigeru Miyamoto. It has been featured as a minigame in the Super Mario Advance series and numerous other games. Mario Bros. has been re-released for the Wii's, Nintendo 3DS's, and Wii U's Virtual Console services in Japan, North America, Europe and Australia.\nhttps://en.wikipedia.org/wiki/Mario_Bros.\nThere are four enemies: the Shellcreeper, which simply walks around; the Sidestepper, which requires two hits to flip over; the Fighter Fly, which moves by jumping and can only be flipped when it is touching a platform; and the Slipice, which turns platforms into slippery ice. When bumped from below, the Slipice dies immediately instead of flipping over; these enemies do not count toward the total number that must be defeated to complete a phase. All iced platforms return to normal at the start of each new phase.\nThe four ghosts of Pac Man- the fourth is different\nhttps://en.wikipedia.org/wiki/Ghosts_(Pac-Man)\nThe Ghosts (Japanese: モンスター monsutā, \"monsters\"), primarily Blinky, Pinky, Inky and Clyde, are the ghosts chasing the player in the Pac-Man franchise.\nIn the 2013 TV series Pac-Man and the Ghostly Adventures, the four Ghosts come from the Netherworld. Though they are ruled by Lord Betrayus, they are actually good-natured spirits and often supply Pac-Man with information about Lord Betrayus' plots, while ensuring Betrayus doesn't catch them in the act. It is also suggested that they could be reunited with their bodies and brought back to life, though their 'living' forms are unknown. 
There were also some Ghosts that were exclusive to the TV series like Cyclops Ghosts (a race of heavyset, horned Ghosts with one eye), Fire Ghosts (a race of orange Ghosts who can emit fire from their body), Tentacle Ghosts (a race of 4-eyed purple-black Ghosts who look similar to jellyfish), Guardian Ghosts (a race of large Ghosts who guard the Netherworld), and Aqua Ghosts (a race of light blue Ghosts with fins on their head).
Known ghosts
Below is the description of each Ghost.[6]
Color | Original character (translation) | Original nickname (translation) | Alternate character | Alternate nickname | English character (personality) | English nickname
Red | Oikake (追いかけ), "Chaser" | Akabei (赤ベイ), "Red guy" | Urchin | Macky | Shadow | Blinky
Pink | Machibuse (待ち伏せ), "Ambusher" | Pinky (ピンキー), "Pink guy" | Romp | Micky | Speedy | Pinky
Cyan | Kimagure (気まぐれ), "Fickle" | Aosuke (青助), "Blue guy" | Stylist | Mucky | Bashful | Inky
Orange | Otoboke (お惚け), "Feigned Ignorance" | Guzuta (愚図た), "Slow guy" | Crybaby | Mocky | Pokey | Clyde
Blinky
Blinky is a red ghost who, in the original arcade game, follows behind Pac-Man. He is considered the leader of the ghosts. In the Pac-Man cartoon, Blinky (voiced by Chuck McCann) is slow-witted and cowardly with grammar problems. In Pac-Man and the Ghostly Adventures, Blinky (voiced by Ian James Corlett in the TV series and by Lucien Dodge in the video game) is the default leader of the Ghost Gang Family and tends to help the winning side.[citation needed] Blinky receives a speed boost after a number of pac-pellets have been cleared. This mode has been informally referred to as "Cruise Elroy".[6][8] He is sometimes known as Clyde, mainly in the Pac-Man World games.
Pinky
Pinky is a pink ghost who, in the original arcade game, positions him/herself in front of Pac-Man. In the Pac-Man cartoon, Pinky (voiced by Chuck McCann) is depicted as a male, dimwitted shape-shifter. In recent games, and Pac-Man and the Ghostly Adventures, Pinky (voiced by Ashleigh Ball in the TV series and by Julie Kliewer in the video game and sequel) is depicted as a female with a crush on Pac-Man, which often puts her at odds with Cylindria.
Inky
Inky is a baby blue ghost who, in the original arcade game, has a fickle mood. He can be unpredictable. Sometimes he chases Pac-Man aggressively like Blinky; other times he jumps ahead of Pac-Man as Pinky would. He might even wander off like Clyde on occasion. In the Pac-Man cartoon, Inky (voiced by Barry Gordon) is depicted as dim and loony. In Pac-Man and the Ghostly Adventures, Inky (voiced by Lee Tockar in the TV series and by Bryce Papenbrook in the video game) is the youngest member. Though the smartest, he lacks focus most of the time. In Pac-Man, Inky likes to appear in front of Pac-Man's face.
Clyde
Clyde is an orange ghost who, in the original arcade game, acts stupid. He will chase after Pac-Man in Blinky's manner, but will wander off to his home corner when he gets too close. In Ms. Pac-Man, this ghost is named Sue, and in Jr. Pac-Man, this ghost is named Tim. In the animated series, Clyde (voiced by Neil Ross) is the leader of the group. In recent games and Pac-Man and the Ghostly Adventures, Clyde (voiced by Brian Drummond in the TV series and by Orion Acaba in the video game) is depicted as a large ghost who is simple, but not unintelligent and has an appetite equal to Pac-Man's. He lacks the devious natures of his brothers and sister and is considerate towards others. He is sometimes known as Blinky, mainly in the Pac-Man World games.
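The four personalities described above are, in effect, four tiny targeting algorithms: Blinky chases the player directly, Pinky aims at a spot in front of the player, Inky switches unpredictably between those two habits, and Clyde chases until he gets close and then retreats to his home corner. A minimal sketch of that logic on a tile grid, assuming (x, y) positions and a facing vector; the function name, look-ahead distance and distance threshold are illustrative choices, not the arcade game's actual values:

```python
import random

def ghost_target(name, ghost_pos, pac_pos, pac_facing, home_corner):
    """Return the tile a ghost steers toward, following the four
    personalities described above. Positions are (x, y) tiles and
    pac_facing is a unit step such as (1, 0)."""
    px, py = pac_pos
    fx, fy = pac_facing
    if name == "Blinky":                      # chaser: head straight for Pac-Man
        return pac_pos
    if name == "Pinky":                       # ambusher: aim a few tiles ahead of Pac-Man
        return (px + 4 * fx, py + 4 * fy)
    if name == "Inky":                        # fickle: mix the two habits unpredictably
        return pac_pos if random.random() < 0.5 else (px + 4 * fx, py + 4 * fy)
    if name == "Clyde":                       # pokey: chase, but retreat when too close
        gx, gy = ghost_pos
        if abs(gx - px) + abs(gy - py) > 8:   # far away: behave like Blinky
            return pac_pos
        return home_corner
    raise ValueError(f"unknown ghost: {name}")
```

Running all four against the same player position makes the "fourth is different" observation concrete: three of them converge on or ahead of the player, while Clyde alone backs off.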
https://en.wikipedia.org/wiki/Super_Mario_All-Stars
Super Mario All-Stars, released in Japan as Super Mario Collection (Japanese: スーパーマリオコレクション Hepburn: Sūpā Mario Korekushon?) is a collection of Super Mario platforming video games developed and published by Nintendo for the Super Nintendo Entertainment System in 1993.
Super Mario All-Stars is a video game compilation that features complete remakes of the four Super Mario side-scrolling platform games that were originally released for the Nintendo Entertainment System and the Famicom Disk System between 1985 and 1990: Super Mario Bros., Super Mario Bros.: The Lost Levels (Super Mario Bros. 2 in Japan), Super Mario Bros. 2 (Super Mario Bros. USA in Japan), and Super Mario Bros. 3. The gameplay of each remade game is nearly identical to its original version, though some game physics as well as character and level designs are slightly modified, and some bugs, including the "Minus World" in Super Mario Bros., are fixed.
The four games each feature enhanced 16-bit graphics and updated soundtracks to take advantage of the Super NES hardware, including parallax scrolling.[2] All four games offer a save feature, which the original games lacked, allowing the player to save progress and resume play from the start of any previously accessed world (or in The Lost Levels, any previously accessed level). Up to four individual save files can be stored for each game. The games also allow the player to customize control configuration, allowing the "jump" and "dash" actions to be mapped to different buttons on the Super NES controller.
Tetra is four
https://en.wikipedia.org/wiki/Dr._Mario
Dr. Mario is a falling block tile-matching video game,[9] in which Mario assumes the role of a doctor, dropping two-colored medical capsules into a medicine bottle representing the playing field. This area is populated by viruses of three colors: red, yellow, and blue. In a manner and style considered similar to Tetris,[10] the player manipulates each capsule as it falls, moving it left or right and rotating it such that it is positioned alongside the viruses and any existing capsules. When four or more capsule halves or viruses of matching color are aligned in vertical or horizontal configurations, they are removed from play; any remaining capsules or capsule halves which are not supported by a virus, capsule, or the bottom of the playing field immediately below will then fall as far as needed until it lands on one, after which any new 4-in-a-row alignments created from this will be removed. The main objective is to complete levels, which is accomplished by eliminating all viruses from the playing field. A game over occurs if capsules fill up the playing field in a way that obstructs the bottle's narrow neck.[11]
https://en.wikipedia.org/wiki/Super_Mario_Bros._2
An 8-bit NES game
Super Mario Bros. 2 is a 2D side-scrolling platform game. The objective of the game is to navigate the player's character through the dream world Subcon and defeat the main antagonist Wart.[4]:3–4 Before each stage, the player chooses one of four different protagonists to use: Mario, Luigi, Toad, and Princess Peach. All four characters can run, jump, and climb ladders or vines, but each character possesses a unique strength that causes them to be controlled differently. For example, Luigi can jump the highest; Princess Peach can jump the farthest; Toad's strength allows him to pick up items quickly.
As opposed to the original Super Mario Bros., which only moved from left to right, players can move either left or right, as well as vertically in waterfall, cloud and cave levels. Unlike other Mario games, the characters cannot defeat enemies by jumping on them; but they can stand on, ride on, and jump on the enemies. Instead, the character picks up and throws objects at the enemies, or throws the enemies away, to defeat them. These objects include vegetables plucked from the ground or other enemies.[4]:13–16\nhttps://en.wikipedia.org/wiki/Link_(The_Legend_of_Zelda)\nIn The Legend of Zelda: A Link to the Past & Four Swords (2002), set at some point before Ocarina of Time, Zelda goes to the Sanctuary of the Four Sword with her friend, Link, to check on the seal containing the evil Wind Mage, Vaati. The seal has weakened, however, and Vaati emerges, kidnaps Zelda, and defeats Link. Later, Link finds three fairies, who instruct him to draw the Four Sword. The magical Four Sword divides him into four identical Links. The first Link wears his traditional green outfit; the second, a red version; the third, blue; and the fourth, purple. The Links must cooperate to overcome obstacles, collect keys, and storm Vaati's Palace so they can rescue Zelda and seal the mage away again.[27]\nhttps://en.wikipedia.org/wiki/The_Legend_of_Zelda:_Oracle_of_Seasons_and_Oracle_of_Ages\nThe Legend of Zelda: Oracle of Seasons and The Legend of Zelda: Oracle of Ages[a] are two action-adventure games in the Legend of Zelda series, developed by Flagship (a subsidiary of Capcom). They were released in 2001 for Nintendo's Game Boy Color handheld console and re-released on the Virtual Console for the Nintendo 3DS in 2013.\nThe central item of Oracle of Seasons is the Rod of Seasons. By standing on a stump and swinging the rod, Link can change the season and affect his surroundings.[19] For example, to cross a body of water, Link can change the season to winter and walk on the ice. Changing the season to summer causes vines to flourish, which Link can use to scale cliffs. When Link obtains the rod, he initially cannot use it.[20] In the course of the game, Link visits four towers that house the four spirits of the seasons; each tower Link visits allows him to switch to an additional season.[20]\nhttps://en.wikipedia.org/wiki/Radio-controlled_aerobatics\nRadio-controlled aerobatics is the practice of flying radio-controlled aircraft in maneuvers involving aircraft attitudes that are not used in normal flight.\nFour-point roll[edit]\nThe four-point roll is a quick series of quarter rolls. The pilot gives four separate, but very brief aileron inputs. The first rolls the aircraft to knife-edge, the second rolls the aircraft inverted, the third rolls the aircraft to opposite knife-edge, and the final input rolls the aircraft back to upright.\nRolling circle[edit]\nControl stick inputs for the rolling circle (left-turning right-rolling), showing the typical amount of elevator and rudder input as a function of rolling position.\nRolling circle is a maneuver in which an aircraft rolls continuously while turning in a circle. This is arguably one of the most difficult maneuvers to perfect, since varying pitch and yaw corrections are necessary to keep the heading level while maintaining constant roll rate and turning radius.\nThe standard rolling circle involves 1 roll at each quadrant of the turn, resulting in a total of 4 rolls throughout the 360° horizontal turn. 
The most logical method to approach the rolling circle is to think of it as 4 slow rolls with turn. The procedure below describes a left-turning right-rolling quadrant\nThe eight point roll is two fours and it makes a crossing infinity sign.\nhttps://en.wikipedia.org/wiki/File:Rolling_Circle.JPG\nFour quadrants\nhttps://en.wikipedia.org/wiki/Kitesurfing\nIn the 1800s, George Pocock used kites of increased size to propel carts on land and ships on the water, using a four-line control system—the same system in common use today. Both carts and boats were able to turn and sail upwind\nhttps://en.wikipedia.org/wiki/Gaelic_handball\nIn Ireland, there are four main types of handball. There is 40x20 (small court), the traditional 60x30 Softball & Hardball (big alley) and One-wall handball. One-wall handball has become very popular over the past 3 years and it is the most popular version of international handball. It is played in over 74 countries including the USA, Mexico, Ecuador, Spain, the Basque Country.\nGaelic handball (known in Ireland simply as handball;[1][2][3][4] Irish: liathróid láimhe) is a sport played in Ireland where players hit a ball with a hand or fist against a wall in such a way as to make a shot the opposition cannot return,[5] and that may be played with two (singles) or four players (doubles). The sport is similar to American handball (a related and almost identical game), Basque pelota, racquetball and squash. It is one of the four Gaelic games organised by the Gaelic Athletic Association (GAA).[6] In 2009, Irish Handball was rebranded as GAA Handball.\nThere are four Gaelic games\nhttps://en.wikipedia.org/wiki/Gaelic_games\nGaelic games are sports played in Ireland under the auspices of the Gaelic Athletic Association (GAA). Gaelic football and hurling are the two main games. Other games organised by the GAA include Gaelic handball and rounders.\nhttps://en.wikipedia.org/wiki/Racketlon\nRacketlon is a combination sport where competitors play a sequence of the four most popular racket sports: table tennis, badminton, squash, and tennis.\nIn a regulation game, two individuals (or two pairs in doubles) play each other in four sets, one in each sport. Each set has the same format: the serve switches every two points, with the first serve of the two in badminton, squash and tennis always being from the right, and the set finishes when one player has earned 21 points with at least a 2-point margin. The sets are played from smallest racket to largest: first table tennis, then badminton, squash, and finally tennis. The player (or pair) who has won the most points overall wins the match. If the score is tied after all four sets, the tie is broken by one extra tennis point. Other than the scoring, each point follows the respective sport's international rules.\nhttps://en.wikipedia.org/wiki/Squash_(sport)\nSquash is a racquet sport played by two (singles) or four players (doubles) in a four-walled court with a small, hollow rubber ball. The players must alternate in striking the ball with their racquet and hit the ball onto the playable surfaces of the four walls of the court.\nSquash was invented in Harrow School out of the older game racquets around 1830 before the game spread to other schools, eventually becoming an international sport. The first courts built at this school were rather dangerous because they were near water pipes, buttresses, chimneys, and ledges. The school soon built four outside courts. Natural rubber was the material of choice for the ball. 
Students modified their racquets to have a smaller reach to play in these cramped conditions.
Squash balls are between 39.5 and 40.5 mm in diameter, and have a weight of 23 to 25 grams.[3] They are made with two pieces of rubber compound, glued together to form a hollow sphere and buffed to a matte finish. Different balls are provided for varying temperature and atmospheric conditions and standards of play: more experienced players use slow balls that have less bounce than those used by less experienced players (slower balls tend to "die" in court corners, rather than "standing up" to allow easier shots). Depending on its specific rubber composition, a squash ball has the property that it bounces more at higher temperatures. Squash balls must be hit dozens of times to warm them up at the beginning of a session; cold squash balls have very little bounce. Small colored dots on the ball indicate its dynamic level (bounciness), and thus the standard of play for which it is suited. The recognized speed colors indicating the degree of dynamism are:
Color | Speed (of play) | Bounce | Player level
Double yellow | Extra super slow (slowest) | Very low | Experienced
Yellow | Super slow | Low | Advanced
White | Slow | Low | Advanced/Medium
Blue | Fast | Very high | Beginner/Junior
Some ball manufacturers such as Dunlop use a different method of grading balls based on experience. They still have the equivalent dot rating, but are named to help choose a ball that is appropriate for one's skill level. The four different ball types are Intro (Blue dot), Progress (Red dot), Competition (single yellow dot) and Pro (double yellow dot).
The squash court is a playing surface surrounded by four walls. The court surface contains a front line separating the front and back of the court and a half court line, separating the left and right hand sides of the back portion of the court, creating three 'boxes': the front half, the back left quarter and the back right quarter. Both the back two boxes contain smaller service boxes. All of the floor-markings on a squash court are only relevant during serves.
There are four walls to a squash court. The front wall, on which three parallel lines are marked, has the largest playing surface, whilst the back wall, which typically contains the entrance to the court, has the smallest. The out line runs along the top of the front wall, descending along the side walls to the back wall. There are no other markings on the side or back walls. Shots struck above or touching the out line, on any wall, are out. The bottom line of the front wall marks the top of the 'tin', a half metre-high metal area which if struck means that the ball is out. In this way the tin can be seen as analogous to the net in other racquet sports such as tennis. The middle line of the front wall is the service line and is only relevant during serves.
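The two grading schemes above (the dot colors and Dunlop's named range) describe the same four-way classification, so they collapse naturally into a small lookup. A toy consolidation of exactly what the tables above say; the dictionary layout and helper name are only illustrative:

```python
# Dot color -> (speed of play, bounce, suggested player level), from the dot table above.
DOT_GRADES = {
    "double yellow": ("extra super slow", "very low", "experienced"),
    "yellow":        ("super slow", "low", "advanced"),
    "white":         ("slow", "low", "advanced/medium"),
    "blue":          ("fast", "very high", "beginner/junior"),
}

# Dunlop's experience-based names for the equivalent dot ratings, from the paragraph above.
DUNLOP_NAMES = {
    "blue": "Intro",
    "red": "Progress",
    "single yellow": "Competition",
    "double yellow": "Pro",
}

def describe_ball(dot: str) -> str:
    """Summarise a ball's grading from its dot color (illustrative helper)."""
    speed, bounce, level = DOT_GRADES[dot]
    name = DUNLOP_NAMES.get(dot, DUNLOP_NAMES.get("single " + dot, "n/a"))
    return f"{dot} dot: {speed} speed, {bounce} bounce, suited to {level} players ({name} range)"

print(describe_ball("double yellow"))
print(describe_ball("blue"))
```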
https://en.wikipedia.org/wiki/Sport_kite
Four-line (or "quad-line") kites are controlled with a pair of handles, each with two lines attached to the top and bottom and attached to the kite correspondingly. To control the kite, the pilot pulls on the lower line to turn the kite in that direction. Skilled use of these handles allows a quad-line kite to perform in ways that are difficult or impossible with a dual-line kite. Unique quadline maneuvers include reverse flight, axis spins, hovers, and side to side flight.
Other aspects of sport kiting include power or traction kites, which can be used to tow wheeled kite buggies (kite buggying) or surfboards (kite surfing).
Power kites vary in size from "trainers" which often have dual lines and a small sail area, to large full size traction kites with four lines, designed to pull people on kite boards or vehicles.
https://en.wikipedia.org/wiki/File:Olympics_2012_Mixed_Doubles_Final.jpg
https://en.wikipedia.org/wiki/Badminton
https://en.wikipedia.org/…/File:Olympics_2012_Mixed_Doubles…
By 1875, returning officers had started a badminton club in Folkestone. Initially, the sport was played with sides ranging from 1–4 players but it was quickly established that games between two or four competitors worked the best.
Notice how there are four squares (a quadrant) on a badminton court, as well as on ping pong and tennis courts.
https://en.wikipedia.org/wiki/File:Curlingdiagram.svg
https://en.wikipedia.org/wiki/Curling
Curling is a sport in which players slide stones on a sheet of ice towards a target area which is segmented into four concentric circles. It is related to bowls, boules and shuffleboard. Two teams, each with four players, take turns sliding heavy, polished granite stones, also called rocks, across the ice curling sheet towards the house, a circular target marked on the ice.
Not only are there four concentric circles that the players are aiming for, but they are divided into quadrants.
Until four stones have been played (two from each side), stones in the free guard zone (those stones left in the area between the hog and tee lines, excluding the house) may not be removed by an opponent's stone (although they can be moved as long as they are not taken out of play). These are known as guard rocks. If the guard rocks are removed, they are replaced to where they were before the shot was thrown, and the opponent's stone is removed from play and cannot be replayed. This rule is known as the four-rock rule or the free guard zone rule (for a while in Canada, a "three-rock rule" was in place, but that rule has been replaced by the four-rock rule).
Originally, the Modified Moncton Rule was developed from a suggestion made by Russ Howard for the Moncton 100 cashspiel (with the richest prize ever awarded at the time in a tournament) in Moncton, New Brunswick, in January 1990. "Howard's Rule" (also known as the Moncton Rule), used for the tournament and based on a practice drill his team used, had the first four rocks in play unable to be removed no matter where they were at any time during the end. This method of play was altered slightly and adopted as a Four-rock Free Guard Zone for international competition shortly after. Canada kept to the traditional rules until a three-rock Free Guard Zone rule was adopted, starting in the 1993-94 season. After several years of having the three-rock rule used for the Canadian championships and the winners then having to adjust to the four-rock rule in the World Championships, the Canadian Curling Association adopted the now-standard Free Guard Zone in the 2002-2003 season.
https://en.wikipedia.org/wiki/Biathlon
Biathlon is a winter sport that combines cross-country skiing and rifle shooting.
This sport has its origins in an exercise for Norwegian people, as an alternative training for the military.
Norwegian skiing regiments organized military skiing contests in the 18th century, divided in four classes: shooting at mark while skiing at top speed, downhill race among trees, downhill race on big hills without falling, and a long race on flat ground while carrying rifle and military pack.\nThe competitions from 1958 to 1965 used high-power centerfire cartridges, such as the .30-06 Springfield and the 7.62×51mm NATO, before the .22 Long Rifle rimfire cartridge was standardized in 1978. The ammunition was carried in a belt worn around the competitor's waist. The sole event was the men's 20 kilometres (12 mi) individual, encompassing four separate ranges and firing distances of 100 metres (330 ft), 150 metres (490 ft), 200 metres (660 ft), and 250 metres (820 ft). The target distance was reduced to 150 metres (490 ft) with the addition of the relay in 1966. The shooting range was further reduced to 50 metres (160 ft) in 1978 with the mechanical targets making their debut at the 1980 Winter Olympics in Lake Placid\nA biathlon competition consists of a race in which contestants ski around a cross-country trail system, and where the total distance is broken up by either two or four shooting rounds, half in prone position, the other half standing. Depending on the shooting performance, extra distance or time is added to the contestant's total running distance/time. The contestant with the shortest total time wins.\nThe 20 kilometres (12 mi) individual race (15 kilometres (9.3 mi) for women) is the oldest biathlon event; the distance is skied over five laps. The biathlete shoots four times at any shooting lane,[7] in the order of prone, standing, prone, standing, totaling 20 targets. For each missed target a fixed penalty time, usually one minute, is added to the skiing time of the biathlete. Competitors' starts are staggered, normally by 30 seconds.\nIn a pursuit, biathletes' starts are separated by their time differences from a previous race,[8] most commonly a sprint. The contestant crossing the finish line first is the winner. The distance is 12.5 kilometres (7.8 mi) for men and 10 kilometres (6.2 mi) for women, skied over five laps; there are four shooting bouts (two prone, two standing, in that order), and each miss means a penalty loop of 150 metres (490 ft). To prevent awkward and/or dangerous crowding of the skiing loops, and overcapacity at the shooting range, World Cup Pursuits are held with only the 60 top ranking biathletes after the preceding race. The biathletes shoot on a first-come, first-served basis at the lane corresponding to the position they arrived for all shooting bouts.\nIn the mass start, all biathletes start at the same time and the first across the finish line wins. In this 15 kilometres (9.3 mi) for men or 12.5 kilometres (7.8 mi) for women competition, the distance is skied over five laps; there are four bouts of shooting (two prone, two standing, in that order) with the first shooting bout being at the lane corresponding to the competitor's bib number (Bib #10 shoots at lane #10 regardless of position in race), with the rest of the shooting bouts being on a first-come, first-served basis (If a competitor arrives at the lane in fifth place, they shoot at lane 5). As in sprint and pursuit, competitors must ski one 150 metres (490 ft) penalty loop for each miss. 
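The penalty rules just described are simple arithmetic: in the individual race each missed target usually adds one minute to the ski time, while in the sprint, pursuit and mass start each remaining miss costs one 150 m penalty loop. A rough sketch of how misses feed into a final result (the function names and the assumed loop time are illustrative, not official values); the remaining mass-start details continue below:

```python
def individual_total(ski_time_s: float, misses: int, penalty_s: float = 60.0) -> float:
    """Individual race: each missed target adds a fixed time penalty (usually one minute)."""
    return ski_time_s + misses * penalty_s

def loop_total(ski_time_s: float, misses: int, loop_time_s: float = 25.0) -> float:
    """Sprint/pursuit/mass start: each miss costs one 150 m penalty loop.
    The 25-second loop time is only an assumed example; real loop times vary by athlete."""
    return ski_time_s + misses * loop_time_s

# Example: a 50-minute individual race with 2 misses out of 20 targets
print(individual_total(50 * 60, 2) / 60)   # 52.0 minutes
```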
Here again, to avoid unwanted congestion, World Cup Mass starts are held with only the 30 top ranking athletes on the start line (half that of the Pursuit as here all contestants start simultaneously).\nThe relay teams consist of four biathletes, who each ski 7.5 kilometres (4.7 mi) (men) or 6 kilometres (3.7 mi) (women), each leg skied over three laps, with two shooting rounds; one prone, one standing. For every round of five targets there are eight bullets available, though the last three can only be single-loaded manually one at a time from spare round holders or bullets deposited by the competitor into trays or onto the mat at the firing line. If after eight bullets there are still misses, one 150 m (490 ft) penalty loop must be taken for each missed target remaining. The first-leg participants start all at the same time, and as in cross-country skiing relays, every athlete of a team must touch the team's next-leg participant to perform a valid changeover. On the first shooting stage of the first leg, the participant must shoot in the lane corresponding to their bib number (Bib #10 shoots at lane #10 regardless of position in race), then for the remainder of the relay, the relay team shoots on a first-come, first-served basis (arrive at the range in fifth place, shoot at lane 5)\nThe most recent addition to the number of biathlon competition variants, the mixed relay is similar to the ordinary relay but the teams are composed of two women and two men. Legs 1 and 2 are done by the women, legs 3 and 4 by the men. The women's legs are 6 kilometres (3.7 mi) and men's legs are 7.5 kilometres (4.7 mi) as in ordinary relay competitions.\nA team consists of four biathletes, but unlike the relay competition, all team members start at the same time. Two athletes must shoot in the prone shooting round, the other two in the standing round. In case of a miss, the two non-shooting biathletes must ski a penalty loop of 150 m (490 ft). The skiers must enter the shooting area together, and must also finish within 15 seconds of each other; otherwise a time penalty of one minute is added to the total time. Since 2004, this race format has been obsolete at the World Cup level.\nhttps://en.wikipedia.org/wiki/Squash_tennis\nSquash tennis is an American variant of squash, but played with a ball and racquets that are closer to the equipment used for lawn tennis, and with somewhat different rules. For younger players the game offers the complexity of squash and the speed of racquetball. It also has exercise and recreational potential for older players.\nSquash tennis is played in various four-walled courts. The front wall (against which the ball is served) features a telltale (usually clad in tin) at the bottom couple feet from the floor, a service line about 6 feet (1.8 m) from the floor, and an out-of-bounds line around 16 feet (4.9 m) from the floor. The back wall out line is 4.5 feet (1.4 m) from the floor. There are two required lines on the floor: a service line about 10 feet (3.0 m) from the back wall, and a center court line running at least from the front wall to the service line. Unlike a squash racquets court, there are no service boxes. There are four types of courts:\nNorth American squash court[edit]\nA North American squash court is 18.5 by 32 feet (5.6 by 9.8 m). Originally designed for the related game of squash racquets, by the early 1930s the National Squash Tennis Association (NSTA) approved play on this kind of court. The dimensions are quite similar to the official squash tennis court. 
The only required modifications are the addition of a 4.5-foot (1.4 m) back wall line (in N. American squash the back wall line is 6.5 feet or 2.0 metres from the floor) and the center court line on the floor. Temporary lines can easily be added with blue painter's tape. The problem today is that as the North American version of squash becomes less popular, new courts are not being built, and many old ones are being converted to other uses.\nSquash tennis court[edit]\nIn 1910 the NSTA adopted a standard court size of 17 by 32.5 feet (5.2 by 9.9 m). Although many of these were built in the New York area, after play was authorized on a N. American squash court they began to disappear. It did not make economic sense to maintain a specialty court when a more versatile one was acceptable.\nInternational squash court[edit]\nAn International squash court is 21 by 32 feet (6.4 by 9.8 m). The additional lines will need to be added. The extra width of the court makes the various multi-walled shots more difficult or impossible, so experienced players prefer to use a N. American court. However, a 21-foot (6.4 m) court is often the only one generally available, particularly outside North America.\nNon-standard courts[edit]\nOriginally the game was played on a racquets court, then on fives courts. Before 1911 there were no standards for court size, and ones constructed specifically for squash tennis varied from each other somewhat. They were constructed at private estates and clubs. At least one of these courts survives today in a playable condition. The court at Plum Orchard was fully restored in 2008 with the tins in place and working electric lights. It was added to George Lauder Carnegie's \"Plum Orchard\" estate on Cumberland Island, Georgia, in the winter of 1903/04, and is now owned by the National Park Service. An exhibit on squash tennis history has recently been installed in the mansion, which is occasionally open for public tours.\nhttps://en.wikipedia.org/wiki/Four_corners_(game)\nFour corners is a children's game, often played in elementary schools. The object of the game is for players to choose corners of the room and not get caught by the designated \"It\" player until they are the last remaining participant.\nTo begin, four corners (or general areas) of the room are marked from the numbers one to four. One player is designated to be \"It,\" or the \"counter.\" This player sits in the middle of the room and closes his or her eyes, or exits the room, and counts to ten. The remaining players choose any one of the corners and quietly go and stand in that area. When the \"It\" player has finished counting, he or she calls out one of the numbers. All players who had chosen that corner or area are out of the game, and they sit down. Then, \"It\" counts again and the remaining players move to a different corner. Unless the corner is out.\nThe last person to still be in the game wins, and usually becomes the new \"It.\"\nIf \"It\" calls out a corner containing no players, she either calls out another number right away or the players rotate to a new corner, according to different versions of gameplay.\nCanadian four corners[edit]\nA very different 5-player children's game is played in Canada under the name \"four corners\" (also known as \"king's court\"). The Canadian version is played on a large square drawn in chalk, usually in a schoolyard or other similar area. 
Four of the children stand on one of the corners of the square, while the fifth player is designated "it" and stands in the middle of the square. The four corner players then attempt to trade places without being tagged by the player who is "it", or without vacating a corner long enough for the player who is "it" to stand in the vacant corner. If a corner player is tagged or stranded without a free corner to stand in, they become "it". Common strategy is to try to swap corners while the player who is "it" is chasing other players who are trying to swap corners.
https://en.wikipedia.org/…/File:Mondial_Ping_-_Men%27s_Sing…
https://en.wikipedia.org/wiki/Table_tennis
Table tennis, also known as ping pong, is a sport in which two or four players hit a lightweight ball back and forth across a table using a small, round bat.
The table is kind of in the shape of a quadrant.
In women's gymnastics there are four events.
In men's there are six, but gymnastics is considered a woman's sport.
https://en.wikipedia.org/wiki/Juggling_pattern
The fourth is different. Fifth is questionable.
Half shower
Similar to the shower pattern, a half shower pattern (siteswap: 3[9]) is any pattern where both hands throw arcing cascade-like throws to the other hand, but the props from one hand always pass above the props from the other hand. The half shower may be performed with any number of props greater than or equal to three, and with more than four props, different versions of the half shower with varying heights of throw may be executed, even without taking into account both synchronous and asynchronous variations.
Half showers where hands throw at notably different heights may be executed with cascade-style inside throws; this style of half shower is popular in club juggling, where they go by the name of triple-doubles or double-singles due to the higher clubs naturally spinning a greater number of times than the lower clubs.
Fountain
4-ball fountain, siteswap: 4[10]
Main article: Fountain (juggling)
Performed using an even number of props greater than or equal to four, the fountain is a symmetrical pattern where each hand independently juggles exactly half the total number of props, i.e. each hand always throws to itself. As with the cascade, a fountain where the throws are to the outside of the catches is known as a reverse fountain (siteswap: 4[11]). A fountain where only one hand juggles is generally known as an n in one hand, where n is the number of props juggled. Unlike the cascade, fountains can be performed both synchronously (each hand throws at the same time) and asynchronously (hands throw alternately).
Four balls can be juggled. Not really five. Three can be juggled the best. The nature of the quadrant model is the fourth is always different and transcendent.
Pass juggling
Four-count, or "Every others": <333P|333P>
Main article: Passing (juggling)
Siteswap may also be extended to pass juggling. Simultaneous juggling: <xxx|yyy> notation means one juggler does 'xxx' while another does 'yyy'. 'p' is used to represent a passing throw. For example, in the Four-count, or "Every others", pattern (one of the most basic forms of passing), every fourth throw — that is, every second right-handed throw — is a passing throw, thus the pattern is <333P|333P>.
One-count (<3p|3p>), two-count (<33p|33p>), three-count (<333p|333p>), four-count (<3333p|3333p>).[33]
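Siteswap, the notation used informally throughout these juggling excerpts (3 for the cascade, 4 for the fountain, <333p|333p> for the three-count pass), has a simple arithmetic backbone: in a valid vanilla pattern the throw values average to the number of props, and no two throws may land on the same beat. A minimal sketch of that standard check (this is the textbook test, not code from any of the cited articles):

```python
def siteswap_props(pattern: str) -> int:
    """Validate a vanilla (one-throw-per-beat) siteswap and return its prop count.

    Two standard checks: the throw values must average to a whole number
    (the number of props), and no two throws may land on the same beat.
    Raises ValueError if the pattern is invalid.
    """
    throws = [int(ch) for ch in pattern]          # e.g. "531" -> [5, 3, 1]
    n = len(throws)
    if sum(throws) % n != 0:
        raise ValueError(f"{pattern}: average throw is not a whole number")
    landings = {(i + t) % n for i, t in enumerate(throws)}
    if len(landings) != n:                        # a collision means two props land together
        raise ValueError(f"{pattern}: two throws land on the same beat")
    return sum(throws) // n

print(siteswap_props("3"))    # 3-ball cascade -> 3
print(siteswap_props("4"))    # 4-ball fountain -> 4
print(siteswap_props("531"))  # 3-ball pattern -> 3
```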
One-count (<3p|3p>), two-count (<33p|33p>), three-count (<333p|333p>), four-count (<3333p|3333p>).[33]
https://en.wikipedia.org/wiki/File:Juggling_columns_es.png
https://en.wikipedia.org/wiki/Columns_(juggling)
The fourth is always different/transcendent. Three is difficult, four is transcendent.
4 ball synchronous columns: "symmetrical",[3] "asymmetrical",[4] "splits", and "pistons"[5]
https://en.wikipedia.org/wiki/File:4-ball_juggling.gif
https://en.wikipedia.org/wiki/Fountain_(juggling)
The fountain is a juggling pattern that is the method most often used for juggling an even number of objects. In a fountain, each hand juggles separately, and the objects are not thrown between the hands. To illustrate this, it can be seen that in the most common fountain pattern, where four balls are juggled, each hand juggles two balls independently. As Crego states, "In the fountain pattern, each hand throws balls straight up into the air and each ball is caught in the same hand that throws it."[3]
A fountain can be synchronous or asynchronous. In a synchronous fountain, both hands throw at the same time, while in an asynchronous fountain, the hands alternate throws. "The fountain pattern...can be stably performed in two ways...one can perform the fountain with different frequencies for the two hands, but that coordination is difficult because of the tendency of the limbs to synchronize."[4] The fountain is juggled in a circular fashion, distinguishing it from columns. The circular method means that the balls travel in a circle-like motion, with the juggler's hands throwing the ball from a point close to their body centre line and catching the ball further away from their body centre line. This circle motion is called 'outside circles' and is the fountain pattern shown in the animation. This circle method can be reversed to create an 'inside circle' pattern whereby throws are from a position away from the body midline and catches are closer to the body midline. In the columns method the balls all travel vertically up and down in their own 'column', and are caught from where they are thrown.
Other fountain patterns
4 is the asynchronous asymmetrical fountain. (4,4) is the synchronous fountain. (4,4)(4,0) is a synchronous fountain with one ball missing (two in one, one in the other).
Wimpy, (4x,4x), is a crossing version of the synchronous fountain.[5]
Notice the fourth ball is different.
https://en.wikipedia.org/wiki/Multiplex_(juggling)
The fourth is transcendent and the fifth is questionable. Five balls can be thrown, but in a cascade where three are thrown at one time.
Multiplex throws are generally grouped into four different categories: Stack, Split, Cut, and Slice.
https://en.wikipedia.org/wiki/File:4b-multiplex-3-Ball-Kaskade-mit-mitjonglierter-5.gif
4 ball multiplex, 3-ball-cascade, one "5" juggled with it: [53]3333
https://en.wikipedia.org/wiki/Pen_spinning
In pen spinning there are four main fundamental tricks spinners often learn first. The fundamentals do not represent all fields in pen spinning nor are they the smallest individual pen movements possible, but they are recognized as providing useful foundations for basic technique and concepts.
ThumbAround Normal
A ThumbAround is performed by pushing a pen using any finger (usually the middle finger if done in isolation) except the thumb to initiate the pen to spin around the thumb one time, then catching it between the thumb and a finger.
Before the pen spinning community became significantly organized, the ThumbAround Normal was known by a multitude of names, including 360 Degree Normal, Forward, Normal, and ThumbSpin (now the name of a separate trick).[23][24]\nSonic Normal[edit]\nThe primary goal of a Sonic is to transfer the pen from one finger slot to another quickly. In the Sonic Normal, a pen is held in a finger slot not involving the thumb and is spun in a conic-like motion behind a finger (or fingers) to another finger slot further up the hand, making a single revolution.[25] Hideaki Kondoh is generally accredited with giving the Sonic its name, which he did because of the rapid speed at which the pen would move compared to the ThumbAround.[9]\nCharge Normal[edit]\nA Charge does not involve spinning the pen over any fingers or any body parts, rather, the pen is spun conically in a single finger slot.[26] When viewing the palm-side of the hand during the Charge Normal, the pen spins clockwise in the right hand and counterclockwise in the left hand. The Charge forms the basis for all tricks that rely on conical movement, including the Sonic. This trick is often performed by drummers using drumsticks rather than pens.\nFingerpass Normal[edit]\nA single Pass involves rotating a pen 0.5 times from one finger slot to another. When performing the Pass Normal on the palm-side of hand, the pen goes downward. When performing a Pass Normal on the other side of the hand, the pen goes upward. A small combination of Passes involving the pen rotating fully around the hand, starting and ending in the 12 slot, is called a Fingerpass, with the Fingerpass Normal being constructed out of Pass Normals.[27] This short combo is consistently considered the hardest fundamental to master because Passes between the little finger and the ring finger are often difficult to make smooth. A Pass combo similar to a Fingerpass was performed by the character Boris Grishenko in the James Bond film GoldenEye, using only three fingers instead of the usual four.[28]\nTiming and Direction[edit]\nhttps://en.wikipedia.org/wiki/File:Poi-UnitCircle.png\nhttps://en.wikipedia.org/wiki/Poi_definitions\nSplit, Same, and Hybrid Timing\nFig.1-Poi Timings\nTiming and Direction is a concept used by poi spinners to refer to how the props and hands move in relation to each other.[1] There are currently four major categories of timing and direction that prop movements commonly fall into. 
These categories are:\nTogether Time, Same Direction (also referred to as Same Time[2] and abbreviated TS[3] ): Props are spinning in the same direction and in phase with each other so that a doubling effect occurs in the audience's perception of the resulting trick.[4]\nTogether Time, Opposite Direction (also referred to as Opposites[2] and abbreviated TO[3]): Props are spinning in opposite directions and in phase with each other so that the trick they produce appears to reflect across a vertical line of symmetry.[4]\nSplit Time, Same Direction (also referred to as Split Time[2] and abbreviated SS[3]): Props are spinning in the same direction and 180 degrees out of phase with each other so that the trick they produce appears to reflect along a line of symmetry that rotates from the center of the trick.[4]\nSplit Time, Opposite Direction (abbreviated SO[3]): Props are spinning in opposite directions and 180 degrees out of phase with each other so that the trick they produce appears to reflect across a horizontal line of symmetry.[4]\nhttps://en.wikipedia.org/wiki/Cross-country_equestrianism\nCross country equestrian jumping is an endurance test that forms one of the three phases of the sport of eventing; it may also be a competition in its own right, known as hunter trials or simply \"cross-country\", although these tend to be lower level, local competitions.\nThe object of the endurance test is to prove the speed, endurance and jumping ability of the true cross-country horse when he is well trained and brought to the peak of condition. At the same time, it demonstrates the rider's knowledge of pace and the use of this horse across country.\nHistorically, the so-called 'long format' endurance test included four phases: Phases A and C, Roads and Tracks; Phase B, the Steeplechase; and Phase D, the Cross-Country. Each phase had to be completed in a set time. Phase A of the roads and tracks was a warming-up period, usually done at a brisk trot, for the purpose of relaxing and loosening up both horse and rider. Phase A led directly to the start for Phase B, the steeplechase. This phase was ridden at a strong gallop to achieve an average speed of 24 miles per hour with six to eight jumps. At the end of the steeplechase, the horse and rider went directly into Phase C, the second roads and tracks. This phase was very important for allowing the horse to relax and recover and to get his wind back to normal. The pace is usually a quiet trot, interspersed with periods of walking and an occasional relaxed canter. Some riders also dismounted and ran next to their horse during this section of the test.\nThe end of Phase C brought the pair to the ten-minute Vet Box prior to starting out on Phase D, the cross-country. Here the horse had a compulsory ten-minute rest allowing a panel of judges and veterinarians to check the horse's temperature, pulse, respiration and soundness. If, in the opinion of the panel, the horse was not fit or sound enough to continue, it was withdrawn from the competition. At this time the horse was sponged down, the tack adjusted and they were prepared for the next phase. 
Those passing the inspection went to the start box ready for the most exciting phase of the whole endurance test.\nDisobediences from the horse[edit]\nFirst refusal or crossing tracks (circling) in front of an obstacle: 20 penalties per obstacle\n2nd refusal or crossed tracks at the same obstacle: 40 additional penalties\n3rd refusal or crossed tracks at the same obstacle (an \"obstacle\" includes all its elements): elimination\n4th cumulative refusal or crossed tracks on the entire course: elimination\nhttps://en.wikipedia.org/wiki/Disc_dog\nThe QUADRUPED was an NFL draft day competition held in April 1996 for the Jacksonville Jaguars. This competition format is older than all other disc dog competition formats other than the Ashley Whippet and the FDDO formats. Originally a halftime show for football games with four frisbee dog teams competing to be the last team standing. It turned into an open competition where many more than four teams were able to compete. Today we have The QUADRUPED Series, a group of competitions that are a points championship in the United States. The popularity has been so great within the frisbee dog world that it has spread to Europe where it has occurred in several countries.\nhttps://en.wikipedia.org/wiki/Polo\nIn horse polo each team consists of four mounted players, which can be mixed teams of both men and women.\nEach position assigned to a player has certain responsibilities:\nNumber One is the most offence-oriented position on the field. The Number One position generally covers the opposing team's Number Four.\nNumber Two has an important role in offence, either running through and scoring themselves, or passing to the Number One and getting in behind them. Defensively, they will cover the opposing team's Number Three, generally the other team's best player. Given the difficulty of this position, it is not uncommon for the best player on the team to play Number Two so long as another strong player is available to play Three.\nNumber Three is the tactical leader and must be a long powerful hitter to feed balls to Number Two and Number One as well as maintaining a solid defence. The best player on the team is usually the Number Three player, usually wielding the highest handicap.\nNumber Four is the primary defence player. They can move anywhere on the field, but they usually try to prevent scoring. The emphasis on defence by the Number Four allows the Number Three to attempt more offensive plays, since they know that they will be covered if they lose the ball.\nPolo must be played right-handed\nThe sword and broadsword are among the four main weapons taught in the Chinese martial arts, the others being the staff and spear. The order in which these weapons is taught may vary between schools and styles, but the jian is generally taught last among the four.\nhttps://en.wikipedia.org/wiki/Swordsmanship\nThe sword in ancient Egypt was known by several names, but most are variations of the words sfet, seft or nakhtui. The earliest bronze swords in the country date back 4000 years. Four types of sword are known to have been used: the ma or boomerang-sword based on the hunting stick, the kat or knife-sword, the khopesh or falchion based on the sickle, and a fourth form of straight longsword. The khopesh was used region-wide and is depicted as early as the Sixth Dynasty (3000 BC). It was thick-backed and weighted with bronze, sometimes even with gold hilts in the case of pharaohs. The blade may be edged on one or both sides, and was made from iron or blue steel. 
The double-edge sword had a leaf-shaped blade, and a handle which hollows away at the centre and thickens at each end. These swords are of various lengths, and were paired with shields. Middle Eastern swords became dominant throughout North Africa after the introduction of Islam, after which point swordsmanship in the region becomes that of Arabian or Middle Eastern fencing.\nhttps://en.wikipedia.org/wiki/Silambam\n16 is the squares of the quadrant model. There are 16 techniques, four of which are very important (different the fourth quadrant)\nSilambam is a weapon-based Indian martial art from Tamil Nadu, but also traditionally practised by the Tamil community of Sri Lanka and Malaysia. It is closely related to Keralan kalaripayat and Sri Lankan angampora. It derives from the Tamil word silam meaning \"hill\" and the Kannada word bambu from which the English \"bamboo\" originates. The term silambambu referred to a particular type of bamboo from the Kurinji hills in present-day Kerala. Thus silambam was named after its primary weapon, the bamboo staff. The related term silambattam often refers specifically to stick-fighting.\nBeginners are first taught footwork (kaaladi) which they must master before learning spinning techniques and patterns, and methods to change the spins without stopping the motion of the stick. There are sixteen of them among which four are very important. Footwork patterns are the key aspects of silambam. Traditionally, the masters first teach kaaladi for a long time before proceeding to unarmed combat. Training empty-handed allows the practitioner to get a feel of silambam stick movements using their bare hands, that is, fighters have a preliminary training with bare hands before going to the stick.\n16 is the number of squares in the quadrant model. There are four very important patterns in the techniques, reflecting the quadrant four.\nhttps://en.wikipedia.org/wiki/Jeet_Kune_Do\nThe following are principles that Lee incorporated into Jeet Kune Do.[6] Lee felt these were universal combat truths that were self-evident, and would lead to combat success if followed. Familiarity with each of the \"Four ranges of combat\", in particular, is thought to be instrumental in becoming a \"total\" martial artist.\nFour ranges of combat\nJeet Kune Do students train in each of the aforementioned ranges equally. According to Lee, this range of training serves to differentiate JKD from other martial arts. Lee stated that most but not all traditional martial arts systems specialize in training at one or two ranges. Lee's theories have been especially influential and substantiated in the field of mixed martial arts, as the MMA Phases of Combat are essentially the same concept as the JKD combat ranges. As a historic note, the ranges in JKD have evolved over time. 
Initially the ranges were categorized as short or close, medium, and long range.[3] These terms proved ambiguous and eventually evolved into their more descriptive forms, although some may still prefer the original three categories.\nhttps://en.wikipedia.org/wiki/Savate\nSavate (French pronunciation: [saˈvat]), also known as boxe française, French boxing, French kickboxing or French footfighting, is a French martial art which uses the hands and feet as weapons combining elements of western boxing with graceful kicking techniques.\nIn competitive or competition savate which includes Assaut, Pre-Combat, and Combat types, there are only four kinds of kicks allowed along with four kinds of punches allowed:\nKicks[edit]\nfouetté (literally \"whip\", roundhouse kick making contact with the toe—hard rubber-toed shoes are worn in practice and bouts), high (figure), medium (médian) or low (bas)\nchassé (side (\"chassé lateral\") or front (\"chassé frontal\") piston-action kick, high (figure), medium (médian) or low (bas)\nrevers, frontal or lateral (\"reverse\" or hooking kick) making contact with the sole of the shoe, high (figure), medium (médian), or low (bas)\ncoup de pied bas (\"low kick\", a front or sweep kick to the shin making contact with the inner edge of the shoe, performed with a characteristic backwards lean) low only[15][16]\nPunches[edit]\ndirect bras avant (jab, lead hand)\ndirect bras arrière (cross, rear hand)\ncrochet (hook, bent arm with either hand)\nuppercut (either hand)\nSavate did not begin as a sport, but as a form of self-defence and fought on the streets of Paris and Marseille. This type of savate was known as savate de rue. In addition to kicks and punches, training in savate de rue (savate defense) includes knee and elbow strikes along with locks, sweeps, throws, headbutts, and takedowns.[17][18][19][20]\nhttps://en.wikipedia.org/wiki/Sanshou\nJūnshì Sǎndǎ (Mandarin Chinese, Military Free Fighting): A system of unarmed combat that was designed by Chinese Elite Forces based upon their intense study of traditional martial arts such as traditional Kung Fu, Shuai Jiao, Chin Na and modern hand-to-hand fighting and combat philosophy to develop a realistic system of unarmed fighting for the Chinese military. Jùnshì Sǎndǎ employs all parts of the body as anatomical weapons to attack and counter with, by using what the Chinese consider to be the four basic martial arts techniques:\nDa – Upper-Body Striking – using fists, open hands, fingers, elbows, shoulders, forearms and the head\nTi – Lower-Body Striking – including kicks, knees and stomping\nShuai – Throws – using Wrestling and Judo-like takedowns and sweeps, and\nChin-Na – Seizing – which includes jointlocks, strangulation and other submissions\nhttps://en.wikipedia.org/wiki/Pradal_serey\nPradal Serey (Khmer: ប្រដាល់សេរី) or Kun Khmer (Khmer: គុណខ្មែរ) is an unarmed martial art and combat sport from Cambodia.[1] In Khmer, pradal means fighting or boxing and serey means free. Thus, pradal serey may be translated as \"free fighting\". 
The sport consists of stand up striking and clinch fighting where the objective is to knock an opponent out, force a technical knockout, or win a match by points.\nPradal Serey is most well known for its kicking technique, which generates power from hip rotation rather than snapping the leg, Pradal Serey consists of four types of strikes: punches, kicks, elbows and knee strikes.\nhttps://en.wikipedia.org/wiki/Capoeira\nCapoeira (/ˌkæpuːˈɛərə/; Portuguese pronunciation: [kapuˈejɾɐ]) is a Brazilian martial art that combines elements of dance,[1][2][3] acrobatics[4] and music, and is sometimes referred to as a game.\nThere are four basic kinds of songs in capoeira, the Ladaínha, Chula, Corrido and Quadra. The Ladaínha is a narrative solo sung only at the beginning of a roda, often by a mestre (master) or most respected capoeirista present. The solo is followed by a louvação, a call and response pattern that usually thanks God and one's master, among other things. Each call is usually repeated word-for-word by the responders. The Chula is a song where the singer part is much bigger than the chorus response, usually eight singer verses for one chorus response, but the proportion may vary. The Corrido is a song where the singer part and the chorus response are equal, normally two verses by two responses. Finally, the Quadra is a song where the same verse is repeated four times, either three singer verses followed by one chorus response, or one verse and one response.\nhttps://en.wikipedia.org/wiki/Freestyle_BMX\nFlatland BMX occupies a position somewhat removed from the rest of freestyle BMX. People who ride in the above disciplines will generally take part in at least one of the others, but flatlanders tend to only ride flatland. They are often very dedicated and will spend several hours a day perfecting their technique.\nA variety of options are commonly found on flatland bikes. The most unifying feature of flatland bikes is the use of four pegs, one on the end of each wheel axle. Flatland riders will choose to run either a front brake, a rear brake, both brakes, or no brakes at all, depending on stylistic preference.\nhttps://en.wikipedia.org/wiki/Fourcross\nFourcross is a form of four-wheeled downhill mountain biking, pioneered in Canada and the United States. It has the benefit of being suitable for disabled riders. The sport each year is part of the Crankworx festival\nThe current world champion is Joost Wichman.\nFour-cross was added to the UCI Mountain Bike World Cup and the UCI Mountain Bike & Trials World Championships in 2002, replacing dual slalom. It was removed from the World Cup following the 2011 series. A replacement world series, the 4X Pro Tour, was launched in 2012.\nhttps://en.wikipedia.org/wiki/Core_Four\nThe \"Core Four\" are the former New York Yankees baseball players Derek Jeter, Andy Pettitte, Jorge Posada, and Mariano Rivera. All four players were drafted or originally signed as amateurs by the Yankees in the early 1990s. They played together in the minor leagues, and made their Yankee major league debuts in 1995. Each of them was a key contributor to the Yankees' late-1990s dynasty that won four World Series championships in five years. By 2007, they were the only remaining Yankees from the franchise's dynasty of the previous decade. 
All four players were on the Yankees' active roster in 2009 when the team won the 2009 World Series—its fifth championship in the previous 14 years.\nThree members of the Core Four—Jeter, Rivera and Posada—played together for 17 consecutive years (1995–2011), longer than any other similar group in North American professional sports.[1] Pettitte had a sojourn away from the team when he played for the Houston Astros for three seasons, before returning to the Yankees in 2007. He retired after the 2010 season,[2] reducing the group to the so-called Key Three.[3] Posada followed suit after 2011, ending his 17-year career with the Yankees.[4] Pettitte came out of retirement prior to the 2012 season and played for two more years.[5] Both Pettitte and Rivera retired after the 2013 season, and Jeter retired after the 2014 season.[6]\nNormally pool has 16 balls. 16 is the squares of the quadrant model.\nhttps://en.wikipedia.org/wiki/Four-ball_billiards\nFour-ball billiards (often abbreviated to simply four-ball, and sometimes spelled 4-ball or fourball) is a carom billiards game, played on a pocketless table with four billiard balls, usually two red and two white, one of the latter with a spot to distinguish it (in some sets, one of the white balls is yellow instead of spotted). Each player is assigned one of the white (or yellow) balls as a cue ball. A point is scored when a shooter's cue ball caroms on any two other balls in the same shot (with the opponent's cue ball serving as an object ball, along with the reds, for the shooter). Two points are scored when the shooter caroms on each of the three object balls in a single shot.[1] A carom on only one ball results in no points, and ends the shooter's inning.\nA variant of four-ball is the East Asian game yotsudama (四つ球?, Japanese for 'four balls'), or sagu (사구, Korean for 'four balls').\nhttps://en.wikipedia.org/wiki/Equestrian_vaulting\nEquestrians\nBasic Seat An astride position (the vaulter sits on the horse as a rider would), with the arms held to the side and the hands raised to ear level. Hands should be held with closed fingers and palms facing downward, with the fingers arching slightly upward. Legs are wrapped around the horse's barrel, soles facing rearward, with toes down and feet arched. Vaulter holds this position for four full strides.\nFlag From the astride position, the vaulter hops to his or her knees and extends the right leg straight out behind, holding it slightly above his or her head so the leg is parallel to the horse's spine. The other leg should have pressure distributed through the shin and foot, most weight should be on the back of the ankle, to avoid digging the knee into the horse's back. The left arm is then stretched straight forward, at a height nearly that of the right leg. The hand should be held as it is in basic seat (palm down, fingers together). The right foot should be arched and the sole should face skyward. This movement should be held for four full strides after the arm and leg are raised.\nMill From the astride position, the vaulter brings the right leg over the horse's neck. The grips must be ungrasped and retaken as the leg is brought over. The left leg is then brought in a full arc over the croup, again with a change of grips, before the right leg follows it, and the left leg moves over the neck to complete the full turn of the vaulter. The vaulter performs each leg movement in four strides each, completing the Mill movement in sixteen full strides. 
During the leg passes, the legs should be held perfectly straight, with the toes pointed. When the legs are on the same side of the horse, they should be pressed together.\nStand The vaulter moves from the astride position onto the shins and immediately onto both feet, and releases the grips. The vaulter then straightens up with both knees bent, the buttocks tucked forward, and the hands held as they are in basic seat. The vaulter must hold the position for four full strides. [1\nhttps://en.wikipedia.org/wiki/NASCAR_playoffs\nShort track racing, the grassroots of NASCAR, began experimenting with ideas to help the entry-level racer. In 2001, the United Speed Alliance Racing organization, sanctioning body of the USAR Hooters Pro Cup Series, a short-track stock car touring series, devised a five-race system where the top teams in their Hooters ProCup North and Hooters ProCup South divisions would participate in a five-race playoff, the Four Champions, named for the four Hooters Racing staff members (including 1992 NASCAR Winston Cup Series champion and pilot Alan Kulwicki) killed in an April 1, 1993 plane crash in Blountville, Tennessee. The system organized the teams with starting points based on the team's performance in their division (division champions earn a bonus), and the teams would participate in a five-race playoff. The five races, added to the team's seeding points, would determine the winner. The 2001 version was four races, as one was canceled because of the September 11 terrorist attacks; however, NASCAR watched as the ProCup's Four Champions became a success and drivers from the series began looking at NASCAR rides. The idea was to give NASCAR, which was becoming in many areas the fourth-largest sport (after Major League Baseball, the NFL, the NBA and surpassing in some regions the NHL) attention during baseball's road to the World Series and the outset of the pro and college football, NHL and NBA seasons.\nThe playoffs system was announced on January 21, 2004 as the \"Chase for the Championship\", and first used during the 2004 Nextel Cup season. The format used from 2004 to 2006 was modified slightly starting with the 2007 season. A major change to the qualifying criteria was instituted in 2011, along with a major change to the points system. Even more radical changes to the qualifying criteria, and to the format of the playoffs itself, were announced for the upcoming 2014 Sprint Cup Series. As of 2014, the 10-race playoff format involves 16 drivers chosen primarily on wins during the \"regular season\"; if fewer than 16 drivers win races during the regular season, the remaining field is filled on the basis of regular season points. These drivers compete against each other while racing in the standard field of 40 cars. The driver with the most points after the final 10 races is declared the champion.\nOn January 30, 2014, a new Chase system resembling the playoff systems used in other major league sports was announced at Media Day.[8] On July 15, NASCAR announced various design changes to identify Chase drivers in the field: on these drivers, their cars' roof numbers, front splitters and fascia, and the windshield header are colored yellow, and the Chase logo on the front quarter panel.[9]\nUnder the new system, the Chase field is expanded to 16 drivers for the 10-race Chase. 
The 16 drivers are chosen primarily on wins during the \"regular season\"; if fewer than 16 drivers win races during the regular season, the remaining field is filled on the basis of regular season points. These drivers compete against each other while racing in the standard field of 43 cars. The driver with the most points after the final 10 races is declared the champion.\nThe new playoff system means that drivers are eliminated from title contention as the Chase progresses. The bottom four of the top-16 drivers are eliminated from title contention after the third race (Dover) in what was called the \"Challenger Round\", reducing the size of the field by 25%. The bottom four winless drivers have their points reset based on the standard points system, while the remaining 12 Chase drivers' points are reset to 3,000 points. The new bottom four are eliminated after the sixth Chase race (Talladega) in the \"Contender Round\", reducing the size of the field another 33%. Those who continue have their points all reset to 4,000. Then the \"Eliminator Round\" involves axing 50% of the Chase grid, cutting the drivers 5th-8th in the points after the penultimate race at Phoenix, and the top four drivers have their point totals reset to 5,000 so that they are tied for the final race at Homestead-Miami for the title run. Of these four drivers, the driver with the best finish at Homestead is then the crowned series champion (these drivers do not earn bonus points for leading a lap or leading the most laps).[10] Any Chase driver who wins a race is automatically guaranteed a spot in the next round. Up to three drivers thus can advance to the next round of the Chase through race wins, regardless of their actual points position when the elimination race in that round happens. The remaining drivers advance on points. The round names were removed starting in 2016, being changed to \"Round of 16\", \"Round of 12\", \"Round of 8\", and \"Championship Round\".[11]\nThe previous championship format will be maintained for the 2017 season, but with changes. A revised regular-season points system will be adopted, splitting races into three stages. The top 10 drivers at the end of the first two stages each race will earn additional bonus points towards the championship, 10 points for the first place car down to 1 point for the 10th place car. At the end of the race, the normal championship point scheme will be used to award points to the entire field. Additionally, \"playoff points\" will be awarded during the regular season for winning stages, winning races, and finishing the regular season in the top 16 on the championship points standings. 1 playoff point for the winner of a stage, 5 playoff points plus an automatic birth into the round of 16 for the race winner. (unless there are more than 16 race winners in the season, then the top 16 in race wins move on). If a driver qualifies for the championship, these playoff points will be carried into their reset points totals until the final round. [12][13][14] So a driver can have less regular season points than another driver, but be seeded higher due to more wins.\nNASCAR also stated that it intended to drop the \"Chase\" branding for the post-season and re-brand them as the \"Playoffs\" for 2017.[14]\nThe Kevin Harvick Rule – Fifth Place[edit]\nThe idea of NASCAR driver Kevin Harvick, drivers eliminated in each of the three rounds will be eligible to race for fifth place during the final races. 
Drivers eliminated in the first round will retain their Chase score (for example, a driver with one win during the season eliminated after scoring 75 points during the first round will score 2,078 points), start the fourth race with the same score they held after the first three races, and accumulate points for the remainder of the season.[15]
Drivers eliminated in the second or third round will have their score reverted to the score at the end of the first round; their individual race scores for the three races (if eliminated in the second round) or six races (if eliminated in the third round) before their elimination are then combined with the score after the third race of the first round to give the driver's total score.
After ten races, the drivers in positions 5–16 will be determined by the total number of points accumulated in the ten races (bonus points will apply), without the points resets of the second or third rounds, added to the driver's base Chase score with bonuses added.
https://en.wikipedia.org/wiki/File:Restrictor-Plate-Rendering.png
A restrictor plate or air restrictor is a device installed at the intake of an engine to limit its power. This kind of system is occasionally used in road vehicles (e.g., motorcycles) for insurance purposes, but mainly in automobile racing, to limit top speed, provide an equal level of competition, and lower costs; insurance purposes have also factored in for motorsports.
https://en.wikipedia.org/wiki/Restrictor_plate
https://en.wikipedia.org/wiki/Formula_4
To become an eligible FIA Formula 4 engine, the engine must meet the homologation requirements. According to the homologation requirements, a FIA Formula 4 engine must last at least 10,000 km and have a maximum purchase price of €9,500.[3] According to the FIA Formula 4 technical regulations, only four-cylinder engines are allowed. Both normally aspirated and turbocharged engines are permitted. The power output is capped at 160 hp. The engine displacement is unlimited.[4] Currently four engines are homologated for use in the FIA Formula 4.[5]
https://en.wikipedia.org/wiki/Daytona_500
The Daytona 500 is a 500-mile-long (805 km) Monster Energy NASCAR Cup Series motor race held annually at Daytona International Speedway in Daytona Beach, Florida. It is the first of two Cup races held every year at Daytona, the second being the Coke Zero 400. It is one of the four restrictor plate races on the Cup schedule. The inaugural Daytona 500 was held in 1959, coinciding with the opening of the speedway, and since 1982 it has been the season-opening race of the Cup series.[1]
16 squares quadrant model
https://en.wikipedia.org/wiki/2014_NASCAR_Sprint_Cup_Series
New Chase format
On January 30, 2014, NASCAR announced radical changes to the format for the season-ending Chase for the Sprint Cup.[64]
The group of drivers in the Chase will now officially be called the NASCAR Sprint Cup Chase Grid.
The number of drivers qualifying for the Chase Grid will expand from 12 to 16.
Fifteen of the 16 slots in the Chase Grid are reserved for the drivers with the most race wins over the first 26 races, provided that said drivers are in the top 30 in series points and have attempted to qualify for each race (with rare exceptions). The remaining spot is reserved for the points leader after 26 races, if that driver does not have a victory. If fewer than 16 drivers have wins in the first 26 races, the remaining Chase Grid spots are filled by winless drivers in order of season points.
As in the recent past, all drivers on the Chase Grid have their driver points reset to 2,000 prior to the Chase, with a 3-point bonus for each win in the first 26 races.
The Chase will be divided into four rounds. After each of the first three rounds, the four Chase Grid drivers with the fewest season points are eliminated from the Grid and championship contention. Any driver on the Chase Grid who wins a race in the first three rounds automatically advances to the next round. Also, all drivers eliminated from the Chase have their points readjusted to the regular-season points scheme.
Challenger Round (races 27–29)
Begins with 16 drivers, each with 2,000 points plus a 3-point bonus for each win in the first 26 races.
Contender Round (races 30–32)
Begins with 12 drivers, each with 3,000 points.
Eliminator Round (races 33–35)
Begins with eight drivers, each with 4,000 points.
NASCAR Sprint Cup Championship (final race)
The last four drivers in contention for the season title start the race at 5,000 points, with the highest finisher in the race winning the Cup Series title.
https://en.wikipedia.org/wiki/Oval_track_racing
A typical oval track consists of two parallel straights, connected by two 180° turns. Although most ovals generally have only two 180° curves, they are usually advertised and labeled as four 90° turns.
Rounded-off rectangle or square One prominent, but now uncommon shape is the "rounded-off rectangle". Pursuant to its name, the track shape resembles a rectangle, with two long straights and two short straights, connected by four separate turns. The primary characteristic of a rounded-off rectangle that differentiates it from a traditional oval shape is the presence of two "short chutes", one between turns one and two, and one between turns three and four. While most traditional ovals have two continuous 180° radii (advertised as four 90° turns), this shape actually has four distinct 90° curves. When it was first constructed, the Homestead-Miami Speedway was designed to this layout and touted as a "mini-Indy." However, at only 1.5 miles (one mile shorter than Indy), the track proved to be uncompetitive, owing largely to the sharp corners, and was soon reconfigured as a traditional oval. Indianapolis remains the only major track to this specification. Tracks of this shape have been avoided due to grandstand sight line issues, slow corners, and dangerous impact angles. However, numerous private manufacturers' test tracks use this type of layout. The only major short track with a rectangular layout has the shape of a rounded-off square with four nearly identical straights and turns.
Flemington Speedway, a square
Homestead (original design)
Rounded-off trapezoid A very rare layout is a trapezoid oval course. The difference from the rounded-off rectangle is the shorter back straight and longer front straight, so Turns 1 and 4 are tighter than Turns 2 and 3.
Emerson Fittipaldi Speedway
Dogleg Some oval tracks have minor variations, such as kinks or doglegs. A "dogleg" is defined as a soft curve down one of the straights, either inward or outward, which skews the oval into a non-symmetric or non-traditional shape.
While the extra curve would seemingly give the oval five turns, the dogleg is normally omitted from identification, and the ovals are still labeled with four turns.\nNazareth Speedway\nPhoenix International Raceway\nI-70 Speedway\nhttps://en.wikipedia.org/wiki/Four-vertex_theorem\nThe classical four-vertex theorem states that the curvature function of a simple, closed, smooth plane curve has at least four local extrema (specifically, at least two local maxima and at least two local minima). The name of the theorem derives from the convention of calling an extreme point of the curvature function a vertex. This theorem has many generalizations, including a version for space curves where a vertex is defined as a point of vanishing torsion.\n3 Proof\n4 Converse\n5 Application to mechanics\n6 Discrete variations\n7 Generalizations to space curve\nAn ellipse has exactly four vertices: two local maxima of curvature where it is crossed by the major axis of the ellipse, and two local minima of curvature where it is crossed by the minor axis. In a circle, every point is both a local maximum and a local minimum of curvature, so there are infinitely many vertices.\nThe four-vertex theorem was first proved for convex curves (i.e. curves with strictly positive curvature) in 1909 by Syamadas Mukhopadhyaya.[1] His proof utilizes the fact that a point on the curve is an extremum of the curvature function if and only if the osculating circle at that point has 4th-order contact with the curve (in general the osculating circle has only 3rd-order contact with the curve). The four-vertex theorem was proved in general by Adolf Kneser in 1912 using a projective argument.[2]\nProof[edit]\nFor many years the proof of the four-vertex theorem remained difficult, but a simple and conceptual proof was given by Osserman (1985), based on the idea of the minimum enclosing circle.[3] This is a circle that contains the given curve and has the smallest possible radius. If the curve includes an arc of the circle, it has infinitely many vertices. Otherwise, the curve and circle must be tangent at at least two points. At each tangency, the curvature of the curve is greater than that of the circle (else the curve would continue from the tangency outside the circle rather than inside). However, between each pair of tangencies, the curvature must decrease to less than that of the circle, for instance at a point obtained by translating the circle until it no longer contains any part of the curve between the two points of tangency and considering the last point of contact between the translated circle and the curve. Therefore, there is a local minimum of curvature between each pair of tangencies, giving two of the four vertices. There must be a local maximum of curvature between each pair of local minima, giving the other two vertices.[3][4]\nConverse[edit]\nThe converse to the four-vertex theorem states that any continuous, real-valued function of the circle that has at least two local maxima and two local minima is the curvature function of a simple, closed plane curve. 
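Before the history of the converse, a rough numerical illustration of the ellipse example above may help. The curvature of the ellipse (a cos t, b sin t) is ab / (a² sin²t + b² cos²t)^(3/2), and counting its local extrema over one period finds exactly four, at the ends of the axes. This is only a sketch: the parametrisation, sampling density, and helper names are my own choices, not anything from the theorem's literature.

```python
import numpy as np

def ellipse_curvature(t, a=2.0, b=1.0):
    """Curvature of the ellipse (a*cos t, b*sin t)."""
    return (a * b) / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5

def count_local_extrema(values):
    """Count local maxima and minima of a periodic sequence of samples."""
    extrema = 0
    n = len(values)
    for i in range(n):
        prev, cur, nxt = values[i - 1], values[i], values[(i + 1) % n]
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            extrema += 1
    return extrema

t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
kappa = ellipse_curvature(t)
print(count_local_extrema(kappa))  # 4, the minimum the four-vertex theorem guarantees
print(kappa.max(), kappa.min())    # maxima at the major-axis ends, minima at the minor-axis ends
```

The same count (at least two maxima and two minima) is exactly the hypothesis that the converse below turns back into a curve.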
The converse was proved for strictly positive functions in 1971 by Herman Gluck as a special case of a general theorem on pre-assigning the curvature of n-spheres.[5] The full converse to the four-vertex theorem was proved by Björn Dahlberg shortly before his death in January 1998, and published posthumously.[6] Dahlberg's proof uses a winding number argument which is in some ways reminiscent of the standard topological proof of the Fundamental Theorem of Algebra.[7]\nApplication to mechanics[edit]\nOne corollary of the theorem is that a homogeneous, planar disk rolling on a horizontal surface under gravity has at least 4 balance points. A discrete version of this is that there cannot be a monostatic polygon. However, in three dimensions there do exist monostatic polyhedra, and there also exists a convex, homogeneous object with exactly 2 balance points (one stable, and the other unstable), the Gömböc.\nillustration of the Four-vertex theorem at an ellipse\nDiscrete variations[edit]\nThere are several discrete versions of the four-vertex theorem, both for convex and non-convex polygons.[8] Here are some of them:\n(Bilinski) The sequence of angles of a convex equilateral polygon has at least four extrema.\nThe sequence of side lengths of a convex equiangular polygon has at least four extrema.\n(Musin) A circle circumscribed around three consecutive vertices of the polygon is called extremal if it contains all remaining vertices of the polygon, or has none of them in its interior. A convex polygon is generic if it has no four vertices on the same circle. Then every generic convex polygon has at least four extremal circles.\n(Legendre–Cauchy) Two convex n-gons with equal corresponding side length have either zero or at least 4 sign changes in the cyclic sequence of the corresponding angle differences.\n(A.D. Alexandrov) Two convex n-gons with parallel corresponding sides and equal area have either zero or at least 4 sign changes in the cyclic sequence of the corresponding side lengths differences.\nSome of these variations are stronger than the other, and all of them imply the (usual) four-vertex theorem by a limit argument.\nGeneralizations to space curve[edit]\nThe stereographic projection from the sphere to the plane preserves critical points of geodesic curvature. Thus simple closed spherical curves have four vertices. Furthermore, on the sphere vertices of a curve correspond to points where its torsion vanishes. So for space curves a vertex is defined as a point of vanishing torsion. In 1994 V. D. Sedykh [9] showed that every simple closed space curve which lies on the boundary of a convex body has four vertices. In 2015 Mohammad Ghomi [10] generalized Sedykh's theorem to all curves which bound a locally convex disk.\nQuad oval\nhttps://en.wikipedia.org/wiki/Texas_Motor_Speedway\nTexas Motor Speedway is a speedway located in the northernmost portion of the U.S. city of Fort Worth, Texas – the portion located in Denton County, Texas. The track measures 1.5 miles (2.4 km) around and is banked 20° in turns 1 and 2 and banked 24° in turns 3 and 4. Texas Motor Speedway is a quad-oval design, where the front straightaway juts outward slightly. The track layout is similar to Atlanta Motor Speedway and Charlotte Motor Speedway (formerly Lowe's Motor Speedway). 
The track is owned by Speedway Motorsports, Inc., the same company that owns Atlanta and Charlotte Motor Speedways, as well as the short-track Bristol Motor Speedway.\nhttps://en.wikipedia.org/wiki/Charlotte_Motor_Speedway\nCharlotte Motor Speedway, formerly Lowe's Motor Speedway, is a motorsports complex located in Concord, North Carolina 13 miles (21 km) from Charlotte. The complex features a 1.5 mile (2.4 km) quad oval track that hosts NASCAR racing including the prestigious Coca-Cola 600 on Memorial Day weekend, the NASCAR All-Star Race, and the Bank of America 500. The speedway was built in 1959 by Bruton Smith and is considered the home track for NASCAR with many race teams located in the Charlotte area. The track is owned and operated by Speedway Motorsports, Inc. (SMI) with Marcus G. Smith (son of Bruton Smith) as track president.\nThe 2,000 acres (810 ha) complex also features a state-of-the-art quarter mile (0.40 km) drag racing strip, ZMAX Dragway. It is the only all-concrete, four-lane drag strip in the United States and hosts NHRA events. Alongside the drag strip is a state-of-the-art clay oval that hosts dirt racing including the World of Outlaws finals among other popular racing events.\nhttps://en.wikipedia.org/wiki/Calder_Park_Raceway\nThe Thunderdome is a purpose-built 1.8 km (1.1 mi) quad-oval speedway located on the grounds of Calder Park Raceway. It was originally known as the Goodyear Thunderdome to reflect the naming rights sponsorship bought by the Goodyear Tire & Rubber Company.\nWith its \"double dogleg\" front stretch and the start/finish line located on a straight section rather than the apex of a curve, the Thunderdome is technically a quad-oval in shape, though since its opening it has generally been referred to as a tri-oval. The track, modelled on a scaled down version of the famous Charlotte Motor Speedway, has 24° banking on Turns 1, 2, 3 and 4 while the front stretch is banked at 4° and the back straight at 6°.\nCalder Park Raceway is a motor racing circuit in Melbourne, Victoria, Australia. The complex includes a dragstrip, a road circuit with several possible configurations, and the \"Thunderdome\", a high-speed banked oval equipped to race either clockwise (for right-hand-drive cars) or counter-clockwise (for left-hand-drive cars such as NASCAR).\nFour turn\nhttps://en.wikipedia.org/wiki/Twin_Ring_Motegi\nTwin Ring Motegi (ツインリンクもてぎ Tsuin Rinku Motegi?) is a motorsport race track located at Motegi, Japan. Its name comes from the facility having two race tracks: a 2.493-kilometer (1.549 mi) oval and a 4.8-kilometer (2.98 mi) road course. It was built in 1997 by Honda, as part of the company's effort to bring the IndyCar Series to Japan, helping to increase their knowledge of American open-wheel racing.\nThe oval course is the only one of its kind in Japan, and currently is only used once a year for racing. It is a low-banked, 1.549-mile-long egg-shaped course, with turns three and four being much tighter than turns one and two. On March 28, 1998, CART held the inaugural race at Twin Ring Motegi Speedway. The race was won by Mexican driver Adrian Fernandez. CART continued racing at Twin Ring Motegi Speedway from 1998–2002. In 2003, Honda entered the Indy Racing League and the race became a part of the IRL schedule. 
In addition to Indycar racing, the track has also hosted a single NASCAR exhibition race in 1998.
https://en.wikipedia.org/wiki/Darlington_Raceway
Darlington Raceway is a race track built for NASCAR racing located near Darlington, South Carolina. It is nicknamed "The Lady in Black" and "The Track Too Tough to Tame" by many NASCAR fans and drivers and advertised as "A NASCAR Tradition." It is of a unique, somewhat egg-shaped design, an oval with ends of very different configurations, a condition which supposedly arose from the proximity of one end of the track to a minnow pond the owner refused to relocate. This situation makes it very challenging for the crews to set up their cars' handling in a way that will be effective at both ends.
Banking Turns 1 and 2: 25°
Turns 3 and 4: 23°
Front Straight: 3°
Back Straight: 2°
https://en.wikipedia.org/wiki/Phoenix_International_Raceway
Four turn dogleg
Phoenix Raceway (PIR) is a 1-mile, low-banked tri-oval race track located in Avondale, Arizona. It is named after the nearby metropolitan area of Phoenix. The motorsport track opened in 1964 and currently hosts two NASCAR race weekends annually. Phoenix Raceway has also hosted the IndyCar Series, CART, USAC and the Rolex Sports Car Series. The raceway is currently owned and operated by International Speedway Corporation.
Phoenix Raceway was built in 1964 around the Estrella Mountains on the outskirts of Avondale, Arizona. Because of the terrain and the incorporation of a road course and drag strip, designers had to build a "dogleg" into the backstretch. The original road course was 2 miles (3.2 km) in length and ran both inside and outside of the main oval track.[3] The hillsides adjacent to the track also offer a unique vantage point to watch races from. "Monument Hill", located alongside turns 3 and 4, is a favorite among race fans because of the unique view and lower ticket prices. At the top of this hill lies a USGS bench marker known as Gila and Salt River Meridian, now listed on the National Register of Historic Places. Long before Phoenix Raceway existed, this spot was the original land survey point for all of what later became the state of Arizona.[4]
Banking Turns 1 & 2: 10–11°
Dogleg: 10–11°
Turn 3: 8°
Turn 4: 8–9°
Backstretch: 10°, 8°
Frontstretch: 3°
In November 2010, ISC and the Avondale City Council announced plans for a $100 million long-term development for Phoenix Raceway. $15 million would go towards repaving the track for the first time since 1990 and building a new media center. The plans also included a reconfiguration of the track.[6] The front stretch was widened from 52 feet to 62 feet (19 m), the pit stalls were changed from asphalt to concrete, and the dogleg (between Turn 2 and Turn 3) was moved outward by 95 feet (29 m), tightening the turn radius of the dogleg from 800 feet to 500 feet (152 m). Along with the other changes, progressive banking was added to the turns: Turns 1 and 2, which had 11 degrees of banking, changed to 10 degrees on the bottom and 11 degrees on the top. Turns 3 and 4, which had 9 degrees of banking, changed to 8 degrees on the bottom and 9 on the top. Project leader Bill Braniff, Senior Director of Construction for North American Testing Corporation (NATC), a subsidiary of Phoenix Raceway's parent company International Speedway Corporation, said "All of the changes – including the adjustment of the dog-leg – will be put in place in order to present additional opportunities for drivers to race side-by-side.
We're very confident that we'll have multi-groove racing at Phoenix from Day 1 because of the variable banking that will be implemented.\"[7][8] The infield road course was also sealed off and removed from use, making Phoenix Raceway an oval-only facility.[9] The reconfiguration project was completed by mid-August 2011, and on August 29–30, five drivers tested the new track, describing the new dogleg and backstretch as a \"rollercoaster\" as now when they enter it dips, then rises on exit and dips down going into turn 3, due to the elevation changes. On October 4–5, several NASCAR Cup Series teams tested the oval which was open to the public. Seven–eight million dollars went towards connecting the track property to the Avondale water and sewer systems. Work began following the 2011 Subway Fresh Fit 500.[6]\nhttps://en.wikipedia.org/wiki/Trenton_Speedway\nKidney bean shape four turn\nTrenton Speedway was a racing facility located near Trenton, New Jersey at the New Jersey State Fairgrounds. Races for the United States' premier open-wheel and full-bodied racing series of the times were held at Trenton Speedway.\nThe first race at the Fairgrounds was held on September 24, 1900, but there was no further racing there until 1907. Regular racing began in 1912 and continued until 1941. A new 1 mile dirt oval was opened in 1946. In 1957 the track was paved. It operated in that configuration until 1968 when the track was expanded to 1.5 miles (2.41 km) and a \"kidney bean\" shape with a 20° right-hand dogleg on the back stretch and a wider turn 3 & 4 complex than turns 1 & 2. The track closed in 1980 and the Fairgrounds itself closed 3 years later. The former site of the speedway is now occupied by the Grounds for Sculpture, a UPS shipping facility, and the housing development known as \"Hamilton Lakes\".[1]\nTurns 1 & 2: 10°\nDogleg: 4°\nhttps://en.wikipedia.org/wiki/Bristol_Motor_Speedway\nFour turns\nThe Bristol Motor Speedway, formerly known as Bristol International Raceway and Bristol Raceway, is a NASCAR short track venue located in Bristol, Tennessee. Constructed in 1960, it held its first NASCAR race on July 30, 1961. Despite its short length, Bristol is among the most popular tracks on the NASCAR schedule because of its distinct features, which include extraordinarily steep banking, an all concrete surface, two pit roads, and stadium-like seating. It has also been named one of the loudest NASCAR tracks.[2]\nAnother anomaly is that the short overall length means that there are two sets of pits, which also prevents a garage from being built due to limited space. Until 2002, slower starters were relegated to those on the backstretch. That year, the rules were changed to form essentially one long pit road. Thus, Bristol has unique rules about pit road — during caution, drivers who are wanting to pit must enter pit road in turn 2, drive all the way down the back stretch through the apron of turns 3 and 4 and down the front stretch, exiting pit road in turn 1. This rule eliminated the inherent disadvantage of pitting on the back stretch. During green flag pit stops, cars with pit stalls on the back stretch enter the pits in turn 2 and exit in turn 3; those with pits on the front stretch enter in turn 4 and exit in turn 1. 
Since the new pit rules were instituted, several drivers (most notably Jeff Gordon)[5] have made major mistakes during green flag pit stops by driving through both pit roads when only one is necessary for green flag pit stops.\nBanking Turns: 26–30°\nStraights: 6–10°\nLap record 0:12.742 (Brian Gerster, , 2011, Must See Racing X-treme Speed Classic)\nTemporary Dirt Oval\nSurface Clay\nLength 0.533 mi (0.858 km)\nStraights: 9°\nLap record 0:13.86 (Sammy Swindell, Swindell Motorsports, 2000, World of Outlaws Sprint Car Series)\nhttps://en.wikipedia.org/wiki/Fairgrounds_Speedway\nFairgrounds Speedway is an independent racetrack located at the Tennessee State Fairgrounds near downtown Nashville, Tennessee. The track is the second oldest continually operating track in the United States.[1] The track held NASCAR Grand National/Winston Cup (now Monster Energy NASCAR Cup Series) races from 1958 to 1984.\nTurns: 18°\nhttps://en.wikipedia.org/wiki/Lucas_Oil_Raceway_at_Indianapolis\nLucas Oil Raceway (formerly Indianapolis Raceway Park and O'Reilly Raceway Park at Indianapolis) is a drag racing track. The complex in Brownsburg, Indiana, also has a 0.686-mile (1.104 km) oval, 2.5-mile (4.0 km) road course. The 4,400-foot (1,300 m) drag strip is among the main drag racing venues in the world.\nBanking 12°\nLap record 0:19.581 (Mark Smith, Ralt of America, 1989, Formula Super Vee[1])\nhttps://en.wikipedia.org/wiki/Autódromo_Miguel_E._Abed\nOval has four turns\nAutódromo Miguel E. Abed\nAutódromo Internacional Miguel E. Abed\nMiguel E. Abed logo.png\nLocation Amozoc, near Puebla, Mexico\nTime zone UTC-6\nMajor events WTCC\nNASCAR Corona Series\nLATAM Challenge Series\n24 Hours of Mexico\nJetta TDI Cup USA\nMexican Super Turismo Championship\nLength 3.363 km (2.090 mi)\nLength 2.01 km (1.25 mi)\nThe Autódromo Internacional Miguel E. Abed is a racing track located in the town of Amozoc, 30 kilometres (18.64 mi) east of the city of Puebla in the Mexican state of the same name.\nhttps://en.wikipedia.org/wiki/Martinsville_Speedway\nThe first NASCAR sanctioned event was held on July 4, 1948. In 1951, only 4 cars were running at the finish, the fewest of any race held at the speedway. In 1960, Richard Petty became the youngest winner at Martinsville, at 22 years, 283 days; to date Petty has the most wins (15). In 1991, Harry Gant became the oldest winner at 51 years, 255 days. It was Gant's fourth win in a row, earning him the nickname Mr. September.\nThe oldest nascar race track had four turns\nMartinsville Speedway is an International Speedway Corporation-owned NASCAR stock car racing track located in Henry County, in Ridgeway, Virginia, just to the south of Martinsville. At 0.526 miles (847 m) in length, it is the shortest track in the Monster Energy NASCAR Cup Series. The track was also one of the first paved oval tracks in NASCAR, being built in 1947 by H. Clay Earles. It is also the only race track that has been on the NASCAR circuit from its beginning in 1948. Along with this, Martinsville is the only NASCAR oval track on the entire NASCAR track circuit to have asphalt surfaces on the straightaways, then concrete to cover the turns.\nThe track is often referred to as paper clip-shaped and is banked only 12° in the turns. The combination of long straightaways and flat, narrow turns makes hard braking going into turns and smooth acceleration exiting turns a must. The track was paved in 1955 and in 1956 it hosted its first 500 lap event. 
By the 1970s, a combination of high-traction slick tires and high speed were putting excessive wear on the asphalt surface. In 1976 the turns were repaved with concrete (a rare concept in the 1970s).[2] By 2004, the then 28-year-old concrete had shown significant wear. On April 18, 2004 a large chunk of concrete had become dislodged from the track's surface and caused severe damage to the body of Jeff Gordon's car. In reaction to this, the track was fully repaved with new concrete and asphalt.[3]\nUntil 1999, Martinsville was notorious for having two pit roads. The backstretch pit road was generally avoided because if a team had to pit there during a caution, any car pitting on the frontstretch had the advantage of pitting first and not having to adhere to pace car speed upon exiting their pit road. This was rectified when pit road was reconfigured to extend from the entrance of turn 3 to the exit of turn 2.[4] This move allowed for a garage to be built inside the track, and leaves Bristol as the only active NASCAR track with two pit roads.\nBanking Turns 12°\nStraights 0°\nLap record 18.746 seconds (Greg Sacks, , 1986, NASCAR Whelen Modified Tour)\nhttps://en.wikipedia.org/wiki/Dover_International_Speedway\nDover International Speedway (formerly Dover Downs International Speedway) is a race track in Dover, Delaware, United States. Since opening in 1969, it has held at least two NASCAR races. In addition to NASCAR, the track also hosted USAC[4] and the Verizon IndyCar Series. The track features one layout, a 1 mile (1.6 km) concrete oval, with 24° banking in the turns and 9° banking on the straights. The speedway is owned and operated by Dover Motorsports.\nBanking Turns: 24°\nStraights: 9°[3]\nFour turn oval started as- became paperclip\nHomestead-Miami Speedway is a motor racing track located in Homestead, Florida. The track, which has several configurations, has promoted several series of racing, including NASCAR, the Verizon IndyCar Series, the Grand-Am Rolex Sports Car Series, and the Championship Cup Series.\nThe track opened as a four-turn, rectangular-oval, based on the Indianapolis Motor Speedway's layout, coincidental considering that circuit and Miami Beach were developed by Carl G. Fisher. However, due to its shorter distance, the track was not able to maintain the racing characteristics of the Indianapolis Motor Speedway. Instead, the sharp, flat turns and aprons made passing difficult and lowered overall speed. The geometry also created unfavorably severe crash angles. In 1996, track management attempted to correct the problems by widening the aprons of the turns by as much as 24 feet (7.3 m). The movie Super Speedway was shot at the speedway before the track was reconfigured to an oval. In the summer of 1997, an $8.2 million reconfiguration project changed the turns from a rectangle to a traditional, continuous turn oval.\nIn 2003, the track was reconfigured once again. The turns were changed from mostly flat to steep variable banking. In 2005, lights were installed to allow night racing for the first time. The renovations were praised by fans, and the track has produced a number of close finishes, including 2005's last-lap battle between Greg Biffle and Mark Martin.\nFour turn paperclip\nhttps://en.wikipedia.org/wiki/Autódromo_Ciudad_de_Rafaela\nThe Autódromo Ciudad de Rafaela is a motor racing circuit in Rafaela, Santa Fe, Argentina built in 1952 and paved in 1966. The venue – owned by Atlético de Rafaela – hosted the 500 Millas Argentinas race until 1971. 
The USAC Rafaela Indy 300 race was held at the Autódromo in 1971, won by Al Unser in a Colt-Ford Turbo. Current major race series using the circuit include TC2000, Turismo Carretera, TRV6, Formula Three Sudamericana, and the South American Super Touring Car Championship.
Surface Asphalt (since 1966)
https://en.wikipedia.org/wiki/Milwaukee_Mile
The Milwaukee Mile is an approximately one mile-long (1.6 km) oval race track in the central United States, located on the grounds of the Wisconsin State Fair Park in West Allis, Wisconsin, a suburb west of Milwaukee. Its grandstand and bleachers seat approximately 37,000 spectators. Paved in 1954, it was originally a dirt track. In addition to the oval, there is a 1.8 mile (2.8 km) road circuit located on the infield.
Surface Asphalt
Length ~ 1.0 mi (~ 1.6 km)
Banking Turns – 9.25°
Straights – 2.5°
Lap record 198.2 mph, November 2, 1985 (Sam Jones, Billy Ballew Motorsports, 1964)
https://en.wikipedia.org/wiki/North_Wilkesboro_Speedway
North Wilkesboro Speedway was a short track that held races in NASCAR's top three series, including 93 Winston Cup Series races. The track, a NASCAR original, operated from 1949, NASCAR's inception, until the track's closure in 1996. The speedway briefly reopened in 2010 and hosted several Stock Car Series races, including the now-defunct ASA Late Model Series, USARacing Pro Cup Series, and PASS Super Late Models, before closing again in the spring of 2011. The track is located on U.S. Route 421, about five miles east of the town of North Wilkesboro, North Carolina. It measures five-eighths of a mile and features a unique uphill backstretch and downhill frontstretch.
Darel Dieringer completely dominated the 1967 Gwyn Staley 400, driving for Junior Johnson. Dieringer got the pole with a lap of 21.50 seconds / 104.693 mph and led all 400 laps. He was the first driver to run a Grand National Series race of over 250 miles while leading from start to finish. He lapped the whole field twice at one point. Dieringer took the checkered flag after he ran out of gas in Turn Four on the last lap and coasted to the finish line. This was Dieringer's last Grand National victory. Cale Yarborough, driving the No. 21 Wood Brothers Ford, finished second, one lap behind Dieringer. A 20-lap qualifying race to make the field was won by Clyde Lynn.
https://en.wikipedia.org/wiki/New_Hampshire_Motor_Speedway
New Hampshire Motor Speedway is a 1.058-mile (1.703 km) oval speedway located in Loudon, New Hampshire, which has hosted NASCAR racing annually since the early 1990s, as well as the longest-running motorcycle race in North America, the Loudon Classic. Nicknamed "The Magic Mile", the speedway is often converted into a 1.6-mile (2.6 km) road course, which includes much of the oval.
In 2000, the track was the site of a pair of fatal collisions which took the lives of two promising young drivers. In May, while practicing for a Busch Series race, Adam Petty perished when his throttle stuck exiting the second turn, resulting in a full-speed head-on crash in the middle of the third and fourth turns. When the NASCAR Cup Series made their first appearance of the season, a similar fate befell 1998 Rookie of the Year Kenny Irwin, Jr. For safety reasons, track owners decided to run restrictor plates on the cars during their return trip to the speedway in September 2000, making it the first track in recent history outside of Daytona and Talladega to use them.
It would also be the last: the experiment produced an uneventful Dura Lube 300, won wire-to-wire by Jeff Burton with no lead changes, the first such race since the 1970s.
https://en.wikipedia.org/wiki/Salem_Speedway
Salem Speedway is a .555-mile (0.893 km) paved oval motor racetrack in Washington Township, Washington County, near Salem, Indiana, approximately 100 miles (160 km) south of Indianapolis. It opened in 1947. Major auto racing series that run at Salem are ARCA, USAR and USAC.
The track has 33° of banking in the corners. The first ARCA race was held in 1955.
The qualifying record is 16.785 seconds/119.035 mph by Gary Bradberry in 1994.[1]
Rich Vogler
On the night of July 21, 1990, during the Joe James / Pat O'Connor Memorial sprint car event at the Salem Speedway, which was nationally broadcast on ESPN Thunder, sprint car driver Rich Vogler sustained severe head injuries and was killed after a crash in turn 4. Vogler, who was leading the event at the time and was about to take the white flag signaling one lap to go, hit the turn 4 wall head on, violently scattering tires, his helmet, and other pieces of his car all over the track. The race was red flagged and would never restart. Vogler, who was 39, was posthumously declared the winner because USAC National Sprint Car Series rules revert the results to the previously completed lap on a red flag. This was his 170th win. Finishing first among the survivors was a young driver from Pittsboro, Indiana, named Jeff Gordon.
Four turns - 16 is the number of squares in the quadrant model
https://en.wikipedia.org/wiki/Thompson_Speedway_Motorsports_Park
Thompson Speedway Motorsports Park (TSMP), formerly Thompson International Speedway, is a motorsports park in Thompson, Connecticut, featuring a 5⁄8-mile (1.0 km) paved oval racetrack and a 1.7-mile (2.7 km) road racing course. Once known as the "Indianapolis of the East", it was the first asphalt-paved racing oval track in the United States and is now under the NASCAR Whelen All-American Series banner. Each year Thompson hosts one of the great fall variety events, "The World Series of Auto Racing", highlighted by the International Supermodified Association and the NASCAR Whelen Modified Tour. This event frequently draws over 350 race cars in 16 separate divisions over three days.
Length 5/8 mi (1 km)
https://en.wikipedia.org/wiki/Flemington_Speedway
Flemington Speedway was a motor racing circuit in Flemington, New Jersey which operated from 1915 to 2002. The track was known for being the fastest 5/8 dirt track in the United States. Later it was known for hosting four NASCAR Craftsman Truck Series races and for its pioneering use of foam blocks to lessen the impact of crashes, which led to the adoption of the SAFER barrier. It was America's longest-running Saturday night short track until its closing.
Flemington Speedway was created as a nineteenth century fairgrounds horse track. It was a half mile, four-cornered dirt oval.
Rectangle Oval
Banking Semi-banked
Lap record 0:18.817 (Stacy Compton, Impact Motorsports, 1998, NASCAR Craftsman Truck Series)
https://en.wikipedia.org/wiki/Ontario_Motor_Speedway
There are also four dominant groups for Black rights; one is the NAACP.
Ontario Motor Speedway was a motorsport venue located in Ontario, California.
It was the first and only automobile racing facility built to accommodate major races sanctioned by all of the four dominant racing sanctioning bodies: USAC (and now IndyCar Series) for open-wheel oval car races; NASCAR for a 500-mile (800 km) oval stock car races; NHRA for drag races; and FIA for Formula One road course races. Constructed in less than two years,[2] the track opened in August 1970 and was considered state of the art at the time.[3][4]\nFour dominant racing sanctioning bodies\nThe property remained vacant for several years until the mid-1980s when a Hilton Hotel was built on turn 4 of the old speedway site. It was the first multiple-story building of its kind in the City of Ontario.\nAs of the mid-2000s, development on the property has increased. Over half of the old speedway property, adjacent to Interstate 10, has been developed commercially. However, a minor tribute to the racing heritage of the property can be seen in the street names of the developed area (ex: Duesenburg Drive, Ferrari Lane, and others), in much the same way that the developed area that was formerly Riverside International Raceway reflects the same heritage, with roads named after famous drivers.\nIn 2007, much of the remainder of the property became Piemonte, a mixed-use development with condominiums, business offices, and some retail stores. In the fall of 2008, the centerpiece of Piemonte opened: the Citizens Business Bank Arena, an 11,000-seat sports and entertainment venue. The arena is home to the AHL Ontario Reign, and is built in the general area of Turn 3 of the old Ontario track.\nThe Ontario Mills is located to the east, across the street from the former site of the Ontario Motor Speedway.\nhttps://en.wikipedia.org/wiki/Autódromo_Internacional_Nelson_Piquet\nThe Autódromo Internacional Nelson Piquet (Nelson Piquet International Autodrome), also known as Jacarepaguá after the neighbourhood in which it was located, was a motorsport circuit in Rio de Janeiro, Brazil. Opened in 1977, it hosted the Formula One Brazilian Grand Prix on ten occasions, and was also used for CART, motorcycle racing and stock car racing. In 2012, it was demolished to make way for facilities to be used in the 2016 Summer Olympics.\nEmerson Fittipaldi Speedway (1996–2005)\nLength 3 km (1.864 mi)\nLap record 38.565 (Brazil Christian Fittipaldi, Newman-Haas, 1999, Cart FedEx Championship Series)\nFour turn oval\nhttps://en.wikipedia.org/wiki/Rockingham_Motor_Speedway\nOval Circuit\nLength1.479 mi (2.38 km)\nTurns4\nBanking3.5 – 7.9º\nLap record0:24.719 [2] ( Tony Kanaan, Lola–Ford, 2001, CART)\nHandling Circuit\nSurface Tarmac\nLength 0.97 mi (1.56 km)\nRockingham Motor Speedway is a modern motorsport venue in the United Kingdom, that hosts corporate driving days, driver training, conferencing and exhibitions, vehicle manufacturing events, track days, testing, driving experiences and motor racing. It claims to be Europe's fastest racing circuit,[4] and was the first banked oval constructed in Britain since the closure of Brooklands in 1939.[5]\nThe Oval Circuit[edit]\nThe 1.48 mile American-style banked oval circuit is 18.3 metres wide and has a maximum bank angle of 7 degrees and comprises four very distinct corners. Rockingham's oval is unique in the UK and one of only two speedways in Europe. 
The oval circuit can also be converted to a road course layout for events by positioning temporary chicanes and curves both on the main area and apron of the circuit.
Over the weekend of 20–22 September 2001, the Champ cars came to England for the first time to contest the Rockingham 500, a round of the CART (Championship Auto Racing Teams) FedEx Championship Series. For various reasons the race distance was shortened to 300 km, and victory was snatched on the exit of Turn Four on the last lap by Gil de Ferran, driving the Marlboro Team Penske Honda-powered Reynard 01i at a race average speed of 153.41 mph, from Kenny Bräck at the wheel of the Team Rahal Lola-Ford Cosworth B1/00, and the Newman-Haas Racing Lola-Toyota B1/00 driven by Cristiano da Matta. The fastest lap, and therefore the outright lap record, was set by Patrick Carpentier in 25.551 secs (210.59 mph) in the Player's Forsythe Racing Reynard-Cosworth. Carpentier became the first Canadian ever to hold the outright lap record at an English circuit.[6]
Four turns oval
https://en.wikipedia.org/wiki/Pikes_Peak_International_Raceway
Oval & Road Course
Pikes Peak International Raceway (PPIR) is a racetrack in a Colorado Springs-annexed area of the Fountain, Colorado, postal zone that by October 12, 1997, was "the fastest 1-mile paved oval anywhere".[2] The speedway hosted races in several series including the Indy Racing League and 2 NASCAR series (Busch and Truck) until operations were suspended from 2005 to 2008. A wide variety of amateur racing groups use PPIR for racing and training, and many NASCAR teams use PPIR for testing[citation needed] (the design is similar to the California Speedway in Fontana).[3] PPIR is more similar to Phoenix International Raceway than to Fontana Raceway.
Short Oval
D shaped oval four turns
https://en.wikipedia.org/wiki/Auto_Club_Speedway
D-shaped oval
Length 2.0 mi (3.22 km)
Frontstretch: 11°
Backstretch: 3°
Lap record 241.428 miles per hour (Gil de Ferran, Penske Racing, October 28, 2000, CART)
Auto Club Speedway, formerly California Speedway,[3] is a two-mile (3 km), low-banked, D-shaped oval superspeedway in Fontana, California, which has hosted NASCAR racing annually since 1997. It is also used for open wheel racing events. The racetrack is located near the former locations of Ontario Motor Speedway and Riverside International Raceway. The track is owned and operated by International Speedway Corporation and is the only track owned by ISC to have naming rights sold. The speedway is served by the nearby Interstate 10 and Interstate 15 freeways as well as a Metrolink station located behind the backstretch.
https://en.wikipedia.org/wiki/Michigan_International_Speedway
In 1999, the speedway was purchased by International Speedway Corporation (ISC) and in 2000 the track was renamed to its original name of Michigan International Speedway. In 2000, 10,800 seats were added via a turn 3 grandstand, bringing the speedway to its current capacity. In 2004–2005, the largest renovation project in the history of the facility was ready for race fans when it opened its doors for the race weekend. The AAA Motorsports Fan Plaza—a reconfiguration of over 26 acres (110,000 m2) behind the main grandstand—provided race fans a new and improved area to relax and enjoy sponsor displays, merchandise, and concessions during breaks of on-track activity.
A new, three-story viewing tower housing the Champions Club presented by AAA and 16 new corporate suites also awaited VIP guests, while a state-of-the-art press box and an expansive race operations facility high above the two-mile (3.2 km) oval welcomed the media and race officials.[3] Michigan was repaved prior to the 2012 season. This marks the first time since 1995 that the oval was resurfaced, along with 1967, 1975, and 1986. Also new for 2012 was the addition of a new 20-space trackside luxury campsite to be known as APEX. Situated in turn 3, each site will offer a 20-by-55-foot (6.1 by 16.8 m) area, with water and electric hookups, a picnic table and grill. Besides front-row seating for the racing action, the APEX area will offer personalized service to its guests, including a concierge to address any of their needs during race weekend. To accommodate these new campsites, the remaining silver grandstands in turns 3 and 4 were removed.[4]\nLength 2 mi (3.2 km)\nhttps://en.wikipedia.org/wiki/Texas_World_Speedway\nTexas World Speedway was built in 1969 and is one of only seven superspeedways of two miles (3 km) or greater in the United States used for racing, the others being Indianapolis, Daytona, Pocono, Talladega, Auto Club, and Michigan (there are several tracks of similar size used for vehicle testing). TWS is located on approximately 600 acres (2.4 km²) on State Highway 6 in College Station, Texas. There is a 2-mile (3 km) oval, and several road course configurations. The full oval configuration is closely related to that of Michigan and is often considered the latter's sister track, featuring steeper banking, at 22 degrees in the turns, 12 degrees at the start/finish line, and only 2 degrees along the backstretch,[1] compared to Michigan's respective 18, 12, and 5 degrees. The last major race occurred at the track in 1981. The track is still used by amateur racing clubs such as the SCCA, NASA, Porsche Club of America, Corinthian Vintage Auto Racing, CMRA, driving schools and car clubs, as well as hosting music concerts and the like.\nFour turns D Shaped Oval\nhttps://en.wikipedia.org/wiki/Richmond_International_Raceway\nD-shaped oval (1988-present)\nBanking 14° in turns\n8° on frontstretch\n2° on backstretch\nLap record 0:15.3197 seconds (176.244 mph) (Sam Hornish Jr., Team Penske, 2005, IndyCar)\nWebsite www.rir.com\nRichmond International Raceway (RIR) is a 0.75 miles (1.21 km), D-shaped, asphalt race track located just outside Richmond, Virginia in Henrico County. It hosts the Monster Energy NASCAR Cup Series and NASCAR Xfinity Series. Known as \"America's premier short track\", it formerly hosted a NASCAR Camping World Truck Series race, an IndyCar Series race, and two USAC sprint car races.\nhttps://en.wikipedia.org/wiki/944_Cup\nChapters[edit]\nRacing under the 944 Cup rule set is possible in four different chapters and a National runoff event each year. The following are the only currently recognized official chapters of the 944 Cup National Series:\nhttps://en.wikipedia.org/wiki/Motorcycle_racing…\nCategories[edit]\nThe FIM classifies motorcycle racing in the following four main categories.[1] Each category has several sub categories.[2]\nRoad racing[edit]\nMain article: Road racing\nRoad racing is the sport of racing motorcycles on hard surfaces resembling roads, usually paved with tarmac. Races can take place either on purpose-built racing circuits or on closed public roads.\nTraditional road racing[edit]\nCompetitors line up at the start of the 2010 Senior TT race. 
This form of road racing differs from others insofar as it takes the form of a Time Trial\nHistorically, \"road racing\" meant a course on closed public road. This was once commonplace but currently only a few such circuits have survived, mostly in Europe. Races take place on publics roads which have been temporarily closed to the public by legal orders from the local legislature. Two championships exist, the first is the International Road Racing Championship, the other is the Duke Road Racing Rankings. The latter accounts for the majority of road races that take place each season, with an award for the highest placed rider. Prominent road races include the Isle of Man TT, North West 200 and the Ulster Grand Prix. Ireland has many road racing circuits still in use. Other countries with road races are the Netherlands, Spain, Belgium, Germany, Great Britain, the Czech Republic, Ukraine, New Zealand and Macau.\nMotorcycle Grand Prix[edit]\nMotoGP racing\nMain article: Grand Prix motorcycle racing\nGrand Prix motorcycle racing refers to the premier category of motorcycle road racing. It is divided into three distinct classes:\nMoto3: Introduced in 2012, motorcycles in this class are 250cc with single-cylinder four-stroke engines Previously it featured 125 cc two-stroke motorcycles. This class is also restricted by rider age, with an upper limit of 25 for newly signed riders and wild card entries and an absolute upper limit of 28 for all riders.\nMoto2: Introduced by Dorna Sports, the commercial rights holder of the competition, in 2010 as a 600 cc four-stroke class. Prior to that season, the intermediate class was 250 cc with two-stroke engines. Moto2 races in the 2010 season allowed both engine types; from 2011 on, only the four-stroke Moto2 machines were allowed.\nMotoGP: is the current term for the highest class of GP racing. The class was contested with prototype machines with varying displacement and engine type over the years. Originally contested by large displacement four stroke machines in the early years it eventually switched to 500 cc two strokes. In 2002 990 cc four-stroke bikes were allowed to compete alongside the 500 cc two strokes and then completely replaced them in 2003. 2007 saw a reduction to 800 cc four stroke engines to unsuccessfully slow things down a bit before finally settling on 1000 cc four strokes in 2012.[3]\nGrand prix motorcycles are prototype machines not based on any production motorcycle.\nSuperbike racing[edit]\nSuperbike racing\nMain article: Superbike racing\nSuperbike racing is the category of motorcycle road racing that employs modified production motorcycles. Superbike racing motorcycles must have four stroke engines of between 800 cc and 1200 cc for twins, and between 750 cc and 1000 cc for four cylinder machines. The motorcycles must maintain the same profile as their roadgoing counterparts. The overall appearance, seen from the front, rear and sides, must correspond to that of the bike homologated for use on public roads even though the mechanical elements of the machine have been modified.\nSupersport racing[edit]\nSee also: AMA Supersport Championship, British Supersport Championship, and Supersport World Championship\nSupersport racing is another category of motorcycle road racing that employs modified production motorcycles. 
To be eligible for Supersport racing, a motorcycle must have a four-stroke engine of between 400 and 600 cc for four-cylinder machines, and between 600 and 750 cc for twins, and must satisfy the FIM homologation requirements. Supersport regulations are much tighter than Superbikes. Supersport machines must remain largely as standard, while engine tuning is possible but tightly regulated.\nEndurance racing[edit]\nMain article: Endurance racing (motorsport)\nEndurance racing is a category of motorcycle road racing which is meant to test the durability of equipment and endurance of the riders. Teams of multiple riders attempt to cover a large distance in a single event. Teams are given the ability to change riders during the race. Endurance races can be run either to cover a set distance in laps as quickly as possible, or to cover as much distance as possible over a preset amount of time. Reliability of the motorcycles used for endurance racing is paramount.\nSidecar racing[edit]\nSidecar racing\nMain article: Sidecar World Championship\nSidecar racing is a category of sidecar motorcycle racing. Older sidecar road racers generally resembled solo motorcycles with a platform attached; modern racing sidecars are purpose built low and long vehicles. Sidecarcross resembles MX motorcycles with a high platform attached. In sidecar racing a rider and a passenger work together to make the machine perform optimally; the way in which the passenger shifts their weight across the sidecar is crucial to its performance around corners.\nSidecar racing has many sub-categories including:\nSidecarcross (sidecar motocross)\nSidecar trials\nF1/F2 road racing\nHistoric (classic) road racing\nMotocross[edit]\nStart of a Motocross race\nMain article: Motocross\nMotocross (or MX) is the direct equivalent of road racing, but off road, a number of bikes racing on a closed circuit. Motocross circuits are constructed on a variety of non-tarmac surfaces such as dirt, sand, mud, grass, etc., and tend to incorporate elevation changes either natural or artificial. Advances in motorcycle technology, especially suspension, have led to the predominance of circuits with added \"jumps\" on which bikes can get airborne. Motocross has another noticeable difference from road racing, in that starts are done en masse, with the riders alongside each other. Up to 40 riders race into the first corner, and sometimes there is a separate award for the first rider through (see holeshot). The winner is the first rider across the finish line, generally after a given amount of time or laps or a combination.\nMotocross has a plethora of classes based upon machine displacement (ranging from 50cc 2-stroke youth machines up to 250cc two-stroke and 450cc four-stroke), age of competitor, ability of competitor, sidecars, quads/ATVs, and machine age (classic for pre-1965/67, Twinshock for bikes with two shock absorbers, etc.).\nSupercross[edit]\nMain article: Supercross\nSupercross (or SX) is simply indoor motocross. Supercross is more technical and rhythm like to riders. Typically situated in a variety of stadiums and open or closed arenas, it is notable for its numerous jumps. In North America, this has been turned into an extremely popular spectator sport, filling large baseball, soccer, and football stadiums, leading to Motocross being now termed the \"outdoors\". 
In Europe, however, Supercross is a less popular sport, as the predominant focus there is on Motocross.
Supermoto
A Supermoto rider on a tarmac section
Main article: Supermoto
Supermoto is a racing category that is a crossover between road-racing and motocross. The motorcycles are mainly motocross types with road-racing tyres. The racetrack is a mixture of road and dirt courses (in different proportions) and can take place either on closed circuits or in temporary venues (such as urban locations).
The riding style on the tarmac section is noticeably different from other forms of tarmac-based racing, with a different line into corners, sliding of the back wheel around the corner, and using the leg straight out to corner (as opposed to the noticeable touching of the bent knee to the tarmac of road racers).
Enduro and cross-country
Enduro
Former World Enduro Champion Stefan Merriman
Main article: Enduro
Enduro is a form of off road motorcycle sport that primarily focuses on the endurance of the competitor. In the most traditional sense ("Time Card Enduros"), competitors complete a lap of 10 or more miles, predominantly off road, often through forestry. The lap is made up of different stages, each with a target time in which to complete it exactly; there are penalties for being early and for being late, so the goal is to be exactly "on time". Some stages are deliberately "tight", others are lax, allowing the competitor to recuperate. There are also a variety of special tests, on a variety of terrain, to further aid classification; these are speed stages where the fastest time is desired. A normal event lasts for 3 to 4 hours, although longer events are not uncommon. Some events, particularly national and world championship events, take place over several days and require maintenance work to be carried out within a limited time window or while the race is running. To prevent circumvention of the maintenance restrictions, the motorcycles are kept overnight in secure storage.
There is a World Enduro Championship (WEC) that has events across Europe, with a few excursions to North America. The most significant event in the Enduro calendar is the International Six Days Enduro (formerly the International Six Days Trial), where countries enter teams of riders (i.e. Enduro's "World Cup"), as well as club teams; the event combines amateur sport with professional-level sport, and it takes place in a much more geographically dispersed range of locations.
In addition to traditional Time Card Enduros held over a long lap, a variety of other forms of sport have been taken up; notably "Short Course Enduros" (a shorter, in lap length, form of Time Card Enduro), hare scrambles, and "Hare and Hounds".
Hare Scramble
Main article: Hare scramble
Hare Scramble racer at Hyden, Ky
Hare scramble is the name given to a particular form of off-road motorcycle racing. Traditionally, a hare scramble can vary in length and time, with the contestants completing multiple laps around a marked course through wooded or other rugged natural terrain. The overall winner is the contestant who maintains the highest speed throughout the event. In Florida, hare scrambles start the race with a staggered starting sequence. Once on the course, the object of the competitor is to complete the circuit as fast as possible.
The race consists of wooded areas and/or open fields.\nCross-country rally[edit]\nMain article: Rally raid\nCross-country rally events (also called Rallye Raid or simply Rallye, alternate spelling Rally) are much bigger than enduros. Typically using larger bikes than other off road sports, these events take place over many days, travelling hundreds of miles across primarily open off road terrain. The most famous example is the Dakar Rally, previously travelling from Western Europe (often Paris) to Dakar in Senegal, via the Sahara desert, taking almost two weeks. Since 2009 the Dakar Rally has been held in South America traveling through Peru, Argentina and Chile. A FIM Cross-Country Rallies World Championship also exists encompassing many events across the world, typically in desert nations. These events often run alongside \"car\" rallies (under the FIA).\nTrack racing[edit]\nMain article: Track racing\nTrack racing is a form of motorcycle racing where teams or individuals race opponents around an oval track. There are differing variants, with each variant racing on a different surface type.\nIndoor short track and TT Racing[edit]\nTrack racing motorcycles\nIndoor races consist of either a polished concrete floor with coke syrup or other media sprayed or mopped onto the concrete for traction for the tyres of the motorcycles, or on dirt that has been moistened and hard packed, or left loose (often called a cushion). Similar to size of the Arenacross Arenas or sometimes smaller the riders must have accurate throttle control to negotiate these tight Indoor Race Tracks.\nIn the U.S., flat-track events are held on outdoor dirt ovals, ranging in length from one mile to half-mile, short-tracks and TTs. All are usually held outdoors, though a few short-track events have been held in indoor stadiums. A Short Track event is one involving a track of less than 1⁄2 mile in length, while a TT event can be of any length, but it must have at least one right turn and at least one jump to qualify.\nIn the A.M.A. Grand National Championship, mile, half-mile, short-track and TT races are part of a specific discipline labelled \"Dirt track\" or sometimes \"Flat track\" (also called Flat Track). However the AMA Sanction rule books refer to this discipline as Dirt track racing. Whether mile, half-mile, short-track or TT, traction is what defines a dirt track race. The bikes cannot use \"knobbies\", they must use \"Class C\" tires which are similar to street tires. On mile, half-mile, short-track course, the track is an oval, all turns to the left only, and only a rear brake is allowed. On the TT courses, there must be at least one right hand turn with a jump being optional, front and rear brakes are allowed, but the same \"Class C\" tires are required.\nAlthough not mandated, most flat track racers wear a steel \"shoe\" on the left boot which is actually a fitted steel sole that straps onto the left boot. This steel shoe lets the rider slide more easily and safely on their left foot when needed as they lean the bike to the left while sliding through the corners, though riders can often perform what is known as a \"feet-up slide\", using throttle control, body lean and steering alone to power-slide through the turns, without sliding on their steel shoe.\nHard-packed tracks are generally referred to as \"groove\" tracks, loosely packed tracks are called \"cushions\". 
The composition of the track surface is usually decided by the race promoter and track preparation team, the latter using various methods and materials including combinations of clay, decomposed granite, sand, calcium (to allow the surface to retain water moisture) and other materials. An optimum \"groove\" track will have enough moisture to be \"tacky\", without being slick, and will develop what is called a \"blue groove\" as the motorcycle tires lay down a thin layer of tire rubber on the racing line.\nA \"cushion\" track consists of similar materials to the groove track, but mixed in a way that allows the surface to maintain a more sandy, loose composition. While power-sliding is common on both groove and cushion tracks, a cushion track allows more power-sliding, into, through and out of the turns. Though the \"Class C\" tires allowed by the rules are the same for both cushion and groove tracks, riders are allowed to modify the tires by cutting some rubber off the tire grooves for improved traction, but are not allowed to add materials to the tires.\nSpeedway[edit]\nMain article: Motorcycle speedway\nSpeedway racing takes place on a flat oval track usually consisting of dirt or loosely packed shale, using bikes with a single gear and no brakes. Competitors use this surface to slide their machines sideways (powersliding or broadsliding) into the bends using the rear wheel to scrub-off speed while still providing the drive to power the bike forward and around the bend.\nGrasstrack[edit]\nGrass track racing\nMain article: Grasstrack\nGrasstrack is outdoor speedway. The track are longer (400 m+, hence it is often also referred to as Long Track at world level), often on grass (although other surfaces exist) and even feature elevation changes. Machinery is very similar to a speedway bike (still no brakes, but normally two gears, rear suspension, etc.).\nIce speedway[edit]\nIce Racing using full-rubber tyres\nMain article: Ice Racing\nIce racing includes a motorcycle class which is the equivalent of Speedway on ice. Bikes race anti-clockwise around oval tracks between 260 and 425 metres in length. Metal tire spikes or screws are often allowed to improve traction. The race structure and scoring are similar to Speedway.\nBoard track[edit]\nBoard track racing\nMain article: Board track racing\nBoard track racing was a type of track racing popular in the United States between the second and third decades of the 20th century, where competition was conducted on oval race courses with surfaces composed of wooden planks. By the early 1930s, board track racing had fallen out of favor, and into eventual obsolescence.\nFrom the astride position, the vaulter brings the right leg over the horse's neck. The grips must be ungrasped and retaken as the leg is brought over. The left leg is then brought in a full arc over the croup, again with a change of grips, before the right leg follows it, and the left leg moves over the neck to complete the full turn of the vaulter. The vaulter performs each leg movement in four strides each, completing the Mill movement in sixteen full strides. During the leg passes, the legs should be held perfectly straight, with the toes pointed. 
When the legs are on the same side of the horse, they should be pressed together.\nThe 16 strides are the 16 squares of the quadrant model\nIn a television series that followed Bobby Knight, Knight in one of the episodes said \"the basketball court looks like a cross, with the free throw line as the cross\"\nFour bases\nhttps://en.wikipedia.org/wiki/Pesäpallo\nPesäpallo (Finnish pronunciation: [pesæpɑlːo]; Swedish: boboll, both names literally meaning \"nest ball\", also referred to as \"Finnish baseball\") is a fast-moving bat-and-ball sport that is quite often referred to as the national sport of Finland and has some presence in other countries including Germany, Sweden, Switzerland, Australia, and Canada's northern Ontario (the latter two countries have significant Scandinavian populations.) The game is similar to brännboll, rounders, and lapta, as well as baseball.\nThe game has four bases\nhttps://en.wikipedia.org/wiki/Old_cat\nOld cat (also known as ol' cat or cat-ball) games were bat-and-ball, safe haven games played in North America The games were numbered according to the number of bases. The number of bases varied according to the number of players.\nThree old cat had a triangular base layout and three strikers, while four old cat had four strikers and four bases in a square pattern. The Mills Commission, formed in 1905 to ascertain the origins of baseball, recorded many reminiscences of people playing three and four old cat in their youth. Baseball historian Harold Seymour reported that old cat games were still being played on the streets and vacant lots of Brooklyn in the 1920s.\nAlbert Spalding suggested that four old cat was the immediate ancestor of town ball, from which baseball evolved.\nhttps://en.wikipedia.org/wiki/Corkball\nCorkball is a \"mini-baseball\" game featuring a 1.6-ounce (45 g) ball, which is stitched and resembles a miniature baseball. The bat has a barrel that measures 1.5 inches (3.8 cm) in diameter. Originally played on the streets and alleys of St. Louis, Missouri as early as 1890,[1] today the game has leagues formed around the country as a result of St. Louis servicemen introducing the game to their buddies during World War II and the Korean War. It has many of the features of baseball, yet can be played in a very small area because there is no base-running.\nIt also has four bases.\nhttps://en.wikipedia.org/wiki/Brännboll\nBrännboll is a form of baseball in Norway with four bases. There is no pitcher as the hitter throws the ball in the air and then hits it.\nGeneric penalty system (Several varieties exist)\nFirst time – warning.\nSecond time – 5 penalty points.\nThird time – 10 penalty points.\nFourth time – disqualification, the opponent wins.\nhttps://en.wikipedia.org/wiki/British_baseball\nScoring system – In British baseball a player scores a run for every base he/she reaches after hitting the ball. He or she will not subsequently score when moving around the bases on another player's hit. The equivalent of a home run scores four runs. As in cricket a bonus run can be awarded for excessively-wide deliveries. In North American baseball, a player scores a run only on a successful circuit of all four bases, whether on his own or another player's hit, or by other means such as a walk or stolen base.\nhttps://en.wikipedia.org/wiki/Bat_and_trap\nBat and trap is an English bat-and-ball pub game. It is still played in Kent, and occasionally in Brighton. 
By the late 20th century it was usually only played on Good Friday in Brighton, on the park called The Level, which has an adjacent pub called The Bat and Ball, whose sign depicts the game. Brighton & Hove City Council plans to start a Bat and Trap club based at The Level in 2013, as part of the Activities Plan associated with a £2.2m Heritage Lottery Fund and Big Lottery Fund-funded restoration of the park. www.brighton-hove.gov.uk/thelevel\nIn the American rules of bat and trap, there are several differences in the equipment and game mechanics as well as the layout of the pitch. Each team is limited to 4 players. The trap is 6 inches by 6 inches, and it has a yellow background with a black \"X\" mark across the front. The posts are 1-2 feet high. There are two additional lines, one of which extends across the field at a right angle 10 yards in front of the trap; this line is the \"foul line\". Balls put into play must not touch the ground prior to hitting this line or the batter is called out. In addition, there is an additional line 5 yards behind the posts; this line is known as the \"back line\", and fair hit balls that cross the line, either before touching a fielder or after, or on the ground or in the air but below the imaginary line demarcating the fair zone, score 4 runs for the batting side. This is known as a \"four\", and the fielding team does not have the opportunity to roll out the batter following a four. Since the posts are only 1-2 feet, the top of the fair hit zone is demarcated by an imaginary line running from the top of the tallest fielding player's head. Batted balls that travel above this imaginary line are automatically out.\nhttps://en.wikipedia.org/wiki/Shuffleboard\nhttps://en.wikipedia.org/wiki/File:Shuffleboard.svg\nFour rows of shuffleboard (the fourth is different)- four light disks four dark disks played with\nIn deck or floor shuffleboard, players use a cue (cue-stick), to push their colored disks, down a court (a flat floor of concrete, wood or other hard material, marked with lines denoting scoring zones), attempting to place their disks within a marked scoring area at the far end of the court. The disks themselves are of two contrasting colors (usually yellow and black), each color belonging to a player or team. The scoring diagram is divided by lines, into six scoring zones, with the following values: 10, 8, 8, 7, 7, 10-off. (See Court Description below for details.) After 8 disks (four per team, taking alternating shots) have been played from one end of the court (a frame), the final score values of disks for each player (or team) in the scoring zones is assessed: If a disk is completely within a scoring zone without touching (overlapping) any part of the border-line of the zone, it is good and that zone value is added to the correct player's score for the frame, and then to the player's total points. Both players good disks are added to their respective scores (As opposed to being subtracted to give only one player a net score for a frame.) Players (or teams of two players, one at each end) take turns going first during a game, so that the advantageous last shot of a frame (the hammer) also alternates between players. The winner of the game may be the first to reach any total decided upon, or may be the higher score after playing a certain number of frames (e.g. 8, 12 or 16). There is also the 'first to 75-points' game. Ties are broken by playing extra frames (two for singles, four for doubles). 
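The frame-scoring rule just described (a disk counts only if it lies completely inside a zone without touching a line, and both sides keep whatever their good disks are worth) is easy to express as a small function. A minimal sketch, assuming the hypothetical names and data layout below; the zone values 10, 8, 7 and 10-off come from the passage, and treating 10-off as minus 10 is an assumption about how that penalty zone is applied:

```python
# Toy scorer for one frame of deck/floor shuffleboard as described above.
# A disk scores only if it is completely within a zone without touching a
# border line; both teams add their good disks to their running totals.
# Zone values are from the passage; 10-off is modelled as -10 (assumption).

def frame_score(disks):
    """disks: iterable of (team, zone_value, touches_line) for the 8 disks played."""
    totals = {}
    for team, zone_value, touches_line in disks:
        totals.setdefault(team, 0)
        if not touches_line:            # a disk on a line scores nothing
            totals[team] += zone_value  # zone_value is negative for 10-off
    return totals


frame = [
    ("yellow", 8, False), ("yellow", 7, True), ("yellow", -10, False), ("yellow", 10, False),
    ("black", 7, False), ("black", 8, False), ("black", 10, True), ("black", -10, True),
]
print(frame_score(frame))  # {'yellow': 8, 'black': 15}
```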
[2]\nDisks: Modern floor shuffleboard is played with 8 round, hard, durable 6 inch diameter plastic disks - New disks are about 1\" in thickness, weighing 15 ounces. There should be four (4) discs of a light color, usually yellow, and four of a dark color, usually black. These eight (8) discs comprise a set. (Other colored combinations may be used, but black and yellow will be used here.) One player or team uses the yellow disks, the other player or team, the black disks. Cue-Sticks: Each player uses a cue (cue-stick) to push their disks down the court to the opposite end. The cue length is six feet, three inches (6'3\") or less, with hard plastic feet on the end (metal would damage the court surface). Scoreboard: There are two basic types (1) - Resort Type uses two sliders that can move up and down a numbered scale, like a thermometer, with values running from zero at the bottom to 75 at the top (First to 75 points is a common shuffleboard game). Each team used their own slider to record their total score. The advantages of the Resort-type include simplicity, durable and weather proof, needs no other items such as chalk or eraser. The disadvantage is that scoring mistakes are impossible to determine, and a frames played cannot be tracked unless a separate recording method (e.g. pen and paper) is used.(2)- Blackboard (Whiteboard) Type is ruled with four or eight horizontal lines and each teams total score is written after each frame, yellow on the left and black on the right. When all the lines have been filled with scores the top lines are erased and scores are again written from the top. The advantage of the blackboard type is that mistakes in adding and recording the score are easier to spot, because previous scores should always be seen. As well, it is easy to keep track of frames played using small numbers written down the scoreboard. (Note that in western USA and western Canada, scoreboards (blackboards) run from side-to-side, but the principal is the same.)[3]\nhttps://en.wikipedia.org/wiki/Table_shuffleboard#Sjoelen\nFour rows of triangle\nPlayers take turns sliding, or \"shuffling,\" the weights to the opposite end of the board, trying to score points, bump opposing pucks off the board, or protect their own pucks from bump-offs. Points are scored by getting a weight to stop in one of the numbered scoring areas. A weight has to completely cross the zone line to count as a full score (if a weight is partially in zone 2 and 3 the weight's score is 2). A weight that's hanging partially over the edge at the end of the table in the 3-point area, called a \"hanger\" (or sometimes a \"shipper\"), usually receives an extra point (count as 4). If a puck hangs off the end corner, it receives no additional scoring points other than being a 4 for hanging over the back edge of the board.\nThe objective of the game is to slide, by hand, all four of one's weights alternately against those of an opponent, so that they reach the highest scoring area without falling off the end of the board into the alley. Furthermore, a player's weight(s) must be farther down the board than his opponent's weight(s), in order to be in scoring position. This may be achieved either by knocking off the opponent's weight(s), or by outdistancing them. Horse collar, the most common form of the game, is played to either 15 or, more typically, 21. Below is an image of the weights on the board. 
Only the weights in front score.[1]\nhttps://en.wikipedia.org/wiki/Paper_football\nPaper football (also called FIKI Football, Finger football, Chinese Football, Flick Football, or Tabletop Football) refers to a table-top game, loosely based on American football, in which a sheet of paper folded into a small triangle is slid back and forth across a table top by two opponents. This game is widely practiced for fun, mostly by students in primary, middle school, and high school age in the United States and by bored employees.[1]\nAdvancing the ball[edit]\nThe primary activity of the game is to slide the paper football across the football field by flicking it. The legal flick or shot or throw is any method which advances the ball through flicking or hitting, but pushing the ball is disallowed. The ball is generally flicked either with the thumb and forefinger in a manner similar to shooting marbles, or another manner comfortable to the player. Striking with objects such as pencils is more rare.\nPlayers have four chances (downs) to score a touchdown. They may attempt a field goal on fourth down.\nScoring[edit]\nA team scores points by the following plays:\nTouchdown[edit]\nA touchdown (TD) is worth 6 points, as in American football. A touchdown is scored when a player advances the ball such that it comes to rest with part of the ball extending over the edge the opponent's end of the table without falling to the ground. If the ball falls to the ground it is considered a touchback. If on fourth down a player feels that they are not close enough to have a good chance at scoring a touchdown then they can attempt a field goal for 3.\nPlayers are allowed only one chance to advance the ball over the goal line per turn (instead of the aforementioned four tries). If a player pushes the ball off of their opponent's end of the table a \"strike\" is awarded and their opponent gets to kick the ball back into play. After 3 strikes a player's opponent has the option of kicking a field goal for 3 points.\nTabletop football was played in Connecticut in the 1950s using an American quarter. Each player had 4 downs to advance the quarter up the field, and hang it over the edge of the table for a touchdown. If the quarter fell off the edge or the player failed to hang it within 4 downs, the opponent was given possession. The shooting player could try a field goal at any time by hanging the quarter over his own edge of the table, and \"kicking\" it with his index finger toward the opponents field goal \"posts.\" The player with the highest score won the opponent's quarter. Due to the excessive noise of the quarter during play, the quarter variation was often avoided in school.\nhttps://en.wikipedia.org/wiki/Penny_football\nFourth is always different\nPenny football (also coin football, sporting coin, spoin, table football, tabletop football,[1] or shove ha'penny football[2]) is a coin game played upon a table top. The aim of the game is for a player to score more goals with the pennies (\"Spucks\") than their opponent.[3] An electronic version of the game has also been produced.[4] The game has been in existence since at least 1959.[5]\nThere is another variation of the game in which players use four coins, the fourth coin representing a goalkeeper. Again, the opposing player puts out his index and pinky finger, but also puts the fourth coin under his index finger. The coin acts as a \"goalkeeper\", and may be used to block shots. 
He then sticks out his pinky finger of his other hand and places it right next to the other hand. The two hands should be touching. If the player blocks the shot with his index finger, the shot counts as a goal.\nhttps://en.wikipedia.org/wiki/Trivia\nThe fourth is always different/transcendent. In ancient times the quadrivium (four) was seen as advanced, whereas the trivium (three) was seen as elementary\nThe trivia (singular trivium) are three lower Artes Liberales, i.e. grammar, logic, and rhetoric. These were the topics of basic education, foundational to the quadrivia of higher education, and hence the material of basic education and an important building block for all undergraduates.\nThe ancient Romans used the word triviae to describe where one road split or forked into two roads. Triviae was formed from tri (three) and viae (roads) – literally meaning \"three roads\", and in transferred use \"a public place\" and hence the meaning \"commonplace.\"[2]\nThe pertaining adjective is triviālis. The adjective trivial was adopted in Early Modern English, while the noun trivium only appears in learned usage from the 19th century, in reference to the Artes Liberales and the plural trivia in the sense of \"trivialities, trifles\" only in the 20th century.[citation needed]\nThe Latin adjective triviālis in Classical Latin besides its literal meaning could have the meaning \"appropriate to the street corner, commonplace, vulgar.\" In late Latin, it could also simply mean \"triple.\" In medieval Latin, it came to refer to the lower division of the Artes Liberales, namely grammar, rhetoric, and logic. (The other four Liberal Arts were the quadrivium, namely arithmetic, geometry, music, and astronomy, which were more challenging.) Hence, trivial in this sense would have meant \"of interest only to an undergraduate.\"[citation needed]\nhttps://en.wikipedia.org/wiki/Darts\nModern darts have four parts: The points, the barrels, the shafts and the flights.[15] The steel points come in 2 common lengths, 32mm and 41mm and are sometimes knurled or coated to improve grip. Others are designed to retract slightly on impact to lessen the chance of bouncing out.[16]\nThe image is a cross/quaddrant- throw four\nhttps://en.wikipedia.org/wiki/File:Garden_Quoits.jpg\nhttps://en.wikipedia.org/wiki/Quoits\nIndoor or table quoits[edit]\nA game of indoor quoits, being played in the Forest of Dean\nExclusively a pub game, this variant is predominantly played in mid and south Wales and in England along its border with Wales.\nMatches are played by two teams (usually the host pub versus another pub) and typically consist of four games of singles, followed by three games of doubles. Players take it in turns to pitch four rubber rings across a distance of around 8½ feet onto a raised quoits board. The board consists of a central pin or spike and two recessed sections: an inner circular section called the dish and a circular outer section.\nFive points are awarded for a quoit landing cleanly over the pin, two points for a quoit landing cleanly in the dish, and one point for a quoit landing cleanly on the outer circular section of the board. The scoreboard consists of numbers running from 1 to 10, 11 or 12, and the object of the game is to score each of these numbers separately using four or fewer quoits, the first side to achieve this being the winner.\nDeck quoits[edit]\nSee also: Deck tennis\nDeck quoits is a variant which is popular on cruise ships. 
The quoits are invariably made of rope, so as to avoid damaging the ship's deck, but there are no universally agreed standards or rules - partly because of the game's informal nature and partly because the game has to adapt to the shape and area of each particular ship it is played upon.\nPlayers take it in turn to throw three or four hoops at a target which usually, though not always, consists of concentric circles marked on the deck. The centre point is called the jack. Occasionally this may take the form of a raised wooden peg, but more usually it is marked on the surface in the same way that the concentric circles are.\nSlate-board quoits[edit]\nThis is a popular outdoor variation played principally in and around Pennsylvania, USA (specifically the 'Slate Belt' which is in the Lehigh Valley). This game uses two one-pound rubber quoits per player, which are pitched at a short metal pin mounted on a heavy 24x24x1 inch slab of slate. The common pronunciation of quoits in the Slate Belt region is (qwaits).\nPlayers take turns throwing a quoit at the pin. The quoit nearest the pin gets one point. If one player has two quoits nearer the pin than either of his opponent's quoits, he gets two points. A quoit that encircles the pin (called a ringer) gets three points. If all four quoits are ringers, the player who threw the last ringer gets three points only; otherwise, the first player to make 21 points wins the game. For two or four players.\nGarden quoits or hoopla[edit]\nTypical set of garden quoits\nThis version of the game exists largely as a form of recreation, or as a game of skill found typically at fairgrounds and village fetes.\nThere are no leagues or universally accepted standards of play and players normally agree upon the rules before play commences.\nGarden quoit and hoopla sets can be purchased in shops and usually involve players taking it in turns to throw rope or wooden hoops over one or more spikes.\nThe fairground version typically involves a person paying the stallholder for the opportunity to throw one or more wooden hoops over a prize, which if done successfully, they can keep. Generally speaking, the odds of winning are normally heavily weighted in favour of the stallholder unless the cost of play is higher than the value of the prize.\nhttps://en.wikipedia.org/wiki/Horseshoes\nHorseshoes is an outdoor game played between two people (or two teams of two people) using four horseshoes and two throwing targets (stakes) set in a sandbox area. The game is played by the players alternating turns tossing horseshoes at stakes in the ground, which are traditionally placed 40 feet (12 m) apart. Modern games use a more stylized U-shaped bar, about twice the size of an actual horseshoe.\nhttps://en.wikipedia.org/wiki/Washer_pitching\nIn the Central Illinois Washers variant, the game uses wooden boxes with 2 × 4 sides (15 inches outside / 12 inches inside). The boxes have plywood bottoms ( 1⁄2- 3⁄4 innc thick – 15 inches × 15 inches square) and are lined with carpet - (12 inch × 12 inch – thickness optional, short/medium is preferable). A 4-inch PVC pipe is cut to a height that is level with the top of the side boards. The boxes are placed 30 ft apart (front of Box 1 to front of Box 2) on level ground, preferably going North and South to avoid sunlight distraction for one side/player.\nFour steel or brass washers are used, having (2 1⁄2-inch outer diameter and 1-inch inside diameter and approximately 1⁄8-inch thickness. 
Two small opposing holes are drilled in two of the four washers for team designation.\nPlayers throw the washers in attempt to get in, on or near to the box or in the pipe. When throwing, the player may stride forward of the front of the box or remain entirely in back of the box, but at least one foot must remain behind the front of the box. (The front is the side facing the opponent.) In other words, players may stand next to the box and stride past it with one foot. In the traditional four-player game, players throw two washers each, throwing both before the opponent throws their two. A player may throw both washers at once, but this may decrease accuracy. The style of throw is dependent only upon player preference, and the scoring team throws first in the next round.\nPlayers earn one point for a washer landing within one foot of the box, or leaning next to the box, or under the box. Two points are scored if a washer rests lying on the top edge of the box. Three points are scored for washers landing inside the box, but not in the pipe, and five points when the washer lands inside the pipe.\nOnly one team/player scores per round, as illustrated in the following scenarios:\nPlayer 1 throws both washers five feet away from the box while Player 2 throws one in the box and one 10 inches from the box. Player 2 scores 4 (3 + 1).\nPlayer 1 throws one away and one within 5 inches of the box. Player 2 throws one away and one in the pipe. Even though Player 1 has a valid 1-pt. throw, his washer is cancelled out by Player 2's throw in the pipe. Player 2 scores 5.\nPlayer 1 throws one in the box and one within 10 inches of the box. Player 2 throws both within 5 inches of the box. Player 2 cancelled out Player 1's 1-pt. throw, but is still outside of Player 1's box throw. Because player 1 was in the box, it cancelled out both of player 2's close throws. Player 1 scores 3.\nIf each team throws a washer under the box or both throw a leaner, they cancel each other out and neither scores. If one team throws a washer under the box and the other team throws a leaner, the team under the box scores (under beats leaner). However, any throw in the box or pipe cancels ALL opposing washers outside the box, whether they are within 12 inches, leaning, or under the box.\nAfter all 4 washers have been thrown, points are tallied. It is possible for a washer to knock another washer into a better OR worse position during play. If one washer should move, shift or alter another washer during a throw, the final resting places of both washers are noted and scores tallied accordingly. Because washers can hit each other during play and affect their final position, do not move any washers until all are thrown.\nIn Illinois, the traditional four-player game, with two teams of two players, is played to 21 points. Three-player games, with three teams of one player, play to 31 points. Two-player games play to 51 points.\nNote: All end scores must be reached exactly. If a player scores too many points in a round, they must subtract that round's point value from their score previous to the round and continue play. Example: In a 4-player game to 21 points, Team 1 has 19 points, but throws one in the box. Team 1 loses 3 points, going back to 16 points, and the next round begins. All scoring washers in the round are counted toward the negative score, not just the throw that exceeded the limit. 
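The exact-score rule just described (and the deduction when a team overshoots 21) can be summed up in a short sketch. This is only an illustration of the rule as stated above, with a hypothetical helper name; the second example below matches the worked example that follows.

```python
def apply_round(score, round_points, target=21):
    """Apply one round of Central Illinois washers scoring.

    The target must be reached exactly: if this round's points would push
    the team past the target, the whole round's value is deducted instead.
    """
    if score + round_points > target:
        return score - round_points  # bust: all scoring washers of the round count against you
    return score + round_points

print(apply_round(19, 3))  # 16 -> 19 + 3 would pass 21, so fall back to 16
print(apply_round(16, 6))  # 10 -> the example described next
print(apply_round(16, 5))  # 21 -> exact target reached
```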
Example: Team 1 has 16 points, then throws one in the box for an interim 3 points, but then throws the second in the box as well for a total of 6 points. Adding that 6 points to the beginning score of 16 exceeds the 21-pt goal, so the team deducts 6 points from 16 and starts the next round with 10 points.
In Connecticut, there is yet another variation of the game. The backyard game of Washers in Connecticut is played with the Fender Washer. The game is played with two teams, consisting of two players per team, four players total.
https://en.wikipedia.org/wiki/Kubb
There are four types of kubb that you knock over.
The alleged Viking origin of the game has led some players and kubb fans to nickname the game "Viking chess".
There are typically twenty-three game pieces used in kubb, of four types:[3]
Ten kubbs, rectangular wooden blocks 15 cm tall and 7 cm square on the end.
One king, a larger wooden piece 30 cm tall and 9 cm square on the end, sometimes adorned with a crown design on the top.
Six batons, 30 cm long and 4.4 cm in diameter.
Six field marking pins, four to designate the corners of the pitch, and two to mark the centreline.
The four types of wooden pieces in "Viking Chess" (allegedly the game comes from the Vikings)
KUBB (KUBBSPEL) CONSTRUCTION PLANS
http://www.missouriscenicrivers.com/baton.jpg
http://www.missouriscenicrivers.com/king.jpg
Make A Kubb Set
King: 4x4x16 in.
Kubbs: 3x3x8 in.
Batons (dowels): 1.75" x 12"
Markers: 0.5 x 12 in.
Listed above are the four types of wooden pieces you will need to play Kubb. Kubb is played with one king, ten kubbs, six throwing batons (dowels) and four markers.
Some Tips, Before You Get Started
Use a hard wood. The kubbs and batons (dowels) get knocked around a lot.
Use sandpaper to smooth the edges. This will help prevent splinters.
Coat the pieces with enamel to help ensure long term usage.
Paint can make your new kubb set more attractive and fun.
A burlap or cloth bag is perfect for transporting your new kubb set to a picnic or BBQ.
The King is 4x4x16 inches. 4x4 is a standard size for wood and can be found at most hardware stores. The King in the picture above has a "crown" on it. Here is a simpler version of cuts for the Kubb King.
You can go as simple or elegant as you wish with this part. The easiest thing to do is angle your table saw blade at 45°, set your sliding stop so the edge of the blade cuts down the center of the 3-1/2" square end of the piece, then cut. This step can be dangerous, so use caution. Be sure to maintain a strong hold on the wood as well as keeping it firmly against the saw bed and sliding stop as you guide it over the blade. Rotate the King 90° about its long axis, make the pass again, repeat two more times and a simple crown will have been formed.
The Kubbs
The Kubbs are 3x3x8 inches. There is nothing complicated about them at all. 3x3 is also a standard size for wood.
The Throwing Batons (dowels)
The batons are 1.75 inches in diameter and 12 inches long. The wood for these can be a bit harder to come by. Some large hardware stores carry this type of rounded wood. If you are having problems finding the right size, you might also try furniture factories. The legs of stools are often just about the right shape and size.
If you've lost your tossing dowel or are making a Kubb set yourself and need inexpensive dowels, then you can now buy them individually.\nThe Markers\nThe markers are just sticks of wood that you will be driving into the ground. Their exact shape and size does not really matter. You can probaly just make them from the wood you have left over from making the other pieces. You might also try metal wire with a small red flag or reflector on top. You can usually find these at Wal-mart in the home and garden section.\nRound off all the edges of your parts to reduce the possibility of slivers and enhance their appearance. Sand them with consecutively higher grit sand paper until reaching 220-grit. Finish your parts by staining them the color(s) of your choice.\nhttps://en.wikipedia.org/wiki/Kickball\nKickball is a playground game and league game, similar to baseball, invented in the United States in the first half of the 20th century.\nThe game has four bases like baseball.\nhttps://en.wikipedia.org/wiki/Cornhole\nEquipment and court layout[edit]\nCornhole matches are played with two sets of bags, two platforms and two to four players.[1]\nThere are four bags to a set. Each set should be identifiable from the other; different colors work well. The American Cornhole Organization Official Cornhole Rules call for double-seamed fabric bags measuring 6 by 6 inches (150 by 150 mm) and weighing 15 to 16 ounces (430 to 450 g)[1] Bags should be filled with dried corn kernels. The final weight of the bag may vary due to the material of the bag itself.\nCornhole being played during a pre-game tailgate at Texas A&M University–Commerce\nCornhole matches are broken down into innings or frames of play.[1] During each frame, every player throws four bags. A player may deliver the bag from either the left or right pitcher's box, but, in any one inning, all bags must be delivered from the same pitcher's box. It is possible that both players can throw from the same pitcher's box. Also, the player gets a three-foot box to throw in. Each player must deliver the bag within twenty seconds. The time starts when the player steps onto the pitcher's box with the intention of pitching. The player who scored in the preceding inning pitches first in the next inning. If neither pitcher scores, the contestant or team who pitched last in the preceding inning pitches first in the next inning. Note: No foot can land past the front of the board until the corn bag leaves the hand, otherwise the point does not count. At the end of the round there is a 10-second window to allow beans to fall within the bag, possibly allowing additional points.\nA typical cornhole board, with two colors of bag\nCornhole can be played as either doubles or singles. In doubles play, four players split into two teams. One member from each team pitches from one cornhole platform and the other members pitch from the other. The first side of players alternate pitching bags until both players have thrown all four of their bags, then the players pitching from the opposing cornhole board continue to alternate in the same manner until all four of their bags are delivered and the inning or frame is completed. In singles play, two players play against each other. Delivery is handled in the same manner as doubles play. 
Both contestants pitch from the same cornhole platform and alternate their pitches until all of their bags have been pitched, completing the inning or frame.[\nCornucopia: Achieved when a player throws all four bags into the hole in one inning.\nGRAND BAG, Four Bagger Jumanji, double deuce, Cornholio, Catorce Four Bagger or Four Pack: Four cornholes by a single player in a single round.[6]\nTrip Dip: When a single player cornholes 3 out of the 4 bags in a single round.\nThere are four scoring options- like the four levels in shuffleboard\nhttps://en.wikipedia.org/wiki/File:SholfTailgate2.jpg\nhttps://en.wikipedia.org/wiki/Sholf\nThe fourth is different\nhttps://en.wikipedia.org/wiki/Table_shuffleboard\nShuffleboard set of pucks\nhttps://en.wikipedia.org/wiki/File:TableShuffleboardPucks.jpg\nCome in packs of four\nhttps://en.wikipedia.org/wiki/File:Jarts_Canada.jpg\nhttps://en.wikipedia.org/wiki/Lawn_darts\nLawn darts (also known as Javelin darts, jarts or yard darts) is a lawn game for two players or teams. A lawn dart set usually includes four large darts. The game play and objective are similar to both horseshoes and darts. The darts are similar to the ancient Roman plumbata. They are typically 12 inches (30 cm) long with a weighted metal or plastic tip on one end and three plastic fins on a rod at the other end. The darts are intended to be tossed underhand toward a horizontal ground target, where the weighted end hits first and sticks into the ground. The target is typically a plastic ring, and landing anywhere within the ring scores a point.\nhttps://en.wikipedia.org/wiki/Bowls\nAfter each competitor has delivered all of their bowls (four each in singles and pairs, three each in triples, and two bowls each in fours), the distance of the closest bowls to the jack is determined (the jack may have been displaced) and points, called \"shots\", are awarded for each bowl which a competitor has closer than the opponent's nearest to the jack. For instance, if a competitor has bowled two bowls closer to the jack than their opponent's nearest, they are awarded two shots. The exercise is then repeated for the next end, a game of bowls typically being of twenty-one ends.\nScoring systems vary from competition to competition. Games can be decided when:\na player in a singles game reaches a specified target number of shots (usually 21 or 25).\na team (pair, triple or four) has the higher score after a specified number of ends.\nGames to a specified number of ends may also be drawn. The draw may stand, or the opponents may be required to play an extra end to decide the winner. These provisions are always published beforehand in the event's Conditions of Play.\nIn the Laws of the Sport of Bowls[3] the winner in a singles game is the first player to score 21 shots. In all other disciplines (pairs, triples, fours) the winner is the team who has scored the most shots after 21/25 ends of play. Often local tournaments will play shorter games (often 10 or 12 ends). Some competitions use a \"set\" scoring system, with the first to seven points awarded a set in a best-or-three or best-of-five set match. As well as singles competition, there can be two (pairs), three (triples) and four-player (fours) teams. In these, teams bowl alternately, with each player within a team bowling all their bowls, then handing over to the next player. The team captain or \"skip\" always plays last and is instrumental in directing his team's shots and tactics. 
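The shot-counting rule for an end of bowls, described above, reduces to a simple comparison once each bowl's distance to the jack is measured. A rough sketch follows; the function name and inputs are illustrative only, and ties are ignored.

```python
def shots_for_end(bowls_a, bowls_b):
    """Score one end of bowls given each side's bowl-to-jack distances.

    The side with the single closest bowl scores one shot for every bowl
    that lies closer to the jack than the opponent's nearest bowl.
    """
    best_a, best_b = min(bowls_a), min(bowls_b)
    if best_a < best_b:
        return "A", sum(1 for d in bowls_a if d < best_b)
    return "B", sum(1 for d in bowls_b if d < best_a)

# Two of A's bowls finish inside B's best bowl, so A scores two shots.
print(shots_for_end([0.3, 0.5, 1.2, 2.0], [0.6, 0.9, 1.5, 1.8]))  # ('A', 2)
```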
The current method of scoring in the professional tour (World Bowls Tour) is sets. Each set consists of nine ends and the player with the most shots at the end of a set wins the set. If the score is tied the set is halved. If a player wins two sets, or gets a win and a tie, that player wins the game. If each player wins a set, or both sets end tied, there is a 3-end tiebreaker to determine a winner.\nBowls have symbols unique to the set of four for identification. The side of the bowl with a larger symbol within a circle indicates the side away from the bias. That side with a smaller symbol within a smaller circle is the bias side toward which the bowl will turn. It is not uncommon for players to deliver a \"wrong bias\" shot from time to time and see their carefully aimed bowl crossing neighbouring rinks rather than heading towards their jack.\nSingles, triples and fours and Australian pairs are some ways the game can be played. In singles, two people play against each other and the first to reach 21, 25 or 31 shots (as decided by the controlling body) is the winner. In one variation of singles play, each player uses two bowls only and the game is played over 21 ends. A player concedes the game before the 21st end if the score difference is such that it is impossible to draw equal or win within the 21 ends. If the score is equal after 21 ends, an extra end is played to decide the winner. An additional scoring method is set play. This comprises two sets over nine ends. Should a player win a set each, they then play a further 3 ends that will decide the winner.\nPairs allows both people on a team to play Skip and Lead. The lead throws two bowls, the skip delivers two, then the lead delivers his remaining two, the skip then delivers his remaining two bowls. Each end, the leads and skips switch positions. This is played over 21 ends or sets play. Triples is with three players while Fours is with four players in each team and is played over 21 ends.\nAnother pairs variation is 242 pairs (also known as Australian Pairs). In the first end of the game the A players lead off with 2 bowls each, then the B players play 4 bowls each, before the A players complete the end with their final 2 bowls. The A players act as lead and skip in the same end. In the second end the roles are reversed with the A players being in the middle. This alternating pattern continues through the game which is typically over 15 ends.\nhttps://en.wikipedia.org/wiki/Origins_of_baseball\nRounders[edit]\nMain article: Rounders\nThe British game most similar to baseball, and most mentioned as its ancestor or nearest relation, is rounders. Like baseball, the object is to strike a pitched ball with a baton or paddle and then run a circuit of four bases. While the game in many respects is quite different from modern baseball, it preserves a number of features which were characteristic of \"town ball\", the earlier form of American baseball.\nThe court has four squares like a quadrant\nhttps://en.wikipedia.org/wiki/Pickleball\nhttps://en.wikipedia.org/wiki/File:Pickleballcourt.PNG\nPickleball is a racquet sport that combines elements of badminton, tennis, and table tennis.[1] Two, three, or four players use solid paddles made of wood or composite materials to hit a perforated polymer ball, similar to a wiffle ball, over a net. The sport shares features of other racquet sports, the dimensions and layout of a badminton court, and a net and rules similar to tennis, with a few modifications. 
Pickleball was invented in the mid 1960s as a children's backyard pastime but quickly became popular among adults as a fun game for players of all levels.\nhttps://en.wikipedia.org/wiki/Powered_paragliding\nLightweight carts or \"trikes\" (called \"quads\" if they have four wheels) can be mounted on powered paragliders for those who prefer not to, or are unable to, foot launch. Some are permanent units.\nhttps://en.wikipedia.org/wiki/Oină\nOină (Romanian pronunciation: [ˈoj.nə]) is a Romanian traditional sport, similar in many ways to baseball and lapta.\nThe attacking side player that has commenced a run will have to cross the following four lines in order:\nthe start line (the left side of the batting line)\nthe arrival line (the left side of the back line)\nthe return line (the right side of the back line)\nthe escape line (the right side of the batting line)\nhttps://en.wikipedia.org/wiki/Quidditch_(sport)\nSince its inception, quidditch has sought gender equality on the pitch.[54] One of the most important requirements within the sport is its 'four maximum' rule:\nEach match begins with six of the starting players (excluding the seekers) along the starting line within their keeper zone with brooms on the ground and the four balls lined in the centre of the pitch. The head referee then calls \"brooms up!\" at which players run to gain possession of the balls.[25] After brooms up is called, the seekers must not interfere with other positions, and wait near the pitch until the end of the seeker floor, usually 18 minutes. The snitch goes on the field at 17 minutes, and the seekers are released at 18 minutes.[26]\nA quidditch game allows each team to have a maximum of four players, not including the seeker, who identify as the same gender in active play on the field at the same time. The gender that a player identifies with is considered to be that player's gender, which may or may not correspond with that person's sex. This is commonly referred to as the \"four maximum\" rule.\nUSQ accepts those who don't identify within the binary gender system and acknowledges that not all of our players identify as male or female. USQ welcomes people of all identities and genders into our league.\n— US Quidditch, Four Maximum Rule\nFour Positions in quidditch[edit]\nChasers are responsible for passing the quaffle and scoring points by throwing the quaffle through one of the opponent's goals for 10 points. When a bludger hits a chaser in possession of the quaffle, they must drop the quaffle, remove the broom from between their legs, and touch their own hoops to rejoin play. Chasers not in possession of the quaffle must perform the same knockout procedure when hit by a bludger, but do not have a ball to drop. Chasers may enter into physical contact with opposing chasers or keepers. There are three chasers on the field for each team, identified by a white headband.\nKeepers can be likened to goalies in other sports, and must try to block attempts to score by the opposing team's chasers. The keeper is invulnerable to bludgers as well as having indisputable possession of the quaffle when within their team's keeper zone, an area around the team's hoops. Once outside of the keeper zone, the keeper may serve as a fourth chaser. Keepers may enter into physical contact with opposing keepers or chasers. 
There is one keeper on the field for each team, identified by a green headband.\nBeaters attempt to hit the opposing team's players with bludgers and attempt to block the bludgers from hitting their team's players. Beaters are subject to the same knockout procedure as chasers or keepers when hit with a bludger, but unlike chasers and keepers, they may attempt to catch a bludger thrown at them. If they succeed in catching a bludger, they are not knocked out, and the beater who threw the bludger may remain in play. As there are three bludgers for the four beaters on the pitch, the fourth, bludger-less beater puts pressure on the team in control of both bludgers (often called \"bludger control\" or \"bludger supremacy\"). Beaters may enter into physical contact only with other beaters. Two beaters on a team may be in play at a time, identified by black headbands.\nSeekers attempt to catch the snitch. They may not contact the snitch, but are permitted to contact the other seeker. Seekers are released after 18 minutes of game time. There is one seeker on the field for each team, identified by a gold or yellow headband.\nhttps://en.wikipedia.org/wiki/Rounders\nRounders (Irish: cluiche corr) is a bat-and-ball game played between two teams. Rounders is a striking and fielding team game that involves hitting a small, hard, leather-cased ball with a rounded end wooden, plastic or metal bat. The players score by running around the four bases on the field.[1][2] The game is popular among Irish and British school children.\nThe ball circumference must be between 180 millimetres (7.1 in) and 200 millimetres (7.9 in) and the bat no more than 460 millimetres (18 in) in length and 170 millimetres (6.7 in) in diameter. Rounders England place a weight-limit of 370 grams (13 oz) on the bat. The bases are laid out in a manner similar to a baseball diamond, except that batters run to a separate fourth base, at right-angles to third base and the batsman's base.[12] Each base is marked with poles, which must be able to support themselves and stand at a minimum of 1 metre (3 ft 3 in).\nIf a ball is delivered well, batters must try to hit the ball and must run regardless of whether the ball is hit. If the ball is hit into the backward area, the batter may not pass first post until the ball is returned to the forward area. A batter that hits a no-ball may not be caught out or stumped at the first post. Batters may run on 'no-balls' but do not have to. Each batter, except the last in each inning, is entitled to receive one good ball: the last batter is entitled to receive three good balls unless he or she is caught out.\nOne rounder is gained if the player hits the ball, then reaches the fourth post and touches it before the next ball is bowled and is not caught out and hit by the ball. A half rounder is gained if: the player reaches the fourth post having missed the ball; the player reaches the second post having hit the ball; if a batter is obstructed by a fielder whilst running; or if the same batter has two consecutive no balls.\nhttps://en.wikipedia.org/wiki/Sipa…\nSipa (lit. kick or to kick) is the Philippines' traditional native sport which predates Spanish rule. The game is related to Sepak Takraw. Similar games include Footbag net, Footvolley, Bossaball and Jianzi.\nIt can be one on one two on two three on three or four on four.\nSimplified play (one on one, two on two, three on three, or four on four)[edit]\nA set of rules determines penalty points (such as the ball bouncing twice on the ground). 
The two teams play against each other until a set number of penalty points is reached by one of the teams.\nThere is also a court version in which a rectangle is marked in grids. Grids denote zones, and dictate where players stand, and how points are allotted based on where the ball lands in the court.\nThis game requires much coordination.\nThe four different color feathers look like a quadrant\nJianzi (Chinese: 毽子), tī jianzi (踢毽子), tī jian (踢毽) or jianqiú (毽球), also known by other names,[which?] is a traditional Chinese national sport in which players aim to keep a heavily weighted shuttlecock in the air by using their bodies, apart from the hands, unlike in similar games peteca and indiaca. The primary source of jianzi sport is a Chinese ancient game called cuju of the Han dynasty 2000 years ago. Jianzi's competitive sport types are played on a badminton court using inner or outside lines in different types of jianzi's competitive sports, respectively. it can be played also artistically, among a circle of players in a street or park, with the objective to keep the shuttle 'up' and show off skills. In Vietnam, it is known as đá cầu and is the national sport. In the Philippines, it is known as sipa and was also the national sport until it was replaced by arnis in December 2009.[1] In recent years, the game has gained a formal following in Europe, the United States, and elsewhere.\nhttps://en.wikipedia.org/wiki/Jianzi\nThe shuttlecock, called a jianzi in the Chinese game and also known in English as a 'Chinese hacky sack' or 'kinja', typically has four feathers fixed into a rubber sole or plastic discs. Some handmade jianzis make use of a washer or a coin with a hole in the center.\nThe official featherball used in the sport of shuttlecock consists of four equal-length goose or duck feathers conjoint at a rubber or plastic base. It weighs approximately 15-25 grams. The total length is 15 to 21 cm. The feathers vary in color, usually dyed red, yellow, blue and/or green. However, in competitions a white featherball is preferred. The Official Jianzi for Competitions The shuttlecock used in Chinese JJJ games weighs 24-25 grams. 
The height from the bottom of rubber base to top of the shuttlecock is 14–15 cm, the width between tops of two opposite feathers is 14–15 cm.\nOther names[edit]\nIsrael - נוצה or נוצ\nUnited States - Chinese hacky sack or kikbo[7] or KickShuttle\nHungary - lábtoll-labda\nCanada - kikup\nVietnam - đá cầu\nMalaysia - sepak bulu ayam\nSingapore (and SE Asia) - chapteh or capteh or chatek\nJapan - kebane (蹴羽根)\nKorea - jegichagi or jeigi (to most Koreans known as sports only for children)\nIndonesia - bola bulu tangkis or sepak kenchi\nPhilippines - larong sipa\nMacau - chiquia\nIndia - poona (forerunner of badminton) (unknown to most Indians)\nGreece - podopterisi\nFrance - plumfoot or pili\nPoland - zośka\nGermany - Federfußball\nThe Netherlands - \"voetpluim\" or \"voet pluim\" or \"jianzi\"\nCambodia - sey\nMéxico - gallito\nSweden - spunky or adde-boll\nUK - featherdisk\nIreland - kickum[9]\nMongolia - teveg - тэвэг\nCentral Asia - Lian-ga (ru:Лянга)\nRussia (CIS) - Zoska (ru:Зоска)\nhttps://en.wikipedia.org/wiki/Cuju\nCuju, or Tsu' Chu,[1] is an ancient Chinese ball game, Cantonese \"chuk-ko\".\nPainting of four people playing\nhttps://en.wikipedia.org/wiki/File:One_Hundred_Children_in_the_Long_Spring.jpg\nOne Hundred Children in the Long Spring (长春百子图), a painting by Chinese artist Su Hanchen (苏汉臣, active 1130–1160s AD), Song Dynasty\nFour people playing cuju\nhttps://upload.wikimedia.org/wikipedia/commons/thumb/c/c8/Bronze_mirror_depicting_kickball.jpg/1024px-Bronze_mirror_depicting_kickball.jpg\nBronze mirror dating to the Song Dynasty.\nThe Massachusetts Game was a type of amateur club baseball popular in 19th century New England. It was an organized and codified version of local games called \"base\" or \"round ball\", and related to town ball and rounders. The Massachusetts Game is remembered as a rival of the New York Game of baseball, which was based on Knickerbocker Rules. In the end, however, it was the New York style of play which was adopted as the \"National Game\" and was the fore-runner of modern baseball.\nIt also has four bases\nhttps://en.wikipedia.org/wiki/Coxless_four\nA coxless four is a rowing boat used in the sport of competitive rowing. It is designed for four persons who propel the boat with sweep oars.\nThe crew consists of four rowers, each having one oar. There are two rowers on the stroke side (rower's right hand side) and two on the bow side (rower's lefthand side). There is no cox, but the rudder is controlled by one of the crew, normally with the rudder cable attached to the toe of one of their shoes which can pivot about the ball of the foot, moving the cable left or right. The steersman may row at bow, who has the best vision when looking over their shoulder, or on straighter courses stroke may steer, since they can point the stern of the boat at some landmark at the start of the course. The equivalent boat when it is steered by a cox is referred to as a \"coxed four\".\nRacing boats (often called \"shells\") are long, narrow, and broadly semi-circular in cross-section in order to reduce drag to a minimum. Originally made from wood, shells are now almost always made from a composite material (usually carbon-fibre reinforced plastic) for strength and weight advantages. Fours have a fin towards the rear, to help prevent roll and yaw and to help the rudder. The riggers are staggered alternately along the boat so that the forces apply asymmetrically to each side of the boat. 
If the boat is sculled by rowers each with two oars the combination is referred to as a quad scull. In a quad scull the riggers apply forces symmetrically. A sweep oared boat has to be stiffer to handle the unmatched forces, and so requires more bracing, which means it has to be heavier than an equivalent sculling boat. However most rowing clubs cannot afford to have a dedicated large hull with four seats which might be rarely used and instead generally opt for versatility in their fleet by using stronger shells which can be rigged for either as fours or quads.\n\"Coxless four\" is one of the classes recognized by the International Rowing Federation.[1] and is an event at the Olympic Games.\nIn 1868, Walter Bradford Woodgate rowing a Brasenose coxed four arranged for his coxswain to jump overboard at the start of the Stewards' Challenge Cup at Henley Royal Regatta to lighten the boat. The unwanted cox narrowly escaped strangulation by the water lilies, but Woodgate and his home-made steering device triumphed by 100 yards and were promptly disqualified. This led to the adoption of Henley Regatta rules specifically prohibiting such conduct and a special prize for four-oared crews without coxswains was offered at the regatta in 1869. However in 1873 the Stewards cup was changed to a coxless four event.[2]\nhttps://en.wikipedia.org/wiki/Brasenose_College_Boat_Club\nInvention of the coxless four[edit]\nIn a cause celebre, Walter Bradford Woodgate introduced the coxless four to the United Kingdom in 1868, when he got his Brasenose cox, Frederic Weatherly (later a well-known lawyer and writer of the song \"Danny Boy\"), to jump overboard at the start of the Steward's Cup at Henley Royal Regatta. While Weatherley narrowly escaped strangulation by the water lilies, Woodgate and his home-made steering device triumphed by 100 yards and were promptly disqualified.\nA special Prize for four-oared crews without coxswains was offered at the regatta in 1869 when it was won by the Oxford Radleian Club and when Stewards' became a coxless race in 1873, Woodgate \"won his moral victory,\" the Rowing Almanack later recalled. \"Nothing but defeating a railway in an action at law could have given him so much pleasure.\"[18]\nBrasenose and \"Childe of Hale Boat Club\" went on to record legitimate victories in the event.\nTwo years later, Woodgate founded Vincent's Club as \"an elite social club of the picked hundred of the University, selected for all round qualities; social, physical a\nhttps://en.wikipedia.org/wiki/Eight_(rowing)\nAn eight is a rowing boat used in the sport of competitive rowing. It is designed for eight rowers, who propel the boat with sweep oars, and is steered by a coxswain, or \"cox\".\nEach of the eight rowers has one oar. 
There are four rowers on the stroke side (rower's right hand side) and four on the bow side (rower's lefthand side).\nhttps://en.wikipedia.org/wiki/Rowing_at_the_Summer_Olympics\nThe Olympic Games are held every four years, where only select boat classes are raced (14 in total):\nMen: quad scull, double scull, single scull, eight, coxless four, and coxless pair\nLightweight Men: coxless four and double scull\nWomen: quad scull, double scull, single scull, eight, and coxless pair\nLightweight Women: double scull\nhttps://en.wikipedia.org/wiki/Rowing_at_the_1912_Summer_Olympics_–_Men%27s_coxed_four,_inriggers\nThe men's coxed fours with inriggers, also referred to as the coxed four with jugriggers, was a rowing event held as part of the Rowing at the 1912 Summer Olympics programme. It was the only appearance of the restricted event. The competition was held on Wednesday, July 17, 1912 and on Thursday, July 18, 1912.\nhttps://en.wikipedia.org/wiki/Regatta\nThe Athlone Yacht Club Regatta on Lough Ree, Ireland, in the notice of race of 1835 included:\n4 August: A silver cup value 15gns. for gentlemen's four oared gigs (not more than 30'-0\" on keel) to be won 3 years in succession. No race unless 3 start. Also: A prize will be pulled for in four oared cots. No race unless 3 start.\n5 August: A prize will be pulled for in 2 oared boats. A prize will be pulled for in four oared cots.\n6 August: A silver cup value 15gns. for gentlemen's four oared gigs - open to gigs from any part of Ireland. A prize for four oared cots.[3]\nThe event now takes place between the bridges in Athlone or at Killinure.\nhttps://en.wikipedia.org/wiki/Field_archery\nThe World Archery Federation, commonly known as WA and formerly as FITA (Fédération Internationale de Tir à l'Arc), defines a suite of rounds based on a 24-target course.\nFour target face sizes are specified: 80 cm; 60 cm; 40 cm and 20 cm. Six target faces of each size are used on the course. For each target face size there are upper and lower distance limits for the various divisions of archer. Target faces have four black outer rings and a yellow spot, each with an equal width. The yellow spot is subdivided into two rings. The black rings score 1 point for the outermost to 4 points for the innermost. A hit in the outer yellow scores 5 points. A hit in the inner yellow scores 6 points. Before April 2008, the innermost yellow ring counted as an X (the number of Xs was used for tie-breaks) but only scored 5 points.\nField rounds are at 'even' distances up to 80 yards (although some of the shortest are measured in feet), using targets with a black inner ring, two white middle rings and two black outer rings. Four face sizes are used for the various distances. A score of five points is awarded for shots which hit the centre spot, four for the white inner ring, and three for the outer black ring.\nhttps://en.wikipedia.org/wiki/File:Court_plan.png\nhttps://en.wikipedia.org/wiki/Slamball\nSlamBall is a form of basketball played with four trampolines in front of each net and boards around the court edge. The name SlamBall is the trademark of SlamBall, LLC.\nEach team has four players on the court at any one time.\nThe spring floor lies adjacent to two sets of four trampoline or spring bed 'quads' which dominate each end of the court. Each trampoline surface measures 7 ft by 14 ft (2.1 m by 4.2 m.) 
The shock absorbent panels pair with the competition bed trampolines to create a unique playing surface that both launches players to inhuman heights and cushions their landing upon returning to the floor. Specifically engineered pads are designed to cover the frame rails and their tapered design allows for maximum safety for on-court play. This entire playing surface will be surrounded with an 8 ft (2.4 m) Plexiglass wall much like in a hockey rink. Players wear protective cups and special equipment to protect various areas of the body. This consists of knee and elbow pads, and an optional SlamBall-specific helmet.\nhttps://www.wikiwand.com/en/Popinjay_(sport)\nThe object of popinjay is to knock artificial birds off their perches. The perches are cross-pieces on top of a 90-foot (27 m) mast. The \"cock\" (the largest bird) is set on the top cross piece. Four smaller \"hens\" are set on the next crosspiece down. Two dozen or so \"chicks\" (the smallest birds) are set on the lower cross pieces. (GNAS, 2006 - rule 1000)\nhttps://en.wikipedia.org/wiki/Tennis\nTennis is played by millions of recreational players and is also a popular worldwide spectator sport. The four Grand Slam tournaments (also referred to as the \"Majors\") are especially popular: the Australian Open played on hard courts, the French Open played on red clay courts, Wimbledon played on grass courts, and the US Open played also on hard courts.\nWimbledon, the US Open, the French Open, and the Australian Open (dating to 1905) became and have remained the most prestigious events in tennis.[17][24] Together these four events are called the Majors or Slams (a term borrowed from bridge rather than baseball)\nA tennis game is based on four points\nA game consists of a sequence of points played with the same player serving. A game is won by the first player to have won at least four points in total and at least two points more than the opponent. The running score of each game is described in a manner peculiar to tennis: scores from zero to three points are described as \"love\", \"fifteen\", \"thirty\", and \"forty\", respectively. If at least three points have been scored by each player, making the player's scores equal at forty apiece, the score is not called out as \"forty-forty\", but rather as \"deuce\". If at least three points have been scored by each side and a player has one more point than his opponent, the score of the game is \"advantage\" for the player in the lead. During informal games, \"advantage\" can also be called \"ad in\" or \"van in\" when the serving player is ahead, and \"ad out\" or \"van out\" when the receiving player is ahead\nhttps://en.wikipedia.org/wiki/Quad_scull\nQuad scull\nQuad scull Germany 1982: Martin Winter (front), Uwe Heppner (second), Uwe Mund (third), and Karl-Heinz Bußert (last)\nA quad scull, or quadruple scull in full, is a rowing boat used in the sport of competitive rowing. It is designed for four persons who propel the boat by sculling with two oars, one in each hand\nRacing boats (often called \"shells\") are long, narrow, and broadly semi-circular in cross-section in order to reduce drag. They usually have a fin towards the rear, to help prevent roll and yaw. Originally made from wood, shells are now almost always made from a composite material (usually carbon-fiber reinforced plastic) for strength and weight advantages. The riggers in sculling apply the forces symmetrically to each side of the boat. 
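Going back to the tennis scoring passage above: the game-score calls (love/fifteen/thirty/forty, deuce, advantage) and the win condition (at least four points and two clear) can be sketched roughly as below. This is a simplification that ignores tie-breaks and set scoring.

```python
NAMES = ["love", "fifteen", "thirty", "forty"]

def call_game_score(server_pts, receiver_pts):
    """Return the traditional call for a game in progress, or the winner."""
    # A game is won with at least four points and a two-point margin.
    if max(server_pts, receiver_pts) >= 4 and abs(server_pts - receiver_pts) >= 2:
        return "game server" if server_pts > receiver_pts else "game receiver"
    if server_pts >= 3 and receiver_pts >= 3:
        if server_pts == receiver_pts:
            return "deuce"
        return "advantage server" if server_pts > receiver_pts else "advantage receiver"
    return f"{NAMES[server_pts]}-{NAMES[receiver_pts]}"

print(call_game_score(0, 0))  # love-love
print(call_game_score(3, 3))  # deuce
print(call_game_score(4, 3))  # advantage server
print(call_game_score(5, 3))  # game server
```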
Quad sculls is one of the classes recognized by the International Rowing Federation and the Olympics.[1] FISA rules specify minimum weights for each class of boat so that no individual will gain a great advantage from the use of expensive materials or technology.\nWhen there are four rowers in a boat, each with only one sweep oar and rowing on opposite sides, the combination is referred to as a \"coxed four\" or \"coxless four\" depending on whether the boat has a cox. In sweep oared racing the rigging means the forces are staggered alternately along the boat. The symmetrical forces in sculling make the boat more efficient and so the quadruple scull is faster than the coxless four.[2] *Update Required*\nA 'quad' is different to a 'four' in that a 'quad', or quadruple scull, is composed of four rowers each with two blades, sculling. A 'four' is made up of four rowers each with one oar in hand, sweeping.\nfour positions\nThe sport consists of four positions: midfield, attack, defense and goalie. In field lacrosse, attackmen are solely offensive players (except on the \"ride\", when the opposition tries to bring the ball upfield and attackmen must stop them), defensemen or defenders are solely defensive players (except when bringing up the ball, which is called a \"clear\"), the goalie is the last line of defense, directly defending the goal, and midfielders or \"middies\" can go anywhere on the field and play offense and defense, although in higher levels of lacrosse there are specialized offensive and defensive middies. Long stick middies only play defense and come off of the field on offense.\nField lacrosse[edit]\nDiagram of a men's college lacrosse field\nThere are ten players in each team: three attackmen, three midfielders, three defensemen, and one goalie.\nEach player carries a lacrosse stick (or crosse). A \"short crosse\" (or \"short stick\") measures between 40 in (1.0 m) and 42 in (1.1 m) long (head and shaft together) and is typically used by attackers or midfielders. A maximum of four players on the field per team may carry a \"long crosse\" (sometimes called \"long pole\", \"long stick\" or \"d-pole\") which is 52 in (1.3 m) to 72 in (1.8 m) long; typically used by defenders or midfielders.\nThe NLL games consist of four fifteen-minute quarters compared with three periods of twenty minutes each (similar to ice hockey) in CLA games (multiple 15-minute OT periods for tied games, until whoever scores first). NLL players may use only sticks with hollow shafts, while CLA permits solid wooden sticks.:[35][36]\nBegun in 1968, world championships began as a four-team invitational tournament sponsored by the International Lacrosse Federation. Until 1986, lacrosse world championships had been contested only by the US, Canada, England, and Australia. Scotland and Wales had teams competing in the women's edition. They are now held for lacrosse at senior men, senior women, under 19 men and under 19 women levels.\nWith the expansion of the game internationally, the 2006 Men's World Championship was contested by 21 countries and the Iroquois Nationals, representing the Six Nations of the Iroquois Confederacy. They are the only Native American/First Nations team to compete internationally. The 2009 Women's World Cup was competed for by 16 nations.\nIn 2003, the first World Indoor Lacrosse Championship was contested by six nations at four sites in Ontario. Canada won the championship in a final game against the Iroquois Nationals, 21–4. 
The 2007 WILC was held in Halifax from May 14–20, and also won by Canada. Competition included the Iroquois Nationals and teams from Australia, Canada, Czech Republic, England, Ireland, Scotland, and the United States.\nhttps://en.wikipedia.org/wiki/Lacrosse\n\"League announces expansion of rosters to 19 and addition of fourth long pole for 2009\". Inside Lacrosse. October 22, 2008. Retrieved October 24, 2008.\nJump up ^\nhttps://en.wikipedia.org/wiki/Coxed_four\nA coxed four is a rowing boat used in the sport of competitive rowing. It is designed for four persons who propel the boat with sweep oars and is steered by a coxswain.\nThe crew consists of four rowers, each having one oar, and a cox. There are two rowers on the stroke side (rower's right hand side) and two on the bow side (rower's lefthand side). The cox steers the boat using a rudder and may be seated at the stern of the boat where there is a view of the crew or in the bow (known as a bowloader). With a bowloader, amplification is needed to communicate with the crew which is sitting behind, but the cox has a better view of the course and the weight distribution may help the boat go faster. When there is no cox, the boat is referred to as a \"coxless four\".\n\"Coxed four\" is one of the classes recognized by the International Rowing Federation. It was one of the original events in the Olympics but was dropped in 1992.[1]\nhttps://en.wikipedia.org/wiki/Gondola\nDuring their heyday as a means of public transports, teams of four men would share ownership of a gondola — three oarsmen (gondoliers) and a fourth person, primarily shore based and responsible for the booking and administration of the gondola (Il Rosso Riserva).\nhttps://en.wikipedia.org/wiki/Sandolo\nFour types of boats used in Venice\nSpace in the sandolo is limited, with enough room for one oarsman, aft, two passengers on the main seat, and two more passengers sitting on small stools towards the bow.[7] The traditional use of the sandolo is for recreation and racing, and it is considered one of the four principal types of boat used in and around Venice.[8] Rather less stable than a gondola, it has a rocking motion all of its own.[9]\nhttp://www.venipedia.org/wiki/index.php?title=Traditional_Boats\nFour traditional boats of Venice\nIn early Venice, traditional boats were used as a means of both personal and public transportation. The most common and well-known human transport boats are the sandolo, mascareta, puparin, and gondola. Today motorized boats have replaced many of these human powered boats, and only the gondola remains as a tourist attraction.\nhttps://en.wikipedia.org/wiki/Line_(ice_hockey)\nIce hockey teams usually consist of four lines of three forwards, three pairs of defencemen, and two goaltenders.\nIn ice hockey, a line is a group of forwards that play in a group, or \"shift\", during a game.\nA complete forward line consists of a left wing, a center, and a right wing, while a pair of defensemen who play together are called \"partners.\" Typically, an NHL team dresses twelve forwards along four lines and three pairs of defensemen, though some teams elect to dress a seventh defenseman, or a thirteenth forward. 
In ice hockey, players are substituted \"on the fly,\" meaning a substitution can occur even in the middle of play as long as proper protocol is followed (under typical ice hockey rules, the substituting player cannot enter the ice until the substituted player is within a short distance of the bench and not actively playing the puck); substitutions can still be made during stoppages. Usually, coordinated groups of players (called linemates) are substituted simultaneously in what are called line changes. Linemates may change throughout the game at the coach's discretion.\nIce hockey is one of only a handful of sports (gridiron football being one of the most prominent others) that allows for unlimited free substitution and uses a system of multiple sets of players for different situations. Because of the use of lines in hockey, ice hockey rosters have relatively large rosters compared to the number of players on the ice (23 for a typical NHL team, with 20 active on game day and six on the ice at any given time). Only gridiron football has a larger relative roster size (the NFL has 53 players, 46 active on gameday, 11 on the field).\nTypes of line[edit]\nThe first line is usually composed of the best offensive players on the team. Teams heavily rely on this line, which generates the bulk of the team's scoring. These players often see the highest number of minutes among forwards in a game and are usually part of the team's starting lineup.\nThe second line is generally composed of second-tier offensive players, and helps by adding supplementary offense to that generated by the first line while contributing more two-way play than the offensively-focused scoring line. Higher end (typically first line) players may be put on the second line to spread scoring across the lineup, making a team more difficult for opponents to defend against. This frequently happens when a team has two high-end players who play the same position.\nThe third line is often called the checking line, and is generally made up of more defensively oriented forwards and grinders. This line is often played against an opponent's first or second lines in an effort to reduce their scoring, and physically wear them down. The third line adds less offense than the first or second lines, but generally more than the fourth.\nThe fourth line is often called the \"energy line,\" both because their shifts give other players a chance to rest, and because their physically oriented play is said to give their teammates an emotional boost. It is usually composed of journeymen with limited scoring potential, but strong physical play and, as often as possible, strong skating abilities. With the smallest amount of ice time, they tend to play in short bursts rather than pace themselves. Pests and enforcers usually play the fourth line, as do centers whose primary skill is winning faceoffs. The fourth line can be a checking line\nhttps://en.wikipedia.org/wiki/Face-off\nA common formation, especially at centre ice, is for a skater to take the face-off, with the wings lateral to the centre on either side, and the skater, usually a defenseman, behind the player handling the face-off, one toward each side. 
This is not mandatory, however, and other formations are seen—especially where the face-off is in one of the four corner face-off spots.
https://en.wikipedia.org/wiki/National_Hockey_League
At its inception, the NHL had four teams—all in Canada, thus the adjective "National" in the league's name.
https://en.wikipedia.org/wiki/Pocket_Cube
2x2 is a quadrant.
The Pocket Cube (also known as the Mini Cube or the Ice Cube) is the 2×2×2 equivalent of a Rubik's Cube. The cube consists of 8 pieces, all corners.
https://en.wikipedia.org/wiki/Pyramorphix
This is a tetrahedron - tetra means four.
The Pyramorphix (/ˌpɪrəˈmɔːrfɪks/, often misspelt Pyramorphinx) is a tetrahedral puzzle similar to the Rubik's Cube. It has a total of 8 movable pieces to rearrange, compared to the 20 of the Rubik's Cube. Though it looks like a simpler version of the Pyraminx, it is an edge-turning puzzle with the mechanism identical to that of the Pocket Cube.
At first glance, the Pyramorphix appears to be a trivial puzzle. It resembles the Pyraminx, and its appearance would suggest that only the four corners could be rotated. In fact, the puzzle is a specially shaped 2×2×2 cube, if the tetrahedron is considered to be a demicube. Four of the cube's corners are reshaped into pyramids and the other four are reshaped into triangles. The result of this is a puzzle that changes shape as it is turned.
The original name for the Pyramorphix was "The Junior Pyraminx." This was altered to reflect the "Shape Changing" aspect of the puzzle which makes it appear less like the 2x2x2 Rubik Cube. "Junior" also made it sound less desirable to an adult customer. The only remaining reference to the name "Junior Pyraminx" is on Uwe Mèffert's website-based solution which still has the title "jpmsol.html".[1][2]
The purpose of the puzzle is to scramble the colors and the shape, and then restore it to its original state of being a tetrahedron with one color per face.
Number of combinations
The puzzle is available either with stickers or plastic tiles on the faces. Both have a ribbed appearance, giving a visible orientation to the flat pieces. This results in 3,674,160 combinations, the same as the 2×2×2 cube.
However, if there were no means of identifying the orientation of those pieces, the number of combinations would be reduced. There would be 8! ways to arrange the pieces, divided by 24 to account for the lack of center pieces, and there would be 3^4 ways to rotate the four pyramidal pieces.
8! × 3^4 / 24 = 136,080
The Pyramorphix can be rotated around three axes by multiples of 90°. The corners cannot rotate individually as on the Pyraminx. The Pyramorphix rotates in a way that changes the position of center pieces not only with other center pieces but also with corner pieces, leading to a variety of shapes.
Master Pyramorphix
The Master Pyramorphix
The Master Pyramorphix, color-scrambled
The Master Pyramorphix, color- and shape-scrambled
The Master Pyramorphix, partially solved
The Master Pyramorphix, with maximal face-piece flip, equivalent to the "superflip" configuration of the 3x3x3 Rubik's Cube
The Master Pyramorphix is a more complex variant of the Pyramorphix. Although it is officially called the Master Pyramorphix, most people refer to it as the "Mastermorphix".
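Before moving on to the Master Pyramorphix details, the two Pyramorphix counts quoted above (3,674,160 with visible piece orientation; 136,080 without) are easy to verify. Python is used here only as a calculator.

```python
from math import factorial

# Ribbed stickers (every piece's orientation visible): same count as the
# 2x2x2 Pocket Cube, i.e. 8! * 3^7 / 24.
print(factorial(8) * 3**7 // 24)   # 3674160

# No visible orientation on the flat pieces: only the four pyramidal pieces
# carry a meaningful twist, i.e. 8! * 3^4 / 24.
print(factorial(8) * 3**4 // 24)   # 136080
```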
Like the Pyramorphix, it is an edge-turning tetrahedral puzzle capable of changing shape as it is twisted, leading to a large variety of irregular shapes. Several different variants have been made, including flat-faced custom-built puzzles by puzzle fans and Uwe Mèffert's commercially produced pillowed variant (pictured), sold through his puzzle shop, Meffert's.
The puzzle consists of 4 corner pieces, 4 face centers, 6 edge pieces, and 12 non-center face pieces. Being an edge-turning puzzle, the edge pieces only rotate in place, while the rest of the pieces can be permuted. The face centers and corner pieces are interchangeable because they are both corners although they are shaped differently, and the non-center face pieces may be flipped, leading to a wide variety of exotic shapes as the puzzle is twisted. If only 180° turns are made, it is possible to scramble only the colors while retaining the puzzle's tetrahedral shape. When 90° and 180° turns are made, this puzzle can "shape shift".
In spite of superficial similarities, the only way that this puzzle is related to the Pyraminx is that they are both "twisty puzzles"; the Pyraminx is a face-turning puzzle. On the Mastermorphix the corner pieces are non-trivial; they cannot be simply rotated in place to the right orientation.
Solutions
Despite its appearance, the puzzle is in fact equivalent to a shape modification of the original 3x3x3 Rubik's Cube. Its 4 corner pieces on the corners and 4 corner pieces on the face centers together are equivalent to the 8 corner pieces of the Rubik's Cube, its 6 edge pieces are equivalent to the face centers of the Rubik's Cube, and its non-center face pieces are equivalent to the edge pieces of the Rubik's Cube. Thus, the same methods used to solve the Rubik's Cube may be used to solve the Master Pyramorphix, with a few minor differences: the center pieces are sensitive to orientation because they have two colors, unlike the usual coloring scheme used for the Rubik's Cube, and the face centers are not sensitive to orientation (however, when in the "wrong" orientation parity errors may occur). In effect, it behaves as a Rubik's Cube with a non-standard coloring scheme where center piece orientation matters, and the orientation of 4 of the 8 corner pieces does not, technically, matter.
Unlike the Square One, another shape-changing puzzle, the most straightforward solutions of the Master Pyramorphix do not involve first restoring the tetrahedral shape of the puzzle and then restoring the colors; most of the algorithms carried over from the 3x3x3 Rubik's Cube translate to shape-changing permutations of the Master Pyramorphix. Some methods, such as the equivalent of Phillip Marshal's "Ultimate Solution", show a gradual progression in shape as the solution progresses; first the non-center face pieces are put into place, resulting in a partial restoration of the tetrahedral shape except at the face centers and corners, and then the complete restoration of tetrahedral shape as the face centers and corners are solved.
There are four corners and four face centers. These may be interchanged with each other in 8! different ways. There are 3^7 ways for these pieces to be oriented, since the orientation of the last piece depends on the preceding seven, and the texture of the stickers makes the face center orientation visible. There are twelve non-central face pieces. These can be flipped in 2^11 ways and there are 12!/2 ways to arrange them.
The three pieces of a given color are distinguishable due to the texture of the stickers. There are six edge pieces which are fixed in position relative to one another, each of which has four possible orientations. If the puzzle is solved apart from these pieces, the number of edge twists will always be even, making 4^6/2 possibilities for these pieces.
8! × 3^7 × 12! × 2^9 × 4^6 ≈ 8.86 × 10^22
The full number is 88 580 102 706 155 225 088 000.
However, if the stickers were smooth the number of combinations would be reduced. There would be 3^4 ways for the corners to be oriented, but the face centers would not have visible orientations. The three non-central face pieces of a given color would be indistinguishable. Since there are six ways to arrange the three pieces of the same color and there are four colors, there would be 2^11 × 12!/6^4 possibilities for these pieces.
8! × 3^4 × 12! × 2^10 × 4^6 / 6^4 ≈ 5.06 × 10^18
The full number is 5 062 877 383 753 728 000.
https://en.wikipedia.org/wiki/Pyraminx
Tetrahedron - tetra means four - it is actually a tetractys with four dots in the first line and ten dots in all. The Pythagoreans worshipped the tetractys as the ultimate symbol.
Pyraminx in its solved state
The Pyraminx (/ˈpɪrəmɪŋks/) is a regular tetrahedron puzzle in the style of Rubik's Cube. It was made and patented by Uwe Mèffert after the original 3 layered Rubik's Cube by Erno Rubik, and introduced by Tomy Toys of Japan (then the 3rd largest toy company in the world) in 1981.[1]
Pyraminx in the middle of a twist
The Pyraminx was first conceived by Mèffert in 1970. He did nothing with his design until 1981 when he first brought it to Hong Kong for production. Uwe is fond of saying that had it not been for Erno Rubik's invention of the cube, his Pyraminx would have never been produced.[citation needed]
The Pyraminx is a puzzle in the shape of a regular tetrahedron, divided into 4 axial pieces, 6 edge pieces, and 4 trivial tips. It can be twisted along its cuts to permute its pieces. The axial pieces are octahedral in shape, although this is not immediately obvious, and can only rotate around the axis they are attached to. The 6 edge pieces can be freely permuted. The trivial tips are so called because they can be twisted independently of all other pieces, making them trivial to place in solved position. Meffert also produces a similar puzzle called the Tetraminx, which is the same as the Pyraminx except that the trivial tips are removed, turning the puzzle into a truncated tetrahedron.
Scrambled Pyraminx
The purpose of the Pyraminx is to scramble the colors, and then restore them to their original configuration.
The 4 trivial tips can be easily rotated to line up with the axial piece which they are respectively attached to; and the axial pieces are also easily rotated so that their colors line up with each other. This leaves only the 6 edge pieces as a real challenge to the puzzle. They can be solved by repeatedly applying two 4-twist sequences, which are mirror-image versions of each other. These sequences permute 3 edge pieces at a time, and change their orientation differently, so that a combination of both sequences is sufficient to solve the puzzle.
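Returning briefly to the Master Pyramorphix totals quoted above, they can be checked the same way; this is just arithmetic, using the factors listed in the text.

```python
from math import factorial

# Textured stickers: 8! * 3^7 for corners/face centers, 12!/2 * 2^11 for the
# non-center face pieces, 4^6/2 for the six fixed edges.
textured = factorial(8) * 3**7 * (factorial(12) // 2) * 2**11 * (4**6 // 2)
print(textured)   # 88580102706155225088000, about 8.86e22

# Smooth stickers: corners drop to 3^4, and the three same-colored face pieces
# become indistinguishable (divide by 6^4): 8! * 3^4 * 12! * 2^10 * 4^6 / 6^4.
smooth = factorial(8) * 3**4 * factorial(12) * 2**10 * 4**6 // 6**4
print(smooth)     # 5062877383753728000, about 5.06e18
```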
However, more efficient solutions (requiring a smaller total number of twists) are generally available (see below).\nThe twist of any axial piece is independent of the other three, as is the case with the tips. The six edges can be placed in 6!/2 positions and flipped in 25 ways, accounting for parity. Multiplying this by the 38 factor for the axial pieces gives 75,582,720 possible positions. However, setting the trivial tips to the right positions reduces the possibilities to 933,120, which is also the number of possible patterns on the Tetraminx. Setting the axial pieces as well reduces the figure to only 11,520, making this a rather simple puzzle to solve.\nOptimal solutions[edit]\nThe maximum number of twists required to solve the Pyraminx is 11. There are 933,120 different positions (disregarding rotation of the trivial tips), a number that is sufficiently small to allow a computer search for optimal solutions. The table below summarizes the result of such a search, stating the number p of positions that require n twists to solve the Pyraminx:\nn 0 1 2 3 4 5 6 7 8 9 10 11\np 1 8 48 288 1728 9896 51808 220111 480467 166276 2457 32\nRecords[edit]\nSolving pyraminx in a competition. Andreas Pung at Estonian Open 2011.\nThe current world record for a single solve of the Pyraminx stands at 1.32 seconds, set by Drew Brads at the Lexington Fall 2015. He also holds the fastest average of 5 (with the fastest and slowest solve disregarded) with 2.14 seconds at US Nationals 2016.[2][3]\nThere are many methods for solving a Pyraminx. They can be split up into two groups.\n1) V first- In these methods, two or three edges, and not a side, is solved first, and a set of algorithms, also called LL algs (last layer algs), are given to solve the remaining puzzle.\n2) Top first methods- In these methods a block on the top, which is three edges around a corner, is solved first and the remaining is solved using a set of algorithms.\nCommon V first methods-\na) Layer by Layer - In this method a face with all edges oriented in the right spot (a.k.a. a layer) is solved and then the remaining puzzle is solved using 5 algorithms particularly for this method.\nb) L4E- L4E or last 4 edges is very similar to Layer by Layer. The only difference is that TWO edges are solved around three Centers, and the rest is done by a set of algorithms.\nc) Intuitive L4E- A method similar to the L4E, as the name suggests, in which lots of visualization is required. The set of algorithms mentioned in the previous method are not memorized. Instead, cubers intuitively solve each case by anticipating the movement of pieces. This is the most advanced V first method.\nCommon top first methods-\na) One Flip- This method uses two edges around one centre solved and the third edge flipped. There are a total of six cases after this step, for which algorithms are memorized and executed. The third step involves using a common set of algorithms for ALL top first methods, also called Keyhole last layer, which involves 5 algorithms, four of them being the mirrors of each other.\nb) Keyhole- This method uses two edges in the right place around one centre, and the third edge does not match any color of the edge i.e. it is not in the right place OR flipped. The centers of the fourth color are then solved USING the non oriented edge (a.k.a. keyhole). 
The last step is solved using Keyhole last layer algs.\nc) OKA- In this method, One edge is oriented around two edges in the wrong place, but one of the edges that is in the wrong place belongs to the block itself. The last edge is found on the bottom layer and a very simple algorithm is executed to get it in the right place, followed by keyhole last layer algs.\nSome other common top first methods are WO and Nutella.\nProfessional Pyraminxers like Drew Brads usually learn all methods, and while observing a case, decide which method best suits that case.\nVariations[edit]\nA solved Tetraminx.\nThere are several variations of the puzzle. The simplest, Tetraminx, is equivalent to the (3x) Pyraminx but without the tips (see photo). There also exists \"higher-order\" versions, such as the 4x Master Pyraminx (see photos) and the 5x Professor's Pyraminx.\nA scrambled Master Pyraminx\nA solved Master Pyraminx\nThe Master Pyraminx has 4 layers and 16 triangles-per-face (compared to 3 layers and 9 triangles-per-face of the original). This version has about 2.17225 × 1017 combinations.[4][5] The Master Pyraminx has\n4 \"tips\" (same as the original Pyraminx)\n4 \"middle axials\" (same as the original Pyraminx)\n4 \"centers\" (similar to Rubik's Cube, none in the original Pyraminx)\n6 \"inner edges\" (similar to Rubik's Cube, none in the original Pyraminx)\n12 \"outer edges\" (2-times more than the 6 of the original Pyraminx)\nIn summary, the Master Pyraminx has 30 \"manipulable\" pieces. However, like the original, 8 of the pieces (the tips and middle axials) are fixed in position (relative to each other) and can only be rotated in place. Also, the 4 centers are fixed in position and can only rotate (like the Rubik's Cube). So there are only 18 (30-8-4) \"truly movable\" pieces; since this is 10% less than the 20 \"truly movable\" pieces of the Rubik's Cube, it should be no surprise that the Master Pyraminx has about 200-times fewer combinations than a Rubik's Cube (about 4.3252 × 1019[6]).\nThere is 16 squares in the quadrant model\nhttps://en.wikipedia.org/wiki/15_puzzle\nThe 15-puzzle (also called Gem Puzzle, Boss Puzzle, Game of Fifteen, Mystic Square and many others) is a sliding puzzle that consists of a frame of numbered square tiles in random order with one tile missing. The puzzle also exists in other sizes, particularly the smaller 8-puzzle. If the size is 3×3 tiles, the puzzle is called the 8-puzzle or 9-puzzle, and if 4×4 tiles, the puzzle is called the 15-puzzle or 16-puzzle named, respectively, for the number of tiles and the number of spaces. The object of the puzzle is to place the tiles in order by making sliding moves that use the empty space.\nThe puzzle was \"invented\" by Noyes Palmer Chapman,[11] a postmaster in Canastota, New York, who is said to have shown friends, as early as 1874, a precursor puzzle consisting of 16 numbered blocks that were to be put together in rows of four, each summing to 34. Copies of the improved Fifteen Puzzle made their way to Syracuse, New York, by way of Noyes' son, Frank, and from there, via sundry connections, to Watch Hill, RI, and finally to Hartford (Connecticut), where students in the American School for the Deaf started manufacturing the puzzle and, by December 1879, selling them both locally and in Boston, Massachusetts. 
Shown one of these, Matthias Rice, who ran a fancy woodworking business in Boston, started manufacturing the puzzle sometime in December 1879 and convinced a \"Yankee Notions\" fancy goods dealer to sell them under the name of \"Gem Puzzle\". In late January 1880, Dr. Charles Pevey, a dentist in Worcester, Massachusetts, garnered some attention by offering a cash reward for a solution to the Fifteen Puzzle.[11]\nThe game became a craze in the U.S. in February 1880, Canada in March, Europe in April, but that craze had pretty much dissipated by July. Apparently the puzzle was not introduced to Japan until 1889.\nNoyes Chapman had applied for a patent on his \"Block Solitaire Puzzle\" on February 21, 1880. However, that patent was rejected, likely because it was not sufficiently different from the August 20, 1878 \"Puzzle-Blocks\" patent (US 207124) granted to Ernest U. Kinsey.[11]\nSam Loyd[edit]\nSam Loyd's 1914 illustration\nSam Loyd claimed from 1891 until his death in 1911 that he invented the puzzle, for example writing in the Cyclopedia of Puzzles (published 1914): \"The older inhabitants of Puzzleland will remember how in the early seventies I drove the entire world crazy over a little box of movable pieces which became known as the '14-15 Puzzle'.\"[12] However, Loyd had nothing to do with the invention or initial popularity of the puzzle, and in any case the craze was in 1880, not the early 1870s. Loyd's first article about the puzzle was published in 1886 and it was not until 1891 that he first claimed to have been the inventor.[11][13]\nSome later interest was fuelled by Loyd offering a $1,000 prize for anyone who could provide a solution for achieving a particular combination specified by Loyd, namely reversing the 14 and 15.[14] This was impossible, as had been shown over a decade earlier by Johnson & Story (1879), as it required a transformation from an even to an odd combination.\nThe Minus Cube, manufactured in the USSR, is a 3D puzzle with similar operations to the 15-puzzle.\nBobby Fischer was an expert at solving the 15-Puzzle. He had been timed to be able to solve it within 25 seconds; Fischer demonstrated this on November 8, 1972, on The Tonight Show Starring Johnny Carson.\nSeveral browser games are inspired of n-puzzle mechanic, e.g., Continuity[15] or Rooms.[16]\nIt is 2 by 2 quadrants\nhttps://en.wikipedia.org/wiki/Minus_Cube\nhttps://en.wikipedia.org/wiki/File:Minus_Cube_puzzle.jpg\nThe Minus Cube (Russian: «Минус-кубик») is a 3D mechanical variant of the n-puzzle which was manufactured in the Soviet Union. It consists of a bonded transparent plastic box containing seven small cubes, each glued together from two U-shape parts: one white and one coloured. The length of one side of the interior of the box is slightly more than twice the length of the side of a small cube. There is an empty space the size of one small cube inside the box and the small cubes are moveable inside the box by tilting the box causing a cube to fall into the space. The goal of the puzzle is to shuffle the cubes in such a way that on each side of the box, all of the faces of the small cubes are one color.\nhttps://en.wikipedia.org/wiki/Klotski\nThere are four quadrant squares in the game\nhttps://en.wikipedia.org/wiki/File:HuaRongDao.jpg\nThe traditional Chinese wooden game Huarong Dao(華容道), where the largest block must be moved to the bottom middle location so that it can be slid over the border, without any of the other blocks being removed in this way. 
The game of Klotski may refer to a game with this specific layout, or to an identical game with a different tile setup.\nRush Hour is a sliding block puzzle invented by Nob Yoshigahara in the 1970s. It was first sold in the United States in 1996. It is now being manufactured by ThinkFun (formerly Binary Arts).\nThe game comes with 12 cars and four trucks. 16 all together the last quadrant different. The quadrant model\nhttps://en.wikipedia.org/wiki/Rush_Hour_(board_game)\nThe regular version comes with 40 puzzles split up into 4 different difficulties, ranging from Beginner to Expert. The deluxe edition has a black playing board, 60 new puzzles and has an extra difficulty, the Grand Master, which is harder than Expert. Puzzles falling in this difficulty range can only be sold with expansion packs of the original game. The regular version includes a travel bag. Extra puzzle card packs (in addition to the 40 or 60 cards included with the game) are also available. The deluxe edition also comes with shiny cars. In 2011, the board was changed to black like the deluxe edition; the cards was changed to new levels and to match the board change too.\nThe board is a 6x6 grid with grooves in the tiles to allow cars to slide, and an exit hole which according to the puzzle cards, only the red car can escape. The game comes with 12 cars and 4 trucks, each colored differently. The cars take up 2 squares each; and the trucks take up 3.\nhttps://en.wikipedia.org/wiki/File:15-puzzle-loyd.svg\nSam Loyd's unsolvable 15-puzzle, with tiles 14 and 15 exchanged. This puzzle is not solvable because moving it to the solved state would require a change of the invariant.\nhttps://en.wikipedia.org/wiki/Jeu_de_taquin\nhttps://en.wikipedia.org/wiki/File:Jeu_de_taquin.svg\nExample of a Jeu de taquin slide\nIn the mathematical field of combinatorics, jeu de taquin is a construction due to Marcel-Paul Schützenberger (1977) which defines an equivalence relation on the set of skew standard Young tableaux. A jeu de taquin slide is a transformation where the numbers in a tableau are moved around in a way similar to how the pieces in the fifteen puzzle move. Two tableaux are jeu de taquin equivalent if one can be transformed into the other via a sequence of such slides.\n\"Jeu de taquin\" (literally \"teasing game\") is the French name for the fifteen puzzle.\nIt has four colors- four shapes on a quadrant grid- very famous puzzle\nhttps://en.wikipedia.org/wiki/File:Missing_square_puzzle-AB.png\nhttps://en.wikipedia.org/wiki/Missing_square_puzzle\nThe missing square puzzle is an optical illusion used in mathematics classes to help students reason about geometrical figures; or rather to teach them not to reason using figures, but to use only textual descriptions and the axioms of geometry. It depicts two arrangements made of similar shapes in slightly different configurations. Each apparently forms a 13×5 right-angled triangle, but one has a 1×1 hole in it.\nA true 13×5 triangle cannot be created from the given component parts. The four figures (the yellow, red, blue and green shapes) total 32 units of area. The apparent triangles formed from the figures are 13 units wide and 5 units tall, so it appears that the area should be\n\\textstyle {S={\\frac {13\\times 5}{2}}=32.5} units. However, the blue triangle has a ratio of 5:2 (=2.5), while the red triangle has the ratio 8:3 (≈2.667), so the apparent combined hypotenuse in each figure is actually bent. 
With the bent hypotenuse, the first figure actually occupies a combined 32 units, while the second figure occupies 33, including the \"missing\" square.\nThe amount of bending is approximately 1/28th of a unit (1.245364267°), which is difficult to see on the diagram of the puzzle, and was illustrated as a graphic. Note the grid point where the red and blue triangles in the lower image meet (5 squares to the right and two units up from the lower left corner of the combined figure), and compare it to the same point on the other figure; the edge is slightly under the mark in the upper image, but goes through it in the lower. Overlaying the hypotenuses from both figures results in a very thin parallelogram (represented with the four red dots) with an area of exactly one grid square, so the \"missing\" area.\nMatsuyama's famous paradox uses four quadrilaterals in a quadrant formation\nhttps://en.wikipedia.org/wiki/File:Missing_square_edit.gif\nMitsunobu Matsuyama's \"Paradox\" uses four congruent quadrilaterals and a small square, which form a larger square. When the quadrilaterals are rotated about their centers they fill the space of the small square, although the total area of the figure seems unchanged. The apparent paradox is explained by the fact that the side of the new large square is a little smaller than the original one. If θ is the angle between two opposing sides in each quadrilateral, then the quotient between the two areas is given by sec2θ − 1. For θ = 5°, this is approximately 1.00765, which corresponds to a difference of about 0.8%.\nALL OF THIS IS IN MY OVER 50 QMR BOOKS\nSam Lloyd's famous paradoxical disseciton also uses four figures. at first the four figures make up 64 squares (Four quadrant model 16's- 64 is the double tetrahedron merkabah vector equilibrium what some call the geometry of existence)\nhttps://en.wikipedia.org/wiki/File:Loyd64-65-dis_b.svg\nSam Loyd's paradoxical dissection. In the \"larger\" rearrangement, the gaps between the figures have a combined unit square more area than their square gaps counterparts, creating an illusion that the figures there take up more space than those in the square figure. In the \"smaller\" rearrangement, each quadrilateral needs to overlap the triangle by an area of half a unit for its top/bottom edge to align with a grid line.\nhttps://en.wikipedia.org/wiki/Death_spiral_(figure_skating)\nDeath spirals can be performed in all four variants of inside/outside and forward/backward edges. The outside edge death spirals are consi"
CommonCrawl
Probability of selecting an even natural number from the set $\Bbb N$. I confirmed on this thread that there are as many as even natural numbers as there are natural numbers. Question : Suppose I have selected a number $n \in \mathbb N$ , what is the probability that $n$ is even? My Thought : $\text{Probability} = \dfrac{\text{n(E)}}{\text{n(S)}}$ Here $\text{n(S)}$ is the set of all natural numbers i.e. $\mathbb N$, and $\text{n(E)}$ is set of all even natural numbers. Since it is proved that number of elements is the set $\mathbb N$ is exactly the same as the number of elements in the set of natural numbers (it's very easy to put the set of natural numbers, $\Bbb N=\{0,1,2,3,\dots\}$, into one-to-one correspondence with the set $\text{E}=\{0,2,4,6,\dots\}$ of even natural numbers; the map $\Bbb N\to \text{E}:n\mapsto 2n$ is clearly a bijection.) ; Thus, Probability $= \boxed 1$ I know this is definitely wrong.Probability must be $0.5$. But where am I wrong? Can any one explain ? probability infinity infinite-groups Jaideep Khare Jaideep KhareJaideep Khare $\begingroup$ You need to specify a distribution telling you with what probability you draw what number. And there is no uniform distribution on the natural numbers. $\endgroup$ – Arthur Apr 12 '17 at 8:28 $\begingroup$ That's what I'm saying: making each natural number equally likely is impossible. $\endgroup$ – Arthur Apr 12 '17 at 8:31 $\begingroup$ math.stackexchange.com/questions/146844/… $\endgroup$ – Asaf Karagila♦ Apr 12 '17 at 10:33 $\begingroup$ "I'm thinking by common sense" - that's going to give you pretty terrible results for anything to do with probability. Human intuition is notoriously awful at probability. $\endgroup$ – user2357112 Apr 12 '17 at 17:33 $\begingroup$ @JaideepKhare: The Monty Hall problem, Simpson's paradox, that thing with Bayes' theorem and the medical test, etc... human intuition breaks down all over the place for probability. It also breaks down all over the place for anything involving infinities, but that just means it's terrible at more than one thing. $\endgroup$ – user2357112 Apr 12 '17 at 17:48 When you write Probability = $\dfrac{\text{n(E)}}{\text{n(S)}}$, you're assuming that you're drawing a number uniformly at random, which means that every number has the same probability to be drawn. This formula is valid if $\text{E}$ is a finite set, but not if $\text{E}$ is infinite. In fact, we can show that there is no way to draw uniformly at random over $\mathbb{N}$ or $\mathbb{Z}$, as said in the comments. You can't do that and at the same time satisfying the properties that are expected from probabilities. See Yikai's answer for a proof of this fact, if you have some knowledge of measure theory. So if you want to compute some probability of getting an even number among natural numbers, you first have to specify what is the distribution on $\mathbb{N}$, but this cannot be the uniform distribution. AugustinAugustin $\begingroup$ Can't you just take the limit as the sample space goes to N? $\endgroup$ – Kevin Apr 12 '17 at 15:50 $\begingroup$ What would be the probability to pick $1$ for example? $\endgroup$ – Augustin Apr 12 '17 at 15:57 $\begingroup$ @Kevin: You can take the limit of the probability of even-ness, and get $\frac{1}{2}$, but then you've computed "the limit, as $n \to \infty$, of the chance of getting an even number when picking uniformly from $\{1,…,n\}$". 
You can't reasonably call it "the chance of getting an even number when picking uniformly from $\mathbb{N}$" unless you can find a way to make sense of what "picking uniformly from $\mathbb{N}$" means in itself — and this is rather difficult to make sense of, since taking a limit of probability distributions doesn't give a probability distribution in general. $\endgroup$ – Peter LeFanu Lumsdaine Apr 12 '17 at 17:40 $\begingroup$ @PeterLeFanuLumsdaine: I agree, but I think that limit is more concisely called "the proportion of even numbers to natural numbers." If you use primality instead of evenness, you get "the proportion of primes to naturals is zero" as a trivial corollary of the Prime Number Theorem, or in even simpler terms, "almost all natural numbers are composite." So this isn't totally meaningless, it's just not a "real" probability distribution (in the sense that it doesn't have all of the properties we want in a probability distribution). $\endgroup$ – Kevin Apr 12 '17 at 17:59 $\begingroup$ @Kevin: absolutely, agreed — it's a very meaningful and interesting limit to consider, and is extensively used and studied, mainly under the names natural density or asymptotic density. $\endgroup$ – Peter LeFanu Lumsdaine Apr 12 '17 at 18:14 You first have to define a probability measure over the sample space $\Omega = \mathbb{N}$. In your question, the sigma algebra contains all singleton sets, i.e., $\{i\}$ for $i \geq 0$. Suppose there is a probability measure on $\Omega$ such that $\epsilon =\Pr(\{0\})= \Pr(\{1\}) = \Pr(\{2\}) = \cdots$ By definition of probability, we have $$ 1 = \Pr(\mathbb{N}) = \sum_{i=0}^\infty \Pr(\{i\}) = \sum_{i=0}^\infty \epsilon \tag{$1$} $$ This is a contradiction since if $\epsilon > 0$, then $\sum_{i=0}^\infty \epsilon$ can not be $1$ and if $\epsilon = 0$, then $\sum_{i=0}^\infty \epsilon = 0$ violating $(1)$. drhab PSPACEhardPSPACEhard $\begingroup$ So you mean, that may be, selecting $2$ is more or less probable than selecting $1$? $\endgroup$ – Jaideep Khare Apr 12 '17 at 8:46 $\begingroup$ @JaideepKhare Note that there is a $\cdots$ behind them. In other words, there is no probability measure such that all natural numbers have the same probability. $\endgroup$ – PSPACEhard Apr 12 '17 at 8:48 $\begingroup$ @JaideepKhare It's up to you to choose whatever distribution on the natural numbers you see fit. Any one of these can work. $\endgroup$ – rwols Apr 12 '17 at 10:40 $\begingroup$ How is this problem normally resolved? Don't sample from $\mathbb{N}$? Or define a different probability measure? Something else? $\endgroup$ – Burnsba Apr 12 '17 at 12:40 $\begingroup$ @Burnsba There are other probability measures defined on $\mathbb{N}$ such as Poisson distribution and Geometric distribution. $\endgroup$ – PSPACEhard Apr 12 '17 at 12:41 What you're missing here is that when you say $$\text{Probability}=\frac{n(E)}{n(S)}$$ You're forgetting that $N(E)$ and $N(S)$ are both infinite, so you're claiming: $$\text{Probability}=\frac\infty\infty$$ You can't make the assumption that this equals one. Mitchell FaasMitchell Faas $\begingroup$ See : en.wikipedia.org/wiki/Cardinality $\endgroup$ – Jaideep Khare Apr 12 '17 at 8:28 $\begingroup$ You're right that $n(E)=n(S)$ and that both equal aleph-0. But it doesn't matter that the cardinalities are equal, you still can't divide them. You're using countable infinity as a number, which is a dangerous game. It's more coherently seen as an ever continuing process. 
After all, $\frac{n(E)+1}{n(S)}=1$ if we used it as a number, but we know that $\frac{a}{b}\neq\frac{a+1}b$ $\endgroup$ – Mitchell Faas Apr 12 '17 at 8:32 $\begingroup$ That's what's my question is, why can't we, if we compare them? $\endgroup$ – Jaideep Khare Apr 12 '17 at 8:33 $\begingroup$ Because it's not a number, it's a process. $\infty+1=\infty$ might seem logical, but the following also holds: $\infty-\infty=\infty$. The problem is that they never end, so normal arithmetic is impossible. We say that the cardinalities are equal, but that only gets us so far as to say that they continue in the same way. Example: countable infinity is usually proven by making a list and showing how it progresses. We can't use that for uncountable infinities, those progress differently. $\endgroup$ – Mitchell Faas Apr 12 '17 at 8:40 $\begingroup$ @JaideepKhare: We can't precisely because trying to leads to contradictions. $\endgroup$ – Will R Jul 6 '17 at 19:25 You can define a probability law on the set of natural numbers such that $P(Even)=1$ by many ways, but it is completely unrelated to the existence of one-to-one mapping between even natural numbers and all natural numbers. You seem to be assuming that probability is always defined as a fraction like $\dfrac{n(E)}{n(S)}$ - it is not. You can define probabilities whatever you like, given the probability axioms are not violated. By defining probabilities as fractions you seem to suggest the uniform probability law, but as others already said the uniform probability law on an infinitely countable set is impossible. Returning to your question "where I am wrong?" - you are wrong in assuming that one-to-one mapping between even and all natural numbers is somehow related to defining a probability law on the set of natural numbers. kludgkludg The OP implicitly required equiprobability for every natural. But let's try something else. We ask the following (starting from $1$, not $0$) What is the probability of choosing an even natural number if the probability of each number $n$ being chosen is $p_n = 2^{-n}?$ The good thing here is that we have just provided ourselves with a proper probability measure over $\mathbb N$, because $$\sum_{n=1}^{\infty}p_n = \sum_{n=1}^{\infty}\frac {1}{2^n} = 1$$ The sum of probabilities related to even numbers under this scheme is $$P[n \;\;\text{is even}] = \frac 1{2^2} + \frac 1{2^4} + \frac 1{2^6} +... =\frac 1{(2^2)} + \frac 1{(2^2)^2} + \frac 1{(2^2)^3}+... $$ $$=\frac 1{4} + \frac 1{4^2} + \frac 1{4^3}+... = \frac 13$$ ...only. This is in reality a special case of the Geometric distribution with parameter $p=1/2$. Alecos PapadopoulosAlecos Papadopoulos $\begingroup$ I've seen that sum used in showing 1/3 of all reduced fractions have even denominators. $\endgroup$ – James Waldby - jwpat7 Apr 13 '17 at 5:21 Define the natural density of any set $S$ of positive integers as follows: Given natural number $n,$ let $s(n) =$ number of elements of $S$ that are $\le n.$ (Note that $s(n) =$ cardinality of the intersection of $S$ and $\{1, 2, \dots, n\}.$) Now define the natural density of $S: d(S) =$ limit of $s(n)/n$ as $n\to\infty $ (assuming this limit exists for the moment). We now define the probability that a natural number drawn is from $S$ by $d(S).$ Example: $S = \{\text{evens}\}.$ It is easy to prove that $d(S) = 1/2$ here. Example: $S = \{\text{multiples of pos. 
int.} \;m\}.$ Now $d(S) = 1/m.$ Example: $S$ is finite, so $d(S) = 0.$ I believe this is the sense in which we say that the probability of picking an even number is $1/2,$ and it would conform to our intuition. Dr. Michael W. Ecker/ Associate Professor of Mathematics (retired, effective July 1, 2016)/ Pennsylvania State University Wilkes-Barre Campus/ Lehman, PA 18627 edited Jul 8 at 17:12 Adrian Keister Dr Mike EckerDr Mike Ecker $\begingroup$ I think this answer deserves a lot more upvotes, as you've managed to define a probability that corresponds both to intuition as well as to what you would undoubtedly get in a computer simulation. The probably of getting an even number out of picking a random integer simply has to be $1/2$. $\endgroup$ – Adrian Keister Jul 8 at 18:56 I'm going to approach this question the other way around. What is the probability that you will pick one number among natural numbers? If we go by your formula: $$\text{Probability}=\frac1\infty=0$$ Picking one number has the probability of zero, but what if we tried picking $n$ numbers and expanded $n$ to infinity? So let $i$ be a number and the probability of picking it $Pr(i)$: $$\lim_{n \rightarrow \infty}(\text{Pr(i)}\times n) = \lim_{n \rightarrow \infty}(0\times n) = 0 $$ This isn't really an answer and we really shouldn't be going at it this way around, but I wanted to show you that, if you tried to answer it, this question would have more than one answer, therefore it can't have an answer. Also, I've already upvoted Yikai's answer and Mitchell Faas' answer, which are the two right ways of approaching this matter. John HamiltonJohn Hamilton Obviously, if you limit yourself to a drawing from any bounded and contiguous subset of the integers, i.e. a range $[m,n]$, the number of odd and even numbers will differ by at most one and the probabilities can be exactly or very close to $\frac12$. This must be why our intuition tells us that the drawing will be balanced, because we actually reason in terms of finite sets, and much less in terms of infinite ones. It is also true that in the limit for $n\to\infty$, the probabilities are one half. Yves DaoustYves Daoust The Laplace formula is only valid for finite probabilistic spaces in which the postulate of Indifference is fulfilled. It would be necessary to define a probability (or measure) in N that fulfills the axioms of Kolmogorov. Guillemus CallelusGuillemus Callelus Not the answer you're looking for? Browse other questions tagged probability infinity infinite-groups or ask your own question. Why isn't there a uniform probability distribution over the positive real numbers? Probability of picking a random natural number As many even numbers as natural numbers. How to divide aleph numbers Is there a symbol for 'could be' Randomly selecting a natural number The cardinality of the even numbers is half of the cardinality of the natural numbers? Prove countable set function: natural numbers and pairs of natural numbers Probability of randomly selecting a random real number Probability of selecting a number in a repeating decimal series What is wrong with this proof that the power set of natural numbers is countable? Probability of number of boys = number of girls Finding the given probability(Uniform distributions involved). Defining a probability on the natural numbers set Probability that a sample is generated from a distribution
CommonCrawl
Anti-aging and tyrosinase inhibition effects of Cassia fistula flower butanolic extract Pornngarm Limtrakul1, Supachai Yodkeeree1, Pilaiporn Thippraphan1, Wanisa Punfa1 & Jatupol Srisomboon2 Natural products made from plant sources have been used in a variety of cosmetic applications as a source of nutrition and as a whitening agent. The flowers of Cassia fistula L, family Fabaceae, have been used as a traditional medicine for skin diseases and wound healing and have been reported to possess anti-oxidant properties. The anti-aging effect of C. fistula flower extract on human skin fibroblast was investigated. The butanolic extraction of C. fistula flowers was completed and the active compounds were classified. The cytotoxicity of fibroblasts was evaluated by SRB assay for the purposes of selecting non-toxic doses for further experiments. The collagen and hyaluronic acid (HA) synthesis was then measured using the collagen kit and ELISA, respectively. Moreover, the enzyme activity, including collagenase, matrixmelloproteinase-2 (MMP-2) and tyrosinase, were also evaluated. It was found that the flower extract did not affect skin fibroblast cell growth (IC50 > 200 μg/mL). The results did show that the flower extract significantly increased collagen and HA synthesis in a dose dependent manner. The flower extract (50–200 μg/mL) also significantly inhibited collagenase and MMP-2 activity. Furthermore, this flower extract could inhibit the tyrosinase activity that causes hyperpigmentation, which induces skin aging. The C. fistula flower extract displayed a preventive effect when used for anti-aging purposes in human skin fibroblasts and may be an appropriate choice for cosmetic products that aim to provide whitening effects, and which are designated as anti-aging facial skin care products. Skin aging is a complicated biochemical progression resulting from many individual intrinsic and extrinsic factors such as age, hormones and exposure to UV light [1]. The progression occurs in the epidermal and dermal layers and is mainly related to extracellular matrix (ECM) degradation [2]. The enzymes involved in ECM degradation are matrix metalloproteinases (MMPs) such as gelatinases (MMP-2) and collagenase [3]. Skin loses its tensile strength due to the effect of ECM degradation by MMPs. In this process, the wrinkling of skin occurs and roughness and dryness also markedly arise along with certain pigment abnormalities such as hypo- or hyper-pigmentation [2, 4]. For the treatment of hyper-pigmentation, tyrosinase inhibitors have been investigated. Tyrosinase is a rate-limiting enzyme that converts tyrosine to melanin [5]. Tyrosine inhibitors thus play an importance role as skin-lightening agents [6] while hyaluronic acid (HA) synthesis regulates skin hydration and the occurrence of wrinkles. HA, a non-sulphated glycosaminoglycan (GAG), is composed of repeating units of disaccharides including D-glucuronic acid and N-acetyl-D-glucosamine [7]. HA also regulates the repair of tissues, including the enhancement of the immune system response by activation of inflammatory cells and in response to the injury of fibroblasts [8–10]. Natural products made from plant sources such as Glycyrrhiza glabra [11] or green tea [12] have been used in cosmetic applications as a source of nutrition and as a whitening agent. A significant amount of evidence has pointed to the beneficial effects of orally or topically administered phytochemicals from plant extracts, especially with regard to the improvement of skin conditions. 
Some examples of the beneficial effects are that skin aging and skin inflammation can be reduced. C. fistula (golden shower), family Fabaceae, is found in numerous Asian countries such as Thailand, China, Myanmar and India. In previous studies, C. fistula flower extract was found to possess antioxidant, anticancer, antibacterial, antifungal, and antidiabetic properties [13, 14]. The effect of C. fistula in Ayurvedic medicine is known to be involved with treating various disorders including, skin diseases, leprosy, haematemesis, pruritus and diabetes [15, 16]. Various parts of C. fistula have displayed pharmacological properties [17]. The flower, seed, fruit and pulp have been used to treat skin diseases including leprosy [18]. The pulp has been recognized for its antidiabetic properties [15] and has been used in a tonic that has been applied in treatments of gout and rheumatism [19]. The leaves and ripe pods have been traditionally used as a laxative [20, 21]. The phenolics and flavonoid phytochemicals of C. fistula have also been reported to be useful in treating skin diseases [14] . Therefore, in this study we are interested in the roles of C. fistula flower extract in cosmeceutical applications. The preventive effects of ECM degradation along with skin aging enzymes that include collagenase and MMP-2 activities, as well as tyrosinase, have been examined. Moreover, collagen and HA synthesis have been determined. Dulbecco's Modified Eagle Medium (DMEM), DQ gelatin and penicillin-streptomycin were supplied from Gibco (Grand Island, NY, USA). Fetal bovine serum (FBS) was supplied from Thermo Scientific (USA). Sirius Red/Fast Green Collagen Staining Kit was purchased from Chondrex, Inc. (Redmond, WA, USA). Sulforhodamine B reagent, hyaluronicacid and mushroom tyrosinase were obtained from Sigma-Aldrich (St. Louis, MO, USA). Plant materials C. fistula flowers were obtained from Lampang Herb Conservation, Lampang Province, Thailand. A voucher specimen number (023197) was certified by Wannaree Charoensup, Botanist at the herbarium of the Flora of Thailand, Faculty of Pharmacy, Chiang Mai University, Thailand. Preparation of Cassia fistula flower extract C. fistula flowers were dried in an airy room and then 500 g of dried flowers were finely ground into powder. After that, the powder was soaked in 50% ethanol and then shaken at 70 rpm for 24 h. After 24 h, the samples were then filtered through filter paper to separate the residue. The residue was soaked in 50% ethanol and shaken at 70 rpm for 24 h and this step was repeated 2 times. The filtrated samples were pooled together and evaporated by a rotary vacuum evaporator (BUCHI, Switzerland) to obtain the water fractions. Hexane was added to the water fraction at a ratio of hexane 2:1. The samples were shaken and allowed to separate over 30 min. The samples were separated into two fractions, which were hexane and water fractions. Then, the water fraction was collected and 10% charcoal was added for a 30 min period with mild stirring at room temperature. The samples were filtered through filter paper and celite to separate the charcoal. The samples were mixed with saturated butanol at a ratio of saturated butanol 2:1 and the samples were allowed to separate over a 12 h period and this step was repeated 3 times. The samples were separated into two fractions, which included water and butanol fractions. 
Water was further added to the butanol fractions to an equal volume and these fractions were allowed to evaporate through the use of a rotary vacuum evaporator. The evaporated samples were freeze-dried to obtain the C. fistula flower extract powder. Quantification of total phenolic content in C. fistula flower extract using UV-visible spectrophotometer Total phenolic content in C. fistula flower extract was determined using the Folin-Ciocalteu assay. Briefly, 0.4 mL of C. fistula flower extract were mixed with 0.3 mL of 10% Folin-Ciocalteau reagent and incubated in a dark at room temperature for 3 min. Then, 0.3 mL of sodium carbonate solution was added and the solution was further incubated in the dark at room temperature for 30 min. The absorbance was evaluated at 765 nm using a UV-visible spectrophotometer. The concentration of the total phenolic content was calculated and compared with a standard curve for gallic acid (GA) at 0–20 μg/mL. The total phenolic content was reported as milligrams of GA equivalents per gram of C. fistula. flower extract (mg GAE/g extract). Quantification of phenolic compounds in C. fistula flower extract using HPLC analysis Various types of phenolic compounds in C. fistula flower extracts were analyzed using HPLC analysis and were then compared with standard gallic acid, catechins, protocatecheuic acid, vanillic acid, chlorogenic acid, ferulic acid and coumaric acid. The flower extracts were dissolved in 50% ethanol and were further assessed by HPLC (Agilent Tecnologies, CA, USA) using reversed-phase C18 column (WATER, MA, USA). The mobile phase consisted of methanol (A) and 0.1% trifluoroacetic acid (TFA) in water (B) with gradient condition. The flow rate was set at 1.0 mL/min and the detection wavelength was recorded at 280 with a UV detector. The concentration levels of the phenolic compounds were calculated and compared with the standard curve considering the standard concentrations and peak areas (mg/g extract). Quantification of total flavonoid content in C. fistula flower extract using colorimetric assay Total flavonoid content was determined by modified aluminium chloride (AlCl3) colorimetric assay as was previously described [22]. Briefly, 0.25 mL of flower extract were mixed with 0.125 mL of 5% sodium nitrite (NaNO2) and the solution was incubated for 5 min at room temperature. Then, 0.125 mL of 10% AlCl3 were mixed into the mixture and it was kept in the dark for 5 min. After which, 1 mL of sodium hydroxide (NaOH) was added and the solution was incubated for 15 min at room temperature in the dark. The absorbance was measured at 510 nm using a UV-visible spectrophotometer. The total flavonoid content was calculated and compared with the standard catechin values and expressed as mg catechin equivalent per gram of extract (mg CE/g extract). Primary human skin fibroblasts were aseptically isolated from an abdominal scar after a surgical procedure involving a cesarean delivery at the surgical ward of CM Maharaj Hospital, Chiang Mai University (Chiang Mai, Thailand) (Study code: BIO-2558-03549 approved by Medical Research Ethics Committee, Chiang Mai University). Fat was removed from the starting material and it was soaked in DMEM containing anti-penicillin and streptomycin. After that, the skin was immersed in DMEM containing 1% protease (Dispase, Gibco, Grand Island, NY, USA) for 48 h at 4 °C. Epidermal layers were removed and the normal fibroblasts were isolated from the dermis. 
The fibroblasts were cultured in DMEM supplemented with 10% FBS, 2 mM L-glutamine, 50 U/mL penicillin, and 50 μg/mL streptomycin. The cells were maintained in a 5% CO2 humidified incubator at 37 °C. Cell viability assay The cell viability assay of C. fistula flower extract against skin fibroblast cells was measured using a sulforhodamine B (SRB) assay as was previously described [23]. Briefly, the cells (8x103cells/well) were seeded in a 96-well plate and incubated at 37 °C, 5% CO2 for 24 h. After that, the cells were treated with or without various concentrations (0–200 μg/mL) of C. fistula flower extract for 48 h. After 48 h, 10% (w/v) Trichloroacetic acid (TCA) was added to the cells and the cells were then incubated at 4 °C for 1 h. The medium was removed and the cells were rinsed with slow running tap water. 0.057% (w/v) SRB solution (100 μL) was added to each well and the cells were incubated for 30 min at room temperature. The SRB solution was removed and then the cells were washed 4 times with 1% (v/v) acetic acid and they were allowed to dry at room temperature. The dye was dissolved with 10 mM Tris base solution (pH 10.5) and the absorbance was measured at 510 nm using a microplate reader. Collagen synthesis assay Total intracellular collagen in the fibroblast cells was determined using Sirius Red/Fast Green Collagen Staining Kit (Chondrex, Inc, Redmond, WA, USA) according to the manufacturer's instructions. Briefly, skin fibroblast cells were seeded in 24 well plates for 24 h at 37 °C, 5% CO2. After that, the cells were pre-treated with the serum free DMEM medium for 24 h. The medium was removed and the cells were then incubated with or without C. fistula flower extract (0-150 μg/mL) for 48 h. After 48 h, the cultured medium was removed. Then, the cells were washed with PBS and fixed with Kahle fixative for 10 min. The fixative was removed and the cells were washed with PBS. The Sirius Red/Fast Green dye solution was added to each well and the specimens were incubated for 30 min at room temperature. The dye was removed and the cells were washed four times with DI water or until the fluid was colorless. Finally, the extraction buffer was added and the absorbance was measured at 540 nm and 605 nm using a UV-visible spectrophotometer. Collagenase activity assay The collagenase activity was measured using modified fluorogenic DQ™-gelatin assay as has been previously described [24]. Briefly, various concentrations of C. fistula flower extract (0–200 μg/mL) were added in 96 well plates. One U/ml of collagenase was added in each well (100 μL/well). After that, 15 μg/mL of gelatin (DQ gelatin) was added and the mixtures were incubated for 10 min. The rate of proteolysis was determined by measuring the absorbance at 2 min intervals for 20 min using a microplate reader at an excitation wavelength of 485 nm and an emission wavelength of 528 nm. Enzyme activity was estimated by examining the slope of linear regression between time and absorbance over a 2-6 min period. MMP-2 activity assay The MMP-2 activity was determined by gelatin zymography as was previously described [25]. Fibroblast cells were cultured in DMEM serum-free medium for 24 h. After that, the culture supernatant was collected. The culture supernatant was subjected to 10% polyacrylamide gels containing 0.1% w/v of gelatin under non-reducing conditions. The gels were washed twice with 2.5% v/v of Triton X-100 for 30 min at room temperature to remove SDS. 
The gel slab was cut into slices which corresponded to the lanes and then the slices were put into different tanks and were incubated with an activating buffer (50 mM Tris-HCl, 200 mM NaCl, 10 mM CaCl2, pH 7.4) containing various concentrations of C. fistula flower extract (0–200 μg/mL) at 37 °C for 18 h. After that, a strip of the gels was washed and stained with Coomassie Brillant Blue R (0.1% w/v) and then was destained in 30% methanol and 10% acetic acid. MMP-2 activity appeared as a clear band against a blue background. Digestion bands were quantitated by the Image J program. Hyaluronic Acid (HA) synthesis assay by ELISA The effect of C. fistula flower extract on HA synthesis was determined using an ELISA kit as previously described [26]. The fibroblast cells (5.0 × 104 cells/well) were seeded in 24-well-plates for 24 h at 37 °C, 5%CO2. The cells were pre-treated in serum free medium for 6 h and were then treated with or without various concentrations of C. fistula flower extract (0–200 μg/mL) for 48 h. The cultured medium was collected and HA synthesis was measured by ELISA. The absorbance was measured at 450 and 570 nm using a microplate reader. The HA concentration in the cultured supernatant acquired from the treated fibroblast cells was calculated and compared with the standard curve of HA. Tyrosinase inhibition assay Tyrosinase inhibition assay was determined as was previously described [27]. Various concentrations of C. fistula flower extract were added to a 96-well-plate. The reaction was carried out in a sodium buffer (pH 6.4) containing 100 U/mL mushroom tyrosinase. L-DOPA substrate (1 mM) was added into the reaction mixture and it was then incubated for 10 min at room temperature. The change of the absorbance at 490 nm was measured every 1.5 min for 15 min using a microplate reader. The percent inhibition of tyrosinase was calculated by the following formula: $$ \mathrm{Tyrosinase}\ \mathrm{inhibition}\ \left(\%\right) = \left[\left(\mathrm{A}-\mathrm{B}\right)/\mathrm{A}\right] \times 100, $$ A = slope of control at 490 nm B = slope of test at 490 nm Antioxidant activity assay The antioxidant activity of the C. fistula flower extract was determined using 1, 1-diphenyl-2-picrylhydrazyl (DPPH)-free radical activity, as previously described [28]. Various concentrations of the flower extract (0–100 μg/mL) were prepared in methanol. 1 mL of 0.002% of DPPH was added to 1 mL of the flower extract solution and the mixture was kept in the dark for 30 min. Absorbance was measured at 517 nm using a spectrophotometer. The % inhibition was calculated and compared with standard vitamin E (1–100 μg/mL) using the following formula: $$ \mathrm{Percent}\ \mathrm{inhibition}\ \mathrm{of}\ \mathrm{DPPH}\ \mathrm{activity} = \frac{{\mathrm{A}}_{\mathrm{control}\ \mathrm{at}\ 517\ \mathrm{nm}}-{\mathrm{A}}_{\mathrm{sample}\ \mathrm{at}\ 517\ \mathrm{nm}}}{{\mathrm{A}}_{\mathrm{control}\ \mathrm{at}\ 517\ \mathrm{nm}}}\times 100 $$ The antioxidant activity of the C. fistula flower extracts was confirmed using ABTS assay, as was previously described [29]. ABTS [2, 2′- azino-bis (ethylbenzthiazoline-6-sulfonic acid)] radical cation was prepared by mixing 7 mM ABTS stock solution with 2.45 mM potassium persulfate (K2S2O8) (1/1, v/v). The mixture was incubated in the dark for 12–16 h until the reaction was completed. The assay was conducted on 990 μL of ABTS solution and 10 μL of the flower extract (0–4 μg/mL). After 6 min, the absorbance was recorded immediately at 734 nm using a spectrophotometer. 
The percent inhibition of ABTS activity of the flower extract was calculated using the following equation: $$ \mathrm{Percent}\ \mathrm{inhibition}\ \mathrm{of}\ \mathrm{ABTS}\ \mathrm{activity} = \kern0.5em \frac{{\mathrm{A}}_{\mathrm{control}\ \mathrm{at}\ 734\ \mathrm{nm}}-{\mathrm{A}}_{\mathrm{sample}\ \mathrm{at}\ 734\ \mathrm{nm}}}{{\mathrm{A}}_{\mathrm{control}\ \mathrm{at}\ 734\ \mathrm{nm}}}\times 100 $$ All data are presented as mean ± standard deviation (S.D.) values. Statistical analysis was analyzed by Prism version 6.0 software using one-way ANOVA with Dunnett's test. Statistical significance was determined at * p < 0.05, ** p < 0.01, ***p < 0.001 or **** p < 0.0001. Phytochemical characterization of C. fistula flower extract After the step involving freeze-drying the specimens, the C. fistula flower extract was weighed and further subjected to phytochemical and biological studies. 18.08 g of C. fistula flower extract was obtained from 500 g of raw material and the percent yield of the flower extract was 3.62. The HPLC fingerprint of the C. fistula flower extracts was determined by HPLC analysis (Fig. 1). HPLC profile that showed the peak of the phenolic compounds was labeled in terms of the individual components at each retention time. The characterization of the phenolic compounds of C. fistula flower extract is shown in Table 1. The total phenolic content in the flower extract was 275.32 ± 14.21 mg GAE/g extract. The hydroxybenzoic acid derivatives, including vanillic acid and protocatechuic acid, were found to be 0.95 ± 0.09 and 2.56 ± 0.42 mg/g extract, respectively, whereas gallic acid was recorded at 0.60 ± 0.02 mg/g extract. The hydroxycinnamic acid derivatives, including coumaric acid, ferulic acid, and chlorogenic acid, were detected at 0.39 ± 0.08, 0.80 ± 0.09, and 0.83 ± 0.03 mg/g extract, respectively. The flavonoid content was 27.62 ± 3.56 mg CE/g extract. Catechin, as a flavonol derivative, was measured at 1.10 ± 0.00 mg/ g extract. HPLC fingerprint of C. fistula flower extract. The HPLC fingerprint of the flower extract was evaluated using reversed-phase C18 column and its mobile phase consisted of methanol and 0.1% trifluoroacetic acid (TFA). The flow rate was set at 1.0 mL/min and the detection wavelength was 280 and 325 nm Table 1 The phenolic compound content in C. fistula flower extracts Effect of C. fistula flower extract on fibroblast cells cytotoxicity Cytotoxicity testing of C. fistula flower extract on fibroblast cells was measured using SRB assay. Fibroblast cells were treated with or without various concentrations of the flower extract (0–200 μg/mL). After a treatment of 48 h, the flower extract was found to have no effect on skin fibroblast cell growth (0–200 μg/mL). The IC20 and IC50 of the flower extract was found to be more than 200 μg/mL, which could also be applied in other experiments without toxicity (Fig. 2). Cytotoxicity of C. fistula flower extract on skin fibroblasts. The cells were treated with or without various concentrations of the flower extract (0–200 μg/mL) for 48 h. The cell viability was determined using SRB assay. All assays have been performed in triplicate and the mean ± standard deviations are shown Effect of C. fistula flower extract on collagen synthesis in human skin fibroblast cells Collagen plays a key role in both skin wound healing and the skin rejuvenation process. Collagen synthesis from skin fibroblast cells was achieved using a Sirius Red/Fast Green Collagen Staining Kit. 
Collagen synthesis from fibroblast cells was found to have significantly increased in a dose dependent manner after cells were treated with various concentrations (100–150 μg/mL) of the flower extract (Fig. 3). The effects of C. fistula flower extract on collagen synthesis were determined using a collagen kit. Fibroblast cells were pre-treated in serum free medium for 24 h. and were further treated with or without various concentrations of flower extract (0–150 μg/mL) for 48 h. The cells were stained with Sirius Red/Fast Green dye and the absorbance was determined using a UV-spectrophotometer. All assays have been performed in triplicate and the mean ± standard deviations are shown as ****p < 0.0001 versus the non-treated cells Effect of C. fistula flower extract on collagenase activity Collagenases are the enzymes that digest native collagen in the triple helix region. Therefore, the inhibition of collagenase activity could protect against collagen breakdown. Collagenase activity was measured using fluorogenic DQ™-gelatin assay. Collagenase activity was dramatically decreased in a dose dependent manner after treating the fibroblasts with the flower extract. At high concentrations, the flower extract (200 μg/mL) could completely inhibit collagenase activity (Fig. 4). The effect of C. fistula flower extract on collagenase activity was determined using fluorogenic DQ™-gelatin assay. Various concentrations of the flower extract (0–200 μg/mL) were mixed with collagenase. After that, DQ gelatin substrate was added and incubated for 10 min. The rate of proteolysis was evaluated using a microplate reader. All assays have been demonstrated in triplicate and the mean ± standard deviations are shown as ****p < 0.0001 versus the control without the extract Effect of C. fistula flower extract on MMP-2 activity MMP-2 is an enzyme that is involved in the breakdown of the extracellular matrix (ECM) and plays an important role in influencing normal homeostasis, aging and the wound-healing of the skin. MMP-2 activity was measured using gelatin zymography. It was determined that MMP-2 secreted from skin fibroblast cells could digest gelatin in the gel. However, after the gel was incubated with various concentrations of the flower extract (50–200 μg/mL), the level of MMP-2 activity was significantly reduced in a dose dependent manner (Fig. 5). The effects of C. fistula flower extract on MMP-2 activity were determined using gelatin zymography. Fibroblast cells were cultured in DMEM serum-free medium for 24 h and the culture supernatant was subjected to 10% polyacrylamide gels containing 0.1% w/v of gelatin under non-reducing conditions. The gels were incubated with or without various concentrations of red rice extract (0–200 μg/mL) and were then stained with Coomassie Brillant Blue R. The digestion bands were quantitated by the Image J program. All assays have been performed in triplicate and the mean ± standard deviations are shown as *p < 0.05, ***p < 0.001 and ****p < 0.0001 versus the non-treated culture supernatant Effect of C. fistula flower extract on HA synthesis in human skin fibroblast cells HA synthesis from skin fibroblast cells was evaluated using an ELISA kit. After the cells were treated with various concentrations of the flower extract for 48 h, HA synthesis was found to have significantly increased in a dose dependent manner (50–200 μg/mL). 
After the fibroblasts were treated with flower extracts at 200 μg/mL, HA synthesis was induced at a level that was four-fold when compared with the non-treated cells (Fig. 6). The effects of C. fistula flower extract on HA synthesis were determined using an ELISA kit. Fibroblast cells were pre-treated in serum free medium for 6 h and were further treated with or without various concentrations of the flower extract (0–200 μg/mL) for 48 h. The cultured medium was collected for ELISA and the absorbance was measured at 450 and 570 nm using a microplate reader. All assays have been performed in triplicate and the mean ± standard deviations are shown as **p < 0.01, ***p < 0.001 and ****p < 0.0001 versus the non-treated cells Effect of C. fistula flower extract on tyrosinase activity Tyrosinase is an enzyme that is involved in the rate-limiting step for the control of melanin production. Therefore, the inhibition of tyrosinase activity tends to induce skin whitening due to a reduction of melanin synthesis. When the tyrosinase enzyme was incubated with the flower extract, it could inhibit tyrosinase activity in a dose dependent manner at a concentration of 50–200 μg/mL (Fig. 7). Tyrosinase inhibition assay was determined in the C. fistula flower extract. Various concentrations of the flower extract were mixed with 100 U/mL of mushroom tyrosinase in a sodium buffer (pH 6.4). L-DOPA substrate (1 mM) was added into the reaction mixture and it was then incubated for 10 min. The change of the absorbance was measured at 490 using a microplate reader. All assays have been demonstrated in triplicate and the mean ± standard deviations are shown ****p < 0.0001 versus the control without the extract Anti-oxidant activity of C. fistula flower extract The free radical scavenging activity of C. fistula flower extract was then examined by DPPH and ABTS assay. C. fistula flower extract dose dependently inhibited oxidant activity. Vitamin e and trolox were used as a positive control in each experiment. C. fistula flower extract exhibited scavenging activity (DPPH assay) at a value of 65% at 100 μg/mL. At 25 μg/mL of the flower extract, the radical scavenging activity was found to still be approximately 33% and the IC50 of the flower extract and vitamin E were recorded at 70 and 72 μg/mL, respectively. These findings indicate that C. fistula flower extract is a potent antioxidant and in this capacity is comparable to vitamin E (Fig. 8a). To confirm the anti-oxidant activity of the flower extract, the ABTS assay was determined and showed % inhibition of 47% at 4 μg/mL and IC50 of the flower extract and the trolox were 4.8 and 3 μg/mL, respectively, which was in accordance with the data acquired from the DPPH assay (Fig. 8b). Free radical scavenging activity of C. fistula flower extracts was determined using DPPH assay (a) and ABTS assay (b). 1 mL of 0.002% of DPPH was added to 1 mL of the flower extract solution and it was kept in the dark for 30 min. Absorbance was measured at 517 nm using a spectrophotometer. Percent inhibition of the DPPH radical was calculated and compared with that of standard vitamin E. For ABTS assay, the flower extract was mixed with ABTS solution and was then incubated for 6 min. Absorbance was measured at 734 nm using a spectrophotometer. Percent inhibition of the ABTS radical was calculated and compared with that of the standard trolox. 
All assays have been performed in triplicate and the mean ± standard deviations are shown as ****p < 0.0001 versus control Extrinsic and/or environmental factors cause skin aging signs which can include wrinkles and pigment spot formations [30]. In previous studies, UV radiation, which is known to induce skin aging, has been a major topic of research with a focus on its pathogenesis and molecular mechanisms. The generation of ROS can stimulate skin inflammation leading to the activation of transcription factors which regulate the degradation of the skin collagen and the extracellular matrix (ECM) [30]. These events result in a loss of the skin's ability to resist stretching, which ultimately leads to skin aging. C. fistula flower extract has been traditionally used for the treatment of skin diseases, abdominal pain and wound-healing [17]. Our results show that the major phytochemicals represented in the C. fistula flower extract were the phenolic compounds and flavonoids. The main phenolic components in the C. fistula flower extract were protocatechuic acid followed by vanillic acid, chlorogenic acid and ferulic acid. In addition, Bahorun et al. [17] have reported that C. fistula flowers contain various types of flavonoids including kaempferol, rhein, fistulin, alkaloids and triterpenes. Among those phytochemical compounds, kaempferol, catechins, ferulic acid, chlorogenic acid and protocatechuic acid have been proven to exhibit anti-aging activities. In this study, the anti-aging activity of the C. fistula flower extract was investigated in order to determine the effects of the extract on collagen, HA and melanin production. Our results indicate that high concentrations of C. fistula flower extract (200 μg/mL) did not have an effect on the viability of human skin fibroblast cells. Therefore, C. fistula flower extracts could be safe in applications to the human skin. Collagen synthesis in skin fibroblasts plays a major role in skin rejuvenation. The reduction of types I and III procollagen synthesis is a critical feature of aged skin leading to skin thinning and the increased fragility of skin [31]. Hence, the inhibition of collagen synthesis or a loss in the function of collagen results in chronologically aged skin. Our results indicate that the C. fistula flower extract significantly induced collagen synthesis from the skin fibroblasts and also dramatically inhibited collagenase activity, the enzyme involved in collagen breakdown. Skin aging induced by UV radiation also occurs through an increase in MMP production, including MMP-1, MMP-2, MMP-3 and MMP-9, which causes an imbalance between collagen synthesis and ECM degradation [32]. This is the first report indicating that C. fistula flower extracts significantly inhibit MMP-2 activity in a dose dependent manner. For applications in cosmetic formulations, the C. fistula extract at a concentration of 50 μg/mL should be considered. These findings suggest that C. fistula flower extract provides a useful collagen-boosting benefit to the skin via reduced collagen breakdown. Hyaluronic acid (HA), a glycosaminoglycan (GAG) and a major component of the extracellular matrix, is induced during wound-healing and skin regeneration and keeps skin hydrated [33]. Environmental factors such as UV radiation induce the type of skin aging that results in a loss of skin elasticity, causing skin to become wrinkled by decreasing HA synthesis [34]. This result indicates that the C.
fistula flower extract dramatically increased HA synthesis in a dose dependent manner. Hence, the flower extract can enhance skin moisture and can result in skin being less dry by increasing HA synthesis. Hyperpigmentation contributes to human skin aging and occurs as a result of both internal and external factors including those related to hormones, UV exposure, drugs, and the presence of various chemicals [4]. Melanin biosynthesis is a pathway that takes place in melanocytes. Hyperpigmentation is particularly obvious in darker skin and is often difficult to treat. Cosmetic scientists have conducted various in vivo and in vitro studies on skin lightening agents. The key enzyme that regulates melanin synthesis is tyrosinase, which is involved in two steps of melanin synthesis: the hydroxylation of tyrosine to β-3,4-dihydroxyphenylalanine (DOPA) and the oxidation of DOPA to DOPA quinone [4]. Our results indicate that the C. fistula flower extract can successfully reduce tyrosinase activity. This result is similar to that of certain previous studies, which showed that C. fistula pods displayed skin whitening activity in vitro and in vivo by using tyrosinase activity as an endpoint bioassay [35]. Therefore, it can be concluded that this C. fistula flower extract can reduce hyperpigmentation in human skin. Previous studies have shown that some parts of the C. fistula plant exhibited anti-oxidant activity [36–38]. The aqueous and methanolic extracts of the C. fistula bark showed the free radical scavenging effect of DPPH in a dose dependent manner [38]. The hydroalcoholic extract of the C. fistula flower and fruit pulp showed antioxidant activity by inhibiting DPPH and hydroxyl radicals [36, 37]. Additionally, our study on the anti-oxidant activity of the butanolic extract of the C. fistula flower similarly displayed the free radical scavenging effect of DPPH and ABTS in a dose dependent manner. These results indicate that C. fistula flower extract displays a high potential for anti-aging in normal skin fibroblast cells, both in terms of the inhibition of wrinkles and a decrease in the number of pigment spots. C. fistula flower extract could prevent skin aging via an increase in collagen and HA production. Moreover, the flower extract also inhibited collagenase, MMP-2 and tyrosinase activity, all of which are involved in skin aging. Therefore, C. fistula extract, which has displayed a non-toxic effect, might be an alternative ingredient for use in cosmetics or supplements that are being developed for anti-aging applications. Abbreviations: ABTS, Azino-bis (ethylbenzthiazoline-6-sulfonic acid); DI water, deionized water; DMEM, Dulbecco's Modified Eagle Medium; DPPH, diphenyl-2-picrylhydrazyl; ECM, extracellular matrix; ERK, extracellular-signal-regulated kinases; FBS, fetal bovine serum; HA, hyaluronic acid; HPLC, high-performance liquid chromatography; JNK, c-Jun N-terminal protein kinase; mg, milligram; mL, milliliter; mM, millimolar; MMP-2, matrix metalloproteinase-2; MMPs, metalloproteinases; nm, nanometer; PBS, phosphate buffer saline; rpm, revolutions per minute; SDS, sodium dodecyl sulfate; SRB, sulforhodamine B; TCA, trichloroacetic acid; TFA, trifluoroacetic acid; μg, microgram; μL, microliter. Gragnani A, Cornick SM, Chominski V, Ribeiro de Noronha SM, Alves Corrêa de Noronha SA, Ferreira LM. Review of major theories of skin aging. Adv Aging Res. 2014;03:265–84. Quan T, Qin Z, Xia W, Shao Y, Voorhees JJ, Fisher GJ. Matrix-degrading metalloproteinases in photoaging. J Investig Dermatol Symp Proc. 2009;14:20–4. Nagase H, Woessner Jr JF. Matrix metalloproteinases. J Biol Chem. 1999;274:21491–4. Costin GE, Hearing VJ.
Human skin pigmentation: melanocytes modulate skin color in response to stress. FASEB J. 2007;21:976–94. Bae-Harboe YSC, Park HY. Tyrosinase: a central regulatory protein for cutaneous pigmentation. J Invest Dermatol. 2012;132:2678–80. Slominski A, Tobin DJ, Shibahara S, Wortsman J. Melanin pigmentation in mammalian skin and its hormonal regulation. Physiol Rev. 2004;84:1155–228. Papakonstantinou E, Roth M, Karakiulakis G. Hyaluronic acid: a key molecule in skin aging. Dermatoendocrinol. 2012;4:253–8. Weigel PH, Fuller GM, LeBoeuf RD. A model for the role of hyaluronic acid and fibrin in the early events during the inflammatory response and wound healing. J Theor Biol. 1986;119:219–34. Bai KJ, Spicer AP, Mascarenhas MM, Yu LY, Ochoa CD, Garg HG, et al. The role of hyaluronan synthase 3 in ventilator-induced lung injury. Am J Respir Crit Care Med. 2005;172:92–8. Turley EA. The role of a cell-associated hyaluronan-binding protein in fibroblast behaviour. Ciba Found Symp. 1989;143:121–33. discussion 33-7, 281-5. Morteza-Semnani K, Saeedi M, Shahnavaz B. Comparison of antioxidant activity of extract from roots of licorice (Glycyrrhiza glabra L.) to commercial antioxidants in 2% hydroquinone cream. J Cosmet Sci. 2003;54:551–8. Gianeti MD, Mercurio DG, Campos PMBGM. The use of green tea extract in cosmetic formulations: not only an antioxidant active ingredient. Dermatol Ther. 2013;26:267–71. Duraipandiyan V, Ignacimuthu S. Antibacterial and antifungal activity of Cassia fistula L.: an ethnomedicinal plant. J Ethnopharmacol. 2007;112:590–4. Luximon-Ramma A, Bahorun T, Soobrattee MA, Aruoma OI. Antioxidant activities of phenolic, proanthocyanidin, and flavonoid components in extracts of Cassia fistula. J Agric Food Chem. 2002;50:5042–7. Manonmani G, Bhavapriya V, Kalpana S, Govindasamy S, Apparanantham T. Antioxidant activity of Cassia fistula (Linn.) flowers in alloxan induced diabetic rats. J Ethnopharmacol. 2005;97:39–42. Bhalodia NR, Shukla VJ. Antibacterial and antifungal activities from leaf extracts of Cassia fistula l.: an ethnomedicinal plant. J Adv Pharm Technol Res. 2011;2:104–9. Bahorun T, Neergheen VS, Aruoma OI. Phytochemical constituents of Cassia fistula. Afr J Biotechnol. 2005;4:1530–40. Bhalodia NR, Nariya PB, Acharya RN, Shukla VJ. In vitro antibacterial and antifungal activities of Cassia fistula Linn. fruit pulp extracts. Ayu. 2012;33:123–9. Hanninen, Kaartinen K, Rauma AL, Nenonen M, Torronen R, Hakkinen AS, et al. Antioxidants in vegan diet and rheumatic disorders. Toxicology. 2000;155:45–53. Mangmeesri P, Wonsuphasawad K, Viseshsindh W, Gritsanapan W. The comparison between the laxative effectiveness of Cassia fistula pod pulp extract and Cassia angustifolia in Thai constipated patients. Planta Med. 2014;80:1404. Sakulpanich A, Gritsanapan W. Determination of anthraquinone contents in Cassia fistula leaves for alternative source of laxative drugs. Planta Med. 2009;75:994. Zhishen JMT, Jianming W. The determination of flavonoid contents in mulberry and their scavenging effects on Superoxide radicals. Food Chem. 1999;64:555–9. Vichai V, Kirtikara K. Sulforhodamine B colorimetric assay for cytotoxicity screening. Nat Protoc. 2006;1:1112–6. Vandooren J, Geurts N, Martens E, Van den Steen PE, Jonghe SD, Herdewijn P, et al. Gelatin degradation assay reveals MMP-9 inhibitors and function of O-glycosylated domain. World J Biol Chem. 2011;2:14–24. Toth M, Sohail A, Fridman R. Assessment of gelatinases (MMP-2 and MMP-9) by gelatin zymography. Methods Mol Biol. 2012;878:121–35. 
Gan SD, Patel KR. Enzyme immunoassay and enzyme-linked immunosorbent assay. J Invest Dermatol. 2013;133:e12. Moon J-Y, Yim E-Y, Song G, Lee NH, Hyun C-G. Screening of elastase and tyrosinase inhibitory activity from Jeju Island plants. EurAsia J Biosci. 2010;41. Braca A, Sortino C, Politi M, Morelli I, Mendez J. Antioxidant activity of flavonoids from Licania licaniaeflora. J Ethnopharmacol. 2002;79:379–81. Re R, Pellegrini N, Proteggente A, Pannala A, Yang M, Rice-Evans C. Antioxidant activity applying an improved ABTS radical cation decolorization assay. Free Radic Biol Med. 1999;26:1231–7. Pillai S, Oresajo C, Hayward J. Ultraviolet radiation and skin aging: roles of reactive oxygen species, inflammation and protease activation, and strategies for prevention of inflammation-induced matrix degradation - a review. Int J Cosmet Sci. 2005;27:17–34. Rittie L, Fisher GJ. UV-light-induced signal cascades and skin aging. Ageing Res Rev. 2002;1:705–20. West MD, Pereira-Smith OM, Smith JR. Replicative senescence of human skin fibroblasts correlates with a loss of regulation and overexpression of collagenase activity. Exp Cell Res. 1989;184:138–47. Tammi R, Pasonen-Seppanen S, Kolehmainen E, Tammi M. Hyaluronan synthase induction and hyaluronan accumulation in mouse epidermis following skin injury. J Invest Dermatol. 2005;124:898–905. Radaeva IF, Kostina GA, Zmievskii AV. Hyaluronic acid: biological role, structure, synthesis, isolation, purification, and application (review). Prikl Biokhim Mikrobiol 1997;33:133–7. Khan BA, Akhtar N, Hussain I, Abbas KA, Rasul A. Whitening efficacy of plant extracts including Hippophae rhamnoides and Cassia fistula extracts on the skin of Asian patients with melasma. Postepy Dermatol Alergol. 2013;30:226–32. Bhalodia NR, Nariya PB, Acharya RN, Shukla VJ. In vitro antioxidant activity of hydro alcoholic extract from the fruit pulp of Cassia fistula Linn. Ayu. 2013;34:209–14. Bhalodia NR, Nariya PB, Acharya RN, Shukla VJ. Evaluation of in vitro antioxidant activity of flowers of Cassia fistula Linn. Int J PharmTech Res. 2011;3:589–99. Ilavarasana R, Mallikab M, Venkataramanc S. Anti-inflammatory and antioxidant activities of Cassia fistula Linn bark extracts. Afr J Trad CAM. 2005;2:70–85. This research study was granted financial support by the Agricultural Research Development Agency (Public Organization) (ARDA), the National Research Council of Thailand (NRCT) and the Department of Biochemistry, Faculty of Medicine, Chiang Mai University, Thailand. This research study was granted financial support by the Agricultural Research Development Agency (Public Organization) (ARDA) and the National Research Council of Thailand (NRCT). All raw data of this study has been deposited in an appropriate repository. Data are available upon request from the authors. PL designed all experiments in this study, analyzed and interpreted the data and wrote the manuscript. SY, WP, PT conducted the experiment and interpreted the data. JS provided the skin specimens that were processed for ethical procedure. All authors read and approved the final manuscript for submission. This study obtained ethical approval by the Medical Research Ethics Committee, Chiang Mai University (Study code: BIO-2558-035490). Primary human skin fibroblasts were aseptically isolated from an abdominal scar after the surgical procedure involving a cesarean delivery at the surgical ward of CM Maharaj Hospital, Chiang Mai University (Chiang Mai, Thailand). 
Department of Biochemistry, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand Pornngarm Limtrakul, Supachai Yodkeeree, Pilaiporn Thippraphan & Wanisa Punfa Department of Obstetrics & Gynecology, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand Jatupol Srisomboon Pornngarm Limtrakul Supachai Yodkeeree Pilaiporn Thippraphan Wanisa Punfa Correspondence to Pornngarm Limtrakul. Limtrakul, P., Yodkeeree, S., Thippraphan, P. et al. Anti-aging and tyrosinase inhibition effects of Cassia fistula flower butanolic extract. BMC Complement Altern Med 16, 497 (2016). https://doi.org/10.1186/s12906-016-1484-3 Collagenase Collagen synthesis Glycosaminoglycan Matrix metalloproteinase
Environmental Engineering Research Korean Society of Environmental Engineers (대한환경공학회) 2005-968X(eISSN) The Environmental Engineering Research (EER) is published quarterly by the Korean Society of Environmental Engineers (KSEE). The EER covers a broad spectrum of the science and technology of air, soil, and water management while emphasizing scientific and engineering solutions to environmental issues encountered in industrialization and urbanization. Particularly, interdisciplinary topics and multi-regional/global impacts (including eco-system and human health) of environmental pollution as well as scientific and engineering aspects of novel technologies are considered favorably. The scope of the Journal includes the following areas, but is not limited to: 1. Atmospheric Environment & Climate Change: Global and local climate change, greenhouse gas control, and air quality modeling 2. Renewable Energy & Waste Management: Energy recovery from waste, incineration, landfill, and green energy 3. Environmental Biotechnology & Ecology: Nano-biosensor, environmental genomics, bioenergy, and environmental eco-engineering 4. Physical & Chemical Technology: Membrane technology and advanced oxidation 5. Environmental System Engineering: Seawater desalination, ICA (instrument, control, and automation), and water reuse 6. Environmental Health & Toxicology: Micropollutants, hazardous materials, ecotoxicity, and environmental risk assessment http://submit.eeer.org/ KSCI KCI SCOPUS SCIE Feasibility Study on Production of Liquid Fertilizer in a 1 ㎥ Reactor Using Fishmeal Wastewater for Commercialization Gwon, Byeong-Geun;Kim, Joong-Kyun 3 https://doi.org/10.4491/eer.2012.17.1.003 PDF KSCI KPUBS A scaled-up bioconversion of fishmeal wastewater (FMW) into liquid fertilizer was performed five times in a $1m^3$ reactor in order to examine the feasibility of commercialization. The importance of aeration was marked. Analyses indicated that dissolved oxygen (DO) level was closely related to the value of oxidation-reduction potential (ORP) and it was crucial to achieve high-quality liquid fertilizer. When pure oxygen was supplied through four diffusers into the reactor, DO levels and ORP values were maintained over 1.2 mg/L and 0.2 mV, respectively all the time during 52 hr of bioconversion. The pH changed from 6.8 to 5.9. The average removal percentages of chemical oxygen demand ($COD_{Cr}$) and total nitrogen (TN) were 75.0% and 71.6%, respectively. Compared to the result acquired in a 5-L reactor, bioconversion of FMW into liquid fertilizer was achieved in a shorter time under the same removal percentages of $COD_{Cr}$ and TN. The 52-hr culture of inoculated FMW was phytotoxic-free and it possessed comparable fertilizing ability to a liquid fertilizer made from the fish waste in hydroponic culture with amino acid contents of 5.93 g/ 100 g sample. From all the above results, transferring lab-scale data to large-scale production appeared to be successful. As a result, the commercialization of a liquid fertilizer made from FMW was feasible. An Efficiency Evaluation of Iron Concentrates Flotation Using Rhamnolipid Biosurfactant as a Frothing Reagent Khoshdast, Hamid;Sam, Abbas 9 The effect of a rhamnolipid biosurfactant produced by a Pseudomonas aeruginosa MA01 strain on desulfurization of iron concentrates was studied. Surface tension measurement and frothing characterization indicated better surface activity and frothability of rhamnolipid compared to methyl isobutyl carbinol (MIBC) as an operating frother. 
Reverse flotation tests using rhamnolipid either as a sole frother or mixed with MIBC, showed that the desulfurization process is more efficient at pH 4.5 and high concentration of rhamnolipid in the presence of MIBC. However, under these conditions water recovery decreased due to the change in rhamnolipid aggregates morphology. Results from the present study seemed promising to introduce the biosurfactant from Pseudomonas aeruginosa as a new frother. The Effect of Corrosion Inhibitor on Corrosion Control of Copper Pipe and Green Water Problem Lee, Ji-Eun;Lee, Hyun-Dong;Kim, Gi-Eun 17 Concern about green water problem has surfaced as a serious issue in Korea. In order to solve this problem, it is necessary to research inhibition of green water and corrosion control of copper pipe in water service. This paper discovered that moderate corrosion inhibitors can solve the green water problem and copper corrosion in water service by adding the optimal concentration of corrosion inhibitors based on regulation. Firstly, in the case of phosphate based corrosion inhibitors, as dosage of the corrosion inhibitor increases from 1 mg/L to 5 mg/L, the relative effect of corrosion inhibitor declines rapidly. Secondly, except for 1 mg/L dosage of silicate based inhibitor, relative effects of the inhibitor displays a positive number depending on inhibitor concentration. The most significant result is that the amount of copper release shows a downward trend, whereas the phosphate based inhibitor accelerates copper ion release as the inhibitor dosage increases. Thirdly, as the dosage of mixed inhibitors increases to 10 mg/L, the copper release change shows a similar trend of phosphate based inhibitor. Lastly, according to the Langelier saturation index (LI), silicate based inhibitors have the most non corrosive value. Larson ratio (LR) indicates that phosphate based inhibitors are the least corrosive. Korea water index (KWI) represents that silicate based inhibitors are most effective in controlling copper pipe corrosion. CO2 Emission from the Rail and Road Transport using Input-Output Analysis: an Application to South Korea Pruitichaiwiboon, Phirada;Lee, Cheul-Kyu;Lee, Kun-Mo 27 This paper deals with the evaluation of environmental impact of rail and road transport in South Korea. A framework of energy input-output analysis is employed to estimate the total energy consumption and $CO_2$ emission in acquiring and using a life cycle of passenger and freight transport activity. The reliability of $CO_2$ emission based on uncertainty values is assessed by means of a Monte Carlo simulation. The results show that on a passenger-kilometers basis, passenger roads have life cycle emissions about 1.5 times those of rail, while that ratio is ten times greater when the scope of evaluation regards the tailpipe. In the case of freight transport, on a million ton-kilometers basis, the value for road mode is estimated to be about three times compared to those of rail mode. The results also show that the main contribution of $CO_2$ emission for road transport is the operation stage, accounting for 70%; however, the main contribution for rail transport is the construction and supply chain stage, accounting for over 50% emission. 
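The reliability assessment mentioned in the transport-emissions abstract above rests on Monte Carlo propagation of emission-factor uncertainty. A minimal sketch of that kind of calculation is given below; every numerical value (stage means and relative uncertainties, in g CO2 per passenger-km) is hypothetical and is not taken from the study.

```python
# Minimal Monte Carlo sketch of the uncertainty propagation described above:
# life-cycle CO2 per passenger-km = operation + construction + supply chain,
# with each term given an assumed uncertainty. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical means (g CO2 / passenger-km) and relative standard deviations
operation    = rng.normal(100.0, 100.0 * 0.10, N)   # tailpipe / operation stage
construction = rng.normal(30.0,  30.0  * 0.25, N)   # infrastructure construction
supply_chain = rng.normal(15.0,  15.0  * 0.25, N)   # fuel and vehicle supply chain

total = operation + construction + supply_chain
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"mean = {total.mean():.1f} g/p-km, 95% interval = [{lo:.1f}, {hi:.1f}]")
```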
Application of Multiparametric Flow Cytometry (FCM) to Enumerate the Diagnosis of Pseudomonas aeruginosa and Escherichia coli Hwang, Myoung-Goo;Oh, Jung-Woo;Katayama, Hiroyuki;Ohgaki, Shinichiro;Cho, Jin-Kyu 35 In this study, multiparametric flow cytometry (FCM) was installed to enumerate the diagnosis of Pseudomonas aeruginosa ATCC 10145 and Escherichia coli K12 (IFO 3301). The nucleic acids (DNA/RNA) were double stained by a LIVE/DEAD bacLight viability kit, involving green SYTO 9 and red propidium iodide (PI), based on the permeability of two chemicals according to the integrity of plasma membrane. As the results showed, the gate for dead bacteria was defined as the range of $0.2{\times}10^0$ to $6.0{\times}10^1$ photo multiplier tube (PMT) 2 fluorescence (X-axis) and $2.0{\times}10^0$ to $2.0{\times}10^2$ PMT 4 fluorescence (Y-axis), and the gate for live bacteria was defined as the range of $6.0{\times}10^0$ to $6.0{\times}10^2$ PMT 2 fluorescence (X-axis) and $2.0{\times}10^0$ to $4.0{\times}10^2$ PMT 4 fluorescence (Y-axis). In the comparison of the number of the tested bacteria detected by FCM (viability assessment) and plate culture (cultivability assessment), the number of bacteria detected by FCM well represented the number of bacteria that was detected by the colony forming unit (CFU) counting method when bacteria were exposed to isopropyl alcohol and silver/copper cations. Consequently, it is concluded that the application of FCM to monitor the functional effect of disinfectants on the physiological status of target bacteria can offer more rapid and reliable data than the plate culture colony counting method. Anaerobic Treatment of Food Waste Leachate for Biogas Production Using a Novel Digestion System Lim, Bong-Su;Kim, Byung-Chul;Chung, In 41 In this study, the performance of new digestion system (NDS) for the treatment of food waste leachate was evaluated. The food waste leachate was fed intermittently to an anaerobic reactor at increasing steps of 3.3 L/day (hydraulic retention time [HRT] = 30 day), 5 L/day (HRT = 20 day), and finally 10 L/day (HRT = 10 day). In the anaerobic reactor, the pH and alkalinity were maintained at 7.6 to 8.2 and 8,940-14,400 mg/L, respectively. Maximum methane yield determined to be 0.686L $CH_4$/g volatile solids (VS) containing HRT over 20 day. In the digester, 102,328 mg chemical oxygen demand (COD)/L was removed to produce 350 L/day (70% of the total) of biogas, but in the digested sludge reduction (DSR) unit, only 3,471 mg COD/L was removed with a biogas production of 158 L/day. Without adding any chemicals, 25% of total nitrogen (TN) and 31% of total phosphorus (TP) were removed after the DSR, while only 48% of TN and 32% of TP were removed in the nitrogen, phosphorus, and heavy metals (NPHM) removal unit. Total removal of TN was 73% and total removal of TP was 63%. Analysis of Nonpoint Source Pollution Runoff from Urban Land Uses in South Korea Rhee, Han-Pil;Yoon, Chun-Gyeong;Lee, Seung-Jae;Choi, Jae-Ho;Son, Yeong-Kwon 47 A long-term nationwide nonpoint-source pollution monitoring program was initiated by the Ministry of Environment Republic of Korea (ME) in 2007. Monitoring devices including rain gauges, flow meters, and automatic samplers were installed in monitoring sites to collect dynamic runoff data in 2008-2009. More than 10 rainfall events with three or more antecedent dry days were monitored per year. More than 10 samples were collected and analyzed per event. 
So far, five land use types (single family, apartments, education facilities, power plants, and other public facilities) have been monitored 23 to 24 times each. Characterization of the runoff from different land use types will aid unit load estimation in Korea and hopefully in other countries with similar land use. The monitoring results will be reported regularly at national and international levels.
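The unit-load estimation mentioned above is usually built on flow-weighted event mean concentrations (EMC) computed from the per-event samples. The sketch below shows that standard arithmetic; the storm hydrograph and concentrations are made-up illustrative values, not monitoring data from the program.

```python
# Sketch of the event-mean-concentration (EMC) arithmetic commonly used to turn
# storm-event monitoring data into pollutant unit loads. All numbers are hypothetical.
import numpy as np

# Flow (m^3/s) and pollutant concentration (mg/L) sampled through one storm event
t_min = np.array([0, 15, 30, 45, 60, 90, 120])              # minutes since runoff start
q     = np.array([0.01, 0.08, 0.15, 0.10, 0.06, 0.03, 0.01])
c     = np.array([120., 300., 180., 110., 80., 60., 40.])   # e.g., TSS

dt = np.gradient(t_min) * 60.0        # seconds represented by each sample
volume = np.sum(q * dt)               # m^3
load_g = np.sum(c * q * dt)           # (mg/L) * m^3 = g
emc = load_g / volume                 # g/m^3 == mg/L, flow-weighted

print(f"event volume = {volume:.1f} m^3, load = {load_g:.0f} g, EMC = {emc:.0f} mg/L")
# Summing event loads over a monitored area and period gives the unit load
# (e.g., kg/ha/yr) used in nonpoint-source planning.
```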
Magnetic memory driven by topological insulators Hao Wu, Aitian Chen, Peng Zhang, Haoran He, John Nance, Chenyang Guo, Julian Sasaki, Takanori Shirokura, Pham Nam Hai, Bin Fang, Seyed Armin Razavi, Kin Wong, Yan Wen, Yinchang Ma, Guoqiang Yu, Gregory P. Carman, Xiufeng Han, Xixiang Zhang & Kang L. Wang Nature Communications volume 12, Article number: 6251 (2021) Electronic and spintronic devices; Magnetic devices; Topological insulators Giant spin-orbit torque (SOT) from topological insulators (TIs) provides an energy efficient writing method for magnetic memory, which, however, is still premature for practical applications due to the challenge of the integration with magnetic tunnel junctions (MTJs). Here, we demonstrate a functional TI-MTJ device that could become the core element of the future energy-efficient spintronic devices, such as SOT-based magnetic random-access memory (SOT-MRAM). The state-of-the-art tunneling magnetoresistance (TMR) ratio of 102% and the ultralow switching current density of 1.2 × 10^5 A cm−2 have been simultaneously achieved in the TI-MTJ device at room temperature, laying down the foundation for TI-driven SOT-MRAM. The charge-spin conversion efficiency θSH in TIs is quantified by both the SOT-induced shift of the magnetic switching field (θSH = 1.59) and the SOT-induced ferromagnetic resonance (ST-FMR) (θSH = 1.02), which is one order of magnitude larger than that in conventional heavy metals. These results inspire a revolution of SOT-MRAM from classical to quantum materials, with great potential to further reduce the energy consumption. Non-volatile magnetic memory is a promising candidate for next-generation memory technology beyond complementary metal–oxide–semiconductor (CMOS).
Such magnetic random-access memory (MRAM)1,2, has ultralow energy consumption (~fJ), ultrafast speed (~ns) and almost infinite endurance (1015 cycles). The information of MRAM is stored in the magnetic tunnel junction (MTJ) of a ferromagnetic electrode/insulator/ferromagnetic electrode (FM/I/FM) structure, where the tunneling resistance strongly depends on the magnetization orientations of two FM electrodes, and thus the information of "0" and "1" can be stored in the parallel and antiparallel magnetization states, respectively3,4,5,6,7,8. Besides the magnetic field, current-induced spin torques such as spin-transfer torque (STT) and spin-orbit torque (SOT) can be used to provide an efficient switching mechanism of the magnetization9,10,11,12. In particular, SOT-driven magnetization switching has been demonstrated in heavy metal/ferromagnet (HM/FM) based structures, where the spin current generated by the strong spin-orbit coupling in HMs exerts a spin torque to the adjacent FM and thus switches the magnetization with a current (typically with a density around 107 A cm−2)11,12,13,14,15. Compared to the 2-terminal STT-MRAM in which the writing current flows vertically through the MTJ stack, in the 3-terminal SOT-MRAM, the writing current only flows transversely in a separate bottom electrode, and thus the electromigration resulting damage to the tunneling barrier can be minimized, which dramatically increases the endurance of MRAM16,17,18,19,20,21. Reducing the energy consumption is a major challenge for SOT-MRAM. Quantum materials such as topological insulators (TIs) inspire a promising route to overcome the limitation of charge-spin conversion efficiency θSH = Js /Je < 1 in classical materials22,23,24,25,26,27,28,29,30,31,32,33,34, where Js and Je represent the spin current density and charge current density, respectively. In TIs, the topological surface states give rise to a large θSH due to the spin-momentum locking of the surface Dirac electrons, where the bulk is insulating in the ideal case25,27,35. A great interest has been focused on the SOT in TI/FM bilayer structures, in which θSH is found to be 1-2 orders of magnitude larger than that in HMs even at room temperature28,29,30,31,32,33. However, there is still a big challenge for integrating TIs with MTJs for SOT-MRAM applications: conventional SOT-MRAM is based on the current Si-based CMOS technology, whereas the single-crystal TI layer needs to be epitaxially grown on specific substrates (such as GaAs and Al2O3) by molecular beam epitaxy (MBE)36. Therefore, for TI-driven SOT-MRAM, the following issues need to be solved: How to control the interface between TIs and MTJs to achieve state-of-the-art tunneling magnetoresistance (TMR) ratio and at the same time preserve the topological surface states? How to reduce the element diffusion that damages the TI surface states during the annealing treatment for MTJ? How to avoid the chemical degradation of the TI crystal quality during the photolithography process of SOT devices? In this article, we demonstrate a TI-driven SOT-MRAM cell with a state-of-the-art TMR ratio (over 100%) and an ultralow switching current density Jc (105 A cm−2) at room temperature, where the topological surface states contribute to the large charge-spin conversion (θSH > 1). 
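As a side note on the read-out just described: distinguishing "0" from "1" amounts to comparing the junction resistance against a reference placed between the parallel and antiparallel values. A minimal sketch follows; the parallel-state resistance is a made-up number, while the 102% TMR ratio is the value reported for this device.

```python
# The "0"/"1" read-out described above reduces to comparing the junction resistance
# with a reference between the parallel (P) and antiparallel (AP) values.
# R_P is a made-up value; the 102% TMR ratio is the one reported for this device.
R_P = 1000.0                      # ohms, hypothetical parallel-state resistance
TMR = 1.02                        # TMR ratio = (R_AP - R_P) / R_P
R_AP = R_P * (1.0 + TMR)

R_ref = 0.5 * (R_P + R_AP)        # midpoint reference for the sense amplifier

def read_bit(resistance):
    """Return 0 for the low-resistance (parallel) state, 1 for antiparallel."""
    return 1 if resistance > R_ref else 0

print(R_AP, R_ref, read_bit(980.0), read_bit(1990.0))  # -> 2020.0 1510.0 0 1
```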
Two types of SOT switching are demonstrated: the collinear spin polarization σ and easy axis (EA), for which can realize the field-free switching with lower Jc (1.2 × 105 A cm−2); the orthogonal σ and EA, which leads to a much faster switching speed. The charge-spin conversion efficiency θSH in TIs is quantified by two methods: the SOT-induced shift of the magnetic switching field and the SOT-induced ferromagnetic resonance (ST-FMR), which give rise to θSH = 1.59 and θSH = 1.02, respectively. At last, SOT switching is demonstrated in the all-sputtered TI-MTJ device for potential industry-level applications. This work demonstrates the SOT-MRAM cell driven by TIs with ultralow energy consumption. Device structure and magnetic properties The full TI-MTJ stack consists of a sequence of layers: (BiSb)2Te3(10)/Ru(5)/CoFeB(2.5)/MgO(1.9)/CoFeB(5)/Ta(8)/Ru(7) (thickness in nanometers). The TI of (BiSb)2Te3 is epitaxially grown on the Al2O3(0001) substrate by the MBE method, where the layer-by-layer growth mode is monitored by the reflection high-energy electron diffraction (RHEED) patterns. Then, the (BiSb)2Te3 sample is transferred from the MBE chamber to the magnetron sputtering chamber to grow the MTJ stack of Ru(5)/CoFeB(2.5)/MgO(1.9)/CoFeB(5)/Ta(8)/Ru(7). The Ru interlayer is inserted to decouple the exchange interaction between (BiSb)2Te3 and CoFeB and block the element diffusion during annealing, which may destroy the topological surface states37,38. The TI-MTJ stack is patterned into the SOT devices by the photo and electron-beam lithography combined with the ion milling process. A 300 oC annealing process is performed to improve the crystal quality of the MgO barrier and the TMR ratio, during which an in-plane magnetic field (8 kOe) is applied to induce an in-plane magnetic easy axis (EA) for the top and bottom CoFeB (T-CoFeB and B-CoFeB) electrodes39,40. Figure 1a shows the helical Dirac-cone band structure of the topological surface states, where the spin-momentum locking gives rise the spin-polarized electron current. In the schematic of the 3-terminal SOT device of TI-MTJ, as shown in Fig. 1a, the writing current is applied between terminal 1 (T1) and T2, where the spin-polarized current in topological surface states is employed to provide the SOT and switch the magnetization of the free-layer (B-CoFeB) of the MTJ. For reading, a small vertical current between T1 and T3 is applied to pass through the tunneling barrier MgO, where the tunneling resistance strongly depends on the magnetization orientation between the free-layer (B-CoFeB) and the fixed-layer (T-CoFeB): low resistance for the parallel state ("0" state) and high resistance for the antiparallel state ("1" state), respectively, i.e., the TMR effect. From the cross-sectional scanning transmission electron microscopy (STEM) results, we see the layer-by-layer (i.e., van der Waals) structure of the TI [(BiSb)2Te3] and the clear interface between TI and MTJ. Fig. 1: Schematic of the TI-driven SOT-MRAM cell. a Schematic of the 3-terminal SOT-MRAM cell with a topological insulator (TI). The writing current applied between T1 and T2 is used to switch the magnetic tunnel junction (MTJ) between the parallel and antiparallel states by the spin-orbit torque (SOT), and the reading is done by the tunneling magnetoresistance (TMR) between T1 and T3. In the TI, the surface is conducting while the bulk is insulating, and the spin-momentum locking of the surface states provides a giant SOT. 
Cross-sectional scanning transmission electron microscopy (STEM) results show the layer-by-layer structure of the TI [(BiSb)2Te3] and the clear interface between TI and MTJ. b The two-step switching process in the M-H curves shows the different coercive fields of the bottom and top CoFeB layers in the MTJ stack. c Microscopic picture of the patterned SOT-MRAM device, where the TI layer serves as the bottom electrode, and the MTJ device on top is 2 μm × 6 μm in size. The scale bar is 20 μm. d Tunneling resistance R and TMR ratio as a function of the magnetic field, where the 102% TMR ratio indicates the high quality of MTJ. The magnetic hysteresis (M-H) loop of the TI-MTJ stack is shown in Fig. 1b, where the magnetic field is scanned along the EA of the MTJ. The 2-step magnetization reversal process clearly shows the different coercive fields of T-CoFeB (Hc1 = 10 Oe) and B-CoFeB (Hc2 = 20 Oe), which supports an antiparallel state between Hc1 and Hc2. The microscopic picture of the patterned SOT device is shown in Fig. 1c, where the TI layer serves as the bottom electrode, and the MTJ stack of a 2 μm × 6 μm size is located at the intersection region between the bottom and top electrodes. Figure 1d shows the tunneling resistance R and the TMR ratio as a function of the magnetic field H at room temperature, where a TMR ratio of 102% indicates the high-quality of the MTJ on top of the TI surface. Current-driven SOT switching The current-driven SOT switching is performed in two types of configurations41: the collinear case between σ and EA (EA along y) and the orthogonal case (EA along x). A writing current pulse Je (1 ms) between T1 and T2 is applied to provide the SOT, followed by a small reading current pulse JR (10 μA, 1 ms) that passes through the MTJ between T1 and T3 to detect the magnetization at 1-s later. For the collinear σ∥EA case (EA along y), as shown in Fig. 2a, the damping-like torque [\(-m\times (m\times \sigma )\)] breaks the mirror symmetry between +my and −my and thus efficiently switches the magnetization without the external magnetic field, i.e., field-free switching. Macrospin simulation results show the magnetization gradually precesses from +my to −my states, i.e., precessional switching mode, with a switching speed of 7.5 ns (Supplementary Note 1 and Supplementary Fig. 1b). Figure 2b shows the MTJ resistance as a function of the writing current density in the TI layer (R-Je) at room temperature, with a critical switching current density Jc of 1.2 × 105 A cm−2, which is 1-2 orders of magnitude lower than that in HM-based systems11,12,13,14,15. Fig. 2: Current-driven SOT switching. a, b show the SOT switching for the collinear case between the spin polarization (σ) and the easy axis (σ∥EA). b The current-driven SOT switching shows the deterministic switching without external magnetic field, i.e., field-free switching, where the magnetic field Hz has not effect on the switching polarity. c, d show the SOT switching for the orthogonal case of σ⊥EA, where a Hz is needed for deterministic switching. d For the σ⊥EA case, an external magnetic field Hz is needed to break the mirror symmetry between +mx and −mx for deterministic SOT switching, as indicated by the opposite SOT switching polarities under Hz = ±100 Oe. e, SOT switching for the MTJ scaling down from 4 µm × 8 µm to 100 nm × 200 nm (σ∥EA). 
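The macrospin picture invoked above for the collinear σ∥EA case can be reproduced qualitatively with a zero-temperature Landau-Lifshitz-Gilbert sketch. Only the saturation magnetization, the free-layer thickness and the order of magnitude of θSH are taken from the text; the damping, anisotropy field, spin-polarization sign and drive current are assumptions, and a macrospin model without thermal activation or domain nucleation needs a much larger current density than the experimentally observed 1.2 × 10^5 A cm−2.

```python
# Zero-temperature macrospin sketch of damping-like SOT switching for the
# collinear geometry (spin polarization and easy axis both along y). All
# parameters are illustrative, not the paper's; a real treatment would also
# include the field-like torque and thermal fluctuations.
import numpy as np

gamma, alpha = 1.76e11, 0.02          # rad s^-1 T^-1, Gilbert damping (assumed)
Ms, t_F = 1.1e6, 2.5e-9               # A/m (from 1100 emu/cm^3), free-layer thickness
Hk = 50.0 * 79.577                    # assumed 50 Oe easy-axis field, converted to A/m
J = 2.0e11                            # A/m^2, deliberately above the macrospin threshold
theta_SH = 1.5                        # order of the reported charge-spin conversion
e, hbar, mu0 = 1.602e-19, 1.055e-34, 4e-7 * np.pi

sigma = np.array([0.0, -1.0, 0.0])    # spin polarization for this current polarity (assumed)
tau_DL = gamma * hbar * theta_SH * J / (2 * e * Ms * t_F)   # rad/s (Ms in A/m gives Tesla inside)

def dm_dt(m):
    H = np.array([0.0, Hk * m[1], -Ms * m[2]])       # anisotropy + thin-film demag, A/m
    prec = -gamma * mu0 * np.cross(m, H)
    damp = -alpha * gamma * mu0 * np.cross(m, np.cross(m, H))
    sot  = -tau_DL * np.cross(m, np.cross(m, sigma)) # damping-like SOT, drives m toward sigma
    return (prec + damp + sot) / (1 + alpha**2)

m = np.array([0.05, 1.0, 0.0]); m /= np.linalg.norm(m)       # start near +y
dt, steps = 1e-13, 200_000                                    # 20 ns of dynamics
for _ in range(steps):
    m += dt * dm_dt(m)
    m /= np.linalg.norm(m)
print("final m_y =", round(float(m[1]), 3))   # ends near -1 once tau_DL exceeds the threshold
```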
For the orthogonal σ⊥EA case (EA along x), an external magnetic field Hz is needed to break the mirror symmetry between +mx and −mx for the deterministic switching41, as shown in Fig. 2c. For σ⊥EA, due to the polarity of the magnetization changes once the SOT is applied, the dynamic reversal mode produces a much shorter switching trajectory and a much faster switching speed of 1.0 ns (Supplementary Note 1 and Supplementary Fig. 1e). Figure 2d shows that Jc in the σ⊥EA case (4.1 × 105 A cm−2) is more than 3 times higher than that in the σ∥EA case (1.2 × 105 A cm−2). The opposite switching polarities at Hz = ±100 Oe indicate the standard SOT switching characteristic. Only the partial SOT switching is achieved in large-size (µm) MTJs, because of the magnetic domain pining from the steps at the (BiSb)2Te3 surface induced by the layer-by-layer growth. In order to realize the full SOT switching, the size of MTJ is scaling down from 4 µm × 8 µm to 100 nm × 200 nm, as shown in Fig. 2e, and the results show that the SOT switching ratio increases with scaling down the MTJ, where the almost full SOT switching is achieved in 100 nm × 200 nm (switching ratio ΔTMR/TMR = 95%) and 150 nm × 300 nm (switching ratio 92%) MTJs. The MTJ size for full SOT switching is consistent with the crystal grain size (200~300 nm) of the (BiSb)2Te3 surface42. SOT-induced shift of the magnetic switching field The SOT-induced effective field HSOT is quantified by the shift of the magnetic switching field Hc2 of B-CoFeB in the R-Hy loops43, under a bias SOT current Je in the bottom electrode between T1 and T2, where EA is along the y axis, as shown in Fig. 3a. Due to the HSOT, the Hc2 is shifted to the opposite direction under ±Je, respectively, and the shift field Hc2shift = (Hc2+shift + Hc2−shift)/2 = HSOT( + Je) − HSOT( − Je) = 2 HSOT(Je), where Hc2+shift and Hc2−shift represent the shift of the positive and negative switching fields of B-CoFeB, respectively. Fig. 3: SOT-induced shift of the magnetic switching field. a R-Hy curves measured under the opposite bias SOT current ±Je between T1 and T2, where the shift of the magnetic switching field of B-CoFeB indicates the SOT-induced effective field Hc2shift = (Hc2+shift + Hc2−shift)/2 = HSOT(+Je) − HSOT(−Je) = 2HSOT(Je). b HSOT as a function of the bias SOT current Je, where the linear dependence shows the typical SOT characteristic. The HSOT as a function of Je is plotted in Fig. 3b, and the linear dependence is consistent with the spin-momentum locking induced spin polarization in topological surface states. The SOT efficiency χSOT = HSOT/Je = 24.1 × 10−6 Oe A−1 cm2 is obtained by fitting the HSOT-Je curve, and thus contributes to a charge-spin conversion efficiency of \({\theta }_{SH}=(2|e|{M}_{s}{t}_{F}/\hslash )\times {\chi }_{SOT}=1.59\), where e is the electron charge, Ms is the saturation magnetization (1100 emu cm−3), tF is the magnetic film thickness (2.5 nm), and \(\hslash\) is the reduced Planck constant. The obtained value of θSH (1.59) here is consistent with that (θSH = 2.5) in (BiSb)2Te3/Ti/CoFeB/MgO system with perpendicular magnetic anisotropy by the 2nd harmonic measurement31. 
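The χSOT value quoted above comes from a straight-line fit of HSOT against Je. A minimal sketch of that fitting step is given below; the data points are synthetic, generated around the reported slope, and the final conversion to θSH is only indicated in a comment because it requires the material parameters listed in the text.

```python
# Sketch of the linear-fit step described above: the switching-field shift gives
# H_SOT at each bias current density, and the slope of H_SOT versus J_e is the
# SOT efficiency chi_SOT. The points below are synthetic, generated around the
# reported slope of 24.1e-6 Oe per A/cm^2 with a small amount of read noise.
import numpy as np

rng = np.random.default_rng(2)
J_e = np.linspace(-1.0e5, 1.0e5, 9)                      # A/cm^2, bias current densities
chi_true = 24.1e-6                                       # Oe / (A/cm^2), reported value
H_SOT = chi_true * J_e + rng.normal(0.0, 0.1, J_e.size)  # Oe

chi_fit, offset = np.polyfit(J_e, H_SOT, 1)
print(f"chi_SOT = {chi_fit:.3e} Oe/(A cm^-2), offset = {offset:.3f} Oe")
# The paper then converts chi_SOT to the charge-spin conversion efficiency via
# theta_SH = (2 e M_s t_F / hbar) * chi_SOT, using M_s = 1100 emu/cm^3, t_F = 2.5 nm.
```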
By considering the 2-dimensional (2D) current distribution in the TI surface, we can also obtain the interfacial charge-spin conversion efficiency \(q_{\rm ICS}=J_{\rm s}/J_{\rm e}^{\rm 2D}=\theta_{\rm SH}/t_{\rm s}=1.06\ {\rm nm}^{-1}\), where \(J_{\rm e}^{\rm 2D}\) represents the 2D electric current density, and ts represents the surface thickness of TI [1.5 nm for (BiSb)2Te3]31,44. SOT-induced ferromagnetic resonance The SOT-induced ferromagnetic resonance (ST-FMR)23,34,45 is also employed to quantify the charge-spin conversion efficiency in (BiSb)2Te3, with the structure of (BiSb)2Te3(10)/Ru(5)/CoFeB(2.5)/MgO(1.9), which is the same as the free layer in the TI-MTJ stack. As shown in the schematic of the ST-FMR measurement in Fig. 4a, a microwave current (12 dBm) is applied to excite the magnetic resonance, where an in-plane magnetic field H is scanned at a fixed angle θ = 45°. The microwave is modulated by the lock-in amplifier at a 20 kHz frequency. The damping-like torque τDL and the field-like torque τFL originate from the SOT and the Oersted field of the ac current, respectively. Figure 4b shows the magnetic resonant frequency f as a function of the magnetic field H, which can be fitted well by the standard Kittel equation \(f=\frac{\gamma}{2\pi}\sqrt{H_{\rm res}(H_{\rm res}+4\pi M_{\rm eff})}\) with an effective in-plane magnetization 4πMeff ~ 1.19 T. Fig. 4: SOT-induced ferromagnetic resonance (FMR). a Schematic of the SOT-induced FMR measurement. A microwave current is applied to excite the magnetic resonance, and an applied in-plane magnetic field H with θ = 45° is scanned during the measurement. The damping-like torque τDL is dominated by the SOT, whereas the field-like torque τFL mainly comes from the Oersted field. b Resonance frequency f as a function of the magnetic field H, which can be fitted by the Kittel equation. c The Vmix-H curve can be fitted by Eq. (1), where the SOT and Oersted field contributions can be obtained from the symmetric and antisymmetric parts, respectively. d The charge-spin conversion efficiency θSH is almost independent of the frequency from 5 to 8 GHz. The error bar comes from the fitting process in Fig. 4c. During the magnetization precession, the mixing of the oscillating magnetoresistance and the ac current produces a dc voltage Vmix, as shown in Fig. 4c, which can be fitted by \(V_{\rm mix}=S\frac{\Delta^{2}}{\Delta^{2}+(H_{\rm ext}-H_{\rm res})^{2}}+A\frac{\Delta(H_{\rm ext}-H_{\rm res})}{\Delta^{2}+(H_{\rm ext}-H_{\rm res})^{2}}\), where Δ is the linewidth, Hres is the resonant magnetic field, and S and A represent the coefficients of the symmetric and antisymmetric parts, respectively. The symmetric part S is attributed to the SOT (τDL) from (BiSb)2Te3, which is proportional to the current density JTI; on the other hand, the antisymmetric part A comes from the Oersted field contribution (τFL), which is dominated by the current density JRu in the Ru layer.
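Before S and A are converted into θSH through the expression that follows, the two fits just described (the Kittel relation for the resonance field and the symmetric/antisymmetric decomposition of Vmix) can be sketched as below. The data are synthetic, and the 2.8 MHz/Oe gyromagnetic ratio is a standard assumption for g ≈ 2 rather than a value quoted in the text.

```python
# Sketch of the two fits used in the ST-FMR analysis above: the Kittel relation
# f = (gamma/2pi) * sqrt(H_res (H_res + 4 pi M_eff)) and the symmetric/antisymmetric
# Lorentzian decomposition of V_mix(H). All data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

gamma_over_2pi = 2.8e-3   # GHz/Oe for g ~ 2 (2.8 MHz/Oe), a standard assumption

def kittel(H_res, four_pi_Meff):
    return gamma_over_2pi * np.sqrt(H_res * (H_res + four_pi_Meff))

def v_mix(H, S, A, H_res, delta, offset):
    lorentz = delta**2 / (delta**2 + (H - H_res)**2)
    anti    = delta * (H - H_res) / (delta**2 + (H - H_res)**2)
    return S * lorentz + A * anti + offset

# --- Kittel fit on synthetic (H_res, f) points, with 4*pi*Meff ~ 1.19 T = 11900 G ---
H_res_data = np.array([300., 500., 800., 1200.])                 # Oe
f_data = kittel(H_res_data, 11900.0)                             # GHz
popt, _ = curve_fit(kittel, H_res_data, f_data, p0=[10000.0])
print(f"fitted 4*pi*Meff = {popt[0]:.0f} Oe")

# --- V_mix lineshape fit at one frequency ---
H = np.linspace(200., 1400., 241)
noisy = v_mix(H, S=6.0, A=3.0, H_res=800., delta=60., offset=0.2)
noisy += np.random.default_rng(3).normal(0.0, 0.05, H.size)
popt, _ = curve_fit(v_mix, H, noisy, p0=[1., 1., 790., 50., 0.])
S_fit, A_fit = popt[0], popt[1]
print(f"S = {S_fit:.2f}, A = {A_fit:.2f}, S/A = {S_fit/A_fit:.2f}  (enters the expression below)")
```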
The charge-spin conversion efficiency θSH could be expressed as: $$\theta_{\rm SH}=\frac{J_{\rm Ru}}{J_{\rm TI}}\frac{S}{A}\frac{e\mu_{0}M_{s}t_{\rm Ru}t_{\rm CoFeB}}{\hbar}\sqrt{1+\frac{4\pi M_{\rm eff}}{H_{\rm res}}}$$ where tRu and tCoFeB represent the thicknesses of Ru and CoFeB, respectively. After subtracting the spin pumping contribution (27% of the symmetric part, Supplementary Note 6), the θSH obtained by ST-FMR for f = 5-8 GHz is shown in Fig. 4d; θSH is almost constant at different frequencies, with an average value of 1.02 (qICS = 0.68 nm−1), which is consistent with θSH = 1.59 (qICS = 1.06 nm−1) from the SOT-induced switching field shift in the TI-MTJ device. All sputtered BiSb-MTJ device In order to be compatible with industry-level manufacturing, the topological insulator BiSb is prepared by the magnetron sputtering method29,46, and the MTJ stack of Ru(5)/CoFeB(2.5)/MgO(2)/CoFeB(5)/Ta(8)/Ru(7) is deposited in situ on top of the sputtered BiSb(10) without breaking the vacuum (thickness in nanometers), i.e., an all-sputtered BiSb-MTJ. The TMR ratio of the all-sputtered BiSb-MTJ is around 90%, as shown in Fig. 5a. The current-induced SOT can efficiently switch the bottom CoFeB layer and thus the resistance state of the MTJ at room temperature, with a critical switching current density Jc of 1.4 × 10^6 A cm−2, as shown in Fig. 5b. Even though Jc for the sputtered BiSb is 1 order of magnitude higher than for the MBE-grown (BiSb)2Te3, the high conductivity of the sputtered BiSb (1.8 × 10^5 Ω−1 m−1) could help to reduce the Ohmic loss, especially from the shunting effect. Fig. 5: SOT switching in all sputtered BiSb-MTJ. a R-H curve for the all sputtered BiSb-MTJ device. b Current-driven SOT switching in the all sputtered BiSb-MTJ device (1 µm × 3 µm MTJ size, and σ∥EA). In conclusion, the SOT-MRAM cell is demonstrated in a TI-MTJ device with over 100% TMR ratio, where the energy consumption is significantly reduced due to the ultralow switching current density of 10^5 A cm−2. The >1 charge-spin conversion efficiency (θSH) in (BiSb)2Te3 is quantified by the SOT-induced switching field shift and the ST-FMR measurements at room temperature, which breaks down the limitation of θSH < 1 in classical material systems. This work paves a path for the application of TI-driven magnetic memory, and potentially inspires a revolution of current magnetic memory technology from classical to quantum materials. Methods Sample growth and device fabrication The high-quality (Bi0.2Sb0.8)2Te3 films were grown on Al2O3(0001) substrates by using a Perkin Elmer MBE system in ultrahigh vacuum. Before the growth, the substrate was pre-annealed in the vacuum chamber at up to 700 °C to clean the surface. High-purity Bi (99.9999%), Te (99.9999%) and Sb (99.999%) were co-evaporated by conventional effusion cells and cracker cells. During the TI growth, the substrate was maintained at 200 °C, where the Bi, Sb and Te cells were kept at 457 °C, 387 °C and 340 °C, respectively. The layer-by-layer epitaxial growth was monitored by the in-situ reflection high-energy electron diffraction (RHEED). The MTJ stacks of Ru (5 nm)/CoFeB (2.5 nm)/MgO (1.9 nm)/CoFeB (5 nm)/Ta (8 nm)/Ru (7 nm) were deposited by using a Singulus ROTARIS magnetron sputtering system at room temperature with a base pressure of 1 × 10^−6 Pa. The CoFeB denotes Co40Fe40B20 alloy with nominal target compositions.
When depositing the CoFeB ferromagnetic layers, a magnetic field of 50 Oe was applied to induce the magnetic easy axis. The MTJ devices with a rectangular shape were fabricated by using two photolithography, one electron-beam lithography and two Ar ion milling steps. To avoid the chemical degradation of the TI films caused by the developer, a poly(methyl methacrylate) (PMMA) layer with a thickness of 300 nm was spin-coated on top of the film before the photolithography step, and then removed by O2 plasma before the Ar ion milling step. Subsequently, the final patterned MTJ devices were annealed in the vacuum at 300 °C for 1 h with a magnetic field of 8 kOe. Magnetic and spin transport measurements The SOT switching in the 3-terminal TI-MTJ device was measured by the probe station system with an electromagnet, where the pulse current was applied by the Keithley 2612 current source, and the voltage through the MTJ was measured by a Keithley 2182 A voltmeter. For the ST-FMR measurement, a signal generator was used to apply the microwave current with a nominal power of 12 dBm, where a lock-in amplifier (Stanford Research SR-830) was used to measure the voltage. In order to improve the signal-to-noise ratio, the microwave current was modulated by a sine function from the lock-in amplifier with 20 kHz. The magnetic properties of the TI-MTJ stack were measured by a vibrating sample magnetometer (VSM) system. All measurements were performed at room temperature. The data that support the findings of this study are available from the corresponding authors upon reasonable request. Apalkov, D., Dieny, B. & Slaughter, J. M. Magnetoresistive random access memory. Proc. IEEE 104, 1796–1830 (2016). Bhatti, S. et al. Spintronics based random access memory: a review. Mater. Today 20, 530–548 (2017). Julliere, M. Tunneling between ferromagnetic films. Phys. Lett. A 54, 225–226 (1975). Miyazaki, T. & Tezuka, N. Giant magnetic tunneling effect in Fe/Al2O3/Fe junction. J. Magn. Magn. Mater. 139, L231–L234 (1995). Moodera, J. S., Kinder, L. R., Wong, T. M. & Meservey, R. Large magnetoresistance at room temperature in ferromagnetic thin Film tunnel junctions. Phys. Rev. Lett. 74, 3273–3276 (1995). Butler, W. H., Zhang, X. G., Schulthess, T. C. & MacLaren, J. M. Spin-dependent tunneling conductance of Fe|MgO|Fe sandwiches. Phys. Rev. B 63, 54416 (2001). Yuasa, S., Nagahama, T., Fukushima, A., Suzuki, Y. & Ando, K. Giant room-temperature magnetoresistance in single-crystal Fe/MgO/Fe magnetic tunnel junctions. Nat. Mater. 3, 868–871 (2004). Parkin, S. S. P. et al. Giant tunnelling magnetoresistance at room temperature with MgO (100) tunnel barriers. Nat. Mater. 3, 862–867 (2004). Slonczewski, J. C. Current-driven excitation of magnetic multilayers. J. Magn. Magn. Mater. 159, L1–L7 (1996). Ralph, D. C. & Stiles, M. D. Spin transfer torques. J. Magn. Magn. Mater. 320, 1190–1216 (2008). Miron, I. M. et al. Perpendicular switching of a single ferromagnetic layer induced by in-plane current injection. Nature 476, 189–193 (2011). Liu, L. et al. Spin-torque switching with the giant spin Hall effect of tantalum. Science 336, 555–558 (2012). Ryu, J., Lee, S., Lee, K.-J. & Park, B.-G. Current-induced spin–orbit torques for spintronic applications. Adv. Mater. 32, 1907148 (2020). Yu, G. et al. Switching of perpendicular magnetization by spin–orbit torques in the absence of external magnetic fields. Nat. Nanotechnol. 9, 548–554 (2014). Manchon, A. et al. 
Well-posedness and numerical algorithm for the tempered fractional differential equations

Can Li (1,2), Weihua Deng (3), and Lijing Zhao (4)
(1) Department of Applied Mathematics, Xi'an University of Technology, Xi'an, Shaanxi 710054, China; (2) Beijing Computational Science Research Center, Beijing 10084, China; (3) School of Mathematics and Statistics, Gansu Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou University, Lanzhou 730000, China; (4) Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, Shaanxi 710054, China
Discrete & Continuous Dynamical Systems - B, April 2019, 24(4): 1989-2015. doi: 10.3934/dcdsb.2019026. Received November 2015, Revised January 2019, Published January 2019.

Trapped dynamics appears widely in nature, e.g., the motion of particles in viscous cytoplasm. The famous continuous time random walk (CTRW) model with a power-law waiting time distribution (having diverging first moment) describes this phenomenon. Because of the finite lifetime of biological particles, it is sometimes necessary to temper the power-law measure so that the waiting time measure has a convergent first moment. The time operator of the Fokker-Planck equation corresponding to the CTRW model with tempered waiting time measure is then the so-called tempered fractional derivative. This paper focuses on the properties of the time-tempered fractional derivative, and on the well-posedness and the Jacobi-predictor-corrector algorithm for the tempered fractional ordinary differential equation. By adjusting the parameter of the proposed algorithm, high convergence order can be obtained, and the computational cost increases linearly with time. The numerical results show that our algorithm converges with order $ N_I $, where $ N_I $ is the number of interpolating points used.

Keywords: Tempered fractional operators, well-posedness, Jacobi-predictor-corrector algorithm, convergence. Mathematics Subject Classification: Primary: 34A08, 74S25; Secondary: 26A33.
Citation: Can Li, Weihua Deng, Lijing Zhao. Well-posedness and numerical algorithm for the tempered fractional differential equations. Discrete & Continuous Dynamical Systems - B, 2019, 24(4): 1989-2015. doi: 10.3934/dcdsb.2019026
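The tempered fractional operators mentioned in the abstract are built on the tempered Riemann-Liouville fractional integral, in which the usual power-law kernel is weighted by an exponential tempering factor. A direct quadrature of that integral is a convenient reference when checking higher-order solvers. The following is a minimal sketch of ours, not the Jacobi-predictor-corrector algorithm of the paper; the test function, the midpoint discretization, and the parameter values are illustrative assumptions.

```python
import numpy as np
from math import gamma

def tempered_fractional_integral(f, t, alpha, lam, n=20000):
    """Approximate the tempered Riemann-Liouville fractional integral
    (I^{alpha,lambda} f)(t) = (1/Gamma(alpha)) * int_0^t exp(-lam*(t-s)) * (t-s)**(alpha-1) * f(s) ds
    with a midpoint rule (midpoints avoid the integrable endpoint singularity at s = t)."""
    s = (np.arange(n) + 0.5) * (t / n)      # midpoints of n sub-intervals of [0, t]
    ds = t / n
    kernel = np.exp(-lam * (t - s)) * (t - s) ** (alpha - 1.0)
    return float(np.sum(kernel * f(s)) * ds / gamma(alpha))

if __name__ == "__main__":
    # For f(s) = exp(-lam*s) the tempering factors combine, and the exact value is
    # exp(-lam*t) * t**alpha / Gamma(alpha + 1).
    alpha, lam, t = 0.5, 2.0, 1.0
    approx = tempered_fractional_integral(lambda s: np.exp(-lam * s), t, alpha, lam)
    exact = np.exp(-lam * t) * t ** alpha / gamma(alpha + 1.0)
    print(approx, exact)    # the two values agree to about three digits
```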
Table 1. Maximum errors and convergence orders of Example 1 solved by the scheme (56)-(57) with $ T = 1, N = 20, N_I = 7 $, and $ \alpha = 0.5 $

| $\tau$ | error ($\lambda=0$) | order | error ($\lambda=2$) | order | error ($\lambda=6$) | order |
|---|---|---|---|---|---|---|
| 1/10 | 1.5207e-004 | | 2.3516e-005 | | 1.4300e-006 | |
| 1/20 | 4.6202e-007 | 8.3626 | 1.4040e-007 | 7.3879 | 3.3507e-008 | 5.4154 |
| 1/160 | 3.5305e-014 | 7.8443 | 1.2794e-014 | 7.6383 | 7.0913e-015 | 7.6629 |

Table 2. Maximum errors and convergence orders of Example 1 solved by the scheme (56)-(57) with $ T = 1, N = 20, N_I = 6 $, and $ \alpha = 1.0 $ (columns for $ \lambda = 0, 2, 6 $)

Table 4. Maximum errors and convergence orders of Example 2 solved by the scheme (66) with $ T = 1.1, N = 26, \tilde{N} = 40, N_I = 2, T_0 = 0.1, \mu = 1 $, and $ \lambda = 5 $ (columns for $ \alpha = 0.2, 0.9, 1.8 $)

Table 5. Maximum errors and convergence orders of Example 2 solved by the scheme (66) with $ T = 1.1, N = 26, \tilde{N} = 40, N_I = 2, T_0 = 0.1, \mu = 1 $, and $ \lambda = 10 $
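For reference, the "order" entries in such tables are the observed convergence rates between successive step sizes, order = ln(error_coarse/error_fine)/ln(tau_coarse/tau_fine). A minimal check against the $ \lambda = 0 $ column of Table 1 above (the helper name is ours):

```python
import math

def observed_order(err_coarse, err_fine, tau_coarse, tau_fine):
    """Observed convergence order between two runs with errors err_* at time steps tau_*."""
    return math.log(err_coarse / err_fine) / math.log(tau_coarse / tau_fine)

# lambda = 0 column of Table 1: refining tau from 1/10 to 1/20
print(observed_order(1.5207e-004, 4.6202e-007, 1 / 10, 1 / 20))   # about 8.36, as tabulated
```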
Progress in Earth and Planetary Science
Impacts of cloud microphysics on trade wind cumulus: which cloud microphysics processes contribute to the diversity in a large eddy simulation?
Yousuke Sato, Seiya Nishizawa, Hisashi Yashiro, Yoshiaki Miyamoto, Yoshiyuki Kajikawa & Hirofumi Tomita
Progress in Earth and Planetary Science volume 2, Article number: 23 (2015)

This study investigated the impact of several cloud microphysical schemes on trade wind cumulus in a large eddy simulation model. To highlight the differences due to the cloud microphysical component, we developed a fully compressible large eddy simulation model, which excluded the implicit scheme and approximations as much as possible. The three microphysical schemes, the one-moment bulk, two-moment bulk, and spectral bin schemes, were used for sensitivity experiments in which the other components were fixed. Our new large eddy simulation model using a spectral bin scheme successfully reproduced trade wind cumuli, and reliable model performance was confirmed. Results of the sensitivity experiments indicated that precipitation simulated by the one-moment bulk scheme started earlier, and its total amount was larger than that of the other models. By contrast, precipitation simulated by the two-moment scheme started late, and its total amount was small. These results support those of a previous study. The analyses revealed that the expression of two processes, (1) the generation of cloud particles and (2) the conversion from small droplets to raindrops, was crucial to the results. The fast conversion from cloud to rain and the large amount of newly generated cloud particles at the cloud base led to evaporative cooling and subsequent stabilization in the sub-cloud layer. The latent heat released at higher layers by the condensation of cloud particles resulted in the development of the boundary layer top height.

The effect of clouds is one of the most uncertain factors in climate projection and numerical weather prediction. Shallow clouds (e.g., stratus, stratocumulus, shallow cumulus) play particularly important roles in the energy budget of the earth through the radiation process because they cover a broad area of the earth (e.g., Randall et al. 1984). The 5th Intergovernmental Panel on Climate Change (IPCC) report suggested that the uncertainties with regard to shallow clouds should be reduced for reliable assessments (IPCC AR5, Stocker et al. 2013). In global scale models (e.g., general circulation model (GCM)) and regional models with coarse grid spacing, shallow clouds are usually expressed by parameterizations (e.g., Tiedtke 1993; Considine et al. 1997; Kain 2004), but these parameterizations have not been able to effectively simulate the shallow cloud cover observed from satellites (e.g., Chepfer et al. 2008; Naud et al. 2010). To improve the expression of shallow cloud, the results of large eddy simulation (LES) models have been utilized. For example, Bretherton and Park (2009) used the results of an LES model to develop a new moist turbulent parameterization. Suzuki et al. (2004) and Posselt and Lohmann (2008) introduced an autoconversion parameterization that was typically used in an LES model (Khairoutdinov and Kogan 2000) into a global scale model. Many studies using LES models have been conducted to determine the characteristics of shallow cloud and improve large-scale modeling (e.g., Wang and Feingold 2009; Xue et al. 2008; Savic-Jovcic and Stevens 2008; Yamaguchi and Randall 2012).
However, the results of LES models are diverse, as indicated by several LES model intercomparison studies targeting shallow cloud (e.g., Stevens et al. 2005; Ackerman et al. 2009; van Zanten et al. 2011; Siebesma et al. 2003). It has been suggested that the difference in the microphysical schemes used in LES models is one of the reasons for the diversity in the results (Ackerman et al. 2009; van Zanten et al. 2011). However, it is difficult to investigate the exact effect of the different cloud microphysical schemes (i.e., the effect on the results that comes from changing the cloud microphysical scheme while keeping all other components fixed), because each LES model uses a different scheme, not only in cloud microphysics but also many other components (e.g., governing equation, turbulent scheme, advection scheme, and others). Thus, it is also difficult to determine which microphysical processes contribute most to the diversity of the LES results. Sensitivity experiments, which change only the cloud microphysical scheme, are required to better understand the exact effects of the cloud microphysical scheme. The kinematic driver (KiD) model developed by Shipway and Hill (2012) enables us to conduct sensitivity simulations by changing only the cloud microphysics. Using KiD, we can consider the exact effects of the differences in cloud microphysical schemes. However, KiD ignores feedbacks to the atmosphere, even though the feedbacks of microphysics can affect the microphysical properties of shallow clouds and the turbulent structure of the boundary layer (e.g., Stevens et al. 1998; Wang et al. 2010). It is necessary to consider the feedback of microphysical processes on the dynamics when determining which process causes the diversity in the results of LES models. We should use the model that excludes approximation and implicit schemes as much as possible, because these features also affect the cloud microphysical properties simulated by the model. This study developed a model that satisfies these requirements. Using the model, we attempted to reproduce the diversity in the results of the LES model used in van Zanten et al. (2011) and to determine impact of the various cloud microphysical schemes. Three types of cloud microphysical schemes (one-moment bulk, two-moment bulk, and spectral bin schemes) were considered in this study, because these three schemes have been used in previous intercomparison studies. Of the several types of shallow cloud, we focused on trade wind cumulus because their variability in the results of LES models was larger than that of other types of shallow clouds. For example, the variability of surface precipitation in an intercomparison of trade wind cumulus (van Zanten et al. 2011) was larger than that in an intercomparison of stratocumulus (Ackerman et al. 2009). van Zanten et al. (2011) proposed that one of the reasons for the diversity in the microphysical properties of trade wind cumuli was the different cloud microphysical models. They interpreted that a simple (one-moment bulk) microphysical scheme produced large amounts of precipitation (i.e., Table 3 of van Zanten et al. (2011)) and liquid water simulated by one-moment bulk schemes tended to be distributed in the lower layer. By contrast, the liquid water was located in the higher layer, and the precipitation flux was small in most of the two-moment schemes (i.e., Figure 6a of van Zanten et al. (2011)). 
In this study, we confirmed the validity of their interpretation through a simulation using our new fully compressible LES model and determined the main processes contributing to the diversity in the results of LES model intercomparison studies.

Experimental setup, dynamic framework, turbulence model, and external forcing

This section describes the common parts of the model with its experimental setup (i.e., the dynamic framework, turbulence model, and external forcing). The different parts of the microphysical schemes are highlighted in the subsequent section. The dynamic model used in this study is an LES model that is included in the Scalable Computing for Advanced Library and Environment library (SCALE). Henceforth, we call this LES model SCALE-LES. The details of SCALE-LES are found at http://scale.aics.riken.jp/. A fully compressible system is adopted for the governing equations of SCALE-LES. The prognostic variables are the three-dimensional momentum (ρu, ρv, ρw), total density (ρ), mass-weighted potential temperature (ρθ), and mass concentration of tracers (ρq_s), where q_s includes the specific humidity and the ratios of hydrometeor mass and number concentration to the total mass. Explicit time integration is used in all directions. Furthermore, the fourth-order central difference scheme is adopted for spatial discretization to avoid the numerically implicit diffusion that would be induced by odd-ordered difference schemes. The three-step Runge-Kutta scheme is adopted. To retain the stability of the model, fourth-order superviscosity/diffusion is applied to all prognostic variables. Using SCALE-LES, developed as described above, we exclude the effects of approximation in the governing equation system and implicit diffusion as far as possible, enabling a consideration of the effects of the target component (i.e., cloud microphysics in this study). To guarantee monotonicity, the flux-corrected transport (FCT) scheme (Zalesak 1979) is applied to all prognostic variables except for density. The effects of sub-grid scale turbulence are calculated using a Smagorinsky-type scheme based on Brown et al. (1994) and Scotti et al. (1993).

The experimental setup is almost the same as that of the previous Global Energy and Water Exchange project (GEWEX) Cloud System Study (GCSS) intercomparison of Rain in Cumulus over the Ocean (RICO) (van Zanten et al. 2011). The calculation domain covers 12.8 × 12.8 km^2 horizontally with a double periodic boundary condition and 4.0 km vertically. The horizontal and vertical grid intervals are 100 and 40 m, respectively. Rayleigh damping is applied to the three-dimensional momentum in the upper atmosphere (z > 3.5 km). The strength of the numerical diffusion is set to 1.25 × 10^5 m^4 s^-1, which was determined by a sensitivity experiment (the results of the sensitivity experiment are shown in Appendix 1 of this paper). The simulations are conducted for 24 h with time steps (Δt) of 0.05, 0.15, and 0.5 s for dynamics, cloud microphysics, and other physics. The Δt for cloud microphysics is determined as the largest time step that avoids artificial noise, and the ratio of Δt for dynamics to Δt for other physics is set to 10 based on the sensitivity experiments (see Appendix 2 of this paper for details of the sensitivity experiment). The external forcing of the radiation, the surface flux, and the large-scale horizontal advection are applied in the same way as in van Zanten et al. (2011).
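As a concrete illustration of the time integration and spatial discretization described above, the sketch below advances a 1-D advection problem with a fourth-order centered difference and a three-step Runge-Kutta scheme. It is a minimal illustration only: the Wicker-Skamarock-type Runge-Kutta coefficients, the periodic 1-D setting, the Gaussian initial condition, and the variable names are our assumptions, not the SCALE-LES implementation.

```python
import numpy as np

def ddx_central4(f, dx):
    """Fourth-order centered first derivative on a periodic grid."""
    return (-np.roll(f, -2) + 8.0 * np.roll(f, -1)
            - 8.0 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * dx)

def rk3_step(q, dt, rhs):
    """Three-step Runge-Kutta update (Wicker-Skamarock-type coefficients assumed)."""
    q1 = q + (dt / 3.0) * rhs(q)
    q2 = q + (dt / 2.0) * rhs(q1)
    return q + dt * rhs(q2)

# advect a scalar blob with a constant wind on a periodic 1-D domain
nx, dx, u, dt = 128, 100.0, 5.0, 0.05                  # 100-m spacing and 0.05-s dynamics step, as above
x = np.arange(nx) * dx
q = np.exp(-((x - 0.5 * nx * dx) / (10.0 * dx)) ** 2)  # Gaussian initial condition (assumption)
rhs = lambda f: -u * ddx_central4(f, dx)               # advective tendency -u dq/dx
for _ in range(1000):
    q = rk3_step(q, dt, rhs)
```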
The forcing of the large-scale subsidence is applied to all prognostic variables including density, except for u and v. In nature, large-scale subsidence generates a divergence of total density, which makes the air mass flow out from the domain. However, it cannot flow out in the compressible model using the configuration of van Zanten et al. (2011) because of the periodicity of the lateral boundary condition. Although this problem can be ignored in the anelastic and Boussinesq systems due to the fixed density, it is necessary to consider this problem in the compressible system. Although no description is available for this problem in van Zanten et al. (2011), the density of each layer should be reduced according to the divergence. The density reduction rate and the equation system with large-scale forcing are given below. Large-scale subsidence (w_LS) was given by van Zanten et al. (2011) as:

$$ w_{\mathrm{LS}}=\begin{cases} -\dfrac{0.005}{2260}\,z & (z<2260\ \mathrm{m})\\ -0.005 & (z\ge 2260\ \mathrm{m}) \end{cases} $$

where z is the height. Instead of this formulation, we give the subsidence directly as a vertical momentum:

$$ \rho w_{\mathrm{LS}}=\begin{cases} -\dfrac{0.005}{2260}\,z & (z<2260\ \mathrm{m})\\ -0.005 & (z\ge 2260\ \mathrm{m}) \end{cases} $$

Consequently, the density reduction rate (D) is given as:

$$ D\equiv \frac{\partial (\rho w_{\mathrm{LS}})}{\partial z}=\begin{cases} -\dfrac{0.005}{2260} & (z<2260\ \mathrm{m})\\ 0 & (z\ge 2260\ \mathrm{m}) \end{cases} $$

The continuity equation modified with the subsidence term is given by

$$ \frac{\partial \rho }{\partial t}+\frac{\partial (\rho u)}{\partial x}+\frac{\partial (\rho v)}{\partial y}+\frac{\partial \bigl(\rho (w+w_{\mathrm{LS}})\bigr)}{\partial z}=D \qquad (1) $$

The density reduction derived from the divergence by the large-scale subsidence is added on the right-hand side (rhs) of Eq. (1). Note that this equation is identical to the equation without large-scale subsidence. The Lagrangian conservation equation of the scalar quantities (ϕ) is given as:

$$ \rho \frac{\partial \phi }{\partial t}+\rho u\frac{\partial \phi }{\partial x}+\rho v\frac{\partial \phi }{\partial y}+\rho (w+w_{\mathrm{LS}})\frac{\partial \phi }{\partial z}=0 \qquad (2) $$

The prognostic variables of SCALE-LES are mass-weighted values, and the equation for the mass-weighted values is derived from Eqs. (1) and (2) as:

$$ \frac{\partial (\rho \phi) }{\partial t}+\frac{\partial (\rho u\phi)}{\partial x}+\frac{\partial (\rho v\phi)}{\partial y}+\frac{\partial \bigl(\rho (w+w_{\mathrm{LS}})\phi\bigr)}{\partial z}=D\phi \qquad (3) $$

As shown on the rhs of Eq. (3), the scalar quantities (ϕ) flow out from each layer of the system by subsidence. The equation for the vertical momentum is modified in the same way as that of the scalar quantities. The vertical flux ρw_LS ϕ at the top boundary can be determined so that such additional convergence of the flux is canceled with Dϕ at the top layer.
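The subsidence profile and the density reduction rate translate directly into an extra tendency for every mass-weighted prognostic variable. The following is a minimal finite-difference illustration of the additional terms in Eq. (3), not the SCALE-LES code; the grid, the idealized potential temperature profile, and the variable names are assumptions.

```python
import numpy as np

def w_ls_momentum(z):
    """Prescribed subsidence in momentum form, rho*w_LS (values from the text above)."""
    return np.where(z < 2260.0, -(0.005 / 2260.0) * z, -0.005)

def density_reduction(z):
    """Density reduction rate D = d(rho*w_LS)/dz: -0.005/2260 below 2260 m, zero above."""
    return np.where(z < 2260.0, -0.005 / 2260.0, 0.0)

def subsidence_tendency(phi, z):
    """Large-scale-subsidence contribution to the tendency of a mass-weighted scalar,
    -d(rho*w_LS*phi)/dz + D*phi, i.e. the extra terms appearing in Eq. (3)."""
    flux = w_ls_momentum(z) * phi                  # rho * w_LS * phi
    return -np.gradient(flux, z) + density_reduction(z) * phi

# example column: 100 levels with a 40-m spacing, matching the vertical grid above
z = np.arange(100) * 40.0 + 20.0
theta = 297.0 + 0.01 * z                           # idealized potential temperature profile (assumption)
tendency = subsidence_tendency(theta, z)           # contribution to d(rho*theta)/dt
```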
Three microphysical schemes

To reproduce the diversity in the results of the RICO study, three types of cloud microphysical scheme are used in this study: the one-moment bulk microphysical scheme (Tomita 2008), the two-moment bulk scheme (Seiki and Nakajima 2014), and the spectral bin microphysical scheme (Suzuki et al. 2010). The one-moment bulk scheme and the two-moment bulk scheme are based on Berry (1968) and Seifert and Beheng (2001), respectively. Both of the original bulk schemes were used in the RICO study. The essential difference among the three schemes is their treatment of the size distribution of the number of hydrometeor particles. The one-moment bulk scheme expresses it by the Marshall-Palmer distribution, with the assumption of a constant total number of particles. Under this assumption, only the mass concentration is needed to determine the size distribution. Although the two-moment bulk scheme conceptually treats the size distribution in almost the same way, it differs from the one-moment bulk scheme in the assumed form of the size distribution function and in the process of its determination. The two-moment bulk scheme assumes a generalized gamma distribution as the size distribution function, and the size distribution itself is determined not only by the mass concentration but also by the number concentration. In this sense, the two-moment bulk scheme is more sophisticated than the one-moment bulk scheme. The spectral bin scheme is an intrinsically sophisticated method compared with the others: it explicitly predicts the size distribution. In exchange for this detailed expression of the size distribution, the spectral bin scheme requires a number of prognostic variables about four to five times larger than that of the other two schemes.

Although the three microphysical schemes treat both warm and ice phase clouds, the ice phase was not calculated because the temperature at the model top is greater than 273.15 K. In this case, the categorization of hydrometeors for each scheme is as follows. The one-moment and two-moment bulk schemes treat two types of hydrometeors, cloud droplets and raindrops, for the warm rain process. The spectral bin scheme treats only a single category of liquid water drop, which covers the particle sizes of both cloud droplets and raindrops. For the spectral bin scheme, the radius of cloud particles newly generated by nucleation is set to 3 μm (Suzuki et al. 2006). The size distribution of hydrometeors is discretized into 33 bins; this configuration has been established by several previous studies (e.g., Khain and Sednev 1996; Iguchi et al. 2008). The center of mass of the ith bin (m_i) is set from that of the (i-1)-th bin (m_i-1) as m_i = 1.874 m_i-1, where m_1 is the mass of a cloud particle whose radius is 3 μm (a short sketch of this bin arrangement is given below). All three schemes consider the generation of cloud droplets, condensation, evaporation, and sedimentation of cloud hydrometeors. Although the two-moment bulk scheme also considers the breakup of cloud droplets, this difference is minor based on sensitivity experiments examining the breakup process (figure not shown). Sedimentation is calculated by the first-order upwind scheme for all three schemes. Since the generation of new cloud droplets is one of the critical processes in this experiment, the difference in this process among the schemes should be noted. The mass of newly generated cloud droplets is calculated by saturation adjustment in the one-moment bulk scheme, which was also used in some one-moment bulk schemes in the RICO study.
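The bin arrangement described above is fully specified by the radius of the smallest bin and the mass ratio between adjacent bins. A minimal sketch follows; the liquid water density value and the variable names are our assumptions.

```python
import numpy as np

RHO_WATER = 1000.0        # kg m^-3, assumed density of liquid water
R1 = 3.0e-6               # radius of the smallest bin centre: 3 micrometres
N_BINS = 33               # number of mass bins
MASS_RATIO = 1.874        # m_i = 1.874 * m_{i-1}

m1 = (4.0 / 3.0) * np.pi * RHO_WATER * R1 ** 3     # mass of the smallest bin centre
masses = m1 * MASS_RATIO ** np.arange(N_BINS)      # bin-centre masses m_1 ... m_33
radii = (3.0 * masses / (4.0 * np.pi * RHO_WATER)) ** (1.0 / 3.0)

print(radii[0] * 1e6, radii[-1] * 1e6)             # ~3 um for the smallest bin, a few mm for the largest
```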
By contrast, in the two-moment bulk and spectral bin schemes, the mass of newly generated cloud droplets is calculated by a nucleation scheme. The number concentration of cloud droplets (N_c,nucl) generated by the nucleation process is calculated as follows (e.g., Pruppacher and Klett 1997):

$$ N_{\mathrm{c,nucl}} = N_0\,S_{\mathrm{w}}^{\,k} \qquad (4) $$

where S_w is the supersaturation over water. The constants N_0 and k are set as N_0 = 100 × 10^6 m^-3 and k = 0.462. This scheme was also used in several models used in the RICO study.

The growth of cloud droplets into raindrops is another key issue for this experiment, as well as the underlying creation process. In the bulk schemes, this growth is expressed as autoconversion and accretion. To investigate the strength of the impact of the autoconversion and accretion processes, we conducted the same simulation with autoconversion and accretion rates that were twice as large as (0.067-fold smaller than) the default values of the two-moment (one-moment) scheme. The autoconversion rate (P_auto) in the one-moment bulk scheme is calculated as in Tomita (2008), which was based on Berry (1968):

$$ P_{\mathrm{auto}}=\frac{1}{\rho}\left[16.7\times (\rho q_{\mathrm{c}})^{2}\left(5+\frac{3.6\times 10^{-5}\,N_{\mathrm{c,T08}}}{D_{\mathrm{d}}\,\rho q_{\mathrm{c}}}\right)^{-1}\right] \quad [\mathrm{kg\ kg^{-1}\ s^{-1}}] \qquad (5) $$

where D_d = 0.1456 − 5.964 × 10^-2 log(N_c,T08/2000), q_c is the cloud water mixing ratio, and N_c,T08 is the number concentration of cloud droplets. In Tomita (2008), N_c,T08 was set to 50 cm^-3, but in this study, N_c,T08 was set to 70 cm^-3 based on the experimental setup of the RICO study (van Zanten et al. 2011). Another autoconversion scheme, from Khairoutdinov and Kogan (2000), was implemented into the one-moment bulk scheme because it had performed better than the scheme of Berry (1968) in a GCM (Suzuki et al. 2004). The scheme was also adopted in some models used in the RICO study. P_auto in the Khairoutdinov and Kogan (2000) scheme is calculated as:

$$ P_{\mathrm{auto}}=1350\times q_{\mathrm{c}}^{2.47}\times N_{\mathrm{c,T08}}^{-1.79} \quad [\mathrm{kg\ kg^{-1}\ s^{-1}}] \qquad (6) $$

Using the schemes shown above, we attempted to reproduce the diversity of the LES results in the RICO study.

Basic performance of SCALE-LES

The validity of SCALE-LES was confirmed through comparison with a previous study (van Zanten et al. 2011). The results of the spectral bin scheme were regarded as a reference solution of SCALE-LES, because it is the most sophisticated scheme of the three. First, the results of the reference solution were compared with the previous study. As shown in Figs. 1 and 2, the results of our model (SCALE-LES) are within the range between the maximum and minimum of the intercomparison study in terms of temporal evolution and vertical profile for several quantities. This indicates that our model could reproduce the shallow cumulus simulated by the LES models used in the previous study.

Comparison of the time evolution between SCALE-LES and a previous intercomparison study. Time evolution of the a liquid water path, b vertically integrated turbulence kinetic energy, and c boundary layer top height averaged over the entire calculation domain simulated by (black line) SCALE-LES, with the spectral bin scheme.
The blue line, dark gray shading, and light gray shading indicate the median, range between the first and third quartiles, and range between the maximum and minimum values, respectively, of the previous intercomparison study (van Zanten et al. 2011) Comparison of vertical profiles between SCALE-LES and a previous intercomparison study. Horizontally averaged profile of the a liquid water potential temperature (θ l) and total water mixing ratio (q t), b liquid water mixing ratio (q l), c precipitation flux, d cloud fraction, e vertical velocity in cloud core, f variance of resolved w', g w'θ l ', h w'q t ', and i horizontal wind velocity, averaged during the last 4 h. The solid line indicates the results of SCALE-LES. The dashed line, heavy gray shading, and light gray shading indicate the median, range between the first and third quartiles, and range between the maximum and minimum values, respectively, of the previous intercomparison study (van Zanten et al. 2011) As well as the physical performance, the computational performance and the scalability of SCALE were investigated. The elapsed time for a time step (Δt) and the performance efficiency are shown in Table 1. The elapsed time and performance of SCALE-LES do not change when it is used with a large number of Message Passing Interface (MPI) processes (e.g., over 10,000 MPI processes). This indicates that SCALE-LES has excellent scalability and a reasonable performance in massive parallel computing. Table 1 Computational performance of SCALE-LES Impacts of cloud microphysical scheme on simulation results Second, we show the results of the same simulation (RICO) by using three microphysical schemes for investigating the impacts of the cloud microphysical scheme. Figure 3 indicates the differences among the three schemes. The precipitation flux simulated by the two-moment bulk scheme is small, and its peak value is distributed in the upper layer. By contrast, the precipitation flux by the one-moment bulk scheme is large and the peak value locates in the lower layer. The precipitation by the spectral bin scheme is between the other two (Fig. 3a). The trend is consistent with the previous study (Figure 6a of van Zanten et al. 2011). Comparison of the three cloud microphysical schemes. Horizontally averaged profile of the a precipitation flux, b liquid water mixing ratio, c cloud fraction, d total water mixing ratio and liquid water potential temperature, and time evolution of e the liquid water path and f cloud cover. The red, green, and sky-blue lines show results of the spectral bin scheme, the two-moment bulk scheme, and the one-moment bulk scheme, respectively, and the black line, dark gray shading, and light gray shading indicate the median, range between the first and third quartiles, and range between the maximum and minimum values, respectively, of the previous intercomparison study (van Zanten et al. 2011). The small figure in d indicates the extension of the profile of liquid water potential temperature below 1000 m The impacts of the different cloud microphysical schemes on the vertical distribution of the precipitation flux, the liquid water mixing ratio (q l), and cloud fraction are clearly shown in Fig. 3a–c. The q l in the one-moment bulk scheme is distributed in the lower layer (the peak value is represented at z ~ 900 m). Table 2 shows the surface precipitation averaged during the last 4 h for the three schemes. 
The precipitation amount in the one-moment bulk scheme is the largest, and the precipitation begins earliest among the three. The surface precipitation over 0.1 W m^-2 starts at 1.8 h in the one-moment bulk scheme, at 10.05 h in the two-moment bulk scheme, and at 2.63 h in the spectral bin scheme. The liquid water simulated in the two-moment bulk scheme is located in the upper layer (the peak value is located at z ~ 2400 m), and only trace amounts of precipitation reach the surface (Table 2). The cloud fraction of the one-moment scheme is located in the lower layer and is small in the upper layer. On the other hand, the positive cloud fraction in the two-moment bulk scheme reaches a higher layer. The spectral bin scheme simulates intermediate values between the other two, with the same trend as the RICO study. These results are consistent with the results of the previous study (van Zanten et al. 2011).

Table 2 Comparison of surface precipitation flux. The surface precipitation flux averaged over the whole calculation domain during the last 4 h of calculation

The large amount of precipitation in the one-moment bulk scheme carries a large amount of liquid water to the lower layer. As shown in Fig. 3d, the total water mixing ratio (q_t) of the lower layer (i.e., z < 1500 m) in the one-moment bulk scheme is larger than that in the others. Despite the difference in the vertical distribution of liquid water, the liquid water path (LWP) is nearly the same among the schemes (Fig. 3e). By contrast, the cloud cover of the one-moment scheme is smaller than that of the other schemes (Fig. 3f). This implies that the amount of cloud extending horizontally at the top of the boundary layer is small in the one-moment scheme, because liquid water is removed from the cloud layer earlier by precipitation. This is also found in Fig. 3c, where the peak of the cloud fraction does not appear around the top of the boundary layer (i.e., z ~ 2000 m).

The impacts of cloud microphysics appear not only in the precipitation and the vertical distribution of liquid water, but also in turbulent properties such as the turbulence kinetic energy (TKE), the variance of the resolved vertical velocity (w'), and the boundary layer top height. The boundary layer top height in the two-moment bulk scheme is the highest among the three schemes, whereas the one-moment bulk scheme simulates the lowest boundary layer height. This trend in boundary layer height continues to the end of the simulation, as shown in Fig. 4a, and the difference among the schemes gradually becomes larger. The variance of w' (Fig. 4c) and the TKE (Fig. 4d) in the two-moment bulk scheme show larger values in the upper layer. By contrast, those in the one-moment scheme are smaller in the upper layer. The vertically integrated TKE tends to be large in the two-moment bulk scheme and small in the one-moment bulk scheme (Fig. 4b).

Comparison of turbulence properties between the three schemes. Time evolution of the a boundary layer top height and b vertically integrated turbulence kinetic energy (grid resolved + sub-grid scale) and horizontally averaged profile of c the variance of resolved w' and d grid-resolved turbulence kinetic energy averaged during the last 4 h of calculation

From the results shown above, we concluded that the turbulent properties of the boundary layer as well as the cloud microphysical properties of cumulus are significantly affected by the different microphysical schemes when other components are unchanged.
These results support the proposal of van Zanten et al. (2011) that the use of different cloud microphysical schemes is one of the main reasons for the diversity among LES models. We have confidence in this conclusion because direct effects other than the cloud microphysical schemes were excluded in our experiments. Reasons for the differences among the results of the three schemes The impacts of the cloud microphysical schemes are clearly indicated by the differences in the boundary layer top height, vertical distribution of liquid water, and the precipitation flux, as shown in the previous section. The reasons for these differences will be discussed in this section. Since the impacts of each cloud microphysical scheme originate from the expression of the liquid water in each scheme, an examination of the tendency of q l and potential temperature (θ) in the cloud microphysical process is helpful for understanding the differences among the results. We first investigate the difference during t = 3−4 h of the calculation, because the effects of feedbacks of cloud physics to dynamics is not large and it is easy to interpret the difference. The tendencies at each height averaged during t = 3 h to t = 4 h are shown in Fig. 5a, b. The results from t = 0 h to t = 3 h were removed to ignore the effects of spin-up. The effect of sedimentation was omitted from the tendencies. The generation of liquid water at the cloud base in the one-moment bulk scheme is more active than that in the others (Fig. 5a). This is attributed to the difference in the mechanism for generating cloud particles. In the one-moment bulk scheme, newly generated cloud particles are calculated by saturation adjustment, whereas in the other schemes, they are calculated based on Eq. (4). Because the saturation adjustment does not permit supersaturation, it can generate larger amounts of liquid water at the cloud base than that can be generated by the scheme based on Eq. (4), which allows for supersaturation. This large amount of liquid water generation results in a large heat release at the cloud base (Fig. 5b) and subsequently a strong vertical velocity (Fig. 5c). Profiles and size distribution function averaged during t = 3~4 h. Horizontally averaged profile of the a tendency of the liquid water, b tendency of potential temperature, c variance of w', e precipitation flux, and f liquid water potential temperature averaged during t = 3 to 4 h. d Number density distribution (n (log m)) averaged over the whole cloudy grid at (solid) z = 1000 m and (dashed) z = 1500 m, where n and m are the number concentration and mass of liquid water, respectively. The red, green, and sky-blue lines show results of the spectral bin, two-moment bulk, and one-moment bulk schemes, respectively. The extended figures in b and f show the tendencies of potential temperature and liquid water potential temperature below 500 m, respectively In addition to the large amount of liquid water generated in the one-moment bulk scheme, it is clear from the particle size distribution that the conversion from cloud to rain is fast. Size distributions at the lower part of cloud (i.e., z = 1000 m) are shown in Fig. 5d. The generation of drizzle and raindrops (i.e., particles over 40 μm in radius) in the lower part of the cloud is active in the one-moment bulk scheme, whereas small cloud particles are dominant in the two-moment bulk schemes. 
This indicates that the conversion from cloud to rain in the one-moment bulk scheme is faster than that in the others, and larger numbers of raindrops are generated in the lower part of the cloud by the one-moment bulk scheme. The generation of large raindrops leads to fast terminal velocities of the hydrometeors and a large precipitation flux in the one-moment bulk scheme (Fig. 5e), which, in turn, leads to the large precipitation flux at the surface and the fast onset of surface precipitation. Figure 5a, b also shows that the peak negative tendency of q_l and θ near the cloud top (z ~ 1700 m), which corresponds to the evaporation of cloud droplets at the top of the boundary layer, is located in a lower layer in the one-moment scheme than in the others. This indicates that the large precipitation volume and large particle sizes in the one-moment bulk scheme restrain the cloud particles from reaching the upper layer. Therefore, the boundary layer top height of the one-moment bulk scheme becomes lower (Fig. 4a).

The large amount of raindrops in the one-moment scheme feeds back on the thermodynamic structure below the cloud. The water loading due to raindrops leads to active evaporative cooling below the cloud (seen in the negative tendencies of q_l and θ below the cloud in Fig. 5a, b and in the lower potential temperature below the cloud in Fig. 5f). This evaporative cooling stabilizes the boundary layer and suppresses the heat transfer from the ground to the upper part of the boundary layer, as shown by the fact that the positive tendency of θ in the one-moment bulk scheme does not reach z > 1200 m, whereas it reaches z ~ 1400 m in the other schemes. This supports Stevens et al. (1998), who indicated that precipitation suppresses cloud growth and entrainment. This suppression can limit the development of the boundary layer and results in a more stable boundary layer.

The difference between the two-moment bulk and the spectral bin schemes in the tendencies of q_l and θ is relatively small. However, the difference in the size distribution function between the two schemes is clearly apparent (Fig. 5d). A larger amount of raindrops (r > 100 μm) is present in the spectral bin scheme than in the two-moment bulk scheme. This indicates that the conversion from cloud droplets to raindrops is slow in the two-moment bulk scheme. Because the amount of large raindrops is small in the two-moment bulk scheme (shown as green lines in Fig. 5d), liquid water is carried to the upper layer more easily than in the other schemes. The liquid water evaporates at the top of the boundary layer. This is shown by the negative tendencies of q_l and θ being located in a higher layer in the two-moment bulk scheme than in the others (Fig. 5a, b). The presence of larger particles in the spectral bin scheme subsequently increases the rate of collisions with other particles, which leads to more rapid growth of particles and earlier precipitation in the spectral bin scheme than in the two-moment bulk scheme. This provides feedback due to the large amount of precipitation, the same feedback that occurs in the one-moment bulk scheme. With this feedback, the differences among the three schemes increase with the integration time. The difference in the boundary layer top height among the three schemes becomes larger as the integration time increases (Fig. 4a), and the difference in the liquid water potential temperature below clouds during the last 4 h (Fig. 3d) is larger than that at t = 3–4 h (Fig. 5f).
The differences in the tendencies of θ and q l shown above also become clear (figure not shown). In summary, the one-moment bulk scheme creates a larger amount of precipitation, because the saturation adjustment was adopted in the one-moment bulk scheme, and raindrops are subsequently produced by the fast conversion from cloud to raindrops. This results in earlier onset and larger amounts of surface precipitation. The evaporative cooling by raindrops, which occurs actively below the cloud, stabilized the boundary layer in the one-moment bulk scheme. The two-moment scheme creates raindrops more slowly, resulting in a smaller amount of precipitation compared with the other schemes. The smaller amount of precipitation results in an active latent heat release in the higher layer (shown in Fig. 5b) and a high boundary layer. The spectral bin scheme shows an intermediate rate of conversion and creates an intermediate amount of precipitation, with values between those of the two-moment and one-moment bulk schemes. The large number of cloud particles newly generated at the cloud bottom by saturation adjustment, the difference in the speed of conversion from cloud to rain, and the difference in the timing of the surface precipitation all originate from the variety of microphysical schemes used in this study. The differences in each scheme would not always appear when the results of other one-moment, two-moment, and spectral bin schemes are compared. By contrast, the evaporative cooling and subsequent stabilization of the sub-cloud layer and the suppression of the development of boundary layer height, which appeared in the results of the one-moment scheme used in this study, can be expected if the fast conversion from cloud to rain or the active generation of cloud particles at the cloud base occurs as a result of natural phenomena, regardless of which scheme is used. The active latent heat release and high boundary layer, which appeared in the two-moment scheme used in this study, can also be expected. The results are commonly expected for trade wind cumulus. Speed of conversion from cloud to rain The analyses in the previous sections hint that the performance of the bulk microphysical schemes could be modified by changing the nucleation schemes and the conversion speed from cloud to rain. In the bulk scheme, autoconversion and accretion are the main processes involved in the conversion from cloud to rain. To obtain information for the modification of the parameterization of these two processes, a comparison of the autoconversion and accretion rates among the three schemes is helpful. Figure 6 shows the autoconversion and accretion rates averaged during t = 3~4 h. The autoconversion rate of the spectral bin scheme is regarded as the rate of increasing mass of raindrops (defined as liquid particles whose radius is larger than 40 μm) generated by the coagulation between cloud particles (defined as liquid particles whose radius is smaller than 40 μm). The accretion rate of the spectral bin scheme is determined as the increasing rate of mass due to the coagulation between cloud particles and raindrops. Figure 6 shows that the autoconversion rate of the one-moment scheme is about 15 times larger than that of the spectral bin scheme. By contrast, the accretion rate of the two-moment scheme is 1.5~2 times smaller than that of the spectral bin scheme. Based on these results, the sensitivity of these two processes in each scheme was investigated and is discussed in the next section. 
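The Khairoutdinov and Kogan (2000) autoconversion rate of Eq. (6), together with the uniform rescaling used in the sensitivity runs (a factor of 2 for the two-moment scheme and 0.067 for the one-moment scheme), can be written down in a few lines. The sketch below is only illustrative: the droplet number is taken in cm^-3 and the mixing ratio in kg kg^-1, matching the values quoted in the text, and the sample cloud water value and function names are our assumptions.

```python
def autoconversion_kk2000(q_c, n_c):
    """Khairoutdinov and Kogan (2000) autoconversion rate of Eq. (6):
    P_auto = 1350 * q_c**2.47 * N_c**(-1.79)  [kg kg^-1 s^-1],
    with q_c in kg/kg and N_c in cm^-3."""
    return 1350.0 * q_c ** 2.47 * n_c ** (-1.79)

def scaled_rate(rate, factor):
    """Uniformly rescaled conversion rate, as used in the sensitivity runs
    (factor = 2.0 for the doubled rate, 0.067 for the reduced rate)."""
    return factor * rate

q_c = 0.5e-3      # 0.5 g/kg of cloud water (illustrative value)
n_c = 70.0        # droplet number concentration used in this study, cm^-3
p_auto = autoconversion_kk2000(q_c, n_c)
print(p_auto, scaled_rate(p_auto, 0.067), scaled_rate(p_auto, 2.0))
```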
Autoconversion and accretion rate averaged during t = 3~4 h. The horizontally averaged (solid line) autoconversion rate and (dashed line) accretion rate simulated by the spectral bin scheme (red), two-moment scheme (green), one-moment scheme (sky blue), and one-moment scheme with the Khairoutdinov and Kogan (2000) scheme (pink), averaged during t = 3~4 h Sensitivity of the autoconversion to the one-moment bulk scheme As shown above, the one-moment bulk scheme overestimates the conversion speed from cloud droplet to raindrop. We first investigated the difference between the original autoconversion scheme of Tomita (2008) and that of Khairoutdinov and Kogan (2000) shown in Eq. (6). The results of the one-moment bulk scheme with the Khairoutdinov and Kogan (2000) scheme are more similar to the results of the spectral bin scheme and the previous study than those calculated by Eq. (5) (Fig. 7). This indicates that the conversion speed calculated by Eq. (5) is too fast for shallow clouds because the validity of the one-moment bulk scheme with Eq. (5) was confirmed through experiments with deep convective clouds (Tomita 2008). Cloud microphysical properties simulated by Khairoutdinov and Kogan's auto conversion scheme. Horizontally averaged profile of the a precipitation flux, b total water mixing ratio (q t), c liquid water mixing ratio, and d cloud fraction averaged during the last 4 h. The red, green, sky-blue, and pink lines show results by the spectral bin scheme, the two-moment bulk scheme, the one-moment bulk scheme, and one-moment bulk scheme using the Khairoutdinov and Kogan (2000) autoconversion rate, respectively. The black line, dark gray shading, and light gray shading indicate the median, range between the first and third quartiles, and range between the maximum and minimum values, respectively, of the previous intercomparison study (van Zanten et al. 2011) In addition to the experiment with the Khairoutdinov and Kogan (2000) scheme, other sensitivity experiments were conducted by reducing the autoconversion rate. Based on the analyses in Fig. 6, a 0.067-fold (i.e., 1/15) smaller autoconversion rate than the default value was used for the sensitivity experiment. Another sensitivity experiment with an accretion rate that was 0.067-fold smaller than the default value was also conducted. Figure 8 shows the profile of q l and the precipitation flux, which were calculated using the smaller autoconversion and accretion rate. It can be seen that a small autoconversion rate results in liquid water locating in the upper layer, whereas it does not locate in the upper layer when the accretion rate was reduced to the same extent as the autoconversion rate. Hence, the autoconversion process is more sensitive to the production of raindrops. Cloud microphysical properties simulated in the sensitivity experiment with a varying conversion ratio in the one-moment scheme. Horizontally averaged profile of the a liquid water mixing ratio (q l) and b precipitation flux averaged during the last 4 h. The solid sky-blue, dashed sky-blue, and dotted sky-blue lines show the results using the one-moment bulk scheme with the default autoconversion rate, one-moment bulk scheme with an autoconversion rate 0.067-fold (i.e., 1/15) smaller, and one-moment bulk scheme with accretion ratio 0.067-fold smaller, respectively. 
The black line, dark gray shading, and light gray shading indicate the median, the range between the first and third quartiles, and the range between the maximum and minimum values, respectively, of a previous intercomparison study (van Zanten et al. 2011).

Sensitivity of the accretion in the two-moment bulk scheme

In contrast to the one-moment bulk scheme, the two-moment bulk scheme underestimates the conversion speed from cloud to rain. We speculate that a faster conversion from cloud droplets to raindrops in the two-moment scheme would produce results more similar to those of the spectral bin scheme. Sensitivity experiments were therefore conducted by changing both the autoconversion and accretion rates of the two-moment scheme. Based on the analyses in Fig. 6, a twofold increase in the accretion rate was used, and a twofold increase in the autoconversion rate was also tested. Figure 9 shows the results of these experiments. In the two-moment bulk scheme, both the precipitation rate and q_l increase when the autoconversion rate is doubled. However, doubling the autoconversion rate does not produce the peak values of q_l and precipitation flux in the lower layer that are simulated by the one-moment bulk scheme. By contrast, the precipitation flux simulated when the accretion rate is doubled is considerably larger than in both the default case and the case with a doubled autoconversion rate. This indicates that the accretion process was the major contributor to the creation of liquid water in the lower layer in the two-moment bulk scheme.

Fig. 9 Cloud microphysical properties simulated in the sensitivity experiment with a changing conversion rate in the two-moment scheme. The horizontally averaged profiles of the a liquid water mixing ratio (q_l) and b precipitation flux averaged during the last 4 h. The solid, dashed, and dotted green lines show the results using the two-moment bulk scheme with the default autoconversion rate, with an autoconversion rate twice as large as the default, and with an accretion rate twice as large as the default, respectively. The black line, dark gray shading, and light gray shading indicate the median, the range between the first and third quartiles, and the range between the maximum and minimum values, respectively, of a previous intercomparison study (van Zanten et al. 2011).

In short, the sensitivity of the simulated cloud microphysical properties to the accretion and autoconversion processes differs between the schemes.

Component level intercomparison

In this study, we investigated the exact effects of the various cloud microphysical schemes on model simulations of trade wind cumulus. If components other than the cloud microphysical scheme (e.g., the dynamical core, turbulence scheme, or advection scheme) were changed, the response of the cloud microphysical properties would also change, as suggested by van Zanten et al. (2011). It is necessary to conduct sensitivity experiments by changing each component while keeping all other components fixed, as we did when targeting the cloud microphysical scheme in this study. We refer to these sensitivity experiments as a "component level intercomparison". Grabowski (2014) suggested a piggyback approach to better understand the exact effects of cloud microphysics and the interaction between microphysics and dynamics. This approach can also be applied to the other components.
Using a component level intercomparison and the piggyback approach, we can discuss the effects of each component separately and obtain knowledge that would improve the parameterizations used in global-scale models or in regional models with coarse resolution.

In this study, we developed a large eddy simulation model (SCALE-LES) that excludes approximations and implicit schemes as much as possible. The results of a benchmark test indicated that SCALE-LES effectively reproduces the trade wind cumuli simulated in a previous LES intercomparison study (van Zanten et al. 2011). Using SCALE-LES, we investigated the impacts of cloud microphysical schemes on shallow cumulus and examined which processes were critical for the diversity observed in previous LES intercomparison studies. Three types of cloud microphysical scheme, the one-moment bulk scheme of Tomita (2008), the two-moment bulk scheme of Seiki and Nakajima (2014), and the one-moment spectral bin scheme of Suzuki et al. (2010), were implemented in SCALE-LES for the sensitivity experiments. The results indicated that the precipitation at the surface increases, in order, from the two-moment bulk scheme to the spectral bin scheme and the one-moment bulk scheme. Surface precipitation begins first in the one-moment bulk scheme, followed in order by the spectral bin and two-moment bulk schemes. These results support the suggestion of a previous intercomparison study (van Zanten et al. 2011). Our analyses confirmed that the differences between the schemes were derived mainly from the generation of cloud particles and the speed of conversion from cloud droplets to raindrops. The differences in the two processes originated from the differences in the microphysical schemes used. By contrast, the phenomena generated by this variety, i.e., evaporative cooling and stabilization below the cloud with a low boundary layer, or active latent heat release with a high boundary layer, can be expected regardless of the scheme used if the active conversion from cloud to rain and the active generation of new cloud particles occur in nature. The sensitivity of the cloud microphysical properties simulated by the bulk schemes to the autoconversion and accretion processes was also investigated. In the two-moment bulk scheme the simulated properties were more sensitive to the accretion process, whereas in the one-moment bulk scheme they were more sensitive to the autoconversion. These results indicate that the appropriate tuning method differs from scheme to scheme, and that a component level intercomparison is useful for identifying the appropriate method for each scheme.

Abbreviations

DYCOMS: Dynamics and Chemistry of Marine Stratocumulus
GCSS: GEWEX Cloud System Study
GEWEX: Global Energy and Water Exchange project
KiD: the kinematic driver
LES: large eddy simulation
LWP: liquid water path
MPI: Message Passing Interface
RICO: Rain in Cumulus over the Ocean
SCALE: Scalable Computing for Advanced Library and Environment
TKE: turbulence kinetic energy

References

Ackerman AS, van Zanten MC, Stevens B, Savic-Jovcic V, Bretherton CS, Chlond A, Golaz J-C, Jiang H, Khairoutdinov M, Krueger SK, Lewellen DC, Lock A, Moeng C-H, Nakamura K, Petters MD, Snider JR, Weinbrecht S, Zulauf M (2009) Large-eddy simulations of a drizzling, stratocumulus-topped marine boundary layer. Mon Weather Rev 137:1083–1110. doi:10.1175/2008MWR2582.1 Berry EX (1968) Modification of the warm rain process.
In: Proceeding of First Conference on Weather Modification., pp 81–85 Bretherton CS, Park S (2009) A new moist turbulence parameterization in the Community Atmosphere Model. J Clim 22:3422–3448. doi:10.1175/2008JCLI2556.1 Brown AR, Derbyshire SH, Mason PJ (1994) Large-eddy simulation of stable atmospheric boundary layers with a revised stochastic subgrid model. Q J R Meteorol Soc 120:1485–1512. doi:10.1002/qj.49712052004 Chepfer H, Bony S, Winker D, Chiriaco M, Dufresne JL, Sèze G (2008) Use of CALIPSO lidar observations to evaluate the cloudiness simulated by a climate model. Geophys Res Lett 35:1–6. doi:10.1029/2008GL034207 Considine G, Curry JA, Wielicki B (1997) Modeling cloud fraction and horizontal variability in marine boundary layer clouds. J Geophys Res 102:13517. doi:10.1029/97JD00261 Grabowski WW (2014) Extracting microphysical impacts in large-eddy simulations of shallow convection. J Atmos Sci 71:4493–4499. doi:10.1175/JAS-D-14-0231.1 Iguchi T, Nakajima T, Khain AP, Saito K, Takemura T, Suzuki K (2008) Modeling the influence of aerosols on cloud microphysical properties in the east Asia region using a mesoscale model coupled with a bin-based cloud microphysics scheme. J Geophys Res 113:D14215. doi:10.1029/2007JD009774 Kain, JS (2004) The Kain–Fritsch convective parameterization: An update. J Appl Meteorol. doi:10.1175/1520-0450(2004)043<0170:TKCPAU>2.0.CO;2 Khain AP, Sednev I (1996) Simulation of precipitation formation in the Eastern Mediterranean coastal zone using a spectral microphysics cloud ensemble model. Atmos Res 43:77–110. doi:10.1016/S0169-8095(96)00005-1 Khairoutdinov M, Kogan Y (2000) A new cloud physics parameterization in a large-eddy simulation model of marine stratocumulus. Mon Weather Rev 128:229–243. doi:10.1175/1520-0493(2000)128<0229:ANCPPI>2.0.CO;2 Naud CM, Del Genio AD, Bauer M, Kovari W (2010) Cloud vertical distribution across warm and cold fronts in cloudsat-CALIPSO data and a general circulation model. J Clim 23:3397–3415. doi:10.1175/2010JCLI3282.1 Nishizawa S, Yashiro H, Sato Y, Miyamoto Y, Tomita H (2015) Influence of grid aspect ratio on planetary boundary layer turbulence in large-eddy simulations. Geosci Model Dev Discuss 8:6021–6094. doi:10.5194/gmdd-8-6021-2015 Posselt R, Lohmann U (2008) Introduction of prognostic rain in ECHAM5: design and single column model simulations. Atmos Chem Phys 8:2949–2963. doi:10.5194/acpd-7-14675-2007 Pruppacher, HR, and Klett, JD, 1997, Microphysics of Clouds and Precipitation, 2nd ed., Kluwer Academic Publisher, Dordrecht, The Netherlands, 954pp. Randall DA, Coakley JA, Lenschow DH, Fairall CW, Kropfli RA (1984) Outlook for research on subtropical marine stratification clouds. Bull Am Meteorol Soc 65:1290–1301. doi:10.1175/1520-0477(1984)065<1290:OFROSM>2.0.CO;2 Savic-Jovcic V, Stevens B (2008) The structure and mesoscale organization of precipitating stratocumulus. J Atmos Sci 65:1587–1605. doi:10.1175/2007JAS2456.1 Scotti A, Meneveau C, Lilly DK (1993) Generalized Smagorinsky model for anisotropic grids. Phys Fluids A Fluid Dyn 5:2306–2308. doi:10.1063/1.858537 Seifert A, Beheng KD (2001) A double-moment parameterization for simulating autoconversion, accretion and self collection. Atmos Res 59–60:265–281. doi:10.1016/S0169-8095(01)00126-0 Seiki T, Nakajima T (2014) Aerosol effects of the condensation process on a convective cloud simulation. J Atmos Sci 71:833–853. 
doi:10.1175/JAS-D-12-0195.1 Shipway BJ, Hill AA (2012) Diagnosis of systematic differences between multiple parametrizations of warm rain microphysics using a kinematic framework. Q J R Meteorol Soc 138:2196–2211. doi:10.1002/qj.1913 Siebesma A, Bretherton CS, Brown A, Chlond A, Cuxart J, Duynkerke P, Jiang H, Khairoutdinov M, Lewellen D, Moeng C-H, Sanchez E, Stevens B, Stevens DE (2003) A large-eddy simulation intercomparison study of shallow cumulus convection. J Atmos Sci 60:1201–1219. doi:10.1175/1520-0469(2003)060<1201:AALESIS>2.0.CO;B2 Stevens B, Cotton WR, Feingold G, Moeng C-H (1998) Large-eddy simulations of strongly precipitating, shallow, stratocumulus-topped boundary layers. J Atmos Sci 55:3616–3638. doi:10.1175/1520-0469(1998)055<3616:LESOSP>2.0.CO;2 Stevens B, Moeng C-H, Ackerman AS, Bretherton CS, Chlond A, de Roode S, Edwards J, Golaz J-C, Jiang H, Khairoutdinov M, Kirkpatrick MP, Lewellen DC, Lock A, Müller F, Stevens DE, Whelan E, Zhu P (2005) Evaluation of large-eddy simulations via observations of nocturnal marine stratocumulus. Mon Weather Rev 133:1443–1462. doi:10.1175/MWR2930.1 Stocker, TF, Qin D, Plattner G-K, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, and Midgley PM (2013), IPCC, 2013: Climate Change 2013: The Physical Science Basis, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. Suzuki K, Nakajima T, Nakajima TY, Khain A (2006) Correlation pattern between effective radius and optical thickness of water clouds simulated by a spectral bin microphysics cloud model. SOLA 2:116–119. doi:10.2151/sola.2006-030 Suzuki K, Nakajima T, Nakajima TY, Khain AP (2010) A study of microphysical mechanisms for correlation patterns between droplet radius and optical thickness of warm clouds with a spectral bin microphysics cloud model. J Atmos Sci 67:1126–1141. doi:10.1175/2009JAS3283.1 Suzuki K, Nakajima T, Numaguti A, Takemura T, Kawamoto K, Higurashi A (2004) A study of the aerosol effect on a cloud field with simultaneous use of GCM modeling and satellite observation. J Atmos Sci 61:179–194. doi:10.1175/1520-0469(2004)061<0179:ASOTAE>2.0.CO;2 Tiedtke M (1993) Representation of clouds in large-scale models. Mon Weather Rev 121:3040–3061. doi:10.1175/1520-0493(1993)121<3040:ROCILS>2.0.CO;2 Tomita H (2008) New microphysical schemes with five and six categories by diagnostic generation of cloud ice. J Meteorol Soc Japan 86A:121–142. doi:10.2151/jmsj.86A.121 van Zanten MC, Stevens B, Nuijens L, Siebesma AP, Ackerman AS, Burnet F, Cheng A, Couvreux F, Jiang H, Khairoutdinov M, Kogan Y, Lewellen DC, Mechem D, Nakamura K, Noda A, Shipway BJ, Slawinska J, Wang S, Wyszogrodzki A (2011) Controls on precipitation and cloudiness in simulations of trade-wind cumulus as observed during RICO. J Adv Model Earth Syst 3:M06001. doi:10.1029/2011MS000056 Wang H, Feingold G (2009) Modeling mesoscale cellular structures and drizzle in marine stratocumulus. Part II: the microphysics and dynamics of the boundary region between open and closed cells. J Atmos Sci 66:3257–3275. doi:10.1175/2009JAS3120.1 Wang H, Feingold G, Wood R, Kazil J (2010) Modelling microphysical and meteorological controls on precipitation and cloud cellular structures in Southeast Pacific stratocumulus. Atmos Chem Phys 10:6347–6362. doi:10.5194/acp-10-6347-2010 Xue H, Feingold G, Stevens B (2008) Aerosol effects on clouds, precipitation, and the organization of shallow cumulus convection. J Atmos Sci 65:392–406. 
doi:10.1175/2007JAS2428.1 Yamaguchi T, Randall DA (2012) Cooling of entrained parcels in a large-eddy simulation. J Atmos Sci 69:1118–1136. doi:10.1175/JAS-D-11-080.1 Zalesak ST (1979) Fully multidimensional flux-corrected transport algorithms for fluids. J Comput Phys 31:335–362. doi:10.1016/0021-9991(79)90051-2

Part of the results was obtained using the K computer at the RIKEN Advanced Institute for Computational Science. This work was supported by FOCUS Establishing Supercomputing Center of Excellence. SCALE-LES was developed by Team-SCALE of the RIKEN Advanced Institute for Computational Science. The data from the GCSS intercomparison studies used in several figures were downloaded from http://www.knmi.nl/samenw/rico/.

RIKEN Advanced Institute for Computational Science, 7-1-26 Minatojima-Minami-machi, Chuo-ku, Kobe, Hyogo, 650-0047, Japan
Yousuke Sato, Seiya Nishizawa, Hisashi Yashiro, Yoshiaki Miyamoto, Yoshiyuki Kajikawa & Hirofumi Tomita
Correspondence to Yousuke Sato.

YS implemented the cloud microphysical schemes in the SCALE library, designed this study, conducted the numerical simulations, analyzed the results of the simulations, and developed the manuscript. SN, HY, and YM developed the main frames of the SCALE library. YK collaborated with the corresponding author in the creation of the manuscript. HT proposed the development of the SCALE library. All authors read and approved the final manuscript.

Sensitivity of the ratio of the physical time step to the dynamical time step

In this study, the time steps of the dynamics (Δt_dyn), cloud microphysics (Δt_microphy), and other physics (Δt_phy) were set to 0.05, 0.15, and 0.5 s, respectively. The time step of the dynamics was determined by the Courant–Friedrichs–Lewy (CFL) condition for the acoustic wave. The time step for physics other than cloud microphysics was set to 10 × Δt_dyn based on the sensitivity experiment of Nishizawa et al. (2015). Δt_microphy was determined from sensitivity experiments examining the ratio of Δt_microphy to Δt_dyn. The results of the sensitivity experiments examining the ratio N_DT (= Δt_microphy/Δt_dyn) are shown in this section. For this sensitivity test, the experimental setup of the first research flight of the second Dynamics and Chemistry of Marine Stratocumulus field study (DYCOMS-II RF01) (Stevens et al. 2005) was used. In this study, the experimental setup of the RICO case was used in most experiments, but the effects of the acoustic wave appeared more clearly in the DYCOMS-II case (figure not shown). Consequently, the ratio N_DT was determined using the DYCOMS-II RF01 experimental setup. For this sensitivity experiment, Δt_dyn was set to 0.01 s and N_DT was swept from 1 to 30 (i.e., Δt_microphy was set from 0.01 to 0.3 s). The results indicated that the effects of N_DT were mostly small (figures not shown), except for the TKE budget. Figure 10 shows the profiles of the buoyancy production term, shear production term, and transport term of TKE. The transport term was quite noisy when N_DT was large (N_DT > 10). Noise is also present in the shear production term. This noise originated from the acoustic waves generated at every time step of the microphysical processes. When N_DT was large, the variation of θ at each step of the microphysical processes was also large. The large variation in θ generates an acoustic wave, which produced the noise.
This indicates that N_DT should be smaller than 5 to render the effects of the acoustic wave negligible. Based on these results, and to be safe, N_DT was set to 3 for the RICO experiment (i.e., Δt_microphy = 3 × Δt_dyn).

Fig. 10 Cloud microphysical properties in the sensitivity experiment of N_DT (= Δt_microphy/Δt_dyn). Hourly averaged profiles of the a buoyancy production, b shear production, and c transport terms averaged over the entire calculation domain during the last hour of each simulation. The red, green, pink, black, and orange lines represent the results of N_DT = 1, 5, 10, 20, and 30, respectively. The blue line, dark gray shading, and light gray shading indicate the mean, the standard deviation, and the range between the maximum and minimum values, respectively, from a previous intercomparison study (Stevens et al. 2005).

Numerical filter

This section describes the sensitivity experiments on the strength of the numerical diffusion. The nth-order super viscosity/diffusion is defined as: $$ \begin{array}{l}\frac{\partial}{\partial x}\left[\nu \rho \frac{\partial^{n-1}f}{\partial x^{n-1}}\right],\kern1em f\in \left\{u,v,w,\theta, q_{\mathrm{s}}\right\},\\ \frac{\partial}{\partial x}\left[\nu \frac{\partial^{n-1}\rho}{\partial x^{n-1}}\right].\end{array} $$ The coefficient ν is written as: $$ \nu = (-1)^{\frac{n}{2}+1}\gamma \frac{\Delta x^n}{2^n\Delta t} $$ where γ is a non-dimensional coefficient, Δx is the grid spacing, and n is the order of the numerical filter. This study adopted a fourth-order numerical diffusion (i.e., n = 4) to dampen artificial noise; n was set to 4 for computational efficiency. To investigate the sensitivity to the strength of the numerical diffusion, simulations of the RICO case were conducted while changing the strength of the numerical diffusion. Based on the rough estimate of Nishizawa et al. (2015), the non-dimensional coefficient γ should be smaller than O(10⁻³)–O(10⁰) for n = 4. Thus, γ was varied from 10⁻³ to 10⁻⁷. The strength of the numerical diffusion for each γ is shown in Table 3. The two-moment bulk scheme was used for this sensitivity experiment. As a result of the numerical diffusion, one-dimensional sinusoidal two-grid noise decays to 1/e of its amplitude in 1/γ time steps.

Table 3 List of the non-dimensional coefficients for the sensitivity experiment

Figure 11 shows the results of the sensitivity experiment. The vertically integrated TKE increased as the numerical filter was weakened, although the TKE of all experiments remained within the range between the maximum and minimum of the previous studies. This is because a strong numerical diffusion efficiently dampens small-scale turbulence. Like the TKE, the liquid water mixing ratio of all the sensitivity experiments was also located within the range of the previous intercomparison studies regardless of the strength of the numerical diffusion. However, the cloud fraction simulated with a small numerical diffusion (i.e., γ ≤ 10⁻⁵) was much larger than that of the intercomparison study and fell completely outside the range of the intercomparison studies. The cloud fraction, core fraction, and variance of w′ were also outside the range of the intercomparison studies when γ was smaller than 10⁻⁵ (figure not shown). The temporal evolution of the cloud fraction indicates that the large cloud fraction obtained with small numerical diffusion seems to originate from artificial noise.
However, it is difficult to confirm that the large cloud fraction is in fact caused by artificial noise. To do so, the same experiment would have to be conducted at a finer grid resolution so that the wave field could be separated into physically meaningful waves and artificial noise. Computational limitations prevented us from conducting such experiments. Consequently, γ was determined using the results of the intercomparison study (van Zanten et al. 2011) as a reference solution, and γ was set to 10⁻³ (i.e., the strength of the numerical diffusion is 1.25 × 10⁵ m⁴ s⁻¹).

Fig. 11 Results of the sensitivity experiment on the strength of the numerical diffusion. Time evolution of the a vertically integrated turbulence kinetic energy averaged over the whole calculation domain, and profiles of the b liquid water mixing ratio and c cloud fraction averaged over the whole calculation domain during the last 4 h. The red, green, and sky-blue lines show the results with the coefficient γ set to 10⁻³, 10⁻⁵, and 10⁻⁷, respectively. The black line, thick gray shading, and thin gray shading indicate the median, the range between the first and third quartiles, and the range between the maximum and minimum values of a previous intercomparison study, respectively (van Zanten et al. 2011).

Sato, Y., Nishizawa, S., Yashiro, H. et al. Impacts of cloud microphysics on trade wind cumulus: which cloud microphysics processes contribute to the diversity in a large eddy simulation?. Prog. in Earth and Planet. Sci. 2, 23 (2015). https://doi.org/10.1186/s40645-015-0053-6
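As a quick cross-check of the filter setting quoted above, the coefficient ν implied by γ = 10⁻³ can be recomputed from the formula ν = γ Δx^n / (2^n Δt) given in the numerical-filter appendix. The horizontal grid spacing of 100 m and the use of the dynamical time step of 0.05 s in the denominator are assumptions on my part (they are not restated in the appendix), chosen because they reproduce the quoted value exactly.

# Sketch: fourth-order numerical diffusion coefficient from the appendix formula
#   nu = gamma * dx**n / (2**n * dt)
# dx = 100 m and dt = 0.05 s are assumed values (not restated in the appendix).
gamma = 1.0e-3
n = 4
dx = 100.0   # m (assumed horizontal grid spacing)
dt = 0.05    # s (the dynamical time step quoted in the text)

nu = gamma * dx**n / (2**n * dt)
print(nu)    # 125000.0, i.e. 1.25e5 m^4 s^-1, matching the value quoted above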
In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow and the effect size smaller than in the pilot experiment; but by buying my own supply & using powder I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg) and so the effect size larger than in the pilot experiment. Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try. This calculation - reaping only \frac{7}{9} of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mice study that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit. How much of the nonmedical use of prescription stimulants documented by these studies was for cognitive enhancement? Prescription stimulants could be used for purposes other than cognitive enhancement, including for feelings of euphoria or energy, to stay awake, or to curb appetite. Were they being used by students as smart pills or as "fun pills," "awake pills," or "diet pills"? Of course, some of these categories are not entirely distinct. For example, by increasing the wakefulness of a sleep-deprived person or by lifting the mood or boosting the motivation of an apathetic person, stimulants are likely to have the secondary effect of improving cognitive performance. Whether and when such effects should be classified as cognitive enhancement is a question to which different answers are possible, and none of the studies reviewed here presupposed an answer. Instead, they show how the respondents themselves classified their reasons for nonmedical stimulant use. I am not alone in thinking of the potential benefits of smart drugs in the military. In their popular novel Ghost Fleet: A Novel of the Next World War, P.W. Singer and August Cole tell the story of a future war using drug-like nootropic implants and pills, such as Modafinil. DARPA is also experimenting with neurological technology and enhancements such as the smart drugs discussed here. As demonstrated in the following brain initiatives: Targeted Neuroplasticity Training (TNT), Augmented Cognition, and High-quality Interface Systems such as their Next-Generational Nonsurgical Neurotechnology (N3). More recently, the drug modafinil (brand name: Provigil) has become the brain-booster of choice for a growing number of Americans. 
According to the FDA, modafinil is intended to bolster "wakefulness" in people with narcolepsy, obstructive sleep apnea or shift work disorder. But when people without those conditions take it, it has been linked with improvements in alertness, energy, focus and decision-making. A 2017 study found evidence that modafinil may enhance some aspects of brain connectivity, which could explain these benefits. Last spring, 100 people showed up at a Peak Performance event where psychedelic psychologist James Fadiman said the key to unleashing the cognition-enhancing effects of LSD — which he listed as less anxiety, better focus, improved sleep, greater creativity — was all in the dosage. He recommended a tenth of a "party dose" — enough to give you "the glow" and enhance your cognitive powers without "the trip." 3 days later, I'm fairly miserable (slept poorly, had a hair-raising incident, and a big project was not received as well as I had hoped), so well before dinner (and after a nap) I brew up 2 wooden-spoons of Malaysia Green (olive-color dust). I drank it down; tasted slightly better than the first. I was feeling better after the nap, and the kratom didn't seem to change that. Many people quickly become overwhelmed by the volume of information and number of products on the market. Because each website claims its product is the best and most effective, it is easy to feel confused and unable to decide. Smart Pill Guide is a resource for reliable information and independent reviews of various supplements for brain enhancement. So, I thought I might as well experiment since I have it. I put the 23 remaining pills into gel capsules with brown rice as filling, made ~30 placebo capsules, and will use the one-bag blinding/randomization method. I don't want to spend the time it would take to n-back every day, so I will simply look for an effect on my daily mood/productivity self-rating; hopefully Noopept will add a little on average above and beyond my existing practices like caffeine+piracetam (yes, Noopept may be as good as piracetam, but since I still have a ton of piracetam from my 3kg order, I am primarily interested in whether Noopept adds onto piracetam rather than replaces). 10mg doses seem to be on the low side for Noopept users, weakening the effect, but on the other hand, if I were to take 2 capsules at a time, then I'd halve the sample size; it's not clear what is the optimal tradeoff between dose and n for statistical power. The search to find more effective drugs to increase mental ability and intelligence capacity with neither toxicity nor serious side effects continues. But there are limitations. Although the ingredients may be separately known to have cognition-enhancing effects, randomized controlled trials of the combined effects of cognitive enhancement compounds are sparse. Because smart drugs like modafinil, nicotine, and Adderall come with drawbacks, I developed my own line of nootropics, including Forbose and SmartMode, that's safe, widely available, and doesn't require a prescription. Forskolin, found in Forbose, has been a part of Indian Ayurvedic medicine for thousands of years. In addition to being fun to say, forskolin increases cyclic adenosine monophosphate (cAMP), a molecule essential to learning and memory formation. [8] Flow diagram of cognitive neuroscience literature search completed July 2, 2010. Search terms were dextroamphetamine, Aderrall, methylphenidate, or Ritalin, and cognitive, cognition, learning, memory, or executive function, and healthy or normal. 
Stages of subsequent review used the information contained in the titles, abstracts, and articles to determine whether articles reported studies meeting the inclusion criteria stated in the text. A 2015 review of various nutrients and dietary supplements found no convincing evidence of improvements in cognitive performance. While there are "plausible mechanisms" linking these and other food-sourced nutrients to better brain function, "supplements cannot replicate the complexity of natural food and provide all its potential benefits," says Dr. David Hogan, author of that review and a professor of medicine at the University of Calgary in Canada. A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes." Using prescription ADHD medications, racetams, and other synthetic nootropics can boost brain power. Yes, they can work. Even so, we advise against using them long-term since the research on their safety is still new. Use them at your own risk. For the majority of users, stick with all natural brain supplements for best results. What is your favorite smart pill for increasing focus and mental energy? Tell us about your favorite cognitive enhancer in the comments below. Attention-deficit/hyperactivity disorder (ADHD), a behavioral syndrome characterized by inattention and distractibility, restlessness, inability to sit still, and difficulty concentrating on one thing for any period of time. ADHD most commonly occurs in children, though an increasing number of adults are being diagnosed with the disorder. ADHD is three times more… The Stroop task tests the ability to inhibit the overlearned process of reading by presenting color names in colored ink and instructing subjects to either read the word (low need for cognitive control because this is the habitual response to printed words) or name the ink color (high need for cognitive control). Barch and Carter (2005) administered this task to normal control subjects on placebo and d-AMP and found speeding of responses with the drug. However, the speeding was roughly equivalent for the conditions with low and high cognitive control demands, suggesting that the observed facilitation may not have been specific to cognitive control. 
The question of whether stimulants are smart pills in a pragmatic sense cannot be answered solely by consideration of the statistical significance of the difference between stimulant and placebo. A drug with tiny effects, even if statistically significant, would not be a useful cognitive enhancer for most purposes. We therefore report Cohen's d effect size measure for published studies that provide either means and standard deviations or relevant F or t statistics (Thalheimer & Cook, 2002). More generally, with most sample sizes in the range of a dozen to a few dozen, small effects would not reliably be found. But though it's relatively new on the scene with ambitious young professionals, creatine has a long history with bodybuilders, who have been taking it for decades to improve their muscle #gains. In the US, sports supplements are a multibillion-dollar industry – and the majority contain creatine. According to a survey conducted by Ipsos Public Affairs last year, 22% of adults said they had taken a sports supplement in the last year. If creatine was going to have a major impact in the workplace, surely we would have seen some signs of this already. "In 183 pages, Cavin Balaster's new book, How to Feed A Brain provides an outline and plan for how to maximize one's brain performance. The "Citation Notes" provide all the scientific and academic documentation for further understanding. The "Additional Resources and Tips" listing takes you to Cavin's website for more detail than could be covered in 183 pages. Cavin came to this knowledge through the need to recover from a severe traumatic brain injury and he did not keep his lessons learned to himself. This book is enlightening for anyone with a brain. We all want to function optimally, even to take exams, stay dynamic, and make positive contributions to our communities. Bravo Cavin for sharing your lessons learned!" I noticed on SR something I had never seen before, an offer for 150mgx10 of Waklert for ฿13.47 (then, ฿1 = $3.14). I searched and it seemed Sun was somehow manufacturing armodafinil! Interesting. Maybe not cost-effective, but I tried out of curiosity. They look and are packaged the same as the Modalert, but at a higher price-point: 150 rather than 81 rupees. Not entirely sure how to use them: assuming quality is the same, 150mg Waklert is still 100mg less armodafinil than the 250mg Nuvigil pills. Most people would describe school as a place where they go to learn, so learning is an especially relevant cognitive process for students to enhance. Even outside of school, however, learning plays a role in most activities, and the ability to enhance the retention of information would be of value in many different occupational and recreational contexts. The nonmedical use of substances—often dubbed smart drugs—to increase memory or concentration is known as pharmacological cognitive enhancement (PCE), and it rose in all 15 nations included in the survey. The study looked at prescription medications such as Adderall and Ritalin—prescribed medically to treat attention deficit hyperactivity disorder (ADHD)—as well as the sleep-disorder medication modafinil and illegal stimulants such as cocaine. With all these studies pointing to the nootropic benefits of some essential oils, it can logically be concluded then that some essential oils can be considered "smart drugs." 
However, since essential oils have so much variety and only a small fraction of this wide range has been studied, it cannot be definitively concluded that absolutely all essential oils have brain-boosting benefits. The connection between the two is strong, however. The pill delivers an intestinal injection without exposing the drug to digestive enzymes. The patient takes what seems to be an ordinary capsule, but the "robotic" pill is a sophisticated device which incorporates a number of innovations, enabling it to navigate through the stomach and enter the small intestine. The Rani Pill™ goes through a transformation and positions itself to inject the drug into the intestinal wall. "Piracetam is not a vitamin, mineral, amino acid, herb or other botanical, or dietary substance for use by man to supplement the diet by increasing the total dietary intake. Further, piracetam is not a concentrate, metabolite, constituent, extract or combination of any such dietary ingredient. [...] Accordingly, these products are drugs, under section 201(g)(1)(C) of the Act, 21 U.S.C. § 321(g)(1)(C), because they are not foods and they are intended to affect the structure or any function of the body. Moreover, these products are new drugs as defined by section 201(p) of the Act, 21 U.S.C. § 321(p), because they are not generally recognized as safe and effective for use under the conditions prescribed, recommended, or suggested in their labeling."[33]
Introduction to Projective Geometry Solutions
5.4 The Invariance of the Classification

Please read this introduction first before looking through the solutions. Here's a quick index to all the problems in this section.

1. If $X_1$ and $X_2$ are linearly independent vectors, is it necessarily true that $BX_1$, $BX_2$ are linearly independent?

This one is easy to contradict. If either or both $X_1$ and $X_2$ are in the nullspace of $B$, then for a non-zero value of at least one $c_i$ the equation $c_1BX_1 + c_2BX_2 = 0$ will hold. Hence the given statement is not true.

2. Do $A$ and $B^{-1}AB$ always have the same characteristic values?

As $(B^{-1})^{-1} = B$, we can rewrite the second matrix as $B^{-1}A(B^{-1})^{-1}$. Writing $B^{-1}$ as $D$, the second matrix becomes $DAD^{-1}$. But from Theorem 1, the matrices $A$ and $DAD^{-1}$ must have the same characteristic values. Hence $A$ and $B^{-1}AB$ have the same characteristic values.

3. Given $A = \begin{pmatrix}1 & 0 & 0 \\ 2 & 2 & -1 \\ 2 & 1 & 0\end{pmatrix}$ and $B = \begin{pmatrix}-1 & 0 & 1 \\ 4 & 1 & -3 \\ 2 & 1 & -2\end{pmatrix}$. Verify that $A$ and $BAB^{-1}$ have the same characteristic values. Verify that $A - kI$ and $BAB^{-1} -kI$ have the same rank for each characteristic value. Find the characteristic vectors of $BAB^{-1}$, and verify that multiplying each on the left by $B^{-1}$ yields a characteristic vector for $A$.

$$BAB^{-1} = \begin{pmatrix} 3 & 1 & 0 \\ -4 & -1 & 0 \\ -2 & -1 & 1 \end{pmatrix}$$

Both $A$ and $BAB^{-1}$ have a characteristic value of 1 repeated 3 times. The rank of both $A - I$ and $BAB^{-1} - I$ is 1. The characteristic vectors of $BAB^{-1}$ are $(1, -2, 0)$ and $(0, 0, 1)$. Multiplying them on the left by $B^{-1}$ we get $(1, -2, 0)$ and $(-1, 1, -1)$, which are characteristic vectors of $A$ (multiplying them by $A$ just scales each vector).

4. Work Exercise $3$, given $A = \begin{pmatrix}-3 & -2 & 2 \\ 14 & 7 & -5 \\ 10 & 4 & -2\end{pmatrix}$ and $B = \begin{pmatrix}1 & 2 & 0 \\ -1 & 1 & 1 \\ -2 & 1 & 2\end{pmatrix}$

$$BAB^{-1} = \begin{pmatrix} 17 & -36 & 14 \\ 18 & -37 & 14 \\ 27 & -57 & 22 \end{pmatrix}$$

Both $A$ and $BAB^{-1}$ have the characteristic values 2, -1 and 1. The rank of both $A - kI$ and $BAB^{-1} - kI$ is 2 for each characteristic value. The characteristic vectors of $BAB^{-1}$ are $(2, 2, 3)$, $(7, 7, 10)$ and $(5, 6, 9)$. Multiplying them on the left by $B^{-1}$ we get $(0, 1, 1)$, $(-1, 3, 2)$ and $(-1, 4, 2)$, which are characteristic vectors of $A$ (multiplying them by $A$ just scales each vector).

5. Work Exercise $3$, given $A = \begin{pmatrix}2 & 1 & -2 \\ -2 & 5 & -7 \\ -2 & 2 & -3\end{pmatrix}$ and $B = \begin{pmatrix}1 & 0 & 1 \\ 0 & 2 & -1 \\ 1 & 1 & 0\end{pmatrix}$

$$BAB^{-1} = \begin{pmatrix} -7 & -2 & 7 \\ -12 & -1 & 10 \\ -12 & -3 & 12 \end{pmatrix}$$

Both $A$ and $BAB^{-1}$ have the characteristic values 3, -1 and 2. The rank of both $A - kI$ and $BAB^{-1} - kI$ is 2 for each characteristic value. The characteristic vectors of $BAB^{-1}$ are $(1, 2, 2)$, $(5, 6, 6)$ and $(1, 6, 3)$. Multiplying them on the left by $B^{-1}$ we get $(3, 3, 0)$, $(-1, -5, -4)$ and $(-2, 8, 4)$, which are characteristic vectors of $A$ (multiplying them by $A$ just scales each vector).
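These verifications are easy to reproduce numerically. The sketch below is a minimal NumPy check of Exercise 3 using the matrices $A$ and $B$ given above; the variable names are mine and the script is only an illustration, not part of the original solutions.

import numpy as np

A = np.array([[1, 0, 0],
              [2, 2, -1],
              [2, 1, 0]], dtype=float)
B = np.array([[-1, 0, 1],
              [4, 1, -3],
              [2, 1, -2]], dtype=float)

C = B @ A @ np.linalg.inv(B)               # BAB^{-1}

# Same characteristic values (here 1, repeated three times, up to round-off)
print(np.linalg.eigvals(A))
print(np.linalg.eigvals(C))

# A - I and BAB^{-1} - I have the same rank
I = np.eye(3)
print(np.linalg.matrix_rank(A - I), np.linalg.matrix_rank(C - I))   # 1 1

# A characteristic vector of BAB^{-1}, multiplied on the left by B^{-1},
# gives a characteristic vector of A
v = np.array([1.0, -2.0, 0.0])             # eigenvector of C for value 1
w = np.linalg.inv(B) @ v
print(np.allclose(A @ w, w))               # True

The same last three lines can be repeated for Exercises 4 and 5 with their respective matrices and characteristic vectors.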
Eigenvalues and Eigenvectors of Linear Transformations

Let $T:V \to V$ be a linear transformation from a vector space $V$ to itself. We say that $\lambda$ is an eigenvalue of $T$ if there exists a nonzero vector $\mathbf{v}\in V$ such that $T(\mathbf{v})=\lambda \mathbf{v}$. For each eigenvalue $\lambda$ of $T$, nonzero vectors $\mathbf{v}$ satisfying $T(\mathbf{v})=\lambda \mathbf{v}$ are called eigenvectors corresponding to $\lambda$.

Let $T$ be the linear transformation from the vector space $\R^2$ to $\R^2$ itself given by \[T\left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right)= \begin{bmatrix} 3x_1+x_2 \\ x_1+3x_2 \end{bmatrix}.\] (a) Verify that the vectors \[\mathbf{v}_1=\begin{bmatrix} 1 \\ 1 \end{bmatrix} \text{ and } \mathbf{v}_2=\begin{bmatrix} 1 \\ -1 \end{bmatrix}\] are eigenvectors of the linear transformation $T$, and conclude that $B=\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis of $\R^2$ consisting of eigenvectors. (b) Find the matrix of $T$ with respect to the basis $B=\{\mathbf{v}_1, \mathbf{v}_2\}$.

Let $P_1$ be the vector space of all real polynomials of degree $1$ or less. Consider the linear transformation $T: P_1 \to P_1$ defined by $T(ax+b)=(3a+b)x+a+3b$ for any $ax+b\in P_1$. (a) With respect to the basis $B=\{1, x\}$, find the matrix of the linear transformation $T$. (b) Find a basis $B'$ of the vector space $P_1$ such that the matrix of $T$ with respect to $B'$ is a diagonal matrix. (c) Express $f(x)=5x+3$ as a linear combination of basis vectors of $B'$.

Let $\mathrm{P}_2$ denote the vector space of polynomials of degree $2$ or less, and let $T : \mathrm{P}_2 \rightarrow \mathrm{P}_2$ be the derivative linear transformation, defined by $T( ax^2 + bx + c ) = 2ax + b$. Is $T$ diagonalizable? If so, find a diagonal matrix which represents $T$. If not, explain why not.

Let $V$ be the real vector space of all real sequences $(a_i)_{i=1}^{\infty}=(a_1, a_2, \dots)$. Let $U$ be a subspace of $V$ defined by \[U=\{(a_i)_{i=1}^{\infty}\in V \mid a_{n+2}=2a_{n+1}+3a_{n} \text{ for } n=1, 2,\dots \}.\] Let $T$ be the linear transformation from $U$ to $U$ defined by \[T\big((a_1, a_2, \dots)\big)=(a_2, a_3, \dots). \] (a) Find the eigenvalues and eigenvectors of the linear transformation $T$. (b) Use the result of (a) to find a sequence $(a_i)_{i=1}^{\infty}$ satisfying $a_1=2, a_2=7$.

Let $T:\R^2 \to \R^2$ be a linear transformation and let $A$ be the matrix representation of $T$ with respect to the standard basis of $\R^2$. Prove that the following two statements are equivalent. (a) There are exactly two distinct lines $L_1, L_2$ in $\R^2$ passing through the origin that are mapped onto themselves: $T(L_1)=L_1 \text{ and } T(L_2)=L_2$. (b) The matrix $A$ has two distinct nonzero real eigenvalues.

Let $V$ be the real vector space of all real sequences $(a_i)_{i=1}^{\infty}=(a_1, a_2, \dots)$. Let $U$ be the subspace of $V$ consisting of all real sequences that satisfy the linear recurrence relation $a_{k+2}-5a_{k+1}+3a_{k}=0$ for $k=1, 2, \dots$. Let $T$ be the linear transformation from $U$ to $U$ defined by \[T\big((a_1, a_2, \dots)\big)=(a_2, a_3, \dots). \] Let $B=\{\mathbf{u}_1, \mathbf{u}_2\}$ be a basis of $U$, where \begin{align*} \mathbf{u}_1&=(1, 0, -3, -15, -66, \dots)\\ \mathbf{u}_2&=(0, 1, 5, 22, 95, \dots). \end{align*} Let $A$ be the matrix representation of the linear transformation $T: U \to U$ with respect to the basis $B$. (a) Find the eigenvalues and eigenvectors of $T$. (b) Use the result of (a) to find a sequence $(a_i)_{i=1}^{\infty}$ satisfying the linear recurrence relation $a_{k+2}-5a_{k+1}+3a_{k}=0$ and the initial condition $a_1=1, a_2=1$. (c) Find the formula for the sequences $(a_i)_{i=1}^{\infty}$ satisfying the linear recurrence relation $a_{k+2}-5a_{k+1}+3a_{k}=0$ and express it using $a_1, a_2$.
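For the first problem in this list, $T$ acts on $\R^2$ through the matrix with rows $(3, 1)$ and $(1, 3)$ in the standard basis, so the claims can be checked numerically. The sketch below is only an illustration (the variable names are mine); it confirms the two eigenvalues and shows that the matrix of $T$ with respect to a basis of eigenvectors is diagonal.

import numpy as np

T = np.array([[3.0, 1.0],
              [1.0, 3.0]])       # matrix of T in the standard basis

vals, vecs = np.linalg.eig(T)
print(vals)                      # the eigenvalues 4 and 2 (order may vary)

# Matrix of T with respect to the basis of eigenvectors (columns of `vecs`)
P = vecs
T_B = np.linalg.inv(P) @ T @ P
print(np.round(T_B, 10))         # diagonal matrix with the eigenvalues on the diagonal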
To understand anomalies, consider the ground state of fermions. One useful way to imagine the ground state of fermions is as a Dirac sea. In the ground state configuration all negative energy states are filled and no positive energy states are filled, i.e. no electrons or positrons exist. A positron would be a hole in the filled-up Dirac sea of negative energy states. Now, the anomaly comes about when we investigate what happens to this ground state as soon as the fermions start interacting with a gauge field. (Source: Effects of Dirac's Negative Energy Sea on Quantum Numbers by R. Jackiw)

As a result of the presence of the gauge field the energy levels shift. Some states get lifted up from the Dirac sea and become positive energy states. Other empty states that formerly had positive energy now become negative energy states, and thus holes get pushed down into the Dirac sea. The ground state then is no longer empty. Instead, we have electrons and positrons (= holes in the Dirac sea). This is what we call an anomaly. A bit more precisely, the anomaly is that we would expect a quantity to be conserved, but upon closer inspection we find that it isn't. Instead we can observe the production of the quantity.

"we must assign physical reality to Dirac's negative energy sea, because it produces the chiral anomaly, whose effects are experimentally observed, principally in the decay of the neutral pion to two photons, but there are other physical consequences as well." R. Jackiw

For more on this intuitive perspective of anomalies, see Axial anomaly, Dirac sea, and the chiral magnetic effect by Dmitri E. Kharzeev.

A classical theory possesses a symmetry if the action $S(\phi)$ is unchanged by a transformation $\phi \to \delta \phi$. In a quantum theory, however, we have a symmetry if the path integral $\int D \phi e^{iS(\phi)}$ is invariant under a given transformation $\phi \to \delta \phi$. The key observation is now that invariance of the action $S(\phi)$ does not necessarily imply invariance of the path integral, since the measure $D \phi$ can be non-invariant too. In more technical terms, the reason for this is that whenever we change the integration variables, we need to remember that the Jacobian can be non-trivial.

An anomaly is an obstruction to the construction of a quantum theory that has the same symmetry group as its action. For example, the standard method to construct quantum theories ensures that if there is a symmetry group of the system, the Hilbert space should also be a representation of this group. However, the renormalization techniques introduce extra $U(1)$ phases that ruin the standard argument that the Hilbert space is a representation of the system's symmetry group.

Given a Lagrangian or a Hamiltonian we can search for symmetries of the action and then use Noether's theorem to calculate the conserved charges. Each charge $Q_a$ corresponds to one generator $G_a$ of the symmetry of the action. These Noether charges represent the generators on our Hilbert space in a quantum theory, or on our phase space in a classical theory. We can then put the charges into the corresponding Lie bracket: in the classical theory this is the Poisson bracket, in a quantum theory the commutator. In most cases the Noether charges form a closed algebraic structure which is exactly the same as the algebra of the symmetry of the action.
$$ [Q_a,Q_b] = f_{ab}^c G_c = [G_a,G_b] $$ However, for some systems this is not the case. Instead of the original symmetry algebra, the algebra of the Noether charges contains an additional term on the right-hand side: $$ [Q_a,Q_b] = f_{ab}^c G_c + X_c \neq [G_a,G_b] $$ Therefore, the Noether charges in such systems generate a different symmetry. We say they form a centrally extended version of the original algebra. The additional term $X_c$ on the right-hand side is known as the Schwinger term. Take note that people usually talk about anomalies only in a quantum context, and anomalies are described as quantum mechanical symmetry breaking. However, it can also happen in a classical system that we can't realize our symmetry on the phase space, and hence classical anomalies also exist.

Non-conserved currents

An important implication of the situation discussed above is that currents which we would expect to be conserved, since they correspond to symmetries via Noether's theorem, are not really conserved. A famous example is the axial current $J_A$ in the standard model. Purely by looking at the Lagrangian we would believe $$ \partial_\mu J_A^\mu = 0, $$ thanks to Noether's theorem. However, upon closer inspection (e.g. by calculating the famous Adler-Bell-Jackiw triangle diagram), we find that in the full quantum theory we get instead $$ \partial_\mu J_A^\mu = \frac{g^2 C}{16\pi^2} G^{\mu \nu a} \tilde{G}_{\mu \nu}^a , $$ where $G^{\mu \nu a}$ is the field-strength tensor and $\tilde{G}_{\mu \nu}^a$ its dual. Therefore, we find that axial rotations are not really symmetries of the quantum theory. As a consequence, whenever we perform an axial rotation our Lagrangian changes by $$ L \to L + \Delta L = L + \partial_\mu J_A^\mu . $$ This is important since we always need an axial rotation to diagonalize the quark mass matrices, and this is an important part of the strong CP puzzle.

The anomaly phenomenon is sometimes called quantum mechanical symmetry breaking, since the theory naively appears to have a certain symmetry, but the Hilbert space is not quite a representation of this symmetry, due to the subtleties of how quantum field theories are defined. Not Even Wrong by P. Woit

More generally, the anomaly refers to a subtlety in quantization: a symmetry of the classical theory does not work in the expected way in the quantum theory. You already see this in the phenomenon of the one-half energy of the ground state in the harmonic oscillator. You can get rid of this by redefining the Hamiltonian, but that changes how the symmetries of the classical system are implemented in the quantum system. For a finite number of degrees of freedom, you can work with either Hamiltonian, but in QFT, with an infinite number of degrees of freedom, you don't have a finite shift and this causes the anomaly. http://www.math.columbia.edu/~woit/wordpress/?p=7847

There are different kinds of anomalies:

Gauge Anomalies

In addition to the anomaly we have been discussing, which affects the global symmetries studied in current algebra, there can also be an anomaly in the gauge symmetry of a theory. This is called a gauge anomaly. Gauge anomalies are less well understood, but definitely interfere with the standard methods for dealing with the gauge symmetry of Yang-Mills quantum field theory. If one throws out the quarks and considers the standard model with just the leptons, one finds that this theory has a gauge anomaly, and it ruins the standard renormalisation of the quantum field theory as first performed by 't Hooft and Veltman.
To this day it is unknown whether or not there is some way around this problem, but it can be avoided since, if one puts the quarks back in the theory, one gets an equal and opposite gauge anomaly that cancels the one coming from the leptons. The full standard model has no gauge anomaly due to this cancellation, and the principle that gauge anomalies should cancel is often insisted upon when considering any extension of the standard model.

The existence of anomalies associated with global currents does not necessarily mean difficulties for the theory. On the contrary, as we saw in the case of the axial anomaly, its existence provides a solution of the Sutherland–Veltman paradox and an explanation of the electromagnetic decay of the pion. The situation is very different when we deal with local symmetries. A quantum mechanical violation of gauge symmetry leads to many problems, from lack of renormalizability to nondecoupling of negative norm states. This is because the presence of an anomaly in the theory implies that the Gauss' law constraint $D \cdot E_A = \rho_A$ cannot be consistently implemented in the quantum theory. As a consequence, states that classically were eliminated by the gauge symmetry become propagating in the quantum theory, thus spoiling the consistency of the theory. (page 189 in An Invitation to Quantum Field Theory by Álvarez-Gaumé et al.)

In quantum field theories it is believed that anomalies in gauge symmetries (in contrast to rigid symmetries) cannot be coped with and must be canceled at the level of the elementary fields. Maybe the earliest work on the subject is: C. Bouchiat, J. Iliopoulos and P. Meyer, "An Anomaly free Version of Weinberg's Model" Phys. Lett. B38, 519 (1972). But certainly one of the most famous ones is the Gross-Jackiw article: Effect of Anomalies on Quasi-Renormalizable Theories, Phys. Rev. D 6, 477–493 (1972). They argued that the 't Hooft-Veltman perturbative proof of the renormalizability of gauge theories requires the anomalous currents not to be coupled to gauge fields. In the more modern BRST quantization language, gauge anomalies give rise to anomalous terms in the Slavnov-Taylor identities which cannot be canceled by local counter-terms; they therefore ruin the combinatorial proof of perturbative renormalizability and of the decoupling of the gauge components and ghosts, which results in a non-unitary S-matrix. https://physics.stackexchange.com/a/34022/37286

Why is anomaly cancellation required for consistency? Two reasons are often cited. The first reason is that anomalies cause a loss of unitarity or Lorentz invariance. The point is that the gauge anomaly is a breakdown of gauge invariance at the quantum level. But we need gauge invariance to establish the equivalence of the covariant gauge and physical gauge formulations of a gauge theory, and thus to assure that the theory can be so formulated as to satisfy both unitarity and Lorentz invariance simultaneously [2]. The second reason is that anomalies cause a loss of renormalizability. The gauge anomaly causes the divergence structure of the theory, softened by gauge invariance, to become more severe, so that an infinite number of counterterms are generated. http://www.theory.caltech.edu/~preskill/pubs/preskill-1991-anomalies.pdf

The Chiral Anomaly

The chiral anomaly can be understood in a perfectly pictorial way, by invoking the Dirac sea picture. See, for example, this chapter.
The best explanation of this idea can be found in chapter 9 of An Invitation to Quantum Field Theory by Álvarez-Gaumé and Vázquez-Mozo; and also in Intuitive understanding of anomalies: A Paradox with regularization by Holger Bech Nielsen and Masao Ninomiya. See also, for example, here or here or Effects of Dirac's Negative Energy Sea on Quantum Numbers by R. Jackiw, and also section 4.5 of the book "Anomalies in Quantum Field Theory" by Bertlmann and the many references there. This idea, and the fact that this effect can actually be observed in solid state physics, leads, for example, Roman Jackiw to the conclusion: It can then be shown that in the presence of a gauge field, the distinction between 'empty' positive-energy states and 'filled' negative-energy states cannot be drawn in a gauge-invariant manner, for massless, single-helicity fermions. Within this framework, the chiral anomaly comes from the gauge non-invariance of the infinite negative-energy sea. Since anomalies have physical consequences, we must assign physical reality to this infinite negative-energy sea. [5] The Unreasonable Effectiveness of Quantum Field Theory by Roman Jackiw The central result of all chiral anomaly analyses is: \begin{align} \text{Classical Physics: }& \partial_\mu J_5^\mu =0 \notag \\ \text{Quantum Physics: }& \partial_\mu J_5^\mu =\frac{e^2}{(4\pi)^2}\epsilon^{\mu\nu\lambda\sigma} F_{\mu\nu}F_{\lambda \sigma} \notag \\ \end{align} This means the divergence of the axial current $J_5^\mu$ is non-zero through quantum effects; it is instead an operator that can produce two photons. The non-perturbative Anomaly In all our discussion of anomalies we only considered the computation of one-loop diagrams. It might happen that higher loop orders impose additional conditions. Fortunately this is not so: the Adler–Bardeen theorem [12] guarantees that the axial anomaly only receives contributions from one-loop diagrams. Therefore, once anomalies are canceled (if possible) at one loop we know that there will be no new conditions coming from higher-loop diagrams in perturbation theory. The Adler–Bardeen theorem, however, only applies in perturbation theory. It is nonetheless possible that nonperturbative effects can result in the quantum violation of a gauge symmetry. This is precisely the case pointed out by Witten [13] with respect to the SU(2) gauge symmetry of the standard model. In this case the problem lies in the nontrivial topology of the gauge group SU(2). The invariance of the theory with respect to non-trivial gauge transformations requires the number of fermion doublets to be even. It is again remarkable that the family structure of the standard model makes this anomaly cancel. page 192 in "An Invitation to Quantum Field Theory" by Luis Álvarez-Gaumé et al. For a nice introduction, see http://www.maths.dur.ac.uk/~dma0saa/lecture_notes.pdf see also the discussion in https://www.math.columbia.edu/~woit/QM/qmbook.pdf and here and section 9.2 "Creation of particles by classical fields" here See also Gauge theory, anomalies and global geometry: The interplay of physics and mathematics by Dana Fine & Arthur Fine Anomalies for Pedestrians by Holstein See also the section "Instantons, fermions, and physical consequences" in the book Classical Solutions in Quantum Field Theory by Erik Weinberg. Important Papers: Uniqueness of quark and lepton representations in the standard model from the anomalies viewpoint by C. Q. Geng and R. E.
Marshak Comment on anomaly cancellation in the standard model by J. A. Minahan, P. Ramond, and R. C. Warner CHARGED NEUTRINOS? by R. Foot et al. Particle creation via relaxing hypermagnetic knots by C. Adam et al. Recommended Resources: The best explanation for the idea that anomalies are related to extensions of the corresponding Lie algebra can be found in www.atlantis-press.com/php/download_paper.php?id=754. In addition, the paper nicely summarizes the various approaches that have been used so far to deal with anomalies. An introduction to the geometric picture of anomalies can be found in Chiral Anomalies And Differential Geometry: Lectures Given At Les Houches, August 1983 by Bruno Zumino See also https://www.mathi.uni-heidelberg.de/~walcher/teaching/sose16/geo_phys/Anomalies.pdf see also Geometry and topology of chiral anomalies in gauge theories by R. Rennie Anomalies and Cocycles by R. Jackiw There is perhaps a dominant perception that quantum anomalies of classical symmetries can occur only in the context of quantum field theories. Typically they arise in the course of regularizing divergent expressions in quantum fields [1, 2], causing the impression that it is these divergences that cause anomalies. It is however known that anomalies can occur in simple quantum mechanical systems such as a particle on a circle or a rigid rotor. Esteve [4, 5] explained long ago that the presence or otherwise of anomalies is a problem of domains of quantum operators. Thus while quantum state vectors span a Hilbert space H, the Hamiltonian H is seldom defined on all vectors of H. For example, the space H of square-integrable functions on $\mathbb{R}^3$ contains nondifferentiable functions ψ, but the Schroedinger Hamiltonian $H = -\frac{1}{2m}\nabla^2$ is not defined on such ψ. Rather H is defined only on a dense subspace $D_H$ of H. If a classical symmetry g does not preserve $D_H$, $g D_H \neq D_H$, then $Hg\psi$ for $\psi \in D_H$ is an ill-defined expression. In this case, one says that g is anomalous [4, 5]. See also [6–12]. In the present work, we explore the possibility of overcoming anomalies by using mixed states. There are excellent reasons for trying to do so, there being classical gauge symmetries like SU(3) of QCD or large diffeomorphisms (diffeos) of manifolds (see below) which can become anomalous. Color SU(3) does so in the presence of non-abelian monopoles [13–15], while "large" diffeos do so for suitable Friedman-Sorkin geon manifolds [16–18]. It is surely worthwhile to find ways to properly implement these symmetries. While non-abelian structure groups of twisted bundles are always anomalous, abelian groups also of course can be anomalous. For instance, the parity anomaly for a particle on a circle (discussed in section 2 of this work) and the axial U(1)A anomaly in the Standard Model are both abelian. The crucial issue is whether the classical symmetry preserves the domains of appropriate operators like the Hamiltonian. If they do not preserve such domains, then they are anomalous. The important feature of non-abelian structure groups of twisted bundles is that they never preserve the domain of the Hamiltonian. https://arxiv.org/pdf/1108.3898.pdf "In field theory and in modern elementary particle theory, cocycles are used to describe anomalies." Source When chiral fermions are coupled to gauge fields, the algebra of gauge transformations acquires a Mickelsson-Faddeev (MF) cocycle (eq. 22) - a gauge anomaly. Mickelsson, together with Rajeev and maybe others, originally tried to construct representations of this algebra.
However, it was shown in D. Pickrell, On the Mickelsson-Faddeev extensions and unitary representations, Comm. Math. Phys. 123 (1989) 617, that the MF algebra does not possess any faithful unitary representation on a separable Hilbert space (or something like that). As a response to this disappointing result, Mickelsson developed a theory where cocycles depending on an external gauge potential are regarded as generalized representations. It is this theory which evidently is naturally formulated in terms of Gerbes. Thomas Larsson Recently it has become clear that gauge theories with fermions display three different kinds of anomalies, all related to the global topology of the four-dimensional configuration space $C^4$ by the family index of the Dirac operator $D^4$. These are the axial U(1) anomaly [the "$\pi_0(G^3)$ anomaly"], Witten's SU(2) anomaly [2] [from "$\pi_1(G^3)$"], and the nonabelian gauge anomaly [3] [from "$\pi_2(G^3)$"]. The diversity of the manifestations of these anomalies seems to belie their common origin, however. In the first case we find particle production in the presence of instanton fields [4], breaking of a global symmetry, and no problem with gauge invariance. In the second we find no problem with chiral charge, but instead a nonperturbative failure of gauge symmetry, while in the latter the same thing occurs even perturbatively. What is going on? Hamiltonian Interpretation of Anomalies by Philip Nelson and Luis Alvarez-Gaume Anomalies were first discovered in perturbation theory by UV-regularizing a divergent diagram [1,2]. But they are not just a regularization effect; they also show up in quite different procedures. For example, in the method of dispersion relations, where they occur as an IR-singularity of the transition amplitude [7,8], or within sum rules [9]. Using a quite different approach, working with path integrals, the anomalies are detected by the chiral transformation of the path integral measure [10]. In the last years a development in modern mathematical techniques attracted much attention in describing the anomalies. This was differential geometry [11–17], cohomology [18,19] and topology (Atiyah–Singer index theorem) [20–29]. https://arxiv.org/pdf/hep-ph/9411254v1.pdf Nowadays there exists a more fundamental geometrical interpretation of anomalies which I think can resolve some of your questions. The basic source of anomalies is that classically and quantum-mechanically we are working with realizations and representations of the symmetry group, i.e., given a group of symmetries through a standard realization on some space we need to lift the action to the adequate geometrical objects we work with in classical and quantum theory and sometimes, this action cannot be lifted. Mathematically, this is called an obstruction to the action lifting, which is the origin of anomalies. The obstructions often lead to the possibility of realizing not the group of symmetries itself but some extension of it by another group acting naturally on the geometrical objects defining the theory. https://physics.stackexchange.com/questions/33195/classical-and-quantum-anomalies The non-Abelian chiral anomaly is very well understood algebraically and topologically; please see the following review by R. A. Bertlmann. In the example following equation (38) in the article, all the quantities associated with the non-Abelian chiral anomaly are algebraically computed (without solving Feynman diagrams) through what is called the Stora-Zumino descent equations.
These equations give, on the first level, the Chern-Simons term; on the second level, the anomaly (divergence of the current); on the third level, the extension in the gauge group commutation relations; and on the fourth level, the associator causing the violation of the Jacobi identity, thus resulting in a non-associative algebra (please see the following related physics stack exchange question). I mentioned the descent equations because there is a modern concept of Gerbes trying to find a geometrical realization of these equations (please see also the following review by Mickelsson). This direction of research has the potential of providing a deeper understanding of what quantum structures we must associate to these algebras (interpreted as classical algebras of Poisson brackets), because the usual Hilbert spaces and unitary representations do not seem to work. The Mickelsson-Faddeev algebra was extensively analyzed within the theory of gerbes; please see for example this work by Hekmati, Murray, Stevenson and Vozzo (and also the above reference by Mickelsson). https://physics.stackexchange.com/a/76653/37286 The chiral anomaly can be corrected by adding a Wess-Zumino term to the Lagrangian, but this term is not perturbatively renormalizable and thus does not solve the nonrenormalizability problem. https://physics.stackexchange.com/a/34022/37286 Anomalies are often a useful first line of attack in trying to understand new systems. This is because the presence of anomalies, or the way they are canceled, can often be studied without knowing the detailed dynamics of the theory. They are in a way topological properties of the theory and thus can be studied by approximate methods. https://arxiv.org/pdf/hep-th/0509097.pdf Implications of quantum anomalies are numerous. First and foremost, their knowledge is needed in order to avoid theories which look fully "legitimate" at the classical level, but become terminally sick upon quantization. For instance, suppose one would like to build an extension of the Standard Model, with additional fermions beyond the standard three generations. If the fermion content is chosen inappropriately, such an extension may well be internally inconsistent. Second, the chiral quantum anomalies play an important role in the soft pion theory in QCD, and in the 't Hooft matching condition, which, in turn, presents the foundation for the Seiberg duality in supersymmetric QCD. The scale quantum anomalies which are typical of asymptotically free field theories (such as QCD) can be used for establishing a number of low-energy theorems. https://physique.cuso.ch/fileadmin/physique/document/2015_shifman_lecture_1.pdf In four-dimensional quantum field theories, the problem of the anomaly or Schwinger term is much trickier. Current algebra in four dimensions has led to a significant amount of understanding of the physical aspects of the problem. One of the earliest physical consequences of the anomaly concerned the rate at which neutral pions decay into two photons. If one ignores the anomaly problem, current algebra predicts that this decay will be relatively slow, whereas experimentally it happens very quickly. Once one takes into account the anomaly, the current algebra calculation agrees well with experiment. This calculation depends on the number of colours in QCD, and its success was one of the earliest pieces of evidence that quarks had to come in three colours. Another successful physical prediction related to the anomaly was mentioned earlier.
This is the fact that, ignoring the anomaly, there should be nine low-mass pions, the Nambu-Goldstone bosons of the spontaneously broken symmetry in current algebra. In reality, there are nine pions, but only eight of them are relatively low mass. The higher mass of the ninth one can be explained once one takes into account the effect of the anomaly. for me chiral and scale symmetry breaking are completely natural effects, but their description in our present language – quantum field theory – is awkward and leads us to extreme formulations, which make use of infinities. One hopes that there is a more felicitous description, in an as yet undiscovered language. It is striking that anomalies afflict precisely those symmetries that depend on absence of mass: chiral symmetry, scale symmetry. Perhaps when we have a natural language for anomalous symmetry breaking we shall also be able to speak in a comprehensible way about mass, which today remains a mystery. THE UNREASONABLE EFFECTIVENESS OF QUANTUM FIELD THEORY by R. Jackiw On the other hand, anomalies in physics are not always unwanted features to be eradicated. E.g. the trace anomalies associated to dilatation invariance lead to the Callan–Symanzik equations [7]. http://www.tandfonline.com/doi/pdf/10.2991/jnmp.2001.8.4.6 In dimensions other than the critical dimension, the action $S$ of the Bosonic string theory has a conformal anomaly. page 267 in Topology and Quantum Field Theory by Charles Nash It began with the pioneering work of Alvarez-Gaume and Witten [Alvarez-Gaume, Witten 1983] on gravitational anomalies, and the enthusiasm culminated in the discovery of Green and Schwarz [Green, Schwarz 1984] that gauge and gravitational anomalies may cancel each other, however, in a supersymmetric theory in 10 dimensions. Anomalies in Quantum Field Theory by Reinhold A. Bertlmann One of the earliest physical consequences of the anomaly concerned the rate at which neutral pions decay into two photons. If one ignores the anomaly problem, current algebra predicts that this decay will be relatively slow, whereas experimentally it happens very quickly. Once one takes into account the anomaly, the current algebra calculation agrees well with experiment. This calculation depends on the number of colours in QCD, and its success was one of the earliest pieces of evidence that quarks had to come in three colours. Another successful physical prediction related to the anomaly was mentioned earlier. This is the fact that, ignoring the anomaly, there should be nine low-mass pions, the Nambu-Goldstone bosons of the spontaneously broken symmetry in current algebra. In reality, there are nine pions, but only eight of them are relatively low mass. The higher mass of the ninth one can be explained once one takes into account the effect of the anomaly. www.atlantis-press.com/php/download_paper.php?id=754 Are theories with gauge anomalies necessarily inconsistent? No! See Gauge anomalies in an effective field theory by John Preskill Why do we study anomalies with the triangle diagram? See https://physics.stackexchange.com/questions/303914/why-do-we-study-anomalies-with-the-triangle-diagram Where does the name anomaly come from? The nomenclature is misleading. At its discovery, the phenomenon was unexpected and dubbed 'anomalous'.
By now the surprise has worn off, and the better name today is 'quantum mechanical' symmetry breaking. You have to appreciate the frame of mind that field theorists operated in to understand their shock when they discovered in the late 1960s that quantum fluctuations can indeed break classical symmetries. Indeed, they were so shocked as to give this phenomenon the rather misleading name "anomaly", as if it were some kind of sickness of field theory. With the benefits of hindsight, we now understand the anomaly as being no less conceptually innocuous than the elementary fact that when we change integration variables in an integral we better not forget the Jacobian. QFT in a Nutshell by A. Zee [T]he anomaly refers to a subtlety in quantization: a symmetry of the classical theory does not work in the expected way in the quantum theory. You already see this in the phenomenon of the one-half energy of the ground state in the harmonic oscillator. You can get rid of this by redefining the Hamiltonian, but that changes how the symmetries of the classical system are implemented in the quantum system. For a finite number of degrees of freedom, you can work with either Hamiltonian, but in QFT, with an infinite number of degrees of freedom, you don't have a finite shift and this causes the anomaly. Put differently, the anomaly is due to the fact that normal-ordering is needed in QFT, and this sometimes changes how classical symmetries appear after quantization. https://www.math.columbia.edu/~woit/wordpress/?p=7847 It is important to avoid here the misconception that anomalies appear due to a bad choice of the way a theory is regularized in the process of quantization. When we talk about anomalies we mean a classical symmetry that cannot be realized in the quantum theory, no matter how smart we are in choosing the regularization procedure. How can we get rid of anomalies? There are several ways: A remark on the non-uniqueness of the mechanism of anomaly cancellation: We can cancel anomalies by adding a new family of fermions, various Wess-Zumino terms (corresponding to different anomaly-free subgroups), and maybe masses for the gauge fields (as in the Schwinger model). This non-uniqueness reflects the fact that when an anomaly is present, the quantization is not unique (in other words, the theory is not completely defined). This phenomenon is known in many cases in quantum mechanics (inequivalent quantizations of a particle on a circle), and quantum field theory (theta vacua). Finally, my point of view is that the anomaly cancellation does not dismiss the need to find "representations" of the anomalous current algebras in each sector. This principle works in 1+1 dimensions. It should work in any dimension because, according to Wigner, quantum theory deals with representations of algebras. This is why I think that Mickelsson's project is important. What about higher order corrections to the Chiral Anomaly? What do anomalies have to do with index theorems? From the point of view of a mathematician, one aspect of the anomaly is that it is related both to the Atiyah-Singer index theorem and to a generalisation known as the index theorem for families. Whereas the original index theorem describes the number of solutions of a single Dirac equation, the families index theorem deals with a whole class or family of equations at once.
A family of Dirac equations arises in physics because one has a different Dirac equation for every different Yang-Mills field, so the possible Yang-Mills fields parametrise a family of Dirac equations. This situation turns out to be one ideally suited to the use of general versions of the index theorem already known to mathematicians, and in turn has suggested new versions and relations to other parts of mathematics that mathematicians had not thought of before. As usual, Witten was the central figure in these interactions between mathematicians and physicists, producing a fascinating series of papers about different physical and mathematical aspects of the anomaly problem. Are anomalies really pure quantum effects? No! See www.atlantis-press.com/php/download_paper.php?id=754 Are anomalies UV or IR effects? IR effects. See section 1.6 in https://arxiv.org/pdf/hep-th/0509097.pdf and also: It is true that chiral anomalies were discovered in quantum field theories when no ultraviolet regulators respecting the chiral symmetry could be found. But the anomaly is actually an infrared property of the theory. The signs of that are the Adler-Bardeen theorem, that no higher-loop (than one) correction to the axial anomaly is present, and, more importantly, that only massless particles contribute to the anomaly. In the operator approach that I tried to adopt in this answer, the anomaly is a consequence of a deformation that should be performed on the symmetry generators in order to be well defined on the physical Hilbert space, and not a direct consequence of regularization. Can anomalies be observed? Yes! In addition to indirect effects, anomalies can be directly observed in solid state systems. See, for example, https://arxiv.org/ftp/arxiv/papers/1605/1605.09214.pdf Why do some anomalies (only) lead to inconsistent quantum field theories? See https://physics.stackexchange.com/questions/33972/why-do-some-anomalies-only-lead-to-inconsistent-quantum-field-theories/34022#34022 Early work on current algebra during the 1960s had turned up a rather confusing problem which was dubbed an 'anomaly'. The source of the difficulty was something that had been studied by Schwinger back in 1951, and so became known as the problem of the Schwinger term appearing in certain calculations. The Schwinger term was causing the Hilbert space of the current algebra to not quite be a representation of the symmetry group of the model. The standard ways of constructing quantum mechanical systems ensured that if there was a symmetry group of the system, the Hilbert space should be a representation of it. In the current algebra theory, this almost worked as expected, but the Schwinger term, or equivalently, the anomaly, indicated that there was a problem. The underlying source of the problem had to do with the necessity of using renormalisation techniques to define properly the current algebra quantum field theory. As in QED and most quantum field theories, these renormalisation techniques were necessary to remove some infinities that occur if one calculates things in the most straightforward fashion. Renormalisation introduced some extra U(1) phase transformations into the problem, ruining the standard argument that shows that the Hilbert space of the quantum theory should be a representation of the symmetry group. Some way needed to be found to deal with these extra U(1) phase transformations. In two-dimensional theories, it is now well understood how to treat this problem.
In this case, the anomalous U(1) phase transformations can be dealt with by just adding an extra factor of U(1) to the original infinite-dimensional symmetry group of the theory. The Hilbert space of the two-dimensional theory is a representation, but it is one of a slightly bigger symmetry group than one might naively have thought. This extra U(1) piece of the symmetry group also appears in some of the infinite-dimensional Kac-Moody groups. So in two dimensions the physics leading to the anomaly and the mathematics of Kac-Moody groups fit together in a consistent way. In four-dimensional quantum field theories, the problem of the anomaly or Schwinger term is much trickier. Current algebra in four dimensions has led to a significant amount of understanding of the physical aspects of the problem. One of the earliest physical consequences of the anomaly concerned the rate at which neutral pions decay into two photons. If one ignores the anomaly problem, current algebra predicts that this decay will be relatively slow, whereas experimentally it happens very quickly. Once one takes into account the anomaly, the current algebra calculation agrees well with experiment. This calculation depends on the number of colours in QCD, and its success was one of the earliest pieces of evidence that quarks had to come in three colours. Another successful physical prediction related to the anomaly was mentioned earlier. This is the fact that, ignoring the anomaly, there should be nine low-mass pions, the Nambu-Goldstone bosons of the spontaneously broken symmetry in current algebra. In reality, there are nine pions, but only eight of them are relatively low mass. The higher mass of the ninth one can be explained once one takes into account the effect of the anomaly. page 129ff in Not Even Wrong by P. Woit
Lifestyle behaviors associated with the initiation of renal replacement therapy in Japanese patients with chronic kidney disease: a retrospective cohort study using a claims database linked with specific health checkup results Azusa Hara ORCID: orcid.org/0000-0001-6958-7864, Takumi Hirata ORCID: orcid.org/0000-0003-2899-6762, Tomonori Okamura ORCID: orcid.org/0000-0003-0488-0351, Shinya Kimura ORCID: orcid.org/0000-0003-1525-8680 & Hisashi Urushihara ORCID: orcid.org/0000-0001-6913-9930 Environmental Health and Preventive Medicine volume 26, Article number: 102 (2021) Chronic kidney disease (CKD) is an independent risk factor for progression to end-stage renal disease requiring dialysis or kidney transplantation. We investigated the association of lifestyle behaviors with the initiation of renal replacement therapy (RRT) among CKD patients using an employment-based health insurance claims database linked with specific health checkup (SHC) data. This retrospective cohort study included 149,620 CKD patients aged 40–74 years who underwent an SHC between April 2008 and March 2016. CKD patients were identified using ICD-10 diagnostic codes and SHC results. We investigated lifestyle behaviors recorded at SHC. Initiation of RRT was defined by medical procedure claims. Lifestyle behaviors related to the initiation of RRT were identified using a Cox proportional hazards regression model with recency-weighted cumulative exposure as a time-dependent covariate. During 384,042 patient-years of follow-up by the end of March 2016, 295 dialysis and no kidney transplantation cases were identified. Current smoking (hazard ratio: 1.87, 95% confidence interval, 1.04─3.36), skipping breakfast (4.80, 1.98─11.62), and taking sufficient rest along with sleep (2.09, 1.14─3.85) were associated with the initiation of RRT. Among CKD patients, the lifestyle behaviors of smoking, skipping breakfast, and sufficient rest along with sleep were independently associated with the initiation of RRT. Our study strengthens the importance of monitoring lifestyle behaviors to delay the progression of mild CKD to RRT in the Japanese working generation. A substantial portion of subjects had missing data for eGFR and drinking frequency, warranting verification of these results in prospective studies. Chronic kidney disease (CKD), diagnosed by a gradual reduction in kidney function or renal dysfunction for more than three months, is a significant public health issue worldwide [1]. In Japan, approximately 13.3 million people are considered to have CKD, accounting for one-eighth of the Japanese adult population [2]. CKD is an independent risk factor for progression to end-stage renal disease requiring dialysis or kidney transplantation [3, 4]. In 2017, approximately 335,000 patients were receiving maintenance dialysis, [5] the medical cost of which reached 1600 billion yen and accounted for 4% of the total health care budget in Japan that year [6]. In contrast, far fewer kidney transplantations were performed that year, in approximately 1700 patients [7]. Lifestyle-related diseases, such as diabetes, hypertension, and dyslipidemia, are known risk factors for the prevalence and incidence of developing CKD [3, 8]. Healthy lifestyle behaviors can prevent lifestyle-related diseases and consequently avoid the onset and progression of CKD.
However, due to difficulties in ensuring sufficient statistical power to detect and analyze low-incidence RRT events, few studies have investigated factors which predict the initiation of chronic renal replacement therapy (RRT) among CKD patients, including lifestyle factors [9]. "Specific Health Checkups (SHC) and Specific Health Guidance" commenced in 2008 as part of the national health insurance system in Japan [10]. Employers are mandated to provide the insured and their dependents aged 40–74 years with the opportunity to take SHC annually for the early detection of metabolic syndrome, a common lifestyle-related disease [11,12,13]. Here, using an employment-based large-scale claims database linked with SHC data, we investigated the association of cumulative exposure to lifestyle behaviors with the initiation of chronic RRT among CKD patients identified using both claims records and SHC results. This retrospective cohort study was performed using a large-scale claims database linked with the results of SHC items, consisting of laboratory measurements, medical interviews on lifestyle diseases, and a self-administered questionnaire on lifestyle behaviors. The claims records and linked SHC data were provided by multiple employment-based health insurance plans to JMDC Inc. (Tokyo, Japan). Details of lifestyle behaviors and lifestyle disease profiles of the population in the JMDC database have been described elsewhere [13, 14]. The JMDC database included 1,450,215 enrollees as of March 31, 2016, consisting of employees and their dependents covered by company-run health insurance. We used the claims records and SHC results between April 2008 and March 2016. We studied CKD subjects aged 40 to 74 years who underwent SHC during the study period. CKD patients were identified by either or both of the following criteria: (1) a disease code for CKD according to the International Classification of Diseases, Tenth Revision (ICD-10), [15] as shown in eTable 1, recorded in the claims database; and (2) an estimated glomerular filtration rate (eGFR) less than 60 ml/min per 1.73 m2 or positive proteinuria according to the SHC results [3]. The patients were followed from the index date, defined as the first date of SHC after the initial diagnosis of CKD in the claims records or the first date on which they met the second CKD criterion above, whichever came earlier (Fig. 1). In addition, CKD patients eligible for the study had to have a look-back period to ascertain the absence of chronic RRT between enrollment in the database and the index date. Advanced cancer patients with the ICD-10 codes of C00─D48 and cancer therapy recorded in the claims database were excluded (eTables 2.1 and 2.2). Patients who were disenrolled from their insurance in the same month as the index date were also excluded. Participants were followed until the end of the study period, or the date they initiated RRT or were disenrolled from insurance, whichever came earlier. Identification criteria for CKD patients and definition of index date. CKD chronic kidney disease; ICD-10 International Classification of Diseases, Tenth Revision; RRT renal replacement therapy; SHC Specific Health Checkups. (1) Index date*: The first date of SHC after the initial diagnosis of CKD in the claims records. (2) Index date†: The first date when they met the CKD criteria according to the SHC results.
CKD patients were identified by meeting either or both of the following criteria: (1) having the disease codes for CKD coded by ICD-10 as shown in eTable 1 recorded in the claims database, and (2) having an estimated glomerular filtration rate less than 60 ml/min per 1.73 m2 or positive proteinuria according to the SHC results. The primary study outcome was chronic RRT, including dialysis or kidney transplantation, and was determined by the medical procedure codes in the claims records (eTable 3) [16]. Dialysis was considered positive when claims were present for both "outpatient medical management fees for chronic maintenance dialysis patients (B001)" and procedures related to dialysis (C102, C102-2, J038, J042) in the same month. The former code was adopted to identify chronic RRT and exclude temporary dialysis aimed at treating acute renal failure (eTable 3). Kidney transplantation was considered positive when claims were present for procedures concerning cadaveric renal transplantation (K780) or living kidney transplantation (K780-2) (eTable 3). We evaluated the first event of chronic RRT after the index date. Data for body mass index, serum creatinine, and proteinuria were collected by physical measurement and biochemical examination at SHC. eGFR was calculated using serum creatinine level and the following formula for Japanese patients: eGFR (ml/min per 1.73 m2) $= 194 \times (\text{serum creatinine})^{-1.094} \times \text{age}^{-0.287}$ (multiplied by 0.739 for women) [17]. CKD patients were classified into the categories of the Kidney Disease: Improving Global Outcomes (KDIGO) classification 2012, which is based on cause of disease, eGFR category (G1 to G5), and albuminuria category (A1 to A3) [18]. Since proteinuria in this study was detected by dipstick at SHC, we substituted the result of the dipstick test for the albuminuria level to classify patients into the albuminuria category [3]. We also classified the CKD patients into the four risk stages based on the combination of eGFR category and albuminuria category [18]. We investigated the 11-item survey results on lifestyle behaviors using the self-administered questionnaire of the annual SHC, including current smoking, regular exercise, regular walking, walking fast, skipping breakfast, eating speed, eating dinner late, late-evening snacking, frequency and amount of drinking alcohol, and taking sufficient rest along with sleep (eTable 4) [13]. Histories of cardio- and cerebrovascular disease were defined by ICD-10 codes in the claims data (eTable 5), and medications for diabetes, hypertension, and hypercholesterolemia were determined by drug classification code 87 in the Japan Standard Commodity Classification in claims records (eTable 6). Baseline characteristics and lifestyle behaviors at the index date were summarized using descriptive statistics and proportions. Cox proportional hazards regression models were constructed to estimate hazard ratios (HRs) of the initiation of RRT and included lifestyle behaviors as explanatory variables. Cumulative exposure to a lifestyle behavior for each individual was estimated using a recency-weighted cumulative exposure model: [19] $${E}_1={E}_{visit\ 1}$$ $${E}_{n\ \left(n\ge 2\right)}=\frac{E_{visit\ 1}}{2^{n-1}}+{\sum}_{k=2}^n\frac{E_{visit\ k}}{2^{n+1-k}}$$ where $E_n$ denotes the estimated recency-weighted cumulative exposure at visit $n$, and $E_{visit\ n}$ denotes the exposure recorded at visit $n$.
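To make these calculations concrete, the following short Python sketch is illustrative only: the study itself used SAS 9.4, and the function names, list-based interface, and example values here are our own assumptions rather than the authors' code. It reproduces the Japanese eGFR equation, the KDIGO 2012 GFR categories, and the recency-weighted cumulative exposure, using the fact that the closed form above is algebraically equivalent to the recurrence $E_n = (E_{n-1} + E_{visit\ n})/2$ for $n \ge 2$.

def egfr_japanese(creatinine_mg_dl, age_years, female):
    # Japanese eGFR equation [17]: 194 x Cr^-1.094 x age^-0.287 (x 0.739 for women)
    egfr = 194.0 * creatinine_mg_dl ** -1.094 * age_years ** -0.287
    return egfr * 0.739 if female else egfr

def kdigo_gfr_category(egfr):
    # KDIGO 2012 GFR categories G1-G5 [18]
    if egfr >= 90: return "G1"
    if egfr >= 60: return "G2"
    if egfr >= 45: return "G3a"
    if egfr >= 30: return "G3b"
    if egfr >= 15: return "G4"
    return "G5"

def recency_weighted_exposure(visit_exposures):
    # Returns [E_1, ..., E_n]. Each new visit halves the weight of all earlier
    # visits, so the closed form above reduces to E_n = (E_{n-1} + E_visit_n) / 2.
    weighted = []
    for i, exposure in enumerate(visit_exposures):
        if i == 0:
            weighted.append(float(exposure))
        else:
            weighted.append((weighted[-1] + exposure) / 2.0)
    return weighted

# Hypothetical example: a current smoker (coded 1) at the first two checkups who then quits.
print(recency_weighted_exposure([1, 1, 0, 0]))             # [1.0, 1.0, 0.5, 0.25]
print(kdigo_gfr_category(egfr_japanese(1.2, 55, False)))   # roughly 50 ml/min/1.73 m2 -> 'G3a'

Written this way, the most recent checkup always carries the largest weight (one half), which is why the model is described as recency-weighted; the cumulative value at each visit is then entered into the Cox model as a time-dependent covariate.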
We did not adjust for the self-reported amount of daily alcohol intake, although the SHC does have a survey item asking about alcohol intake, including both daily amount and weekly frequency. For 58.5% of respondents who answered "rare" for frequency, the amount of alcohol intake at baseline was missing, indicating that the validity of data for the amount of alcohol intake was suboptimal. We therefore adjusted only for weekly frequency, but not for daily amount of alcohol consumption. The other covariates in the Cox model included age; gender; body mass index; eGFR; proteinuria; medications for diabetes, hypertension, and hypercholesterolemia; histories of cardiovascular disease and cerebrovascular disease; and the total number of checkups taken. All covariates were included in the model as time-dependent covariates (except for gender). Multicollinearity was assessed with Spearman's rank correlation coefficients and the variance inflation factor (VIF) values. Sensitivity analyses were performed using CKD patients defined by either of the criteria mentioned earlier, namely either ICD-10 code (criterion 1, eTable 1) or SHC data (criterion 2). To adjust for confounding by diabetes, hypertension, and hypercholesterolemia, the Cox hazard model in the primary analysis included the use of medications for these diseases as covariates. We added systolic blood pressure, LDL cholesterol, and HbA1c as time-dependent covariates in the Cox regression model for a post hoc analysis. Analyses were conducted using SAS version 9.4 for Windows (SAS Institute Inc., Cary, NC, USA). The study protocol was approved by the Keio University Faculty of Pharmacy ethics committee for research involving humans (No. 190509-2). Consistent with local ethical guidelines for medical research involving human subjects, [20] the requirement that study participants provide informed consent was waived. Characteristics at baseline Among 1,450,215 enrollees, 153,716 CKD patients with SHC data were extracted. We excluded 4096 patients who had RRT during the look-back period (n=669), were disenrolled from their insurance in the month of the index date (n=1710), and had advanced cancer (n=1717). Finally, 149,620 patients were analyzed in the present study (Fig. 2). During a total follow-up of 384,042 patient-years among 149,620 patients (median follow-up of 2.3 years), 295 dialysis cases and no kidney transplantation cases were identified, amounting to an incidence of 0.77 per 1000 patient-years. At the time RRT was initiated, CKD patients with ICD-10 codes related to glomerulonephritis were the most common (n=130; ICD-10 codes of N02, N03, N04, N05, N06, and N391), followed by diabetic nephropathy (n=98; E102, E112, E132, and E142) and nephrosclerosis (n=23; I12 and N26). There were no CKD patients who had hereditary renal disease (N07 and E851). Flow chart of subject selection. CKD chronic kidney disease, SHC specific health checkups At baseline, cases had lower eGFR than non-cases (15 [interquartile range (IQR) 8, 33] ml/min/1.73 m2 vs 59 [IQR 56, 74] ml/min/1.73 m2), and a higher prevalence of proteinuria (78.6% vs 40.4%) and a diagnosis of CKD (79.3% vs 29.9%) (Table 1). Cases also had a higher prevalence of high-risk category by KDIGO CKD classification (eTable 7), [18] a history of coronary artery disease and cerebrovascular disease, and medications for hypertension, diabetes, and hypercholesterolemia (Table 1).
Further, cases had an approximately 10% lower proportion of females, regular walking, and walking fast, and less frequent alcohol drinking, than non-cases at baseline (Table 2). The prevalence of the other lifestyle behaviors was similar between cases and non-cases (Table 2). Table 1 Baseline characteristics Table 2 Lifestyle behaviors at baseline Among all baseline variables, the most common missing value was eGFR, at 65.4% in the cases and 38.8% in the non-cases (eTable 8). The second most common missing value at baseline was the amount of alcohol consumption. The alcohol consumption was missing in 44.7% of the cases and 38.4% of the non-cases (eTable 8). Missing observations for lifestyle behavior items ranged from 6.2 to 44.7% of all patients (eTable 8). The amount of alcohol consumption at baseline was missing in 58.5%, 11.0%, and 9.1% of respondents who answered "rare," "occasional," and "everyday," respectively, to the item on alcohol intake frequency. Patients with missing eGFR values had a higher prevalence of proteinuria (59.6% vs 28.3%) and diagnosis of CKD (44.8% vs 20.7%) than those with eGFR values (eTable 9). Lifestyle behaviors were similar between patients with and without missing eGFR values (eTable 10). Association of daily lifestyle behaviors with the initiation of chronic RRT A total of 60,481 patients with eGFR values were eligible for multivariate Cox hazards regression analysis for HRs of the initiation of RRT (Table 3). Sixty-eight RRT cases were included in the analysis. Among the survey items of lifestyle behaviors, current smoking (HR 1.87, 95% confidence interval [CI] 1.04─3.36), frequent breakfast-skipping (HR 4.80, 95% CI 1.98─11.62), and sufficient rest along with sleep (HR 2.09, 95% CI 1.14─3.85) were significantly associated with the initiation of RRT (Table 3). Female gender (HR 0.20, 95% CI 0.05─0.72), higher eGFR level (HR 0.81, 95% CI 0.78─0.84), and number of SHCs taken (HR 0.55, 95% CI 0.38─0.80) were significantly associated with decreased risk for the initiation of RRT, while antidiabetic medication (HR 2.65, 95% CI 1.49─4.70) was associated with increased risk (Table 3). Table 3 Hazard ratios for the initiation of renal replacement therapy (n=60,481) Spearman's rank correlation analysis did not indicate a strong correlation (>0.70) among covariates, with the highest being 0.50 between proteinuria and eGFR. The VIF values of all variables were less than 10 (data not shown). Sensitivity analyses Among the population of patients with CKD (n=15,389) whose definition was based on ICD-10 codes only, 67 cases of RRT were identified. Current smoking (HR 2.21, 95% CI 1.28─3.80), frequent breakfast-skipping (HR 4.88, 95% CI 1.79─13.35), and sufficient rest along with sleep (HR 2.08, 95% CI 1.10─3.93) were significantly associated with an elevated risk of the initiation of RRT. Among 51,931 patients with CKD whose definition was based on lab test results of SHC only, 65 cases were identified. Frequent breakfast-skipping (HR 5.55, 95% CI 2.24─13.72) was significantly associated with RRT initiation, whereas current smoking (HR 1.72, 95% CI 0.97─3.05) and sufficient rest along with sleep (HR 1.97, 95% CI 0.99─3.92) were not significantly associated with RRT initiation. HRs for the other lifestyle behaviors for both groups were similar to the results of the primary analysis (data not shown).
When systolic blood pressure, LDL cholesterol, and HbA1c were included in the Cox hazards model, similar HRs to the primary analysis were obtained for current smoking (HR 1.80, 95% CI 0.87─3.73), frequent breakfast-skipping (HR 3.49, 95% CI 1.35─9.00), and sufficient rest along with sleep (HR 2.11, 95% CI 1.01─4.43). To our knowledge, this is the first study to examine the association between lifestyle behaviors and the initiation of chronic RRT in CKD patients using a large claims database linked with SHC data. Lifestyle behaviors of smoking, skipping breakfast, and sufficient rest along with sleep were independently associated with the initiation of RRT. At baseline, cases had a lower eGFR and a higher prevalence of proteinuria and history of cardio-cerebrovascular diseases than the non-cases by definition. It is apparent that the cases already had decreased renal function and had been in an unhealthy condition at baseline. On the other hand, lifestyle behaviors between the cases and non-cases were similar, except for walking fast and frequency of alcohol drinking. The healthy lifestyle behaviors observed in the cases may imply good compliance with physician guidance on the prevention or treatment of disease, the voluntary adoption of health-conscious behavior, or both. A strengthening of advice and motivation from occupational doctors might make employees aware of the necessity of improving their health condition and induce them to take action to change their lifestyle to avoid RRT initiation. Smoking had a significant association with the initiation of RRT. Our present results from a large claims database linked with SHC data are consistent with previous findings from a meta-analysis and several cohort studies that indicated that smoking was a risk factor for CKD progression [21,22,23]. In a cohort study of 1,951 CKD patients in Korea, the hazard ratio for adverse kidney outcome was attenuated as the duration of smoking cessation increased [24]. Smoking could be a modifiable factor in delaying CKD progression, given that the 2018 Evidence-based Clinical Practice Guideline for CKD recommends smoking cessation for CKD patients [3]. Skipping breakfast was associated with the initiation of RRT in the present study. Associations of skipping breakfast with weight gain, insulin resistance and type 2 diabetes have been suggested [25,26,27]. A sensation of hunger caused by skipping breakfast may promote overeating later in the day, [28,29,30] and overactivity of the hypothalamic-pituitary-adrenal axis; this prolonged fasting is reported to cause increased blood pressure [31]. Metabolic syndrome and its components, including abdominal obesity, dyslipidemia, elevated fasting glucose, and high blood pressure, were associated with the progression of CKD [32]. Habitually skipping breakfast is likely to lead to metabolic syndrome components and worsen renal function. Our results emphasize the importance of strengthening guidance on dietary habits for CKD patients [33]. It may be reasonable to ascribe the positive association between the answer "taking sufficient rest along with sleep" and the initiation of RRT to reverse causality. The SHC survey item which asked about rest and sleep did not specify a definition for "taking sufficient rest along with sleep". Most subjects in the present study were of the working generation and seemed to be busy with their regular work. They might therefore have perceived that they received insufficient rest and sleep.
In contrast, unhealthy subjects may have answered "yes" to the question even though underlying health conditions might have required them to take longer rest and sleep. Because of its ambiguity, this survey item does not seem to be a good indicator of the quality of rest. Further, the answer to this item seems to be based on subjective perception. Therefore, it may potentially cause a bias in either direction. A conclusive answer to this question requires prospective studies with well-designed questionnaires to explore the association between rest and initiation of RRT. Proteinuria was found not to be an independent predictor of initiation of RRT, despite many consistent reports that increasing proteinuria is associated with the risk of CKD progression [34, 35]. Patients who have been diagnosed with CKD and started medical care were no longer required to take an annual SHC, and their lab tests were therefore conducted during their regular hospital visits. Because the study database does not contain the lab results from regular visits, our analysis cannot consider those patients with missing SHC test data, who likely have impaired kidney function, and our results might consequently be applicable only to those patients with mild severity. Short follow-up duration might be another reason for the failure to detect an association between proteinuria and initiation of RRT. In addition, confirmation of a diagnosis of kidney dysfunction would typically require more than a single event of proteinuria at baseline. CKD in the present study was primarily defined using both the ICD-10 code in the claims database and the SHC results to optimize validity. The ICD-10 codes of CKD in the claims database are likely to include false-positive CKD cases because of the need to ensure reimbursement for lab tests. On the other hand, a single, annual SHC result in this study by itself may not detect patients with a persistently low eGFR level (less than 60 ml/min per 1.73 m2) of more than 3 months' duration, which is the standard definition of CKD in clinical practice [3]. A previous validation study conducted at an acute care hospital in Australia reported that the sensitivity and specificity of ICD-10 codes for identifying CKD cases were 54.1% and 90.2%, respectively [36]. Our definition of CKD likely resulted in fewer false-negative cases, because we identified cases using either or both of the ICD-10 codes and eGFR values, and in turn likely provided better sensitivity than that in the previous study [36]. The strength of this study is its large sample size and long-term follow-up obtained using a large-scale database which included both claims and checkup records, and which consequently provided sufficient power to detect a small number of events. The incidence of RRT is not high among our study population; however, we believe that improving lifestyle factors is an achievable goal which is potentially associated with reducing the risk of RRT initiation among Japanese patients with mild CKD. In addition, dialysis is time-intensive and expensive and requires dietary restrictions [37]. Dialysis patients have been reported as having low quality of life [38]. Our findings would contribute to better management of patients with mild kidney impairment.
In addition, we employed recency-weighted cumulative exposure as a time-dependent covariate to model the cumulative effects of lifestyle behaviors, as similarly used in a previous study, [19] and consider that this analytical method is suitable for assessing the cumulative, long-term effects of lifestyle behaviors. It is worth mentioning several possible limitations in interpreting our study results. First, the possibility of selection bias needs to be considered when generalizing the present findings. The JMDC database included individuals aged up to 75 years old, and our findings might not therefore be generalizable to the elderly population. A previous study of participants receiving regular health checkups in Japan reported that the prevalence of eGFR<60 ml/min/1.73m2 increased linearly as age advanced from the 20s to the 80s and over, from 0.1 to 44.6% in males, and 0.2 to 46.1% in females [2]. Our participants were a working-age population, and likely to have fewer confounding factors for renal impairment such as aging and comorbidities than an elderly population, as shown by the low incidence of RRT [8]. We therefore consider that the association between lifestyle behaviors and renal function in our study population was less complicated and that any associations would likely be detected in a more straightforward manner. On the other hand, the possibility of a healthy worker effect cannot be denied. The study subjects in the present study were the employees and dependents of mid-sized and large companies in secondary and tertiary industries which had a sufficiently large financial size and strength to run their own health insurance plans [13]. Missing data are another source of bias which should be considered when interpreting the results and generalizability of the study. The complete dataset of patients without missing covariates was used for the Cox hazard models in the present study. The proportion of missing eGFR values was high, especially among cases, probably because the measurement of serum creatinine was performed at the discretion of payers at the SHC, and most patients with CKD likely had serum creatinine measurements in their regular hospital visits, not in SHC [39]. Missing eGFR values and urine testing appeared "not at random," and therefore likely resulted in biased estimates, as discussed above. Second, we did not fully adjust for the potential effects of drugs on kidney function. Acute kidney injury is common in cancer patients at risk for infection, sepsis, tumor lysis syndrome, drug-associated toxicities, and other comorbidities that significantly increase the risk of acute kidney injury [40]. Chemotherapy and a severely ill condition are strong confounding factors in the association between renal dysfunction and lifestyle behaviors. To minimize potential confounding by chemotherapies and cancer, we excluded patients with advanced cancer. Some other medicines also cause renal disorders, including NSAIDs and antimicrobials, [41] but the use of these medicines was not adjusted for in the analyses because the use of over-the-counter NSAIDs was not captured in the claims database, and because of the difficulty of accurately adjusting for the impact of various kinds of antimicrobial agents, usually used in the short term, on renal function. Third, the self-reported lifestyle assessment may have included reporting bias. In addition, a degree of bias caused by missing data in the self-administered questionnaire was unavoidable.
There is also a potential risk of misclassification bias since covariates of comorbidity and concomitant medication were defined using claims information. Although we did not perform a validation study of claims diagnosis codes for cardiovascular and cerebrovascular diseases, several studies assessing the validity of ICD-10 codes to identify cardiovascular and cerebrovascular diseases from the Japanese claims database reported reasonable PPVs for those outcomes [42,43,44]. A previous study showed high sensitivity of medication use information collected from nationwide electronic pharmacy records, compared with medication-containing blood samples, among patients at a Danish university hospital (sensitivity=0.93) [45]. Claims data are reported to be a useful tool to capture regular medication users [46]. Finally, given the nature of retrospective evaluation using a secondary database, potential bias due to unmeasured or unknown confounders is unavoidable. A well-designed prospective study which considers all possible factors associated with the initiation of chronic RRT is warranted. Among CKD patients, lifestyle behaviors of smoking, skipping breakfast, and sufficient rest along with sleep were independently associated with the initiation of chronic RRT. Our study strengthens the importance of monitoring lifestyle behaviors to delay the progression of mild CKD to RRT in the Japanese working generation. A substantial portion of subjects had missing data for eGFR and drinking frequency, warranting verification of these results in prospective studies. The datasets for the study are not publicly available due to the data license agreement with Japan Medical Data Center Inc. Data are, however, available from the corresponding author upon reasonable request and with the permission of the Japan Medical Data Center Inc. CKD: Chronic kidney disease; RRT: Renal replacement therapy; SHC: Specific health checkup; ICD-10: International Classification of Diseases, Tenth Revision; IQR: Interquartile range; eGFR: Estimated glomerular filtration rate; HR: Hazard ratio; KDIGO: Kidney Disease: Improving Global Outcomes. Hill NR, Fatoba ST, Oke JL, Hirst JA, O'Callaghan CA, Lasserson DS, et al. Global prevalence of chronic kidney disease - a systematic review and meta-analysis. PLoS One. 2016;11(7):e0158765. https://doi.org/10.1371/journal.pone.0158765. Imai E, Horio M, Watanabe T, Iseki K, Yamagata K, Hara S, et al. Prevalence of chronic kidney disease in the Japanese general population. Clin Exp Nephrol. 2009;13(6):621–30. https://doi.org/10.1007/s10157-009-0199-x. Japanese Society of Nephrology. Evidence-based clinical practice guideline for CKD 2018. Tokyo: Tokyo Igakusha; 2018. Levey AS, de Jong PE, Coresh J, El Nahas M, Astor BC, Matsushita K, et al. The definition, classification, and prognosis of chronic kidney disease: a KDIGO Controversies Conference report. Kidney Int. 2011;80(1):17–28. https://doi.org/10.1038/ki.2010.483. Nitta K, Masakane I, Hanafusa N, Goto S, Abe M, Nakai S, et al. 2018 Annual Dialysis Data Report, JSDT Renal Data Registry. J Jpn Soc Dial Ther. 2019;52(12):679–754. Ministry of Health, Labour and Welfare. Overview of Estimates of National Medical Care Expenditure, FY2017. https://www.mhlw.go.jp/toukei/saikin/hw/k-iryohi/17/dl/data.pdf. Published 2019. Accessed 21 Nov 2019 (in Japanese). Japanese Society for Clinical Renal Transplantation and The Japan Society for Transplantation. Annual progress report from the Japanese renal transplant registry: number of renal transplantations in 2017 and a follow-up survey.
Jpn J Transplant. 2018;53:89–108. Yamagata K, Ishida K, Sairenchi T, Takahashi H, Ohba S, Shiigai T, et al. Risk factors for chronic kidney disease in a community-based population: a 10-year follow-up study. Kidney Int. 2007;71(2):159–66. https://doi.org/10.1038/sj.ki.5002017. Palmer SC, Sciancalepore M, Strippoli GF. Trial quality in nephrology: how are we measuring up? Am J Kidney Dis. 2011;58(3):335–7. https://doi.org/10.1053/j.ajkd.2011.06.006. Ministry of Health, Labour and Welfare. Specific Health Checkups and Specific Health Guidance. http://www.mhlw.go.jp/stf/seisakunitsuite/bunya/kenkou_iryou/kenkou/seikatsu/index.html. Published in 2013. Accessed 10 Aug 2020 (in Japanese). Okamura T, Sugiyama D, Tanaka T, Dohi S. Worksite wellness for the primary and secondary prevention of cardiovascular disease in Japan: the current delivery system and future directions. Prog Cardiovasc Dis. 2014;56(5):515–21. https://doi.org/10.1016/j.pcad.2013.09.011. Tsushita K, Hosler A, Miura K, Ito Y, Fukuda T, Kitamura A, et al. Rationale and descriptive analysis of specific health guidance: the nationwide lifestyle intervention program targeting metabolic syndrome in Japan. J Atheroscler Thromb. 2018;25(4):308–22. https://doi.org/10.5551/jat.42010. Fukasawa T, Tanemura N, Kimura S, Urushihara H. Utility of a specific health checkup database containing lifestyle behaviors and lifestyle diseases for employee health insurance in Japan. J Epidemiol. 2020;30(2):57–66. https://doi.org/10.2188/jea.JE20180192. Nagai K, Tanaka T, Kodaira N, Kimura S, Takahashi Y, Nakayama T. Data resource profile: JMDC claims database sourced from health insurance societies. J Gen Fam Med. 2021. https://doi.org/10.1002/jgf2.422. Australian Institute of Health and Welfare. Acute kidney injury in Australia: a first national snapshot. https://www.aihw.gov.au/getmedia/7e0f5313-d61d-4de3-ad8d-389dcc7a03dc/19380.pdf Published 2015. Accessed 06 Apr 2018. Ministry of Health, Labour and Welfare. Various Information of Medical Fee. https://shinryohoshu.mhlw.go.jp/shinryohoshu/downloadMenu/ Published 2020. Accessed 14 Apr 2020 (in Japanese). Matsuo S, Imai E, Horio M, Yasuda Y, Tomita K, Nitta K, et al. Revised equations for estimated GFR from serum creatinine in Japan. Am J Kidney Dis. 2009;53(6):982–92. https://doi.org/10.1053/j.ajkd.2008.12.034. Kidney Disease: Improving Global Outcomes (KDIGO) CKD Work Group. KDIGO clinical practice guideline for the evaluation and management of chronic kidney disease. Kidney Int Suppl. 2013;3:1–150. Hu FB, Stampfer MJ, Rimm E, Ascherio A, Rosner BA, Spiegelman D, et al. Dietary fat and coronary heart disease: a comparison of approaches for adjusting for total energy intake and modeling repeated dietary measurements. Am J Epidemiol. 1999;149(6):531–40. https://doi.org/10.1093/oxfordjournals.aje.a009849. Ministry of Education, Culture, Sports, Science and Technology and Ministry of Health, Labour and Welfare. Ethical Guidelines for Medical and Health Research Involving Human Subjects. https://www.mhlw.go.jp/file/06-Seisakujouhou-10600000-Daijinkanboukouseikagakuka/0000153339.pdf. Published 2014. Updated 2017. Accessed 21 Nov 2019 (in Japanese). Elihimas Junior UF, Elihimas HC, Lemos VM, Leao Mde A, Sa MP, Franca EE, et al. Smoking as risk factor for chronic kidney disease: systematic review. J Bras Nefrol. 2014;36(4):519–28. https://doi.org/10.5935/0101-2800.20140074. Jin A, Koh WP, Chow KY, Yuan JM, Jafar TH. Smoking and risk of kidney failure in the Singapore Chinese health study. PLoS One. 
2013;8(5):e62962. https://doi.org/10.1371/journal.pone.0062962. Hall ME, Wang W, Okhomina V, Agarwal M, Hall JE, Dreisbach AW, et al. Cigarette smoking and chronic kidney disease in African Americans in the Jackson Heart Study. J Am Heart Assoc. 2016;5(6). https://doi.org/10.1161/JAHA.116.003280. Lee S, Kang S, Joo YS, Lee C, Nam KH, Yun HR, et al. Smoking, smoking cessation, and progression of chronic kidney disease: results from KNOW-CKD study. Nicotine Tob Res. 2021;23(1):92–8. https://doi.org/10.1093/ntr/ntaa071. van der Heijden AA, Hu FB, Rimm EB, van Dam RM. A prospective study of breakfast consumption and weight gain among U.S. men. Obesity (Silver Spring). 2007;15(10):2463–9. https://doi.org/10.1038/oby.2007.292. Farshchi HR, Taylor MA, Macdonald IA. Deleterious effects of omitting breakfast on insulin sensitivity and fasting lipid profiles in healthy lean women. Am J Clin Nutr. 2005;81(2):388–96. https://doi.org/10.1093/ajcn.81.2.388. Uemura M, Yatsuya H, Hilawe EH, Li Y, Wang C, Chiang C, et al. Breakfast skipping is positively associated with incidence of type 2 diabetes mellitus: evidence from the Aichi workers' cohort study. J Epidemiol. 2015;25(5):351–8. https://doi.org/10.2188/jea.JE20140109. Astbury NM, Taylor MA, Macdonald IA. Breakfast consumption affects appetite, energy intake, and the metabolic and endocrine responses to foods consumed later in the day in male habitual breakfast eaters. J Nutr. 2011;141(7):1381–9. https://doi.org/10.3945/jn.110.128645. Reutrakul S, Hood MM, Crowley SJ, Morgan MK, Teodori M, Knutson KL. The relationship between breakfast skipping, chronotype, and glycemic control in type 2 diabetes. Chronobiol Int. 2014;31(1):64–71. https://doi.org/10.3109/07420528.2013.821614. Mita T, Osonoi Y, Osonoi T, Saito M, Nakayama S, Someya Y, et al. Breakfast skipping is associated with persistently increased arterial stiffness in patients with type 2 diabetes. BMJ Open Diabetes Res Care. 2020;8(1). https://doi.org/10.1136/bmjdrc-2019-001162. Witbracht M, Keim NL, Forester S, Widaman A, Laugero K. Female breakfast skippers display a disrupted cortisol rhythm and elevated blood pressure. Physiol Behav. 2015;140:215–21. https://doi.org/10.1016/j.physbeh.2014.12.044. Saito T, Mochizuki T, Uchida K, Tsuchiya K, Nitta K. Metabolic syndrome and risk of progression of chronic kidney disease: a single-center cohort study in Japan. Heart Vessels. 2013;28(3):323–9. https://doi.org/10.1007/s00380-012-0254-5. The Japanese Society of Nephrology. Manual of diet and lifestyle modification for CKD patients: practical guide for dietary guidance. https://cdn.jsn.or.jp/guideline/pdf/H25_Life_Diet_guidance_manual.pdf. Published 2015. Accessed 30 Dec 2018 (in Japanese). Iseki K, Kinjo K, Iseki C, Takishita S. Relationship between predicted creatinine clearance and proteinuria and the risk of developing ESRD in Okinawa, Japan. Am J Kidney Dis. 2004;44(5):806–14. Ruggenenti P, Perna A, Mosconi L, Pisoni R, Remuzzi G. Urinary protein excretion rate is the best independent predictor of ESRF in non-diabetic proteinuric chronic nephropathies. "Gruppo Italiano di Studi Epidemiologici in Nefrologia" (GISEN). Kidney Int. 1998;53(5):1209–16. https://doi.org/10.1046/j.1523-1755.1998.00874.x. Ko S, Venkatesan S, Nand K, Levidiotis V, Nelson C, Janus E. International statistical classification of diseases and related health problems coding underestimates the incidence and prevalence of acute kidney injury and chronic kidney disease in general medical patients. Intern Med J. 2018;48(3):310–5. 
https://doi.org/10.1111/imj.13729. Al Salmi I, Kamble P, Lazarus ER, D'Souza MS, Al Maimani Y, Hannawi S. Kidney disease-specific quality of life among patients on hemodialysis. Int J Nephrol. 2021;2021:8876559. https://doi.org/10.1155/2021/8876559. Boateng EA, East L. The impact of dialysis modality on quality of life: a systematic review. J Ren Care. 2011;37(4):190–200. https://doi.org/10.1111/j.1755-6686.2011.00244.x. Ministry of Health, Labour and Welfare. Standard Health Checkup and Counseling Guidance Program. Chapter 2: Medical Checkup. https://www.mhlw.go.jp/content/10900000/000496784.pdf. Published 2013. Updated 2018. Accessed 05 May 2018 (in Japanese). Rosner MH, Perazella MA. Acute kidney injury in patients with cancer. N Engl J Med. 2017;376(18):1770–81. https://doi.org/10.1056/NEJMra1613984. Committee of Clinical Practice Guideline for Drug-induced Kidney Disease. Practice guidelines for drug-induced kidney disease 2016. Nihon Jinzo Gakkai Shi. 2016;58(4):477–555. Yamana H, Moriwaki M, Horiguchi H, Kodan M, Fushimi K, Yasunaga H. Validity of diagnoses, procedures, and laboratory data in Japanese administrative data. J Epidemiol. 2017;27(10):476–82. https://doi.org/10.1016/j.je.2016.09.009. Ando T, Ooba N, Mochizuki M, Koide D, Kimura K, Lee SL, et al. Positive predictive value of ICD-10 codes for acute myocardial infarction in Japan: a validation study at a single center. BMC Health Serv Res. 2018;18(1):895. https://doi.org/10.1186/s12913-018-3727-0. Ono Y, Taneda Y, Takeshima T, Iwasaki K, Yasui A. Validity of claims diagnosis codes for cardiovascular diseases in diabetes patients in Japanese Administrative Database. Clin Epidemiol. 2020;12:367–75. https://doi.org/10.2147/CLEP.S245555. Glintborg B, Hillestrom PR, Olsen LH, Dalhoff KP, Poulsen HE. Are patients reliable when self-reporting medication use? Validation of structured drug interviews and home visits by drug analysis and prescription data in acutely hospitalized patients. J Clin Pharmacol. 2007;47(11):1440–9. https://doi.org/10.1177/0091270007307243. Matsumoto M, Harada S, Iida M, Kato S, Sata M, Hirata A, et al. Validity assessment of self-reported medication use for hypertension, diabetes, and dyslipidemia in a pharmacoepidemiologic study by comparison with health insurance claims. J Epidemiol. 2020. https://doi.org/10.2188/jea.JE20200089.

We express our particular thanks to Mr. Kazuaki Enomoto and Dr. Nanae Tanemura for supporting this study. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Division of Drug Development and Regulatory Science, Faculty of Pharmacy, Keio University, 1-5-30, Shibakoen, Minato-ku, Tokyo, 105-8512, Japan: Azusa Hara & Hisashi Urushihara. Department of Public Health, Hokkaido University Faculty of Medicine, Sapporo, Japan: Takumi Hirata. Department of Preventive Medicine and Public Health, Keio University School of Medicine, Tokyo, Japan: Tomonori Okamura. JMDC Inc., Tokyo, Japan: Shinya Kimura.

HU conceived of and designed the study; SK provided the data; and AH carried out the statistical analyses. AH drafted the original manuscript with HU. TH and TO provided the intellectual input, and all authors critically revised the manuscript and approved the final manuscript. Correspondence to Hisashi Urushihara. The study protocol was approved by the Keio University Faculty of Pharmacy ethics committee for research involving humans (No. 190509-2).
Informed consent was waived because of the use of anonymous data. Consent for publication: not required. SK is a representative director of Japan Medical Data Center Inc., which provided data for this study. HU is a part-time consultant for Eisai Co., Ltd. and Nippon Boehringer Ingelheim Co., Ltd. and has received research funds from CAC Croit Corporation, Shionogi Pharma Co., Ltd., Daiichi-Sankyo Co., Ltd., Astellas Pharma Inc., and Mitsubishi Tanabe Pharma Corporation. All other authors declare no potential competing interests.

Additional file 1: eTable 1. Diagnosis codes for chronic kidney disease. eTable 2. Codes in relation to cancer and cancer therapy. eTable 2.1. Diagnosis codes for cancer. eTable 2.2. Therapeutic category codes for cancer therapy. eTable 3. Medical procedure codes related to the initiation of renal replacement therapy. eTable 4. Details of lifestyle behaviors. eTable 5. Diagnosis codes related to cardio- and cerebrovascular diseases. eTable 6. Therapeutic category codes for drugs for diabetes, hypertension, and dyslipidemia. eTable 7. Kidney Disease: Improving Global Outcomes (KDIGO) CKD classification at baseline. eTable 8. Prevalence of missing values in each variable. eTable 9. Baseline characteristics of patients with and without missing eGFR values. eTable 10. Baseline lifestyle behaviors of patients with and without missing eGFR values.

Hara, A., Hirata, T., Okamura, T. et al. Lifestyle behaviors associated with the initiation of renal replacement therapy in Japanese patients with chronic kidney disease: a retrospective cohort study using a claims database linked with specific health checkup results. Environ Health Prev Med 26, 102 (2021). https://doi.org/10.1186/s12199-021-01022-3 Received: 23 June 2021 Japanese workers
Why is there no need for extra knowledge to go from the classical to the quantum description of a system?

Take, for example, the hydrogen atom. Both the classical and the quantum models are based on the same Hamiltonian, describing the Coulomb potential. The classical model however misses a lot of important properties like the discrete energy spectrum. The quantum model does the job right (of course, the simple Coulomb model only works well to some limit, but that is another story). Apparently, to obtain the correct observables like the energy spectrum one only needs to know that the right description is quantum. No new model-specific parameters appear (Planck's constant is universal). Speaking more generally and more loosely, the quantum description becomes relevant at a very small scale. It seems natural to expect that a lot more details are visible at this scale. However, the input of our model, the Hamiltonian, stays basically the same. Only the general theoretical framework changes. Probably, the question may be rephrased as follows. Why do the quantization rules exist? By the quantization rules I mean the procedures that allow one to go from the classical description to the quantum one in a very uniform fashion that is applicable to many systems. Most likely my question is not too firm and contains some wrong assumptions. However, if it were not for this confusion I would not be asking! quantum-mechanics soft-question Weather Report

$\begingroup$ What kind of answer to this question could there possibly be that would not just provoke the follow-up question "And why is that?". Asking why that which describes nature describes nature is not really an answerable question. $\endgroup$ – ACuriousMind ♦

$\begingroup$ What about when you go to higher energies than QED, for example QCD, where you have to introduce non-classical properties like confinement? $\endgroup$

$\begingroup$ The answer is "by pure chance", just because you are looking at very particular systems. If the system were more complicated, further information would be needed. Think of QFT, where renormalization needs much more information than in the classical realm. Even in QM, if you consider classical systems involving observables where products like $x^mp^n$ take place, their quantisation is ambiguous and one really needs further information. The quantum world is the real world and the classical world is just an approximation. No universal quantisation procedures exist therefore. $\endgroup$ – Valter Moretti

$\begingroup$ @ACuriousMind Of course, one can ask "why is that" forever. Each correct answer deepens our knowledge though. I expect that some perspectives exist which shed light on my question. Maybe in the spirit of Valter Moretti's comment. $\endgroup$ – Weather Report

$\begingroup$ Renormalisation, in my view, is the symptom that the classical-like framework is not enough to describe an interacting quantum field: you must supply more information at each step than the one you encapsulated in the initial classical-like description. I am referring to the finite renormalization counter terms (those which remain after having subtracted infinities), which are ambiguous and have to be fixed by hand. $\endgroup$

The "reason" why the procedure of quantization works cannot be known.
Asking why that which describes nature describes nature is not a question physics can answer. However, the procedure of quantization does not work without extra knowledge. In fact, it's not even known in all cases what the "correct" procedure for quantization is. I'll list several hurdles (without any claim to completeness) that should convince you that there is additional information necessary to quantize a classical system:

The Groenewold-van Hove no-go theorem (see also this answer of mine) says that canonical quantization that just replaces Poisson brackets with commutators does not work in the generality in which we would like. There are several possible modifications to the Poisson bracket (or rather the product of classical observables on the phase space) that yield a consistent quantization procedure, but that choice is not unique. You are using additional information when you pick a particular modification. This is essentially the formal reflection of what is usually called an "ordering ambiguity": Given a classical observable $x^np^m = x^{n-1}p^m x = \dots = p^m x^n$, which of these classically equivalent expressions do you turn into the corresponding quantum operator, if you have the CCR between $x$ and $p$ making them all unequal in the quantum theory?

Quantum anomalies: For a general discussion of anomalies, see this excellent answer by DavidBarMoshe; for a formal derivation of the possibility of the appearance of central charges in the passage from the classical to the quantum theory, see this answer of mine. The bottom line is that in the course of quantization we can get our classical symmetry groups "enlarged", and formerly invariant objects may not be invariant anymore. This usually introduces a new parameter into the quantum theory, the central charge of the enlarged symmetry group, and again needs additional input to be determined, if it doesn't wreck the quantum theory altogether. In fact, this might be the most important aspect of such an anomaly: If you have an anomaly of a gauge or gravitational symmetry, you don't have a consistent quantum theory. In certain field theories, the anomaly term is naturally determined by the rest of the theory, so unless those terms "miraculously" cancel, the quantum theory of such field theories does not exist in the usual sense. No amount of additional information can fix this; we simply do not know a consistent quantization of such theories.

The lattice problem: Classically, it is rather uncontroversial that we can view continuum field theories as the limits of discretized theories. Quantumly, this becomes horrendously difficult: It is not known whether the continuum limit of a quantized lattice theory coincides with the quantization of the continuum theory; in fact, I believe this is not always the case, see, for example, the problem of triviality of the lattice $\phi^4$ theory. However, one might remark that this particular problem is due to the absence of a fully rigorous framework of quantum field theory in general.

Finally, let me remark that thinking of quantization as a fundamental operation has it the wrong way around if we take quantum mechanics seriously: It is the classical system that must be obtained from the quantum system in a certain limit, not the other way around. It is perfectly possible that there are quantum systems without a corresponding classical system; there is simply no way to view them that would look classical to us.
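As a minimal worked illustration of the ordering ambiguity in the first point, take the simplest mixed observable, $xp$. Classically $xp = px$, but once $[\hat{x},\hat{p}] = i\hbar$ holds, the candidate quantizations
$$\hat{x}\hat{p}, \qquad \hat{p}\hat{x}, \qquad \tfrac{1}{2}\left(\hat{x}\hat{p}+\hat{p}\hat{x}\right)$$
are all different operators; for instance
$$\hat{x}\hat{p} = \tfrac{1}{2}\left(\hat{x}\hat{p}+\hat{p}\hat{x}\right) + \tfrac{i\hbar}{2}.$$
The candidates differ only by terms of order $\hbar$, so they share the same classical limit, and nothing inside the classical theory singles one of them out; that choice is precisely the extra input a quantization prescription has to supply.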
For a handwavy example of such an intrinsically quantum degree of freedom, think about fermionic/spin-1/2 degrees of freedom: These are very hard to come by in a classical theory since there's simply no motivation to consider them, but they emerge rather naturally from the quantum viewpoint. In this sense, it is remarkable how well quantization works as a general guiding principle, but it shouldn't surprise us that the "we don't need any extra knowledge" is not really accurate. ACuriousMind ♦

$\begingroup$ I can't agree with your first sentence: "The "reason" why the procedure of quantization works cannot be known. Asking why that which describes nature describes nature is not a question physics can answer." There could simply be an underlying principle that in turn leads to the need to apply quantization rules to Lagrangian theories. That principle could in fact be just as reasonable as the principle of relativity or some experimentally confirmable statement (e.g. the constancy of the speed of light for relativistic mechanics). $\endgroup$ – image357

$\begingroup$ I agree with your technical points but not with the general attitude. To quote your last sentence, "it is remarkable how well quantization works as a general guiding principle". But that's what my question is about! Given a proper (quantum) theory, aren't we able to explain this surprising universality of the classical limits, which allows one to reconstruct a lot of quantum behaviour without additional input, $f(\hbar)$ from $f(0)$? There are exceptions, of course, but they should not be surprising. It is the success of the naive approach which in my view deserves an explanation. $\endgroup$

$\begingroup$ @WeatherReport: The naive approach is not as naive as it seems: Try it with polar coordinates, or action-angle variables (this was Bohr's and Sommerfeld's original attempt) and it goes wrong rather quickly. The canonical quantization we teach today is finely crafted to be as naive-looking as possible while getting as much right as possible. $\endgroup$

$\begingroup$ @ACuriousMind Well, that might be it, the textbooks fool us! I've always suspected that. Unfortunately, this would be hard to back up in detail. Let it stay a working assumption unless something better shows up. $\endgroup$

Some years ago I started with almost the same question: "What is it that makes us quantize a system, or what happens when we quantize a system?" Questions like this were asked approximately 90-50 years ago in a similar way, by analyzing whether or not the description of quantum mechanics is complete and real (that is, whether all elements have a real counterpart). The topic was settled with the so-called Copenhagen interpretation, the EPR paradox and finally with Bell's inequalities, which all together tell us that quantum mechanics is a bit strange. For example, one shouldn't think of the wavefunction as a real particle unless it is currently being measured by some classical measurement apparatus, and such things are in absolute contradiction to a reasonable pictorial explanation of quantum mechanics. I found all that a bit dissatisfying and went on to find a flaw in that view of quantum mechanics. The first thing I stumbled across was Bohmian mechanics, which attributes the quantization procedure to the fact that we simply didn't know the "right" classical equations.
One can show that solving the Schrödinger equation (which one arrives at by canonical quantization) $$ \left(-\frac{\hbar^2}{2m}\Delta + V(x)\right)\ \Psi = i\hbar \frac{\partial}{\partial t}\ \Psi $$ is equivalent to solving two equations \begin{align} (1)&\ \ \dot{\vec{p}} = \vec{F} - \vec{\nabla} Q\\ (2)&\ \ \frac{\partial R^2}{\partial t} + \vec{\nabla} \cdot \left(\frac{\vec{p}}{m} \cdot R^2\right) = 0 \end{align} when one considers wavefunctions $\Psi = R \cdot \exp \left(i\frac{S}{\hbar}\right)$, which is no restriction of generality. Equation (2) is the continuity equation for a charge density $\rho = R^2$, which happens to be the probability distribution $\varrho = |\Psi|^2 = R^2$ in quantum mechanics. The first equation (1) is just usual classical mechanics extended by an additional potential $Q = -\frac{\hbar^2}{2m} \frac{\Delta R}{R}$, the so-called quantum potential. This interpretation has some problems though. First and foremost, it can't explain (it just axiomatizes) why a real charge distribution $R^2$ governs the whole statistical behavior of a system regardless of the other acting forces $\vec{F}$. The key to understanding quantum mechanics is understanding its statistical nature. So could it be that quantum mechanics is some kind of usual classical statistical mechanics (since both seem to be related by the same Lagrangians/Hamiltonians)? Bell investigated this question with his famous Bell inequalities and came to the conclusion that there are indeed expectation values in quantum mechanics (which are in agreement with experiment) that cannot be reproduced by any classical statistical mechanics (in the usual sense of non-instantaneous action, e.g. relativistic mechanics). He was nominated for a Nobel prize, which accounts for the credibility physicists put into these inequalities. As a result, there should be no way of describing quantum mechanics on the basis of classical statistical mechanics. However, as far as my analysis goes, there is a major flaw in the derivation of those inequalities, which makes them devoid of meaning (e.g. classical systems can violate them too). I'm not the first to come to this conclusion; in fact there is a huge list of so-called loopholes in Bell's theorem, which for the most part concentrate on the measurement process and on whether or not violations, if found, can be interpreted according to Bell's theorem. Unfortunately, due to the philosophical nature of that question, that whole field of research has drifted into the crackpot area. Only lately (the last 10-20 years or so) has it become a bit more popular again. Now, if you accept my statement that Bell's theorem is wrong, there is no need to discard the possibility of quantum mechanics being some kind of statistical mechanics. In fact, there might be a way to show that the process of quantizing a theory is just doing classical statistical mechanics with some further assumptions. Still, this cannot account for the fact that usual classical statistical mechanics is an ensemble statistical mechanics, while standard QM and experiments are usually about single particles. In ensemble mechanics, one calculates expectation values on the basis of many similar and independent particles that have different initial values (e.g. position and momentum). In an experiment, however, a single particle seems to mysteriously know how to behave according to ensemble particles that are different and not actually present.
This problem can be solved by the so-called principle of ergodicity, which states that for some systems the mean value over time is the same as the ensemble mean. Usually, this only holds for chaotic systems, for which we clearly have counterexamples (not every system we observe behaves chaotically). The current pinnacle of quantum mechanics, QFT, gets rid of the description of nature in terms of particles. Everything becomes a field, which is an object with infinitely many degrees of freedom. E.g., there is an electron field, as well as a photon field. Only later does one introduce states that are in close relation to particles as we know them. In the context of the classical statistical interpretation, this means that particles are just statistical artifacts of the theory, that is, the fields can be in states that "simulate" the behaviour of particles. Due to the infinity of degrees of freedom of such a field, it is quite possible that the principle of ergodicity holds, such that a measurement within a certain finite time interval $\Delta t$ actually reflects the ensemble mean of the field! As a result we have regained the following pictorial view of quantum mechanics: Take for instance the hydrogen atom. It consists of an electron field, a photon field and a proton field (or rather quark and gluon fields that form the proton). Those fields behave according to the non-quantized equations of the QFT Lagrangians. Due to the infinity of degrees of freedom, the behaviour is highly chaotic. As a result, we are only interested in the mean behavior of such a system. One would then try to calculate the time mean of that system, which is (due to the principle of ergodicity) the same as the ensemble mean. The process of canonical quantization is now just the usage of usual ensemble statistical mechanics. We know that there are statistical states that correspond to our pictorial view of single particles, and thus we can explain why experiments show that the hydrogen atom consists of particles that act differently than free particles. E.g. the electron doesn't radiate Bremsstrahlung and has a quantized mean energy level, because bound (statistical) particle states are ultimately different from the ones formed by non-interacting fields (free statistical particle states). So, to come back to your question: "Why is there no need for extra knowledge to go from the classical to the quantum description of a system?" Answer: We simply do statistical mechanics based on the classical equations. This is a highly hypothetical standpoint, but it represents my current views on the quantization process and quantum mechanics. It all stands and falls with the assumption: $\textrm{quantization} \leftrightarrow \textrm{statistical mechanics}$. There has been some work on this topic, e.g. in the form of Koopman–von Neumann classical mechanics, which shows that statistical mechanics can be brought into a form of operators on Hilbert spaces. Recently I also found a way to derive the quantization rule $\vec{p} \rightarrow -i\hbar \vec{\nabla}$ based on a classical statistical mechanical expectation value, but it's not yet in a form that can be published. So take all this with caution. image357

$\begingroup$ I suspect that the downvoter stopped at the place where Bohm was mentioned. Sorry, but that's my gut feeling, too. I would rather prefer the explanation from the traditional viewpoint, not the alternative. Of course, unless they are fully equivalent.
And showing that should be quite problematic in your case, or maybe you do not expect that at all? As a side note, the counterexamples to naive quantization that other people point out do not bother me, but should really be a concern for you, given the generality of the answer you propose. $\endgroup$

$\begingroup$ @WeatherReport: Actually I didn't explain it in the context of Bohmian mechanics, just mentioned it as an approach that has some difficulties. And yes, Bohmian mechanics is completely equivalent to standard QM with the Schrödinger equation as far as any calculation goes. Only the interpretations are different, which is why most people reject it. However, my main point doesn't touch the topic of Bohmian mechanics at all. Strictly speaking I say: Bell is wrong, and there was some work to show that quantization could be statistical mechanics. All this is currently hypothetical. $\endgroup$

$\begingroup$ @WeatherReport: Concerning the problems the others have mentioned: I can't comment so much on renormalization and how that is a problem for quantization. I do know that the current quantization procedure breaks down when trying to apply it to the equations of general relativity. Also, it doesn't work with curvilinear coordinate systems, which I consider to be based on the same problem. My viewpoint is that the "statistical quantization" procedure can in fact give some general rules that reduce to "ordinary quantization" for flat spacetime and different ones for general relativity. $\endgroup$

$\begingroup$ I'm not the downvoter, but I definitely don't think the OP's question has much to do with the question of "whether or not the description of quantum mechanics is complete and real." $\endgroup$ – Peter Shor

$\begingroup$ @PeterShor: Quantization is part of the quantum-mechanical process of finding the right equations. The question of whether or not all of the parts of quantum mechanics are real and complete (e.g., not complete could mean that quantization is to be extended on the basis of a more general principle) thus also touches the topic of quantization. Furthermore, I have to counter Bell's theorem, which is closely related to the topic of reality and completeness, in order to make my arguments valid. $\endgroup$
Journal of Geometric Mechanics, September 2016, 8(3): 273-304. doi: 10.3934/jgm.2016008
Shape analysis on Lie groups with applications in computer animation
Elena Celledoni, Markus Eslitzbichler and Alexander Schmeding, Department of Mathematical Sciences, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
Received June 2015 Revised May 2016 Published September 2016
Shape analysis methods have in the past few years become very popular, both for theoretical exploration as well as from an application point of view. Originally developed for planar curves, these methods have been expanded to higher dimensional curves, surfaces, activities, character motions and many other objects. In this paper, we develop a framework for shape analysis of curves in Lie groups for problems of computer animation. In particular, we will use these methods to find cyclic approximations of non-cyclic character animations and interpolate between existing animations to generate new ones.
Keywords: Shape analysis, infinite dimensional manifolds, computer animation, curve matching, Lie groups.
Mathematics Subject Classification: 65D18, 58J90, 58D10, 49Q1.
Citation: Elena Celledoni, Markus Eslitzbichler, Alexander Schmeding. Shape analysis on Lie groups with applications in computer animation. Journal of Geometric Mechanics, 2016, 8 (3): 273-304. doi: 10.3934/jgm.2016008
Linking seasonal home range size with habitat selection and movement in a mountain ungulate Duarte S. Viana1,2, José Enrique Granados3, Paulino Fandos4, Jesús M. Pérez5, Francisco Javier Cano-Manuel3, Daniel Burón4, Guillermo Fandos6, María Ángeles Párraga Aguado7, Jordi Figuerola1 & Ramón C. Soriguer1 Space use by animals is determined by the interplay between movement and the environment, and is thus mediated by habitat selection, biotic interactions and intrinsic factors of moving individuals. These processes ultimately determine home range size, but their relative contributions and dynamic nature remain less explored. We investigated the role of habitat selection, movement unrelated to habitat selection and intrinsic factors related to sex in driving space use and home range size in Iberian ibex, Capra pyrenaica. We used GPS collars to track ibex across the year in two different geographical areas of Sierra Nevada, Spain, and measured habitat variables related to forage and roost availability. By using integrated step selection analysis (iSSA), we show that habitat selection was important to explain space use by ibex. As a consequence, movement was constrained by habitat selection, as observed displacement rate was shorter than expected under null selection. Selection-independent movement, selection strength and resource availability were important drivers of seasonal home range size. Both displacement rate and directional persistence had a positive relationship with home range size while accounting for habitat selection, suggesting that individual characteristics and state may also affect home range size. Ibex living at higher altitudes, where resource availability shows stronger altitudinal gradients across the year, had larger home ranges. Home range size was larger in spring and autumn, when ibex ascend and descend back, and smaller in summer and winter, when resources are more stable. Therefore, home range size decreased with resource availability. Finally, males had larger home ranges than females, which might be explained by differences in body size and reproductive behaviour. Movement, selection strength, resource availability and intrinsic factors related to sex determined home range size of Iberian ibex. Our results highlight the need to integrate and account for process dependencies, here the interdependence of movement and habitat selection, to understand how animals use space. This study contributes to understand how movement links environmental and geographical space use and determines home range behaviour in large herbivores. The extent of space animals use to live and reproduce, commonly known as home range, is considered a fundamental metric in animal ecology [1, 2]. Home range is defined by the interaction between animals and the environment, and its size is the direct result of movement driven by habitat selection and other external factors, biotic interactions, and intrinsic factors related to individual state and characteristics [2]. Although much progress has been done on understanding the processes underlying home range variation, integrative assessments are still lacking. One of the reasons is that movement driven by habitat selection is difficult to separate from movement driven by other factors (i.e. selection-independent movement) [3, 4]. Movement is the primary link between home range size and habitat/resource selection [5], although geographic and environmental space use have been usually addressed separately in the literature [6, 7]. 
Habitat selection affects home range size at different spatial scales: large-scale selection mediated by the availability and distribution of resources, landscape features and climatic conditions [8] (second-order selection; [9]); and fine-scale resource selection and use within home ranges (third-order selection; [9]). Home range size might also be affected by biotic interactions and intrinsic factors [2]. Known biotic interactions among animals include social interactions that lead to group dynamics, and territorial behaviour associated to reproduction strategies [10], whereas intrinsic factors include sex, age, and the internal state of animals [2]. Home range formation is thus the result of dynamic processes. Both the habitat and internal state of animals might change through time and cause home range size to vary. For example, seasonal variation might be determined by changes in selection according to variation in habitat preference within home ranges (third-order habitat selection) and/or by changes in resource availability and distribution across the landscape (second-order habitat selection) [11]. Broad-scale landscape dynamics might even trigger nomadic or migratory movements that allow animals to track changing resources over time [12, 13]. In addition to habitat selection, reproduction might affect movement during the mating and rutting seasons; as such, intrinsic factors such as sex might also lead home range size to vary among seasons (e.g. [14]). In order to understand how these dynamic processes contribute to determine home range size, we performed an integrative analysis using movement data of a mountain ungulate (the Iberian ibex, Capra pyrenaica). Specifically, we investigated the relative contributions of habitat selection, selection-free movement and intrinsic factors (related to sex) to determine seasonal home range size. Although many of these factors have been reported to affect either habitat selection or movement in mountain ibex and large herbivores in general (see below), an explicit link between movement, habitat selection and home range size has never been made. Therefore, assessing the joint contribution of these factors will contribute to understand space use by animals. Large herbivores are excellent models to establish the link between primary productivity, selection and movement [15]. Accordingly, movement data might be related to environmental information collected through remote sensing at comparable spatial scales and resolutions [16]. The Iberian ibex is a gregarious species with virtually no natural predators in the study area, and thus territory defence and predation avoidance might not be as important as forage and roost selection for explaining space use. Indeed forage availability and the environmental factors that affect it, such as temperature, snow depth, rainfall and daylight, are key drivers of home range size in large herbivores [17, 18], including mountain ibex [19, 20]. Moreover, Iberian ibex live in mountainous areas with marked seasonality associated to altitudinal gradients, and track resource availability by performing progressive altitudinal movements [19]. Therefore, we hypothesised that selection strength and resource availability are key drivers of space use and home range size. We explored this hypothesis by using the selection coefficients derived from integrated Step Selection Analysis (iSSA), i.e. 
selection strength, as well as proxies of resource availability, including altitude, geographical location and environmental variables related to primary productivity, as predictors of home range size. Because ibex show altitudinal range shifts in response to resource availability, we broadened our definition of "home range" to include space covered during gradual altitudinal movements. As an alternative or complementary process, we also considered the role of selection-independent movement in driving space use and home range size. For example, individual characteristics such as the animal's internal state, territorial behaviour during the mating season or even personality might affect home range size. We expected that the displacement rate and directional persistence not affected by habitat selection would be positively related to home range size, assuming that larger scale landscape constraints are not as important as to restrict home range size. The recently developed integrated Step Selection Analysis allowed us to separate the effects of habitat selection from "selection-free" movement to explain space use. Finally, differences between sexes in space use have been widely documented in ungulates [8]. Mountain ibex are sexually dimorphic, with males having a larger body size, which was shown to affect movement and selection behaviour [21]. Therefore, we hypothesised that sex is an important determinant of home range size, and predicted that females would have smaller home ranges due to restricted mobility resulting from parental care, especially during spring and summer when looking after newborns [22]; and/or smaller body size, which is a general predictor of home range size [23]. Restricted mobility can also lead females to select areas of higher habitat quality, allowing them to have smaller home ranges. On the other hand, there is a possibility of increased home ranges in females owing to higher energetic requirements derived from lactation [24]. Study area and species The study was conducted in Sierra Nevada, mostly within the National Park (37°05′N, 3°28′W; SE Spain; Additional file 1: Figure S1). This park extends over 85,883 ha and is composed of mountains that rise over 3000 m a.s.l., ranging from 1700 to 3500 m. It is dominated by a continental Mediterranean climate with altitudinal gradients of temperature and rainfall. Rainfall is more frequent in spring and autumn, whereas summers are hot and dry and winters are cold with snowfall from November until April. The park has a highly diverse vegetation, with 2100 plant species (some endemic), structured in forests, shrubland and grassland along altitudinal gradients. The Iberian ibex (Capra pyrenaica) is an endemic species of the Iberian Peninsula that inhabits mountainous systems [19]. This species live in social groups, but show spatial sexual segregation for most of the year, only coming together during the courtship (rutting) season, usually from October to December [25]. Kids are born in late spring, usually May. The Iberian ibex is a generalist herbivore, foraging as both a browser and grazer on a wide and varied diet that includes grass, shrubs and sometimes trees [19, 26]. Diet and foraging mode depend on resource availability [19]. Ibex can also perform altitudinal movements as to track seasonal resources that become available depending on climatic factors such as temperature and snow cover [19]. 
We conducted the study in two different geographical areas inhabited by different ibex population nuclei to control for possible effects of geographical idiosyncrasy, for example in the composition and density of resources. The areas differed mainly in altitude and vegetation cover, with the eastern nucleus being at lower altitudes and having a denser forest cover. Movement data We equipped 22 Iberian ibex with GPS collars (Microsensory, Córdoba, Spain, and Vectronic Aerospace, Berlin, Germany) after capturing them by darting using an anaesthetizing mixture of xylazine (3 mg/Kg) and ketamine (3 mg/Kg). Their movement was monitored over a maximum of two years during 2005–2007 by obtaining positions every one to every four hours depending on the animal. Because some ibex died and some stopped transmitting data before completing at least a full season, we had a final sample of 18 animals (10 males and 8 females) living in two separated geographical areas of the mountains (9 in each of the two nuclei; Additional file 1: Figure S1). All the included ibex were tracked for multiple seasons. After removing the first five fixes and obvious relocation errors, we had a total of 2085–4639 fixes per animal. However, to homogenize the time lags between successive relocations across tracked animals, we subsampled the movement data to obtain relocations every four hours, rendering a total of 700–3230 fixes per animal over a temporal range of 206–576 days. We defined four different seasons according to the period of the year and the biology and life history of the Iberian ibex: spring (kidding season; April–June), summer (July–September), autumn (mating season; October–December) and winter (January–March). For each of the seasons, we first estimated for each of the 18 ibex habitat selection models that accounted for both movement and resource availability. These models allowed us to estimate selection and movement coefficients. Then, we explored to what extent selection-independent movement, selection strength, resource availability, and intrinsic factors (sex) determined seasonal home range size. All the specific analyses are described below. Habitat selection models Movement allows organisms to track the environment, and when estimating habitat selection, failure to account for the movement process may produce biased selection estimates [3]. A recent approach, termed integrated step selection analysis (iSSA) [4], builds on resource and step selection functions [27, 28] and can be used to model habitat selection while accounting for individual differences in movement behaviour. As such, this model can also be used to obtain estimates of selection-independent movement coefficients. We performed iSSA for each animal in each season to obtain individual, rather than population-level, estimates (as recommended in [4, 29]). iSSA simultaneously estimates movement and habitat selection parameters by comparing each used movement step with a set of conditioned available steps randomly sampled from an analytical distribution parameterised based on observed steps (N = 10 in this study). Movement steps were characterised by their length, i.e. the distance between the start-point and end-point of a given step, and direction, defined as the angular deviation (or turn angle) between successive steps. 
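To make this workflow concrete, the following is a minimal, illustrative R sketch of the two stages described here and in the next paragraph: drawing available steps from distributions fitted to the observed steps, and fitting the conditional logistic regression with the "survival" package. The object and column names (obs_sl, steps, case_, step_id_, and the covariate columns) are hypothetical placeholders rather than the code actually used in this study.

```r
library(MASS)      # fitdistr()
library(survival)  # clogit()

# Stage 1: for one animal-season, fit a Gamma distribution to the observed step
# lengths (obs_sl, in metres) by maximum likelihood, then draw N = 10 available
# step lengths and uniform turn angles per observed step.
gfit     <- fitdistr(obs_sl, "gamma")     # very large values may need rescaling first
avail_sl <- rgamma(10, shape = gfit$estimate["shape"], rate = gfit$estimate["rate"])
avail_ta <- runif(10, min = -pi, max = pi)
# Habitat covariates (slope, heat load, NDVI) are then extracted at the end-points
# implied by these lengths and angles.

# Stage 2: conditional logistic regression on a data frame 'steps' in which each
# observed step (case_ = 1) is matched to its 10 available steps (case_ = 0)
# within the stratum 'step_id_'.
issa <- clogit(case_ ~ sl_ + log_sl_ + cos_ta_ +
                 slope + I(slope^2) + heat + I(heat^2) + ndvi + I(ndvi^2) +
                 strata(step_id_),
               data = steps)
summary(issa)
```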
In particular, available step lengths were randomly sampled from a Gamma distribution fitted by maximum likelihood to the observed step lengths of each animal in each season, and directions were randomly sampled from a uniform distribution of turn angles between successive steps. Habitat covariate values were extracted for the end-point of each step and consisted of environmental variables related to foraging and roosting habitat: terrain slope (derived from a digital terrain model; www.juntadeandalucia.es/medioambiente/site/rediam), heat load (derived from the same digital terrain model) [30, 31], both with a spatial resolution of 10 m, and a primary productivity index (the Normalized Difference Vegetation Index, NDVI) obtained from satellite imagery (NASA product MOD13Q1; spatial resolution = 250 m, temporal resolution = 16 days). Because NDVI varies over time, each observed and control location was associated with the specific NDVI value corresponding to that location at the closest date. Heat load is an index of incident solar radiation that takes into account the orientation of the terrain slope; thus, depending on latitude and season, it is associated with vegetation cover and snow depth. All habitat variables were centred and standardized, and their quadratic effects were also included as explanatory variables, because habitat selection might show non-monotonic responses. Habitat variables were checked for collinearity by performing pair-wise correlation tests (absolute correlation coefficients were below 0.70 in all but one of the 76 pair-wise tests, in which the correlation was below −0.70; Additional file 1: Table S1). Each iSSA model included movement covariates, namely step length, its natural logarithm, and the angular deviation (cosine of the turn angle), as well as all habitat covariates mentioned above. Models were estimated using conditional logistic regression in the R package "survival" [32, 33]. The importance of habitat selection in explaining space use was determined by comparing iSSA models containing only the movement covariates with models containing both movement and habitat covariates by means of the Akaike Information Criterion (AIC). In order to estimate the mean step length (l_mean) while accounting for habitat selection, we combined the estimated step-length coefficients with the parameter estimates of the Gamma distributions (used for sampling available step lengths) as follows [4]: $$ {l}_{mean}=\frac{k+{\beta}_{\ln (l)}}{{\theta}^{-1}-{\beta}_{l}}, $$ where k and θ are the shape and scale of the observed Gamma distribution, respectively, and β_l and β_ln(l) are the iSSA coefficients for the observed step length and its natural logarithm, respectively. Because selection coefficients are not explicit about the range and values of used habitat, and might depend on other habitat covariates, we used the relative selection strength (RSS) [34] to show and interpret habitat selection results. RSS quantifies how strongly one location is selected relative to another, reference location. To obtain "population" (as defined by a group of interest) rather than individual RSS, we bootstrapped the mean RSS and calculated population 95% confidence intervals (see a similar approach in [35]). Home range estimation Home range size was estimated by calculating the area used by the different tracked animals through a bivariate normal utilization kernel using the R package "adehabitatHR" [36].
We used the reference smoothing parameter (h_ref) [37] to estimate home ranges and the respective 90% and 50% contours (percentages chosen based on [38]), the latter representing an estimate of core areas. Home ranges were estimated for each animal in each season. Determinants of home range size Linear mixed-effects models (LMM) were used to test how home range size varies within and across seasons and what drives this variation. We hypothesised that season, selection-independent movement, selection strength, resource availability, and sex drive home range size. Selection-independent movement was characterised by the mean step length estimated from the iSSAs and the iSSA movement coefficient for the turn angle, which represents a measure of directional persistence (i.e. the concentration parameter of a von Mises distribution; [4]). Home range size was expected to increase with both step length (i.e. displacement rate) and directional persistence. iSSA selection coefficients were used as selection strength covariates. For resource availability, we used several proxies that included altitude and population nucleus as well as the mean and coefficient of variation (CV) of habitat variables related to forage availability (heat load and NDVI). It is worth noting that ibex were found to select a defined range of resource values, as consistently significant unimodal relationships were predicted by the iSSA models (see Results); therefore, a higher CV (i.e. lower resource density) meant lower resource availability. The log-transformed home range size was used as the response variable, and the predictors included season (four-level factor), population nucleus (two-level factor), altitude, mean step length, directional persistence, slope selection (the linear and quadratic selection coefficients), heat load selection (linear and quadratic), NDVI selection (linear and quadratic), heat load availability (mean and CV), NDVI availability (mean and CV), and sex (two-level factor). Ibex identity was included as a random intercept. We performed model selection by comparing all models with all possible predictor combinations by means of corrected AIC (AICc). Model estimation and selection were performed with the R packages "lme4" [39] and "MuMIn" [40], respectively. The relative importance of the different home range drivers was assessed by the difference in AICc when removing the target group of predictors. General space use patterns The tracked ibex lived at high altitudes and had home range sizes of 0.39–33.17 km2 (90% kernel contour) and core areas of 0.08–10.79 km2 (50% kernel contour). Covered daily distances (net displacements) ranged from 332 to 3097 m and were correlated with seasonal home range size (Pearson's r = 0.90, p < 0.001, for the log-log correlation). Ibex performed altitudinal movements in the western nucleus, but not in the eastern nucleus, except for a few males that moved to higher altitudes during summer (Fig. 1a, b). In the western nucleus, ibex moved gradually to higher altitudes during spring, stayed at higher altitudes during summer, and descended during autumn, staying at lower altitudes during winter (Fig. 1a). Primary productivity (NDVI) followed the same seasonal pattern in the geographical areas where the western population lives. Vegetation cover increased from winter to spring, decreased from spring to summer at low and middle altitudes but increased at high altitudes, and decreased in autumn, except at lower altitudes where the NDVI increased again (Fig.
1c). In the eastern nucleus, primary productivity was more stable across the year (Fig. 1d), as this area is located at lower altitude and has a denser forest cover. We also observed sexual segregation in space, with females living at higher altitudes than males in the western nucleus, and the inverse pattern in the eastern nucleus (Fig. 1a, b), which suggests sexual differences in space use. Altitude of tracked ibexes across the year for the western (a) and eastern (b) population nuclei, as well as seasonal variation of primary productivity (NDVI) at different altitudes in the western (c) and eastern (d) nuclei Habitat selection In the iSSA models, the linear effects of slope and heat load were in general the most significant predictors of habitat selection (70 and 62% of the models, respectively): habitat selection increased linearly with increasing slopes (positive iSSA coefficients) and decreasing heat load (negative iSSA coefficients; Figs. 2 and 3). The quadratic effects of heat load and NDVI were also important in almost half of the cases (45 and 49% of the models, respectively), wherein selection increased up to optimum values of both heat load and NDVI but decreased again for higher values (negative iSSA coefficients corresponding to quadratic effects), indicating a unimodal relationship (Figs. 2 and 3). The quadratic effect of slope and the linear effects of heat load and NDVI were sometimes significant (28 and 33% of the models, respectively). Few seasonal trends in habitat selection were found, meaning that in general ibex selected similar habitat. Nevertheless, selection for both heat load and NDVI in winter tended to be stronger than in other seasons, whereas in the autumn (mating) season selection strength was overall reduced (Fig. 2). Differences between sexes and nuclei were larger (Fig. 2). Females tended to select steeper slopes than males, except in the autumn (mating) season, a pattern that was consistent between nuclei. Selection for heat load was more driven by sex, as males chose locations with lower heat load and females locations of intermediate heat load (unimodal relationship). However, in summer, lower heat load was selected by both males and females in the west (higher) nucleus than in the east nucleus (Fig. 2). For NDVI selection, differences between nuclei were larger, with ibex in the east nucleus selecting higher heat load and NDVI values (Fig. 3), though this was the result of local habitat availability rather than differential selection – selection coefficients were overall similar between nuclei (Fig. 3). Relative selection strength (log-transformed RSS) for selecting location x1 over x2 (habitat value in x2 = 50%). The multiple panels correspond to all the combinations between season (rows) and habitat variable (columns). Continuous and dashed lines correspond to females and males, respectively; and dark and light lines correspond to the west and east nucleus, respectively. Note that only the mean RSS is shown to improve interpretation – see Additional file 1: Figure S2 for the figure with associated 95% confidence intervals Boxplots of selection coefficients from the iSSA models pooled across seasons, sexes and population nuclei Space use by every ibex in every season was significantly determined by habitat selection (as estimated by comparing the AIC for iSSA models including and excluding habitat variables; difference in AIC > 2), except for one female ibex during spring. 
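The selection-free mean step length discussed in the next paragraph can be recovered from the fitted iSSA coefficients together with the Gamma parameters, using the expression given in the Methods. A minimal sketch, reusing the hypothetical object names from the earlier code:

```r
# Illustrative only (hypothetical names carried over from the iSSA sketch above):
# recover the selection-free mean step length from the fitted coefficients and
# the Gamma parameters, following the expression given in the Methods.
k       <- gfit$estimate["shape"]
theta   <- 1 / gfit$estimate["rate"]       # scale = 1 / rate
b_sl    <- coef(issa)["sl_"]               # coefficient for step length
b_logsl <- coef(issa)["log_sl_"]           # coefficient for log step length
l_mean_selection_free <- (k + b_logsl) / (1 / theta - b_sl)
```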
Estimated mean step length, once habitat selection was accounted for, was almost always higher than the observed step length (Fig. 4), meaning that habitat selection constrained movement. Difference between observed and estimated mean step length pooled across seasons, sexes and population nuclei. A zero value represents the situation in which habitat selection does not influence movement behaviour and in turn space use Home range size The most parsimonious model that explained the highest amount of variation in home range size (90% contour) included, in order of importance, selection-independent movement, selection strength, sex, season and resource availability (Table 1; only the results for the best model are shown). For the core area (50% contour) model, selection-free movement was again the most important predictor, followed by season, sex, selection strength, resource availability and nucleus. For both models, home range size increased with increasing displacement rate and directional persistence, as well as with increasing selection strength and resource dispersion (CV) (Table 1). Except for females in the eastern nucleus, which did not show seasonal variation, home range size was larger during spring and autumn and smaller during summer and winter (Fig. 5). In the western nucleus, where all ibex moved to higher altitudes and primary productivity is on average lower, home ranges were generally larger than in the eastern nucleus (Fig. 5), though this difference was only significant for the core areas (50% home range contour). Males generally had larger home ranges than females (Fig. 5). Table 1 Coefficients of the best home range size models (linear mixed models), for both the 90% and 50% contours. The AICc corresponds to a model in which the target group of predictors was removed, thus being a measure of its explanatory importance. NS, predictor not selected Home range size (90% contours) for the different combinations of season, sex and population nucleus Overall, qualitatively similar results were obtained for the 90% and 50% kernel contours of home ranges, and each model explained a high proportion of home range size variation (proportion of deviance explained by the marginal effects = 0.84 and 0.82, respectively). The only difference was that nucleus was selected only in the best core area (50% contour) model, and NDVI availability was selected only in the 90% home range model. The patterns of space use by ibex in Sierra Nevada, namely home range size, varied across seasons and were determined by selection-independent movement patterns, resource selection strength and availability, and sex. Ibex that showed higher displacement rates and more directional persistence had, as expected, larger home ranges. This means that unmeasured habitat and landscape features, and/or individual characteristics and motivations, are important in explaining individual variation in home range size. On the other hand, increased selection strength and less available or more scattered resources also led to larger home ranges. Finally, intrinsic characteristics related to sex also played a role, as males had larger home ranges than females. Reproduction behaviour and/or body size might explain the sex effect on home range size and sexual segregation. The availability and distribution of preferred resources is a general driver of home range size in a wide diversity of animals [2, 41].
For example, herbivores show larger home ranges when forage is less available across the landscape [18]; and polar bears show larger home ranges when the distribution of their preferred prey (seals) is more unpredictable [17]. In Sierra Nevada, the altitudinal gradients in habitat availability during the spring season, e.g. forage quality and cooler temperature (lower heat load), progressively attract ibex to higher altitudes where fresh vegetation grows, which might explain the larger home range sizes. In autumn, vegetation at higher altitude becomes unavailable due to seasonal senescence and snowfall, leading ibex to descend back looking for fresh vegetation at lower altitudes, which again explains the larger home range sizes. On the contrary, during summer and winter, resources are more stable at higher and lower altitudes, respectively, and thus home range size can decrease accordingly. Differences between population nuclei are probably associated to different seasonal dynamics at different altitudes. At higher altitudes (in the western nucleus) snowfall is a major determinant of resource availability throughout the year, with resources becoming progressively available as snow cover retreats. However, at lower altitudes snowfall is not as intense and vegetation is denser (i.e. higher NDVI) and more stable across the year, which might provide constant forage. The differences in home range size between nuclei were explained by the resource availability effect, and this is probably the reason why the effect of nucleus and NDVI availability (CV) were interchanged between the 90% and 50% range contour models. According to our expectations, home range size increased with selection-independent displacement rate and directional persistence. This might be due to individual differences in movement behaviour associated to the individuals' internal state (physiological factors), morphology or even personality affecting, for example, activity, boldness and exploratory behaviour [2, 42]. Although we did not test for individual characteristics, the variation in habitat selection and use observed through the iSSA models (Figs. 2 and 3) suggest that individual differences explain some space use patterns. The higher variability in space use during winter (Fig. 2) is especially evident, and might indicate individual responses to winter conditions. Nevertheless, we cannot discard that unmeasured resources and landscape configuration, that could be implicitly driving the effect of selection-independent movement, also play an important role in home range behaviour. We also note that the relationship between displacement rate and home range size might depend on the temporal scale, for example with weekly or monthly home ranges. As in other ungulate species, we also found evident sexual segregation in space, as indicated by differences in altitude across the entire year. Sexual segregation and consequent space use patterns have been widely discussed and seem to be common among ungulate species (e.g. [43, 44]), including the Iberian ibex [45]. Although we do not have a definitive explanation for this segregation, our results support both the reproductive strategy and forage selection hypotheses, which might be complementary rather than exclusive [44]. Accordingly, on one hand, females have to protect and feed their offspring to maximize their survival (as also observed in other ungulates) (e.g. 
[43]), and thus tended to choose steeper slopes, low to intermediate heat loads and more opened (less vegetated) areas. These habitats might provide more protection against predation. Although the ibex has no major predators in Sierra Nevada, small carnivores such as foxes and golden eagles can prey on young animals and foster innate anti-predatory behaviour (these ibex still have alarm calls in herds). Females also showed smaller home ranges than males, which might partly be caused by restrictions to movement posed by raising their offspring, for example if the reduced mobility of young animals restricts the movement of females. On the other hand, males are less restricted by young ibex and predators, and might invest more time and space searching for food. This might explain their larger home ranges. Differences in home range size between sexes suggest that there might be a trade-off between foraging and reproductive strategies. According to the optimal foraging theory [46], the size of home ranges should be determined by the balance between the time and energy required to ultimately maximize fitness. Animals have to reproduce, consuming time and energy that could otherwise be used to increase forage efficiency. Because female ibex have the burden of raising offspring, a trade-off between foraging and reproductive strategies might underlie the observed smaller home range sizes. An alternative explanatory hypothesis is that males are bigger than females and thus have higher energetic demands that can be satisfied by having access to larger habitat patches [23]. Although we cannot discard the potential role of other movement drivers, such as territory defence behaviour, exploration of other types of resources, or predator avoidance in explaining observed space use patterns, these ibex are gregarious and have no natural predators (at least adults), thus these seem to be less important factors [19, 25, 26]. Further, we acknowledge that we dealt mainly with third-order habitat selection, i.e. selection within home range, and second-order selection related to resource distribution; however, habitat structure and landscape configuration at both broader and shorter spatial scales might influence space use and home range size by constraining movement at finer temporal scales and coarser spatial scales [8]. We highlight the importance of understanding how space use varies across time. Even finer temporal and spatial scales should provide further insight into habitat selection [47] and home range size [48]. For example, how does the importance of habitat selection and selection-free movement in explaining space use vary with temporal and spatial scale? Do these scaling relationships vary among individuals, species, and related traits? Such knowledge could provide an impartial tool to make comparisons across species and ecosystems that would contribute to delineate general mechanisms of home range behaviour. Still, we note the high proportion of variation explained by our home range size models, which suggest that a significant proportion of space use patterns might be explained by habitat selection and movement processes happening at the 4-h temporal scale and home-range spatial scale. The Iberian ibex is currently undergoing a range expansion process throughout the Iberian Peninsula, both through natural dispersal and via reintroduction programmes [19, 49]. Although it has a conservation status of "least concern" by the IUCN due to current range expansion, two subspecies (C. p. lusitanica and C. 
p. pyrenaica) have already gone extinct (one in 2000) and another (C. p. victoriae) has a restricted distribution in the northwest Iberian Peninsula [19, 49]. Hence, understanding space use by the Iberian ibex may help to select introduction sites and predict colonisation patterns. The iSSA models allow generating maps of the expected utilisation distribution of animals across different landscapes, and may be a useful tool for management purposes [50]. This study contributes to better understand the ecological determinants of home range behaviour and dynamics. Our results suggest that only an integrative assessment of both movement and habitat selection may allow us to understand home range size in large herbivores. Although space use should be studied at different temporal and spatial scales in the future, the dynamic nature of resource availability and individual responses to changing environmental conditions were important to explain movement behaviour and in turn space use patterns. This provides further insight into how movement ecology drives home range size and the dynamics of space use. Burt WH. Territoriality and home range concepts as applied to mammals. J Mammal. 1943;24:346–52. Börger L, Dalziel BD, Fryxell JM. Are there general mechanisms of animal home range behaviour? A review and prospects for future research. Ecol Lett. 2008;11:637–50. Forester JD, Im HK, Rathouz PJ. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis. Ecology. 2009;90:3554–65. Avgar T, Potts JR, Lewis MA, Boyce MS. Integrated step selection analysis: bridging the gap between resource selection and animal movement. Methods Ecol Evol. 2016;7:619–30. Van Moorter B, Rolandsen CM, Basille M, Gaillard J-M. Movement is the glue connecting home ranges and habitat selection. J Anim Ecol. 2016;85:21–31. Manly BFL, McDonald L, Thomas D, McDonald TL, Erickson WP. Resource selection by animals: statistical design and analysis for field studies. The Netherlands: Kluwer Academic Publisher; 2007. Moorcroft PR. Mechanistic approaches to understanding and predicting mammalian space use: recent advances, future directions. J Mammal. 2012;93:903–16. Van Beest FM, Rivrud IM, Loe LE, Milner JM, Mysterud A. What determines variation in home range size across spatiotemporal scales in a large browsing herbivore? J Anim Ecol. 2011;80:771–85. Johnson DH. The comparision of usage and availability measurements for evaluating resource preference. Ecology. 1980;61:65–71. Giuggioli L, Kenkre VM. Consequences of animal interactions on their dynamics: emergence of home ranges and territoriality. Mov Ecol. 2014;2:20. Van Beest FM, Mysterud A, Loe LE, Milner JM. Forage quantity, quality and depletion as scale-dependent mechanisms driving habitat selection of a large browsing herbivore. J Anim Ecol. 2010;79:910–22. Mueller T, Olson KA, Dressler G, Leimgruber P, Fuller TK, Nicolson C, et al. How landscape dynamics link individual- to population-level movement patterns: a multispecies comparison of ungulate relocation data. Glob Ecol Biogeogr. 2011;20:683–94. Teitelbaum CS, Fagan WF, Fleming CH, Dressler G, Calabrese JM, Leimgruber P, et al. How far to go? Determinants of migration distance in land mammals. Ecol Lett. 2015;18:545–52. Dahle B, Swenson JE. Seasonal range size in relation to reproductive strategies in brown bears Ursus arctos. J Anim Ecol. 2003;72:660–7. Pettorelli N, Ryan S, Mueller T, Bunnefeld N, Jędrzejewska B, Lima M, et al. 
The normalized difference vegetation index (NDVI). Clim Res. 2011;46:15–27. Neumann W, Martinuzzi S, Estes AB, Pidgeon AM, Dettki H, Ericsson G, et al. Opportunities for the application of advanced remotely-sensed data in ecological studies of terrestrial animal movement. Mov Ecol. 2015;3:8. Ferguson SH, Taylor MK, Born EW, Rosing-Asvid A, Messier F. Determinants of home range size for polar bears (Ursus maritimus). Ecol Lett. 1999;2:311–8. Morellet N, Bonenfant C, Börger L, Ossi F, Cagnacci F, Heurich M, et al. Seasonality, weather and climate affect home range size in roe deer across a wide latitudinal gradient within Europe. J Anim Ecol. 2013;82:1326–39. Acevedo P, Cassinello J. Biology, ecology and status of Iberian ibex Capra pyrenaica: a critical review and research prospectus. Mamm Rev. 2009;39:17–32. Scillitani L, Sturaro E, Monaco A, Rossi L, Ramanzin M. Factors affecting home range size of male alpine ibex (Capra Ibex Ibex) in the Marmolada Massif. Hystrix. Ital J Mammal. 2012;23:19–27. Villaret JC, Bon R, Rivet A. Sexual segregation of habitat by the alpine ibex in the French alps. J Mammal. 1997;78:1273–81. Grignolio S, Rossi I, Bertolotto E, Bassano B, Apollonio M. Influence of the kid on space use and habitat selection of female alpine ibex. J Wildl Manag. 2007;71:713–9. Mysterud A, Pérez-Barbería FJ, Gordon IJ. The effect of season, sex and feeding style on home range area versus body mass scaling in temperate ruminants. Oecologia. 2001;127:30–9. Saïd S, Gaillard J, Duncan P, Guillon N, Guillon N, Servanty S, et al. Ecological correlates of home-range size in spring–summer for female roe deer (Capreolus capreolus) in a deciduous woodland. J Zool. 2005;267:301–8. Granados JE, Pérez JM, Márquez FJ, Serrano E, Soriguer RC, Fandos P. La cabra montés (Capra pyrenaica, Schinz 1838). Galemys. 2001;13:3–37. Martínez T. Utilisation de l'analyse micrographique des fèces pour l'étude du régime alimentaire du bouquetin de la Sierra Nevada (Espagne). Mammalia. 1988;52:465–74. Fortin D, Beyer HL, Boyce MS, Smith DW, Duchesne T, Mao JS. Wolves influence elk movements: behavior shapes a trophic cascade in Yellowstone National Park. Ecology. 2005;86:1320–30. Thurfjell H, Ciuti S, Boyce MS. Applications of step-selection functions in ecology and conservation. Mov Ecol. 2014;2:1–12. https://doi.org/10.1186/2051-3933-2-4. Fieberg J, Matthiopoulos J, Hebblewhite M, Boyce MS, Frair JL. Correlation and studies of habitat selection: problem, red herring or opportunity? Philos T R Soc B. 2010;365:2233–44. McCune B, Keon D. Equations for potential annual direct incident radiation and heat load. J Veg Sci. 2002;13:603–6. Shafer A, Northrup JM, White KS, Boyce MS, Côté SD, Coltman DW. Habitat selection predicts genetic relatedness in an alpine ungulate. Ecology. 2012;93:1317–29. R Development Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3–900051–07-0, URL http://www.R-project.org. 2015. Therneau T. A package for survival analysis in S. version 2.38. URL http//CRANR-project.org/package=Surviv. 2015. Avgar T, Lele SR, Keim JL, Boyce MS. Relative selection strength: quantifying effect size in habitat- and step-selection inference. Ecol Evol. 2017;7:5322–30. Prokopenko CM, Boyce MS, Avgar T. Characterizing wildlife behavioural responses to roads using integrated step selection analysis. J Appl Ecol. 2017;54:470–9. Calenge C. 
The package "adehabitat" for the R software: a tool for the analysis of space and habitat use by animals. Ecol Model. 2006;197:516–9. Worton BJ. Kernel methods for estimating the utilization distribution in home-range studies. Ecology. 1989;70:164–8. Börger L, Franconi N, Ferretti F, Meschi F, De MG, Gantz A, et al. An integrated approach to identify spatiotemporal and individual-level determinants of animal home range size. Am Nat. 2006;168:471–85. Bates D, Maechler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67:1–48. Bartón K. MuMIn: Multi-model inference. R Package version 1.15.6. 2016. http//CRANR-project.org/package=MuMIn. Mitchell MS, Powell RAA. Mechanistic home range model for optimal use of spatially distributed resources. Ecol Model. 2004;177:209–32. Spiegel O, Leu ST, Bull CM, Sih A. What's your move? Movement as a link between personality and spatial dynamics in animal populations. Ecol Lett. 2017;20:3–18. Bleich VC, Bowyer RT, Wehausen JD. Sexual segregation in mountain sheep: resources or predation? Wildl Monogr. 1997;134:3–50. Main MB. Reconciling competing ecological explanations for sexual segregation in ungulates. Ecology. 2008;89:693–704. Alados CL. Group size and composition of the Spanish ibex (Capra pyrenaica Schinz) in the sierras of Cazorla and Segura. In: The biology and management of mountain ungulates. Croom-helm London; 1985. p. 147. Stephens DW, Krebs JR. Foraging theory. New Jersey: Princeton University Press; 1986. McGarigal K, Zeller KA, Cushman SA. Multi-scale habitat selection modeling: introduction to the special issue. Landsc Ecol. 2016;31:1157–60. Kie JG, Matthiopoulos J, Fieberg J, Powell RA, Cagnacci F, Mitchell MS, et al. The home-range concept: are traditional estimators still relevant with modern telemetry technology? Philos Trans R Soc B Biol Sci. 2010;365:2221–31. Pérez JM, Granados JE, Soriguer RC, Fandos P, Márquez FJ, Crampe JP. Distribution, status and conservation problems of the Spanish ibex, Capra pyrenaica (Mammalia: Artiodactyla). Mamm Rev. 2002;32:26–39. Signer J, Fieberg J, Avgar T. Estimating utilization distributions from fitted step-selection functions. Ecosphere. 2017;8:e01771. We thank the Sierra Nevada National Park Service and the people who helped in the fieldwork, especially Isidro Puga, José López, Elias Martínez, Manuela Fernández, Antonio José Rodríguez and Apolo Sánchez. Björn Reineking provided useful R code used to perform the iSSA analysis. We are also grateful for the constructive comments of Tal Avgar and three anonymous reviewers. This study was funded by the Ministerio de Educación y Ciencia, project CGL2004–03171/BOS. DSV was supported by project RECUPERA 2020, Hito 1.1.1, cofinanced by the Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF). Estación Biológica de Doñana, CSIC, C/Américo Vespucio, s/n, E-41092, Sevilla, Spain Duarte S. Viana , Jordi Figuerola & Ramón C. Soriguer German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Deutscher Platz 5e, 04103, Leipzig, Germany Centro Administrativo Parque Nacional Sierra Nevada, Carretera Antigua Sierra Nevada km 7, 18071 Pinos Genil, Granada, Spain José Enrique Granados & Francisco Javier Cano-Manuel Agencia de Medio Ambiente y Agua, Junta de Andalucía. C/ Johann G. 
Gutenberg 1, 41092, Sevilla, Spain Paulino Fandos & Daniel Burón Departamento Biología Animal, Biología Vegetal y Ecología, Universidad de Jaén, Campus Las Lagunillas, s.n., 23071, Jaén, Spain Jesús M. Pérez Departamento de Zoología y Antropología Física, Facultad de Biología, Universidad Complutense de Madrid, 28040, Madrid, Spain Guillermo Fandos Fundación Oso Pardo, Calle San Luis 17, 4ºA, Santander, 39010, Spain María Ángeles Párraga Aguado Search for Duarte S. Viana in: Search for José Enrique Granados in: Search for Paulino Fandos in: Search for Jesús M. Pérez in: Search for Francisco Javier Cano-Manuel in: Search for Daniel Burón in: Search for Guillermo Fandos in: Search for María Ángeles Párraga Aguado in: Search for Jordi Figuerola in: Search for Ramón C. Soriguer in: DSV conceived the idea of the manuscript, analysed data and wrote the manuscript. JF and RCS were major contributors in conceiving the study. JEG, PF, JMP, FJCM, DB, GF, MAPA and RCS conceived the experimental design and participated in the fieldwork. All authors read and approved the final manuscript. Correspondence to Duarte S. Viana. All the procedures involving animal capturing and tagging were performed under the current ethical guidelines and approved by the pertinent authorities. Additonal tables and figures. (DOCX 1432 kb) Viana, D.S., Granados, J.E., Fandos, P. et al. Linking seasonal home range size with habitat selection and movement in a mountain ungulate. Mov Ecol 6, 1 (2018) doi:10.1186/s40462-017-0119-8 Animal movement Home range Integrated step selection analysis Resource selection Satellite-tracking
In the nearer future, Lynch points to nicotinic receptor agents – molecules that act on the neurotransmitter receptors affected by nicotine – as ones to watch when looking out for potential new cognitive enhancers. Sarter agrees: a class of agents known as α4β2* nicotinic receptor agonists, he says, seem to act on mechanisms that control attention. Among the currently known candidates, he believes they come closest "to fulfilling the criteria for true cognition enhancers." A number of different laboratory studies have assessed the acute effect of prescription stimulants on the cognition of normal adults. In the next four sections, we review this literature, with the goal of answering the following questions: First, do MPH (e.g., Ritalin) and d-AMP (by itself or as the main ingredient in Adderall) improve cognitive performance relative to placebo in normal healthy adults? Second, which cognitive systems are affected by these drugs? Third, how do the effects of the drugs depend on the individual using them? The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give) their medication. Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007). In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell or trade or give away their pills over their lifetime. "You know how they say that we can only access 20% of our brain?" says the man who offers stressed-out writer Eddie Morra a fateful pill in the 2011 film Limitless. "Well, what this does, it lets you access all of it." Morra is instantly transformed into a superhuman by the fictitious drug NZT-48. Granted access to all cognitive areas, he learns to play the piano in three days, finishes writing his book in four, and swiftly makes himself a millionaire. Drugs and catastrophe are seemingly never far apart, whether in laboratories, real life or Limitless. Downsides are all but unavoidable: if a drug enhances one particular cognitive function, the price may be paid by other functions. To enhance one dimension of cognition, you'll need to appropriate resources that would otherwise be available for others. We've talk about how caffeine affects the body in great detail, but the basic idea is that it can improve your motivation and focus by increasing catecholamine signaling. Its effects can be dampened over time, however, as you start to build a caffeine tolerance. Research on L-theanine, a common amino acid, suggests it promotes neuronal health and can decrease the incidence of cold and flu symptoms by strengthening the immune system. 
And one study, published in the journal Biological Psychology, found that L-theanine reduces psychological and physiological stress responses—which is why it's often taken with caffeine. In fact, in a 2014 systematic review of 11 different studies, published in the journal Nutrition Reviews, researchers found that use of caffeine in combination with L-theanine promoted alertness, task switching, and attention. The reviewers note the effects are most pronounced during the first two hours post-dose, and they also point out that caffeine is the major player here, since larger caffeine doses were found to have more of an effect than larger doses of L-theanine. See Melatonin for information on effects & cost; I regularly use melatonin to sleep (more to induce sleep than prolong or deepen it), and investigating with my Zeo, it does seem to improve & shorten my sleep. Some research suggests that higher doses are not necessarily better and may be overkill, so each time I've run out, I've been steadily decreasing the dose from 3mg to 1.5mg to 1mg, without apparently compromising the usefulness. Remember: The strictest definition of nootropics today says that for a substance to be a true brain-boosting nootropic it must have low toxicity and few side effects. Therefore, by definition, a nootropic is safe to use. However, when people start stacking nootropics indiscriminately, taking megadoses, or importing them from unknown suppliers that may have poor quality control, it's easy for safety concerns to start creeping in. Taking the tryptophan is fairly difficult. The powder as supplied by Bulk Nutrition is extraordinarily dry and fine; it seems to be positively hydrophobic. The first time I tried to swallow a teaspoon, I nearly coughed it out - the powder seemed to explode in my mouth and go down my lungs. Thenceforth I made sure to have a mouthful of water first. After a while, I took a different tack: I mixed in as much Hericium as would fit in the container. The mushroom powder is wetter and chunkier than the tryptophan, and seems to reduce the problem. Combining the mix with chunks of melatonin inside a pill works even better. Starting from the studies in my meta-analysis, we can try to estimate an upper bound on how big any effect would be, if it actually existed. One of the most promising null results, Southon et al 1994, turns out to be not very informative: if we punch in the number of kids, we find that they needed a large effect size (d=0.81) before they could see anything. Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short- and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect. I am not alone in thinking of the potential benefits of smart drugs in the military. In their popular novel Ghost Fleet: A Novel of the Next World War, P.W. Singer and August Cole tell the story of a future war using drug-like nootropic implants and pills, such as Modafinil. DARPA is also experimenting with neurological technology and enhancements such as the smart drugs discussed here, as demonstrated in brain initiatives such as Targeted Neuroplasticity Training (TNT), Augmented Cognition, and high-quality interface systems such as its Next-Generation Nonsurgical Neurotechnology (N3).
Does little alone, but absolutely necessary in conjunction with piracetam. (Bought from Smart Powders.) When turning my 3kg of piracetam into pills, I decided to avoid the fishy-smelling choline and go with 500g of DMAE (Examine.com); it seemed to work well when I used it before with oxiracetam & piracetam, since I had no piracetam headaches, and it is considerably less bulky. I bought 500g of piracetam (Examine.com; FDA adverse events) from Smart Powders (piracetam is one of the cheapest nootropics and SP was one of the cheapest suppliers; the others were much more expensive as of October 2010), and I've tried it out for several days (started on 7 September 2009, and used it steadily up to mid-December). I've varied my dose from 3 grams to 12 grams (at least, I think the little scoop measures in grams), taking them in my tea or bitter fruit juice. Cranberry worked the best, although orange juice masks the taste pretty well; I also accidentally learned that piracetam stings horribly when I got some on a cat scratch. 3 grams (alone) didn't seem to do much of anything while 12 grams gave me a nasty headache. I also ate 2 or 3 eggs a day. It is often associated with Ritalin and Adderall because they are all CNS stimulants and are prescribed for the treatment of similar brain-related conditions. In the past, ADHD patients reported prolonged attention while studying upon Dexedrine consumption, which is why this smart pill is further studied for its concentration and motivation-boosting properties. Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <$820; even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days (0.2 × 48 × 7.25 ≈ 70), it's still a clear profit to run a convincing experiment. It was a productive hour, sure. But it also bore a remarkable resemblance to the normal editing process. I had imagined that the magical elixir coursing through my bloodstream would create towering storm clouds in my brain which, upon bursting, would rain cinematic adjectives onto the page as fast as my fingers could type them. Unfortunately, the only thing that rained down was Google searches that began with the words "synonym for"—my usual creative process. "Love this book! Still reading and can't wait to see what else I learn…and I am not brain injured! Cavin has already helped me to take steps to address my food sensitivity…seems to be helping and I am only on day 5! He has also helped me to help a family member who has suffered a stroke. Thank you Cavin, for sharing all your knowledge and hard work with us! This book is for anyone that wants to understand and implement good nutrition with all the latest research to back it up. Highly recommend!" "We stumbled upon fasting as a way to optimize cognition and make yourself into a more efficient human being," says Manuel Lam, an internal medicine physician who advises Nootrobox on clinical issues. He and members of the company's executive team have implanted glucose monitors in their arms — not because they fear diabetes but because they wish to track the real-time effect of the foods they eat.
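As an aside on the net-present-value figure quoted a few paragraphs above (an annual saving of roughly $40 valued at under $820): one way to arrive at a number like that is to treat the saving as an indefinite annuity discounted at about 5% a year. The 5% rate is an assumption for illustration, not something stated in the text.

```r
# Hypothetical reconstruction (assumed 5% annual discount rate): value an
# indefinite annual saving of $40 as a continuously discounted perpetuity.
annual_saving <- 40
discount_rate <- 0.05
annual_saving / log(1 + discount_rate)   # ~= 820
```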
Going back to the 1960s, although it was a Romanian chemist who is credited with discovering nootropics, a substantial amount of research on racetams was conducted in the Soviet Union. This resulted in the birth of another category of substances entirely: adaptogens, which, in addition to benefiting cognitive function, were thought to allow the body to better adapt to stress. I have elsewhere remarked on the apparent lack of benefit to taking multivitamins and the possible harm; so one might well wonder about a specific vitamin like vitamin D. However, a multivitamin is not vitamin D, so it's no surprise that they might do different things. If a multivitamin had no vitamin D in it, or if it had vitamin D in different doses, or if it had substances which interacted with vitamin D (such as calcium), or if it had substances which had negative effects which outweigh the positive (such as vitamin A?), we could well expect differing results. In this case, all of those are true to varying extents. Some multivitamins I've had contained no vitamin D. The last multivitamin I was taking both contains vitamins used in the negative trials and also some calcium; the listed vitamin D dosage was a trivial ~400IU, while I take >10x as much now (5000IU). It isn't unlikely to hear someone from Silicon Valley say the following: "I've just cycled off a stack of Piracetam and CDP-Choline because I didn't get the mental acuity I was expecting. I will try a blend of Noopept and Huperzine A for the next two weeks and see if I can increase my output by 10%. We don't have immortality yet and I would really like to join the three comma club before it's all over." White, Becker-Blease, & Grace-Bishop (2006) surveyed a large sample of university undergraduates and graduates in 2002 (N = 1,025) and found a 16.2% lifetime prevalence of nonmedical stimulant use. Reasons given were: 68.9% to improve attention; 65.2% partying; 54.3% to improve study habits; 20% to improve grades; and 9.1% to reduce hyperactivity. Frequency of use was 2–3 times per week for 15.5%, 2–3 times per month for 33.9%, and 2–3 times per year for 50.6%; 58% rated the drugs easy or somewhat easy to obtain, and write-in comments indicated many obtained stimulants from friends with prescriptions. Flaxseed oil is, ounce for ounce, about as expensive as fish oil, and also must be refrigerated and goes bad within months anyway. Flax seeds, on the other hand, do not go bad within months, and cost dollars per pound. Various resources I found online estimated the ALA component of human-edible flaxseed to be around 20%. So Amazon's 6lbs for $14 is ~1.2lbs of ALA, compared to 16fl-oz of fish oil weighing ~1lb and costing ~$17, while also keeping better and being a calorically useful part of my diet. The flaxseeds can be ground in an ordinary food processor or coffee grinder. It's not a hugely impressive cost-savings, but I think it's worth trying when I run out of fish oil. Alpha Lipoic Acid is a vitamin-like chemical with antioxidant properties that naturally occurs in broccoli, spinach, yeast, kidney, liver, and potatoes. The compound is generally prescribed to patients suffering from nerve-related symptoms of diabetes because it helps in preventing damage to the nerve cells and improves the functioning of neurons. It can be termed one of the best memory-boosting supplements. Smart Pill is a dietary supplement that blends vitamins, amino acids, and herbal extracts to sustain mental alertness, memory and concentration.
One of the ingredients used in this formula is Vitamin B-1, also known as Thiamine, which sustains almost all functions present in the body, but plays a key role in brain health and function. A deficiency of this vitamin can lead to several neurological problems. The most common use of Thiamine is to improve brain function; it supports neurotransmitter production, helping the brain ward off learning and memory disorders; it also provides help with mood disorders and offers stress relief. Sounds too good to be true? Welcome to the world of 'Nootropics', popularly known as 'Smart Drugs', that can help boost your brain's power. Do you recall the scene from the movie Limitless, where Bradley Cooper's character uses a smart drug that makes him brilliant? Yes! The effect of Nootropics on your brain is such that the results come as a no-brainer. On the other hand, sometimes you'll feel a great cognitive boost as soon as you take a pill. That can be a good thing or a bad thing. I find, for example, that modafinil makes you more of what you already are. That means if you are already kind of a dick and you take modafinil, you might act like a really big dick and regret it. It certainly happened to me! I like to think that I've done enough hacking of my brain that I've gotten over that programming… and that when I use nootropics they help me help people.
Chicago Fed Letter, No. 450, January 2021 Did Covid-19 Disproportionately Affect Mothers' Labor Market Activity? By Daniel Aaronson, Luojia Hu, and Aastha Rajan. School and day care center restrictions during the Covid-19 pandemic have presented enormous challenges to parents trying to juggle work with child-care responsibilities. Still, empirical evidence on the impact of pandemic-related child-care constraints on the labor market outcomes of working parents is somewhat mixed. Some studies suggest the pandemic had no additional impact on the labor supply of parents, while other studies show not only that it did but that the negative impact was disproportionately borne by working mothers.1 In this Chicago Fed Letter, we describe estimates of the impact of the pandemic on prime-age (25 to 54) parents' labor market activity through the fall of 2020, with a particular emphasis on mothers. We show that the labor force participation (LFP) of mothers, i.e., the share of working-age mothers employed or seeking employment, declined by an additional 0.6 percentage points in the spring and 0.3 percentage points in the fall, above and beyond the negative toll that the pandemic had on labor market attachment of prime-age adults without kids. The impact translates to roughly 120,000 and 60,000 fewer prime-age mothers in the labor force in the spring and fall, respectively. This estimate is more than fully driven by a decline in employment. Indeed, we estimate roughly 200,000 fewer prime-age mothers were employed throughout the pandemic. The largest impact has been on Black, single, and non-college-educated mothers, mirroring widening employment disparities in the broader labor market since March. We use a conventional statistical model to estimate how labor market activity was impacted by the pandemic. The model takes the general form: $$Y_{it}=\beta_{0}+\beta_{1}(Period_{t})+\beta_{2}(Period_{t}\times Female_{i})+\beta_{3}(Period_{t}\times HasKid_{i})$$ $$+\,\beta_{4}(Period_{t}\times Female_{i}\times HasKid_{i})+\delta X_{it}+\varepsilon_{it}.$$ In words, a person $i$'s labor market outcome—say, whether they are participating in the labor force—in a given month $t$ (which we denote $Y_{it}$) depends on whether that person is being observed in one of three pandemic periods: March to May 2020 (which we refer to as the spring school semester), June to August 2020 (summer),2 or September to November 2020 (fall school semester). The coefficient $\beta_{1}$ in front of our pandemic variable ($Period_{t}$) measures the extent to which labor market activity changed during the three seasons of the pandemic thus far, relative to the pre-pandemic January 2019 to February 2020 period. Our focus is on isolating how parents' labor market outcomes changed during the pandemic. These effects are measured by the coefficients in front of three interaction terms: $Period_{t}\times Female_{i}$, $Period_{t}\times HasKid_{i}$, and $Period_{t}\times Female_{i}\times HasKid_{i}$.
In combination, they provide the estimated effect of the Covid-19 shock on the labor supply of individuals with and without kids for both men and women.3 Of particular importance is the coefficient $\beta_{4}$, which measures the additional effect on women with kids. We refer to this as a triple-difference estimate because it picks up the differential impact between mothers and fathers while adjusting for any differential gender-related impact between women and men without kids. A rich set of controls, $X_{it}$, accounts for an individual's race, age, education, whether there is a spouse in the household, state of residence, and typical industry and occupation.4 The purpose of these controls is to purge other possible observable explanations for changes in labor market activity during the pandemic. Later, we briefly discuss how our results vary by observable characteristics of parents, such as single versus married and high school versus college graduates. We estimate this regression using the U.S. Bureau of Labor Statistics' Current Population Survey (CPS) over the period January 2019 to November 2020. The CPS is a monthly survey of 60,000 households. We restrict our sample to individuals aged 25 to 54, of which there are, on average, roughly 45,000 per month. We discuss results for three labor market outcomes: whether the individual is in the labor force (either employed or unemployed); whether the individual is employed; and, for those working, how many hours they work per week. Figure 1 plots the average decline in labor force participation, after conditioning on the $X_{it}$ controls, for men and women with and without school-aged kids during 2020. The left four bars represent estimates for the spring (March–May), the middle four bars are for the summer (June–August), and the right four bars are for the fall (September–November). Among women with children, the pandemic lowered labor force participation by 0.7 percentage points in the spring and fall and 0.5 percentage points in the summer. By comparison, the LFP of men with kids fell by a more modest 0.3, 0.1, and 0.3 percentage points in the spring, summer, and fall, respectively. Therefore, the difference between fathers and mothers was a consistent 0.4 percentage points throughout the year. 1. Labor force participation during the pandemic, by gender and children Source: Authors' calculations based on data from the U.S. Bureau of Labor Statistics, Current Population Survey. This gender disparity is not nearly as stable among prime-age individuals without kids. Indeed, in the spring and summer, the LFP decline was larger among men without kids than among women without kids. That pattern reversed in the fall. Still, this past fall's 0.16 percentage point difference between women (–0.47 percentage points) and men (–0.31 percentage points) without kids is less than half the 0.44 percentage point difference between mothers and fathers. Our preferred "triple-difference" estimate of the impact on mothers with kids—or again the difference between women and men with kids minus the difference between women and men without kids—is shown in the left panel of figure 2. The pandemic lowered the labor force participation rate of mothers by an additional 0.6 percentage points during the school months in the spring, 0.4 percentage points in the summer, and 0.3 percentage points in the fall. These estimates are all statistically different from zero (the dashed vertical lines are 95 percent confidence bands).
For economic context, the labor force included roughly 19.5 million prime-age mothers of children under age 14 in 2019. Therefore, every 0.1 percentage point decline in LFP corresponds to approximately 19,500 women. 2. Impact on women with children: Labor force participation, employment, and hours Note: Dashed lines represent 95% confidence interval. In the middle panel of figure 2, we report the corresponding triple-difference decline in prime-age mothers' employment was roughly 1 percentage point, and statistically different from zero, throughout the year. These are economically sizable effects; 1 percentage point translates to roughly 190,000 prime-age mothers. The employment losses are larger than the LFP losses because the pandemic led to an increase in the number of unemployed mothers. Finally, in the right panel, we plot the impact of the pandemic on mothers' hours worked among those working. Working mothers report somewhat longer schedules—about 15 to 30 minutes more per week—but relative to the mean workweek of 37.5 hours, this effect is economically small. Variation by demographics Both school and day care center closures have been disruptive for working parents. To get some feel for how much may be due to each, we reran our LFP estimates for two samples: prime-age parents with a child under age five and prime-age parents whose youngest child is school age (between ages five and 13). We find a somewhat larger pandemic effect on mothers with children under five. But the difference is economically small and statistically insignificant. For example, the pandemic lowered the labor force participation rate of mothers with preschool-age kids by an additional 0.68 and 0.34 percentage points in the spring and fall, respectively. By comparison, the effect on mothers with school-age kids was 0.56 and 0.25 percentage points, respectively, in the spring and fall. Finally, we examined the pandemic's impact on the LFP of mothers by race, education, and marital status subgroups. For this exercise, we restrict the sample to women.5 Figure 3 plots our estimates for the spring (horizontal axis) and fall (vertical axis). The dashed 45-degree line represents spring and fall estimates that are the same size. Above the dashed line, the LFP effect improved between spring and fall; and below the line, it got worse. 3. LFP of women with children, spring and fall 2020, by demographic groups Note: LFP indicates labor force participation. Perhaps the most striking pattern is the persistent damage to labor market participation among Black (say, relative to non-Hispanic White), single (relative to married), and high-school-educated (relative to college-educated) mothers. While small sample sizes do not allow for the power to find statistical differences between most groups,6 the patterns appear to mirror widening employment disparities observed in the broader labor market since March. Still, most groups, including Black and single mothers, have seen some improvement in labor market attachment since the spring; that improvement is especially notable among White Hispanic mothers, who had the largest LFP decline in the spring but the smallest (and back to pre-pandemic levels) in the fall. By contrast, the LFP effect among non-Hispanic White mothers grew somewhat larger in the fall. Recent research has documented a long-run trend in the U.S. toward women's labor supply behavior looking more and more like that of men.7 However, there have been tangible differences during the pandemic. 
In particular, mothers' labor force participation declined by an additional 0.6 percentage points in the spring and 0.3 percentage points in the fall, and their employment was roughly 1 percentage point lower throughout the first nine months of the pandemic. These labor market effects have been larger among single, Black, and non-college-educated mothers. While we do not discuss the channels here, other research has noted that some of the gender disparity could be explained by a higher share of women employed in sectors, such as leisure and hospitality, which have been devastated by the pandemic. While we control for indicators of industry and occupation, and can show that this explains some of the disparity, finer dimensions of sector and job type undoubtedly could matter. Others point to persistent social norms of mothers as primary caregivers, which have come into sharp focus with school and day care center restrictions. Household inequities may have been further exacerbated by health restrictions that limited people's ability to rely on extended family members or others for help. Given the persistence of the effects thus far, it would be somewhat surprising to see much of a reversal until schools and day care facilities normalize their operations. Even then, it is an open question as to whether gaps in labor market attachment that have opened up during the pandemic may lead to longer-run obstacles in the labor market. 1 On no impact, see, e.g., Scott Barkowski, Joanne Song McLaughlin, and Yinlin Dai, 2020, "Young children and parents' labor supply during COVID-19," Clemson University and University at Buffalo, research paper, July 27, Crossref, and Misty L. Heggeness, 2020, "Estimating the immediate impact of the COVID-19 shock on parental attachment to the labor market and the double bind of mothers," Review of Economics of the Household, Vol. 18, No. 4, December, pp. 1053–1078, Crossref. On an impact, see, e.g., Titan Alon, Matthias Doepke, Jane Olmstead-Rumsey, and Michèle Tertilt, 2020, "The impact of COVID-19 on gender equality," National Bureau of Economic Research, working paper, No. 26947, April, Crossref, and Liana Christin Landivar, Leah Ruppanner, William J. Scarborough, and Caitlyn Collins, 2020, "Early signs indicate that COVID-19 is exacerbating gender inequality in the labor force," Socius: Sociological Research for a Dynamic World, Vol. 6, January–December, Crossref. 2 The Current Population Survey interview occurs during the week of the 12th. We decided to include August in the summer since most schools began in the second half of that month. 3 ${{\unicode{x03B2}}_{1}}$ gives the estimated effect on labor force participation of men with no kids; ${{\unicode{x03B2}}_{1}}+{{\unicode{x03B2}}_{2}}$ is the effect on women with no kids; ${{\unicode{x03B2}}_{1}}+{{\unicode{x03B2}}_{3}}$ provides the estimated effect on men with at least one school-aged kid; ${{\unicode{x03B2}}_{1}}+{{\unicode{x03B2}}_{2}}+{{\unicode{x03B2}}_{3}}+{{\unicode{x03B2}}_{4}}$ gives an estimate of the effect on women with at least one school-aged kid. 4 To control for seasonal patterns, we also include time-period fixed effects (for March–May, June–August, and September–December) and their interactions with ${{Female}_{i}}$, ${{HasKid}_{i}}$, and ${{Female}_{i}}\times{{HasKid}_{i}}$. 
5 The regression specification is otherwise similar to our triple-difference analysis (e.g., we include all relevant controls), but here we are interested in the coefficient estimate on the interaction term ${{Period}_{t}}\times{{HasKid}_{i}}$. This estimate gives the difference in the effect on labor force participation of women with kids relative to women with no kids within a particular demographic group. 6 Exceptions are Hispanic Whites versus non-Hispanic Whites and single versus married mothers, but only in the spring. 7 See Bradley T. Heim, 2007, "The incredible shrinking elasticities: Married female labor supply, 1978–2002," Journal of Human Resources, Vol. 42, No. 4, Fall, pp. 881–918. Crossref Opinions expressed in this article are those of the author(s) and do not necessarily reflect the views of the Federal Reserve Bank of Chicago or the Federal Reserve System.
Discrete & Continuous Dynamical Systems - B, July 2011, 16(1): 409-421. doi: 10.3934/dcdsb.2011.16.409
Unboundedness of solutions for perturbed asymmetric oscillators
Lixia Wang and Shiwang Ma, School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071, China
Received February 2010 Revised August 2010 Published April 2011
In this paper, we consider the existence of unbounded solutions and periodic solutions for the perturbed asymmetric oscillator with damping $x'' + f(x)x' + ax^+ - bx^- + g(x) = p(t)$, where $x^+ = \max\{x,0\}$, $x^- = \max\{-x,0\}$, $a$ and $b$ are two positive constants, $f(x)$ is a continuous function, $p(t)$ is a $2\pi$-periodic continuous function, and $g(x)$ is locally Lipschitz continuous and bounded. We discuss the existence of periodic solutions and unbounded solutions under two classes of conditions: the resonance case $\frac{1}{\sqrt{a}}+\frac{1}{\sqrt{b}}\in Q$ and the nonresonance case $\frac{1}{\sqrt{a}}+\frac{1}{\sqrt{b}} \notin Q$. Unlike many existing results in the literature, where the function $g(x)$ is required to have asymptotic limits at infinity, our main results here allow $g(x)$ to be oscillatory without asymptotic limits.
Keywords: resonance, periodic solution, unbounded solution, nonresonance.
Mathematics Subject Classification: Primary: 34C25, 37B30; Secondary: 37J4.
Citation: Lixia Wang, Shiwang Ma. Unboundedness of solutions for perturbed asymmetric oscillators. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 409-421. doi: 10.3934/dcdsb.2011.16.409
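The setup in the abstract is easy to explore numerically. Below is a small sketch, not taken from the paper, that integrates the damped asymmetric oscillator for one arbitrary choice of $f$, $g$, $p$ and of the constants $a$, $b$; the specific functions and parameter values are illustrative assumptions only.

```python
# Numerical sketch (not from the paper) of x'' + f(x) x' + a x^+ - b x^- + g(x) = p(t)
# with illustrative choices of f, g, p and of the asymmetric stiffnesses a, b > 0.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 4.0, 1.0                     # 1/sqrt(a) + 1/sqrt(b) = 1.5, a rational value (resonance case)
f = lambda x: 0.1                   # a constant, continuous damping coefficient
g = lambda x: 0.2 * np.sin(x)       # bounded, locally Lipschitz, oscillatory g
p = lambda t: np.cos(t)             # a 2*pi-periodic forcing

def rhs(t, y):
    x, v = y
    xp, xm = max(x, 0.0), max(-x, 0.0)            # x^+ and x^-
    return [v, p(t) - f(x) * v - a * xp + b * xm - g(x)]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0], max_step=0.05)
print("max |x(t)| on [0, 200]:", np.abs(sol.y[0]).max())   # crude check for growth of solutions
```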
Kinetic & Related Models, April 2019, 12(2): 445-482. doi: 10.3934/krm.2019019
Semiconductor Boltzmann-Dirac-Benney equation with a BGK-type collision operator: Existence of solutions vs. ill-posedness
Marcel Braukhoff, Institute for Analysis and Scientific Computing, Vienna University of Technology, Wiedner Hauptstrasse 8-10, 1040 Wien, Austria
Received June 2018 Published November 2018
Fund Project: The author was partially funded by the Austrian Science Fund (FWF) project F 65.
A semiconductor Boltzmann equation with a non-linear BGK-type collision operator is analyzed for a cloud of ultracold atoms in an optical lattice:
$$ \partial_t f + \nabla_p \varepsilon(p)\cdot\nabla_x f - \nabla_x n_f\cdot\nabla_p f = n_f(1- n_f)(\mathcal{F}_f-f), \qquad x\in\mathbb{R}^d,\ p\in\mathbb{T}^d,\ t>0. $$
This system contains an interaction potential $n_f(x,t) := \int_{\mathbb{T}^d} f(x,p,t)\,dp$ that is significantly more singular than the Coulomb potential, which is used in the Vlasov-Poisson system. This causes major structural difficulties in the analysis. Furthermore, $\varepsilon(p) = -\sum_{i=1}^d \cos(2\pi p_i)$ is the dispersion relation and $\mathcal{F}_f$ denotes the Fermi-Dirac equilibrium distribution, which depends non-linearly on $f$ in this context. In a dilute plasma—without collisions (r.h.s. $= 0$)—this system is closely related to the Vlasov-Dirac-Benney equation. It is shown for analytic initial data that the semiconductor Boltzmann equation possesses a local, analytic solution. Here, we exploit the techniques of Mouhot and Villani by using Gevrey-type norms which vary over time. In addition, it is proved that this equation is locally ill-posed in Sobolev spaces close to some Fermi-Dirac equilibrium distribution functions.
Keywords: Vlasov-Dirac-Benney equation, BGK collision operator, Boltzmann equation, optical lattice, ill-posedness.
Mathematics Subject Classification: Primary: 35F25, 35F20, 35Q20; Secondary: 35Q83.
Citation: Marcel Braukhoff. Semiconductor Boltzmann-Dirac-Benney equation with a BGK-type collision operator: Existence of solutions vs. ill-posedness. Kinetic & Related Models, 2019, 12 (2) : 445-482. doi: 10.3934/krm.2019019
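To make the objects named in the abstract concrete, here is a tiny numerical sketch (not from the paper, and in no way part of its analysis): for $d=1$ it evaluates the dispersion relation $\varepsilon(p)=-\cos(2\pi p)$, the transport velocity $\varepsilon'(p)$, and the self-consistent density $n_f(x,t)=\int_{\mathbb{T}} f\,dp$ for a made-up occupation number $f$; all grid sizes and the choice of $f$ are arbitrary assumptions.

```python
# Minimal illustration (d = 1) of the quantities defined in the abstract:
# eps(p) = -cos(2*pi*p) on the torus T = [0, 1) and n_f(x) = int_T f(x, p) dp.
import numpy as np

P = np.linspace(0.0, 1.0, 256, endpoint=False)   # momentum grid on the torus
X = np.linspace(-5.0, 5.0, 201)                  # spatial grid

eps = -np.cos(2 * np.pi * P)                     # dispersion relation
deps = 2 * np.pi * np.sin(2 * np.pi * P)         # transport velocity d(eps)/dp

# An arbitrary occupation number with 0 <= f <= 1, purely for illustration.
f = np.exp(-X[:, None] ** 2) / (1.0 + np.exp(eps[None, :] / 0.2))

n_f = f.mean(axis=1)                             # Riemann sum for int_T f dp (torus has measure 1)
print("peak density:", n_f.max())                # density concentrates near x = 0 for this f
```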
Monotone Convergence Theorem

Given a sequence of functions $\{f_n\}$ which converges pointwise to some limit function $f$, it is not always true that $$\int \lim_{n\to\infty}f_n = \lim_{n\to\infty}\int f_n.$$ (Take this sequence for example.) The Monotone Convergence Theorem (MCT), the Dominated Convergence Theorem (DCT), and Fatou's Lemma are three major results in the theory of Lebesgue integration which answer the question "When do $\displaystyle{ \lim_{n\to\infty} }$ and $\int$ commute?" The MCT and DCT tell us that if you place certain restrictions on both the $f_n$ and $f$, then you can go ahead and interchange the limit and integral. Fatou's Lemma, on the other hand, says "Here's the best you can do if you don't make any extra assumptions about the functions."

Last week we discussed Fatou's Lemma. Today we'll look at an example which uses the MCT. And next week we'll cover the DCT.

Monotone Convergence Theorem: If $\{f_n:X\to[0,\infty)\}$ is a sequence of measurable functions on a measurable set $X$ such that $f_n\to f$ pointwise almost everywhere and $f_1\leq f_2\leq \cdots$, then $$\lim_{n\to\infty}\int_X f_n=\int_X f.$$

In this statement the $f_n$ are nondecreasing, but the theorem holds for a nonincreasing sequence as well. Let's look at an example which, on the surface, looks quite nasty. But thanks to the MCT, it's not bad at all.

Let $X$ be a measure space with a positive measure $\mu$ and let $f:X\to[0,\infty]$ be a measurable function. Prove that $$\lim_{n\to\infty}\int_X n\log\left(1+\frac{f}{n}\right)d\mu \;=\;\int_X f\;d\mu.$$

Proof. Begin by defining \begin{align*} f_n&=n\log\left(1+\frac{f}{n}\right)\\ &=\log\left(1+\frac{f}{n}\right)^n \end{align*} and note that each $f_n$ is nonnegative (since both $\log$ and $f$ are nonnegative) and measurable (since the composition of a continuous function with a measurable function is measurable). Further $f_1\leq f_2\leq\cdots$. Indeed, $\log$ is an increasing function and for a fixed $x\in X$ the sequence $\left(1+\frac{f(x)}{n}\right)^n$ is increasing. In fact*, it increases to $e^{f(x)}$. In other words, $$\lim_{n\to\infty}f_n(x)=\lim_{n\to\infty}\log\left(1+\frac{f(x)}{n}\right)^n=\log e^{f(x)}=f(x).$$ Hence, by the Monotone Convergence Theorem $$\lim_{n\to\infty}\int_X f_n\;d\mu=\int_X f\;d\mu$$ as desired.

Not so bad, huh? Did you know that the MCT has a "continuous cousin"? (Well, maybe it's more like a second cousin.) Have you come across Dini's Theorem before?

Dini's Theorem: If $\{f_n:X\to\mathbb{R}\}$ is a nondecreasing sequence of continuous functions on a compact metric space $X$ such that $f_n\to f$ pointwise to a continuous function $f:X\to\mathbb{R}$, then the convergence is uniform.

Here we have a monotone sequence of continuous - instead of measurable - functions which converge pointwise to a limit function $f$ on a compact metric space. By Dini's Theorem, the convergence is actually uniform. So IF the $f_n$ are also Riemann integrable, then we can conclude** $$\lim_{n\to\infty}\int_Xf_n=\int_Xf.$$ Perhaps this doesn't surprise us too much: we've seen before that continuity and measurability are analogous notions (to a certain extent)!

*Recall from elementary calculus: $\displaystyle{\lim_{n\to\infty} \left(1+\frac{x}{n}\right)^n=e^x}$ for any $x\in\mathbb{R}$.

** See Rudin's Principles of Mathematical Analysis (3ed.), Theorem 7.16.
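This is not part of the original post, but if you want to see the worked example "in action", here is a quick numerical check with one concrete choice of $X$, $\mu$, and $f$: take $X=[0,1]$ with Lebesgue measure and $f(x)=x^2$, so $\int_X f\,d\mu = 1/3$, and watch the integrals of $f_n = n\log(1+f/n)$ increase toward that value.

```python
# Numerical sanity check of the example: int_0^1 n*log(1 + x^2/n) dx increases to 1/3.
import numpy as np
from scipy.integrate import quad

f = lambda x: x ** 2
target, _ = quad(f, 0.0, 1.0)                    # = 1/3
print("target:", target)

for n in [1, 10, 100, 1000]:
    fn = lambda x, n=n: n * np.log1p(f(x) / n)   # f_n(x) = n*log(1 + f(x)/n)
    val, _ = quad(fn, 0.0, 1.0)
    print(n, val)                                # values increase monotonically toward 1/3
```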
Helping out in our NOI: Part 1

By cjquines

Now that the Philippine IOI team has been selected and this year's competition cycle is ending, I'd like to post a series detailing my participation in our national olympiad's organizing team this year. (I'm also doing this because my contribution is dropping! That's why it's a series, so I get even more contribution. I will do anything for imaginary internet points.)

This year, I helped write, make data for, and test problems for our national olympiad. I hope that you'll learn something about problemsetting or testing from this series. And if not, then I hope to introduce you to our olympiad's excellent problems. And if you don't find our problems interesting, then this post isn't for you and I'm sorry for wasting your time.

In this first part, I'll introduce our olympiad, the NOI.PH, and how I contributed to our elimination round. Here's a picture I made for the elimination round that will maybe convince you to read more:

Note that this is a somewhat long post; I expect it to take around twenty minutes to read. There are also a lot of pictures in this post.

The NOI and me

The National Olympiad in Informatics – Philippines, or the NOI.PH, is a young olympiad. It started in 2014 when a couple of people got together to organize it. I first joined the NOI unofficially in 2016, and a good performance convinced me to join in both 2017 and 2018. I qualified for the IOI both years, but was only able to go in 2018, and then I did really poorly (which is a story for another time). I also graduated from high school in 2018. Right now, I'm taking a break before I start college in August, which is why I decided to help as much as I can in the NOI, because I'll be busier when college starts.

A high school student goes through several rounds in the selection process for the IOI team. You can read about it on the website, so I'll just summarize what happened this year:

There was a nine-day, fourteen-problem elimination round held online last January.
The top 30 joined in a two-day, on-site contest, held last April. Each day had five problems for five hours.
The top 10 participated in online training for six weeks. Each week had a problem set and a five-hour contest at the end of the week.
Then there was an intensive week for four days. Each day had a five-hour contest.
Immediately after was an inhouse camp for three days. The first two days had two five-hour contests each. The last day had a six-hour contest.

These were then used as the basis to pick our IOI team: spacewalker, Something_hacker, dsjong, and Steve120. In this post, I'll talk about our preparations for the elimination round, what we did right, and what we did... not so right.
Later that November I sent five or six problem proposals for the eliminations. By early December I get feedback from verngutz. All of my proposals got rejected except for one. (Press F to pay respects.) The working name for the proposal that made it was White Box, because it was supposed to be the opposite of the IOI problem Black Box. In White Box, you are given a $$$1 \times n$$$ box with reflectors. And instead of letting you put balls in the box and listen, the problem picks the balls for you and tells you the results. You then have to give the layout of any box that satisfies this. I learned from this that coming up with ideas is easy, but coming up with good ideas is harder. If you try to write a problem around an algorithm, it could be too easy. If you try to write a problem around the real world, it could be too hard. White Box worked because it took an existing problem and reversed it, which was one of the ideas given in a Codeforces round. That is how I came up with the problem. Then nothing happens until early January, due to several real-world things happening. This was very bad for the round! So much for preparing early, huh? Wonderful spreadsheets The eliminations round was originally scheduled from January 4 to 13. But it was already January 3 and nothing was happening, so I was worried. On January 6, verngutz messaged all of the test data makers and the story writers, telling us that the round would be postponed to January 18. Ten of the problem ideas were done. Of these, two of the problems had data. The remaining five problem ideas would be written by robinyu. So this gave us two weeks to prepare data, write the stories, and test the remaining problems. Just to make it clear, two weeks is not a lot of time for this, and this is definitely something to avoid. I was then introduced to the glorious, amazing, beautiful, wonderful, problem sets checklist. Spreadsheet screenshot To make the problemsetting and testing as organized as possible, the NOI.PH uses a spreadsheet as a checklist. On the top row are the names of the problems: here you can see the working names of the ten problem ideas. The first row is the story writer's task. The second row is the task of the person who makes the test data. Then, the remaining rows are the tasks of the tester. Note that one of the problem names is hidden. I will refer to this problem as problem $$$X$$$, because it eventually got removed from the problemset. As I didn't know anything about problemsetting and testing yet, I decided to claim some problems to write the stories for. The first story I wrote was Super Rangers, which is a reference to the Power Rangers. Then I write the stories for problem $$$X$$$, Exchange Gift, and Evening Gown, which are both references to Philippine culture. To write these two stories, I had to do research on the major news stories in the Philippines over the past year. (I went on Reddit to do research! That's new.) You may have also noticed that the NOI.PH problems I've referenced so far have a picture at the beginning, which we call the header. All recent NOI problems have a header. Like the stories, a header is part of what distinguishes NOI problems, to make reading the statement more enjoyable. (This is not as big of an issue as putting pictures in Codeforces rounds, since speed is much less important for our contests.) Example NOI.PH headers I think I am somewhat good at graphic design, so I volunteered to make most of the headers for the elimination round as well. 
On January 10 and 11, I make the headers for eight of the problems: The first eight headers Let me make it clear at this point that the NOI.PH prioritizes problemsetting and testing. It's just that for the eliminations round, most of my contribution was writing the stories and making pictures, so that will be the nature of this post. In later posts in this series, you'll get to see just how much the NOI cares about setting good problems. To emphasize the importance of testing, let's talk about how I tested my first problem! Testing my first problem One week before the round, the data for four of the problems have been made, and two of these has been tested. Then verngutz has made assignments for who will test which problems. I am assigned to test one of the problems, Lots of Cookies. The spreadsheet looks like this now: Let me explain our process of problemsetting and testing as it's done in the NOI. A similar process is probably done to prepare problems for most programming contests. The problemsetter or the setter comes up with an idea. They write an initial statement. The test data maker makes the test data for the problem. This involves writing the validator, checker, generators, and a model solution. I will explain what each of these are in the second part of this series, when I talk about making test data for the NOI.PH finals. The tester then tests the problem by solving it, without asking for hints, if possible. They write a solution and check if it passes the test data. They check if the test data follows the constraints given in the problem statement. They make sure that slow solutions to the problem are TLE, and that incorrect heuristics get WA. It is often the case that the setter and test data maker are the same person, and I will sometimes use "setter" to refer to both. However, the test data maker and the tester have to be different people. This redundancy is the key part of the setter–tester model, which helps reduce the amount of mistakes as much as possible. Therefore, making a problem involves at least two different people. Testing a problem in the NOI.PH is as simple as filling out the column in our spreadsheet! Testing checklist So to test Lots of Cookies, I: carefully read the statement. Here's a summary of the original statement. Define $$$x_k = \left\lfloor\frac{x}{k}\right\rfloor$$$. You're given two fixed integers $$$a$$$ and $$$b$$$. Consider $$$\sum ax_k^2 + bx_k$$$, where the sum is taken over $$$k = 1, 2, \ldots$$$. Find this sum for $$$Q$$$ different values of $$$x$$$. tried to solve it. While doing so, I saw that the sequence $$$\sum x_k$$$ for $$$x = 1, 2, \ldots$$$ was in the OEIS! And so was the sequence $$$\sum x_k^2$$$, and the OEIS gave a nice formula for both of these as well. Because the eliminations was a long contest, and contestants were allowed to look at resources, this was unacceptable. So Shisuko had to rewrite the problem to its current statement, and make all the data again. carefully read the new statement, where $$$ax_k^2 + bx_k$$$ was changed to $$$ax_k^2 \oplus bx_k$$$ instead, where $$$\oplus$$$ is binary XOR. No obvious issues to me, and I did not see anything wrong with it aside from some minor typos. (kevinsogo eventually spotted that the summation notation needed to be explained.) tried to solve it. I quickly put together the brute force that solved the first subtask, and found the full solution. However, I could not find the intended solution for the second subtask, which was one of my responsibilities. 
I also noticed that the solution was simpler when $$$a = 0$$$ or $$$b = 0$$$, because the answers were in the OEIS, as mentioned earlier. So I suggested to make a subtask for these. checked that the test data followed the constraints in the problem statement, by checking the validator. It did, which was good. checked the test data to make sure that edge cases were present. (In other words, checking if the constraints were tight.) In this problem, it meant that the minimum and maximum values for each of the variables was present in some test. They were there, which was good. wrote a brute force solution, in C++, that passes the first subtask, and made sure it didn't pass the next subtasks. It did not, even after several constant optimizations, which was good. wrote a full solution to the problem, in C++. It passed all the subtasks, but it was very close to the original time limit of 1 second. Hence the time limit for C++ was turned to 4 seconds instead. wrote variations on the full solution that used slow input and output, since the input and output were large. These were TLE. So a note about using fast I/O was added to the statement. changed the full solution to not use long long. It failed, which is good. translated my full solution to Python. It failed, but this was okay, since we don't guarantee that the problem can be solved in Python. asked what the intended solution to the second subtask was. Apparently it's an $$$O(Q\sqrt N)$$$ solution by summing over constant $$$x_k$$$. wrote a solution for the second subtask to the problem, in C++. It passes the first two subtasks but failed the next subtasks, even after constant optimization, which was good. wrote a solution for the third and fourth subtasks to the problem, in C++. This was after Shisuko accepted by suggestion and made test data for these subtasks. And that was most of the entries in the column. So that's how we test a problem. You read the statement as carefully as you can, making sure that it's as clear and unambiguous as possible. Then you basically write a bunch of solutions, some correct, some slow, some wrong, solving different subtasks, and make sure that they each get the expected verdict. It's not too different from what you do as a contestant, but the crucial difference is that you're trying to come up with wrong and partial solutions as well. Sometimes, this just means solving the problem normally. But sometimes, this also means trying to think what the common mistakes are. Thorough problem testing is very important. You can see how much the problem itself improved during the process of problem testing. The whole process took around two hours, ignoring the time waiting for Shisuko to update the data. It's worth it! After all, the reason NOI.PH problems are well-polished is because of our testing process :D We are definitely completely totally prepared It is January 12 now, five days before the contest. We still have five problems left to fill out of the fifteen, and four more problems to make data for. This is very very bad and this shouldn't happen at all. While reading the statement for one of the problems, Dagohoy Rock, I noticed an issue. The original statement had the same disallowed sequence LDL for everything, so each test case only had the input $$$N$$$. But this made the answers to the problem a sequence in the OEIS again! So Shisuko again rewrote the problem and remade the data. Take note, problemsetters and testers: you might want to check if the answers to your problem are in the OEIS. 
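Before going back to the timeline, a quick aside on Lots of Cookies, since the solutions above are only described in words. Here is my rough reconstruction of the two approaches, not the official model solution or test material: the function names and the single-file Python format are mine, and the actual contest versions were written in C++ with fast I/O.

```python
# Reconstruction of two solutions for the rewritten sum
#   S(x) = sum over k >= 1 of (a * x_k^2) XOR (b * x_k),  where x_k = floor(x/k).

def brute_force(x, a, b):
    # O(x) per query: terms with k > x vanish because floor(x/k) = 0 there.
    # This is the kind of solution meant to pass only the first subtask.
    return sum((a * (x // k) ** 2) ^ (b * (x // k)) for k in range(1, x + 1))

def blocks(x, a, b):
    # O(sqrt(x)) per query: floor(x/k) is constant on blocks of consecutive k,
    # so each distinct value q is handled once ("summing over constant x_k").
    total, k = 0, 1
    while k <= x:
        q = x // k
        last = x // q                        # largest k' with floor(x/k') == q
        total += (last - k + 1) * ((a * q * q) ^ (b * q))
        k = last + 1
    return total

assert all(brute_force(x, 3, 5) == blocks(x, 3, 5) for x in range(1, 300))
print(blocks(10 ** 9, 3, 5))                 # fast even for large x
```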
Then, in January 13, kevinsogo comes to the rescue and adds another problem: Spratly Islands. Then, robinyu prepared the data for two more problems, Saishuu Shinpan and Almost Original. Then, the next day, verngutz prepares the data for Problem $$$X$$$, robinyu prepares the data for Skyscaping, and kevinsogo adds the final problem, Packing Problem. By now, the spreadsheet looks like: Then, verngutz tells me that the statement for Super Rangers is wrong. I apparently misinterpreted the original problem statement, so my story needed a lot of fixing. After several rounds of going back and forth with kevinsogo, I get an acceptable problem statement. So if you're wondering why the Swords have wires joining them, well, that's the reason. Now let me talk about the issue with Problem $$$X$$$. The issue with Problem $$$X$$$ is that, other than the intended solution, there was a slower solution that did work. And the test data maker, verngutz, couldn't figure out how to make test data that separated them. So on January 16, three days before the round, it got scrapped and replaced with another problem. I will call this problem Problem $$$Y$$$. It is now the midnight before January 17, the night before the eliminations round. The spreadsheet is nowhere being finished. This is very, very bad. Two of the problems don't have data, and only three of the problems have been tested. I stay up until 3 AM writing the statements for Packing Problem, Spratly Islands, and Problem $$$Y$$$. I want to take a moment to talk about the statement for Packing Problem. It is actually the third in a series of problems with a similar story, and I consider it a tribute to their story writer, guissmo. I am very proud of this statement because I managed to make some good puns, while still making a natural story for the problem. (If you don't get the puns, well... don't ask me, because I'm not telling.) Anyway, that's just a minor thing. Then kevinsogo, in his infinite awesome power, tests ReMotion, Saishuu Shinpan, Skyscaping, and Almost Original, as well as prepares data for Spratly Islands, all in the same evening! Where would the NOI.PH be without his skills? Since I don't have the capability to contribute to the test data preparation or the testing, I just help make more illustrations. I felt sad that I couldn't contribute more substantially, but anything helps, right? Some more illustrations It is now the day itself, January 18. verngutz and kevinsogo prepare the data for Super Rangers. That morning, the spreadsheet looked like this: Problem $$$Y$$$ still did not have any data. Seven problems still need testing; although I could test them, I didn't think I had enough time. The problem statements are also missing sample explanations and illustrations, which was my job, and which I haven't done. So clearly, we were definitely, completely, totally prepared. Haha, no we weren't. This was really bad, and at this point we were guaranteed to have a mistake. We could only hope that these mistakes would be caught and fixed during the round, or at least not affect the rankings if they aren't caught. I was then assigned to test Spratly Islands. And through some magical power, I managed to solve it in time, so I managed to do some testing that afternoon. Then, kevinsogo marks Exchange Gift as tested. I make as many graphics as I can make in three hours, because what else can I do? Some more graphics It was clear that there was not enough time to make data and test Problem $$$Y$$$, so it was decided to be used a different time. 
That's the reason why this year's eliminations only had fourteen problems, compared to the usual fifteen. And then verngutz uploaded the problems to HackerRank, and the round began. Pulling through The moment the round opened, I read the problems that robinyu wrote, because I haven't read their statements yet. I offered to make headers for them, and he agreed, so we managed to push the following headers relatively quickly: Headers for Robin's problems Over the next few days the remaining problems would eventually be tested, and editorials would be written. I wrote the editorials for Evening Gown, Exchange Gift, and Spratly Islands. I am very proud of my editorial for Evening Gown, which you can read here. Our guaranteed mistakes did appear. There was a small issue with the checker for Exchange Gift, which was caught and fixed immediately after the contest. I also realized that the test data for Evening Gown was weak, which wasn't caught until weeks after the contest. Thankfully, neither of these affected the rankings that much. These should have been caught during the testing phase, which was something we did wrong, because these were two of the problems that weren't tested that well. Mistakes like these are unavoidable. At least the thorough testing that did happen helped avoid many more possible mistakes. That's one thing that we need to improve on next year: making sure the problems are more thoroughly tested. And the best way to do this is starting early, much, much earlier than two weeks in advance, especially for a fifteen-problem round like the NOI.PH Eliminations. So that was my experience volunteering to organize this year's NOI.PH elimination round! In the end, I set one problem (Exchange Gift, which I am really proud of), and tested two problems (Lots of Cookies and Spratly Islands). It's not much, but I like to think I helped by writing the stories and making illustrations as well. I hope you learned something! And if not, well, let me recommend some problems to try from the elimination round that I think are nice: My problem, Exchange Gift. It's pretty easy, but I think it's a fun problem that's instructive for a beginner. There's a very, very nice way to solve it without writing a long program. For example, my Python solution is only thirty lines long. (With an average of 35 characters per line. It is really thirty lines long, no tricks.) I thought Spratly Islands was nice. It's an interesting twist on a classic concept. Yet Another Packing Problem, the hardest problem from the eliminations, is very fun. It was one of the two problems that none of the (official) contestants solved during the contest. In the next post in the series, I'll talk about how I made test data for my first few problems, setting a problem for the NOI.PH finals, and other preparations we made.
QuickPIV: Efficient 3D particle image velocimetry software applied to quantifying cellular migration during embryogenesis Marc Pereyra ORCID: orcid.org/0000-0002-3496-72071 na1, Armin Drusko3 na1, Franziska Krämer2, Frederic Strobl2, Ernst H. K. Stelzer2 & Franziska Matthäus1 BMC Bioinformatics volume 22, Article number: 579 (2021) Cite this article The technical development of imaging techniques in life sciences has enabled the three-dimensional recording of living samples at increasing temporal resolutions. Dynamic 3D data sets of developing organisms allow for time-resolved quantitative analyses of morphogenetic changes in three dimensions, but require efficient and automatable analysis pipelines to tackle the resulting Terabytes of image data. Particle image velocimetry (PIV) is a robust and segmentation-free technique that is suitable for quantifying collective cellular migration on data sets with different labeling schemes. This paper presents the implementation of an efficient 3D PIV package using the Julia programming language—quickPIV. Our software is focused on optimizing CPU performance and ensuring the robustness of the PIV analyses on biological data. QuickPIV is three times faster than the Python implementation hosted in openPIV, both in 2D and 3D. Our software is also faster than the fastest 2D PIV package in openPIV, written in C++. The accuracy evaluation of our software on synthetic data agrees with the expected accuracies described in the literature. Additionally, by applying quickPIV to three data sets of the embryogenesis of Tribolium castaneum, we obtained vector fields that recapitulate the migration movements of gastrulation, both in nuclear and actin-labeled embryos. We show normalized squared error cross-correlation to be especially accurate in detecting translations in non-segmentable biological image data. The presented software addresses the need for a fast and open-source 3D PIV package in biological research. Currently, quickPIV offers efficient 2D and 3D PIV analyses featuring zero-normalized and normalized squared error cross-correlations, sub-pixel/voxel approximation, and multi-pass. Post-processing options include filtering and averaging of the resulting vector fields, extraction of velocity, divergence and collectiveness maps, simulation of pseudo-trajectories, and unit conversion. In addition, our software includes functions to visualize the 3D vector fields in Paraview. Cellular migration in multi-cellular organisms often involves tissues or groups of cells that maintain stable or transient cell-cell contacts to preserve tissue integrity, sustain spatial patterning, or to enable the relocation of non-motile cells [1]. This phenomenon is generally known as collective cell migration, and it plays important roles in developmental processes, such as gastrulation or neural crest migration [2, 3], as well as in wound closure and cancer invasion [4]. Studies of collective cell migration on 2D cell cultures only partially reflect the physiology and architecture of in vivo tissues. Three-dimensional systems—such as model organisms, spheroids or organoids—are preferable, as they maintain physiological cell structures, neighborhood interactions, or mechanical extracellular properties, which have been recognized to play a role in regulating collective cellular migration [5,6,7]. 
Besides confocal fluorescence microscopy, light-sheet fluorescence microscopy (LSFM) has become one of the preferred techniques for three-dimensional imaging of biological samples, owing to its fast acquisition times, excellent signal-to-noise ratios, high spatial resolutions [8, 9], and low phototoxicity and photobleaching levels [10]. LSFM has been used to generate 3D time-lapse recordings of the complete embryonic morphogenesis of multiple model organisms [11, 12]. Based on light-sheet illumination, novel and improved imaging techniques are continuously being developed. For example, SCAPE (swept confocally-aligned planar excitation) microscopy offers more control over the viewing angle of the sample [13], while SVIM (selective volume illumination microscopy) dramatically increases acquisition speed by dilating the light-sheet, at the expense of spatial resolution [14]. High temporal and spatial resolutions can be achieved with lattice light-sheet microscopy, where a combination of ultrathin light sheets and structured illumination is used [15]. The latter two techniques are particularly promising for resolving cellular migration and tissue rearrangements during quick morphogenetic events. In order to quantify collective cellular migration in dynamic 3D biological data sets, we developed quickPIV, a free and open-source particle image velocimetry (PIV) package that offers fast and robust 3D, as well as 2D, PIV analyses. While several free and open-source 2D PIV software packages are readily available [16,17,18,19], the same is not true for 3D implementations. To the best of our knowledge, the Python implementation hosted in openPIV is the only other free and open-source PIV package that supports 3D analyses [18]. The fastest implementation in openPIV, however, corresponds to a 2D PIV implementation written in C++. Nevertheless, maintenance of this version was stopped in favor of the high-level and productive environment of its Python counterpart. In order to maximize performance without sacrificing productivity, our software is written in Julia, a modern programming language with high-level syntax similar to Python or Matlab that compiles to highly efficient code on par with C programs [20]. This choice is motivated by the high data volumes of 3D time-lapse recordings, which make the analysis of multiple data sets computationally very expensive. For instance, a single sequence of 3D images of a developing embryo can easily reach data sizes of several Terabytes. Hence, the design principles of Julia enabled us to prioritize the CPU performance of quickPIV, and together with further optimizations, made it possible to reduce the processing time for a pair of 3D volumes to several seconds. The next subsection introduces PIV and discusses the strengths and limitations of applying PIV to biological samples. This is followed by a detailed description of the pipeline and the features implemented in quickPIV. The evaluation of our software includes a performance comparison to the C++ (2D) and Python (2D and 3D) implementations hosted in openPIV, as well as an accuracy evaluation of quickPIV on synthetic data. Furthermore, we analyze the ability of quickPIV to characterize migration patterns on three 3D time-lapse data sets of the embryonic development of the red flour beetle Tribolium castaneum [21, 22]. This is done by (1) simulating known translations on a 3D volume of T. castaneum, (2) validating the obtained vector fields against well-known migration patterns during the gastrulation of T.
castaneum, and (3) by comparing the robustness of quickPIV on an embryo expressing both actin and nuclear molecular markers. Particle image velocimetry Particle image velocimetry is a segmentation-free technique developed and established in the field of fluid dynamics to obtain displacement fields describing the motion of small tracer particles suspended in a flowing medium [23]. If the density of seeding particles is not exceedingly high [24], the motion of each suspended particle can generally be recovered through particle tracking velocimetry (PTV) [25]. PTV is analog to single-cell tracking, requiring the segmentation of all particles in two consecutive recordings before establishing one-to-one correspondences between the particle positions. While the size and seeding density of the tracer particles in hydro- and aerodynamic PIV experiments can be tuned [26], the segmentability of biological samples is challenged by factors with no or limited experimental control. For example, cell segmentation is hindered by low contrast of the molecular marker, irregular cell morphologies, or high cell densities. Instead of detecting and tracking individual objects, PIV relies on cross-correlation to find the translation that best aligns the intensity patterns contained inside any given sub-region between two consecutive recordings. Vector fields are generated by extracting displacement vectors from multiple sub-regions across the input data [23]. The accuracy of PIV on biological data is mostly explained by the strengths and limitations of cross-correlation. In short, cross-correlation is a pattern-matching operation that is suitable for finding translations of the intensity distributions contained in two successive recordings [27]. Therefore, PIV is appropriate for quantifying collective cell migration, which is dominated by a common translation of the migrating group of cells. Moreover, the pattern-matching nature of cross-correlation extends the application of PIV to non-segmentable data sets, including unstained samples or those stained with any persistent intra-cellular marker. PIV has been used to quantify cell migration in 2D model systems, such as wound healing assays [28], tumor invasion [29, 30], skin patterning [31] and others [32,33,34]. Conversely, cross-correlation is challenged by transformations other than translations, such as rotations, shears or deformations. High temporal resolutions alleviate the contribution of these transformations by approximating them to local translations. Uncoordinated cellular migration also reduces the similarity of intensity patterns between successive recordings, which degrades the accuracy of PIV. However, if the cells are sufficiently different from each other such that they are unambiguously detected by cross-correlation, a PIV analysis matching the size of the cells can be used to effectively track the movement of independently migrating cell [35, 36]. This section outlines the three-dimensional PIV pipeline implemented in quickPIV. The workflow of a PIV analysis in quickPIV is illustrated in Fig. 1. This figure shows input volumes containing Gaussian particles to ease the visualization of the underlying translation. To accommodate all possible labeling schemes of biological samples, we generally refer to structures or intensity patterns in the analyzed data. QuickPIV pipeline The PIV analysis starts by subdividing the input volumes, \(V_{t}\) and \(V_{t+1}\), into a grid of cubic interrogation, IV, and search volumes, SV. 
Cross-correlation is performed between each IV[i, j, k] and SV[i, j, k] pair, and a displacement vector, (u[i, j, k], v[i, j, k], w[i, j, k]), is computed from each cross-correlation matrix through the position of the maximum peak relative to the center of the cross-correlation matrix. The computed vector components are added to the U, V and W matrices. Optionally, signal-to-noise ratios are computed from each cross-correlation matrix and added to SN. If multi-pass is used, the cross-correlation analysis is repeated at progressively lower scales, which is achieved by scaling down the interrogation size, overlap and search margin parameters at each iteration. During multi-pass, previously computed displacements offset the sampling of the search volumes, effectively refining the computed displacements at each iteration. In order to post-process the PIV-computed vector fields, quickPIV currently implements: signal-to-noise and vector magnitude filtering, space-time averaging, divergence maps, velocity maps, collectiveness maps, pseudo-trajectories and unit conversion. (a) Left, two \(60\times 50 \times 50\) voxel volumes are overlaid, with particles in \(V_t\) shown in red, and particles in \(V_{t+1}\) in blue. Interrogation volume size of \(16 \times 16 \times 16\) voxels leads to \(3\times 3 \times 3\) subdivision of non-overlapping interrogation and search volumes. Right, with 50% overlap the grid subdivision size is \(6 \times 5 \times 5\). (b) Example of 3D cross-correlation between IV[2, 2, 2] and SV[2, 2, 2]. The use of a search margin of 5 voxels is illustrated, enlarging the search volume by 5 voxels in all directions. (c) Example of displacement computation. For clarity, this example portrays low particle densities and big particle radii, which results in sub-optimal accuracy of the 3-point Gaussian sub-voxel approximation The input to a 3D PIV analysis is a pair of 3D volumes taken at consecutive time points, \(V_{t}[x,y,z]\) and \(V_{t+1}[x,y,z]\), where (x, y, z) corresponds to the unique 3D coordinates of each voxel. Both input volumes are assumed to have the same dimensions. First, \(V_{t}\) is subdivided into a 3D grid of cubic sub-regions known as interrogation volumes, IV[i, j, k], each specified by its position in the grid, (i, j, k). The dimensions of the grid subdivision are determined by the interrogation volume size and the overlap between adjacent interrogation volumes, see Fig. 1a. For each interrogation volume, a corresponding search volume, SV[i, j, k], can be defined in \(V_{t+1}\). Structures moving inside IV[i, j, k] by a translation \({\mathbf {s}} = (s_x, s_y, s_z)\) are expected to be found \(\Vert {\mathbf {s}}\Vert\) voxels away in the direction of the translation in SV[i, j, k]. The underlying translation, \({\mathbf {s}}\), of the structures contained in IV[i, j, k] and SV[i, j, k] is recovered through a cross-correlation analysis [27]. The cross-correlation between a pair of interrogation and search volumes results in a 3D cross-correlation matrix. In the absence of other transformations, the vector from the center to the maximum peak of the cross-correlation matrix reflects the underlying translation of the structures contained in IV[i, j, k] and SV[i, j, k]. The structures visible in IV[i, j, k] may move outside the borders of the corresponding SV[i, j, k]. This is known as out-of-frame loss, and it limits the ability of cross-correlation to match the spatial intensity distributions between the pair of interrogation and search volumes. 
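To make the grid layout concrete, the following Julia sketch derives the interrogation-volume origins from the interrogation size and overlap, and extracts one margin-enlarged search volume. The function names (grid_origins, extract_pair) are illustrative and not the actual quickPIV internals; only the parameter names interSize, overlap and searchMargin follow the conventions used later in this article.

```julia
# Illustrative sketch of the grid subdivision of Fig. 1a (hypothetical names,
# not the quickPIV API).
function grid_origins(volsize::NTuple{3,Int}, interSize::Int, overlap::Int)
    step = interSize - overlap                        # spacing between IV origins
    return [(x, y, z) for x in 1:step:volsize[1]-interSize+1,
                          y in 1:step:volsize[2]-interSize+1,
                          z in 1:step:volsize[3]-interSize+1]
end

function extract_pair(Vt, Vt1, origin::NTuple{3,Int}, interSize::Int, searchMargin::Int)
    lo, hi = origin, origin .+ (interSize - 1)
    IV  = Vt[lo[1]:hi[1], lo[2]:hi[2], lo[3]:hi[3]]
    slo = max.(lo .- searchMargin, 1)                 # clamp the enlarged search
    shi = min.(hi .+ searchMargin, size(Vt1))         # volume to the data borders
    SV  = Vt1[slo[1]:shi[1], slo[2]:shi[2], slo[3]:shi[3]]
    return IV, SV
end

Vt, Vt1 = rand(Float32, 60, 50, 50), rand(Float32, 60, 50, 50)
origins = grid_origins(size(Vt), 16, 8)               # 16^3 voxels, 50% overlap
IV, SV  = extract_pair(Vt, Vt1, origins[1], 16, 5)    # SV enlarged by a 5-voxel margin
```

With 50% overlap this yields the 6 × 5 × 5 grid of Fig. 1a. Note that, as stated above, structures may still leave the search volume between the two time points (out-of-frame loss).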
This can be compensated by enlarging the search volumes by a given margin along all dimensions, designated as search margin in quickPIV. The search margin should not be much larger than the expected translation strength of the structures, as enlarging the search volumes comes at the expense of performance. Figure 1b depicts the cross-correlation of the central interrogation and search volumes in Fig. 1a, including a search margin of 5 voxels around the search volume. By computing a displacement vector for each pair of interrogation and search volumes, PIV analyses generate a vector field that describes the velocity distribution of the structures contained in the input volumes. The components of the PIV-computed vector field are returned separately in three 3D matrices: U, V and W. It should be noted that the resolution of the final vector field is decided by the size of the interrogation volumes and their overlap, which determine the grid subdivision of \(V_{t}\) and \(V_{t+1}\). Multi-pass is implemented to overcome this trade-off between resolution and the interrogation size of the PIV analysis. Cross-correlation The cross-correlation of two one-dimensional real-valued functions is defined as: $$\begin{aligned}{}[ f \star g ](s) = \int _{-\infty }^{\infty } f(x)g(x + s) \mathrm {d}x \ , \end{aligned}$$ where s has the effect of shifting g(x) along the x-axis. Cross-correlation involves computing the dot product of f(x) and \(g(x+s)\) for all possible values of s. Since the dot product entails a basic measure of similarity, the value of s that achieves the highest dot product represents the translation that best aligns the two functions. The form of cross-correlation in Eq. (1) is known as spatial cross-correlation. Discrete implementations of spatial cross-correlation have a 1D complexity of \(O(N^2)\). Taking advantage of the convolution theorem, cross-correlation can be computed in the frequency domain through Fourier transforms of f(x) and g(x): $$\begin{aligned} f \star g = {\mathcal {F}}^{-1}\{ \overline{{\mathcal {F}}\{f\}} \cdot {\mathcal {F}}\{g\}\} \ , \end{aligned}$$ where \({\mathcal {F}}\) and \({\mathcal {F}}^{-1}\) denote the Fourier and inverse Fourier transforms, respectively. Each Fourier and inverse Fourier transform in Eq. (2) can be computed efficiently with the Fast Fourier Transform (FFT) algorithm [37], which has a 1D complexity of \(O( N \log {N} )\). Since Eq. (2) does not involve any operations with higher complexities than FFT's, the overall complexity of 1D cross-correlation in the frequency domain is \(O( N \log {N} )\). For this reason, cross-correlation in quickPIV is computed in the frequency domain. We rely on a Julia wrapper around the mature and optimized Fastest Fourier Transform of the West (FFTW) C library [38] to compute all Fourier and inverse Fourier transforms. FFTW implementations of FFT generalize to multi-dimensional data, enabling the efficient three-dimensional computation of cross-correlation. To tackle the bias of the dot product towards high intensities, we implemented zero-normalized cross-correlation (ZNCC). 
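Before turning to the normalized variants, Eq. (2) translates almost literally into Julia with the FFTW.jl wrapper mentioned above. The sketch below is illustrative rather than the actual quickPIV code; for simplicity it assumes interrogation and search volumes of equal size (no search margin) and uses the plain complex FFT instead of the optimized real-to-complex plans one would use in practice.

```julia
using FFTW

# Unnormalized cross-correlation in the frequency domain (Eq. 2). Zero-padding
# to size(IV) .+ size(SV) .- 1 avoids circular wrap-around, and fftshift places
# the zero-displacement element at the centre of the correlation matrix.
function xcorr_fft(IV::AbstractArray{<:Real,3}, SV::AbstractArray{<:Real,3})
    csize = size(IV) .+ size(SV) .- 1
    padIV = zeros(Float32, csize);  padIV[axes(IV)...] .= IV
    padSV = zeros(Float32, csize);  padSV[axes(SV)...] .= SV
    return fftshift(real.(ifft(conj.(fft(padIV)) .* fft(padSV))))
end

# Displacement vector from the centre of the correlation matrix to its maximum peak.
peak_displacement(corr) = Tuple(argmax(corr)) .- (size(corr) .+ 1) .÷ 2

IV = zeros(Float32, 16, 16, 16);  IV[8, 8, 8] = 1    # single bright voxel
SV = zeros(Float32, 16, 16, 16);  SV[11, 9, 8] = 1   # the same voxel shifted by (3, 1, 0)
peak_displacement(xcorr_fft(IV, SV))                 # returns (3, 1, 0)
```

The normalized similarity measures described next build on the same frequency-domain machinery.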
Considering IV and SV as a pair of 3D interrogation and search volumes, ZNCC is calculated at each translation of IV by: $$\begin{aligned} ZNCC[\mathbf{s }] = \sum _{x,y,z} \frac{ (IV[\mathbf{x }]-\mu _{IV})(SV[\mathbf{x } + \mathbf{s } ]-\mu _{SV}) }{ \sqrt{ \sum _{x,y,z}( IV[\mathbf{x }] - \mu _{IV} )^2 \sum _{x,y,z}( SV[\mathbf{x } + \mathbf{s }] - \mu _{SV} )^2 } } \, \end{aligned}$$ where x is a 3D index (x, y, z) running over all voxels of IV, s is the displacement vector \((s_x,s_y,s_z)\), and \(\mu _{IV}\) and \(\mu _{SV}\) are the average intensity values of the interrogation and search volumes, respectively. Zero-normalized cross-correlation is implemented efficiently in quickPIV following the work of Lewis, who noted that the numerator in Eq. (3) can be computed efficiently in the frequency domain, while each sum in the denominator can be calculated with eight operations from an integral array of the search volume [39]. To further improve the pattern-matching robustness of cross-correlation, quickPIV also offers normalized squared error cross-correlation (NSQECC). At each translation of IV, NSQECC is computed as [40]: $$\begin{aligned} NSQECC[\mathbf{s }] = \sum _{x,y,z} \frac{ (IV[\mathbf{x }]-SV[\mathbf{x } + \mathbf{s } ])^2 }{ \sqrt{\sum _{x,y,z}(IV[\mathbf{x }])^2 \sum _{x,y,z}(SV[\mathbf{x } + \mathbf{s }])^2 } } \ , \end{aligned}$$ where x is a 3D index (x, y, z) running over all voxels of IV, and s is the displacement vector \((s_x,s_y,s_z)\). Following the example of [39], Eq. (4) is implemented efficiently in quickPIV by expressing the numerator and denominator in terms of three components: \(\sum (IV[\mathbf{x }])^2\), which is constant, \(\sum (SV[\mathbf{x }+\mathbf{s }])^2\), which is computed efficiently for each translation from an integral array, and \(-2\sum (IV[\mathbf{x }]SV[\mathbf{x }+\mathbf{s }])\), which can be computed as an unnormalized cross-correlation in the frequency domain. For convenience, quickPIV implements the inverse of Eq. (4), \(1 / ( 1 + NSQECC[\mathbf{s }])\), to obtain a maximum peak at the translation that minimizes the differences between the interrogation and search volumes. Peak sub-voxel approximation In order to detect non-integer translations, two sub-voxel interpolation methods are included in quickPIV: the centroid-based and the 3-point Gaussian sub-voxel approximations [41]. In both methods, sub-voxel refinements are computed by considering the direct neighboring values around the maximum peak of the cross-correlation matrix. The centroid-based sub-voxel refinements, \(\Delta\), are computed by $$\begin{aligned} \Delta [\mathbf{d }] = \frac{C[\mathbf{x }+\mathbf{d }] - C[\mathbf{x }-\mathbf{d }]}{C[\mathbf{x }+\mathbf{d }] + C[\mathbf{x }] + C[\mathbf{x }-\mathbf{d }]} \ , \end{aligned}$$ where C refers to the cross-correlation matrix, \(\mathbf{x }\) are the voxel coordinates of the maximum peak in the cross-correlation matrix, and \(\mathbf{d }\) is the standard basis vector for each dimension, e.g. (1, 0, 0) for the first dimension. Following the same notation, the 3-point Gaussian sub-voxel refinement of the integer displacement is given by $$\begin{aligned} \Delta [\mathbf{d }] = \frac{ \ln { (C[\mathbf{x }+\mathbf{d }]) } - \ln { (C[\mathbf{x }-\mathbf{d }]) } }{ 2\;\ln {(C[\mathbf{x }+\mathbf{d }])} - 4\;\ln {(C[\mathbf{x }])} + 2\;\ln {(C[\mathbf{x }-\mathbf{d }])} } \ . 
\end{aligned}$$ To acquire sub-voxel precision, the interpolated \(\Delta\) is added to the integer displacement vector from the maximum peak to the center of the cross-correlation matrix. QuickPIV defaults to the 3-point Gaussian sub-voxel approximation, which performs particularly well when the input volumes contain Gaussian particles, as the convolution of Gaussians produces another Gaussian distribution [42]. We implemented a multi-pass procedure to increase the accuracy of the PIV analysis and to extend its dynamic range, i.e., the range of detectable displacements. While a search margin can be added to increase the dynamic range of a standard PIV analysis, it does not eliminate the dependence on small interrogation volumes to achieve high resolutions, which limits the specificity and enhances the noise of the intensity patterns contained in the interrogation volumes [43]. Alternatively, high resolutions with good dynamic ranges can be achieved by combining large interrogation volumes with high overlaps. However, this approach is computationally expensive and increases the final resolution by adding redundancy between consecutive cross-correlation computations [44]. The multi-pass algorithm starts the PIV analysis with up-scaled interrogation and search volumes, followed by iterative rounds of PIV analyses with gradually smaller interrogation size and search volumes. Additionally, the displacements calculated during previous rounds are used to offset the sampling of the search volumes at future rounds [45]. The multi-pass factor f defines the number of total rounds that will be conducted. Therefore, multi-pass is enabled by setting f larger than 1. At each multi-pass round, the interrogation size, search margin and overlap parameters are scaled with respect to their user-defined values. The value of these parameters in each round r is computed as follows: $$\begin{aligned} \kappa _r = ( 1 + f - r )\ *\ \kappa _0 \ , \end{aligned}$$ where \(\kappa _0\) designates the user-defined value for interrogation size, search margin or overlap, \(\kappa _r\) is the up-scaled value of these parameters at round r, and f is the multi-pass factor. The final round is performed with a factor of 1, i.e., the initial interrogation sizes. Some of the post-processing features explained below include local information around the vector being processed. In such cases, a square (2D) or cubic (3D) region is sampled around each post-processed vector. For instance, \(r_x\) and \(r_y\) define a square area around an arbitrary vector in a 2D vector field, \(v_{i,j}\), given by \(L = \{ v_{i+r_x,j+r_y} \ | \ -r \le r_x \le r \ and \ -r \le r_y \le r \}\). A PIV-computed vector is considered unreliable if it was computed from a cross-correlation matrix containing multiple peaks with similar heights as the maximum peak. This reveals uncertainty about the underlying displacement, which might be caused by unspecific structures, background noise and/or loss of structure pairs [46, 47]. QuickPIV adopts the primary peak ratio, PPR, to measure the specificity of each computed vector, $$\begin{aligned} \mathrm {PPR} = \frac{C_{\max1}}{C_{\max2}} \ , \end{aligned}$$ where \(C_{\rm max}{1}\) is the height of the primary peak in the cross-correlation matrix and \(C_{\rm max}{2}\) is the height of the secondary peak. Vectors with high PPR values are considered to have high signal-to-noise ratios [48]. 
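The quantities just introduced are small, local computations on the correlation matrix. The sketch below illustrates the 3-point Gaussian sub-voxel refinement, the multi-pass parameter scaling and the primary peak ratio with hypothetical function names; it is not quickPIV's implementation, and it assumes that the maximum peak does not lie on the border of the correlation matrix and that the correlation values around it are positive.

```julia
# 3-point Gaussian sub-voxel refinement around the maximum peak (one value per
# dimension); the result is added to the integer displacement of the peak.
function gaussian_refinement(corr::AbstractArray{<:Real,3})
    p = argmax(corr)
    return ntuple(3) do d
        e = CartesianIndex(ntuple(i -> i == d ? 1 : 0, 3))
        cm, c0, cp = log(corr[p-e]), log(corr[p]), log(corr[p+e])
        (cp - cm) / (2cp - 4c0 + 2cm)
    end
end

# Multi-pass up-scaling of interrogation size, overlap or search margin at
# round r of an analysis with multi-pass factor f.
scale_param(κ0, f, r) = (1 + f - r) * κ0

# Primary peak ratio: the secondary peak is searched outside a small exclusion
# region around the primary peak.
function primary_peak_ratio(corr::AbstractArray{<:Real,3}; exclusion::Int = 1)
    p, c1, c2 = argmax(corr), maximum(corr), -Inf
    for I in CartesianIndices(corr)
        maximum(abs.(Tuple(I - p))) <= exclusion && continue
        c2 = max(c2, corr[I])
    end
    return c1 / c2
end
```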
Therefore, quickPIV offers filtering of unreliable vectors by discarding those vectors with a PPR value lower than a given threshold, \(th_{\rm PPR}\) [48]. Additionally, quickPIV includes both global and local filtering in terms of vector magnitudes. Currently, quickPIV offers low pass and high pass filters of vector magnitudes, which can be concatenated to perform band-pass filtering. Global magnitude filtering can also be performed on those vectors whose magnitude is more than a certain number of standard deviations away from the mean magnitude of the vector field. Local magnitude filtering is implemented by discarding vectors whose magnitude is at least n standard deviations away from the mean magnitude, computed in a radius r around each vector. All filtering functions in quickPIV accept an optional argument that is used to determine the replacement scheme of the filtered vectors. Currently, quickPIV offers three replacement functions: zero-replacement, mean replacement and median replacement. The former sets all components of the filtered vectors to zero. Both the mean and median replacement schemes are parametrized by the radius of the neighboring region used to compute the mean or median vector. Spatial and temporal averaging Spatial and spatio-temporal averaging of the computed vector fields are included in quickPIV. Spatial averaging depends on one parameter: the radius, \(r_s\), of the considered neighboring region around each vector. Different radii for each dimension can be provided by passing an array of values, \([ r_x, r_y, r_z ]\). Spatio-temporal averaging considers two parameters: the averaging radius in space and the number, \(n_t\), of adjacent vectors along the time axis considered in the temporal averaging, e.g. \(\{ v_{i,j,k,t+r} | -n_t \le r \le n_t \}\). Similarity-selective spatial averaging Spatial averaging tends to dissolve vectors adjacent to the background and creates artifactual vectors in regions containing dissimilar vectors. A similarity-selective spatial averaging has been developed to overcome these limitations, and to enhance the visualization of collective migration. Two vectors are considered to be similar if they point in the same direction, which is established if their normalized dot product is greater than a user-defined threshold. Given any vector in the PIV-computed vector field, \(\mathbf {v}[i,j,k]\), an average vector is built by considering only those neighboring vectors at a radius r that are similar to \(\mathbf {v}[i,j,k]\). The averaged vector is then normalized to unit length, and its magnitude is further re-scaled by the ratio between the number of similar neighboring vectors and the total number of neighboring vectors. Therefore, the effect of similarity-selective averaging is to average the direction of each vector among similar neighboring vectors, and to re-scale the magnitude of each vector by the local collectiveness. QuickPIV provides functions for extracting several relevant quantities from the PIV-computed vector fields. Velocity maps are generated by returning the magnitude of each vector from a given vector field. QuickPIV implements convergence/divergence mappings to detect the presence of sinks and sources in the PIV-computed vector fields. This is done by generating a cube of normalized vectors that either converge (sink) or diverge (source) from the center of the cube, and cross-correlating this cube with the normalized vector field. 
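A direct (non-FFT) version of this convergence/divergence mapping is sketched below for readability; the names are hypothetical and quickPIV's own, correlation-based implementation may differ. The template side length n is assumed to be odd so that the cube has a well-defined central vector.

```julia
# Template cube of side n (n odd): unit vectors pointing away from the centre.
function divergence_template(n::Int)
    c = (n + 1) / 2
    T = zeros(3, n, n, n)
    for k in 1:n, j in 1:n, i in 1:n
        v  = [i - c, j - c, k - c]
        nv = sqrt(sum(abs2, v))
        T[:, i, j, k] = nv > 0 ? v ./ nv : v          # centre vector stays zero
    end
    return T
end

# Correlating the template with the normalized vector field (U, V, W) gives
# positive values at sources and negative values at sinks.
function divergence_map(U, V, W, n::Int)
    T, h, out = divergence_template(n), n ÷ 2, zeros(size(U))
    mag = sqrt.(U .^ 2 .+ V .^ 2 .+ W .^ 2) .+ eps()
    Un, Vn, Wn = U ./ mag, V ./ mag, W ./ mag
    for I in CartesianIndices(U)
        lo = Tuple(I) .- h
        all(lo .>= 1) && all(lo .+ (n - 1) .<= size(U)) || continue
        s = 0.0
        for k in 1:n, j in 1:n, i in 1:n
            x, y, z = lo .+ (i - 1, j - 1, k - 1)
            s += T[1,i,j,k] * Un[x,y,z] + T[2,i,j,k] * Vn[x,y,z] + T[3,i,j,k] * Wn[x,y,z]
        end
        out[I] = s
    end
    return out
end
```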
This mapping is parametrized by the size of the cube, which determines the scale of the convergence/divergence map. Collectiveness maps are built by computing the number of neighboring vectors at a radius r from each vector in the vector field \(v_{i,j}\) whose normalized dot product is greater than a threshold. Pseudo-trajectories Pseudo-trajectories can be generated with quickPIV to visualize the approximate paths of cells and tissues from the PIV-computed vector fields. When computing pseudo-trajectories, a user-defined number of particles is randomly distributed within the dimensions of the vector field. The position of each particle is rounded to integer coordinates in order to sample a displacement from the vector field, which shifts the particle from its current position. By repeating this process, a three-dimensional path is obtained for each simulated particle. It is possible to constrain the computation of pseudo-trajectories to a period of interest by specifying the start and end time points. Moreover, spatially interesting regions can be selected by specifying the spatial range over which to initialize the positions of the particles. Conversion to physical units Last but not least, to convert voxel displacements into physically meaningful velocities both the frame rate and the physical units of each voxel dimension need to be taken into account. These values can be provided during the creation of the PIV-parameter object and quickPIV will automatically re-scale the resulting vector field after the analysis. QuickPIV accuracy evaluation The correct implementation of a PIV analysis depends on its ability to detect translations. Accordingly, the accuracy of quickPIV is assessed by generating pairs of artificial images and volumes containing synthetic particles related by a known translation. Synthetic particles are rendered according to [49]. The bias and random errors are computed to evaluate the agreement of quickPIV predictions to the known translations [49]: $$\begin{aligned}&\epsilon _{\rm bias} = \frac{1}{n}\sum _{i=1}^{n} | d_{PIV,i} - d_{\rm true} | \end{aligned}$$ $$\begin{aligned} \epsilon_{\rm rand} = \sqrt{ \frac{1}{n} \sum _{i=1}^{n} {(d_{{\rm PIV},i}-\overline{d_{\rm PIV}})^2}} \end{aligned}$$ where \(d_{{\rm PIV},i}\) is the \(i^{\mathrm {th}}\) PIV-computed displacement, \(d_{\rm true}\) is the known translation, \(\overline{d_{\rm PIV}}\) is the average PIV-computed displacement and n is the number of repeats. The bias and random errors represent the accuracy and the precision of quickPIV's approximation of the underlying translation, respectively. The effect of the following parameters on the accuracy of quickPIV are evaluated, both in 2D and 3D: interrogation size, particle density, particle diameter, 3-point Gaussian sub-pixel approximation and the use of a search margin to correct for out-of-frame loss. QuickPIV performance evaluation The performance of our software is evaluated by comparing the execution times of quickPIV with those of the C++ and Python implementations hosted in openPIV. First, we analyzed the time required to compute cross-correlation in the frequency domain with the three packages. By comparing the execution times of quickPIV and the C++ implementation, we can determine whether calling the FFTW C-library from Julia adds any noticeable overhead compared to C++. 
Since the Python implementation uses the NumPy library to compute the Fourier and inverse Fourier transforms, this test also reveals any performance differences between FFTW and NumPy. On the other hand, we compare the execution time of complete 2D and 3D PIV analyses between the three PIV packages. The set of parameters used in these PIV analyses are listed in the description of Table 1. For the sake of using a common benchmarking pipeline, language-specific packages for measuring the execution times are avoided. Each execution time measurement shown in Fig. 2e corresponds to the minimum execution time from 1000 repeated measurements. Taking the minimum execution time filters out random delays originating from background processes [50]. The left panel in Fig. 2e illustrates the interference of background processes in the distribution of 1000 execution measurements of FFT cross-correlation. All measurements presented below were performed on a machine with an Intel Core i5-8300H processor \(4\times 2.3\) GHz. All PIV analyses were executed on a single thread. Accuracy and performance evaluations of quickPIV. a–d Mean biases (red lines) and random errors (green error bars) of unnormalized PIV applied to synthetic data containing particles shifted by homogeneous translations. a PIV errors are reduced by increasing interrogation size. As illustrated under the 2D examples, the intensity patterns contained in small interrogation areas (5\(\times\)5 pixels) display unspecific structures, and are more susceptible to out-of-frame loss. The 2D analyses were performed on 200\(\times\)200 pixel images containing 5k particles, and 3D analyses on 200\(\times\)200\(\times\)200 voxel volumes with 100k particles. b Particle densities of around 15 particles per interrogation region minimize PIV errors. Low particle count are susceptible to out-of-frame loss, while high particle densities degrade PIV accuracies by producing uniform intensity patterns. Interrogation size during this evaluation was 10\(\times\)10 pixels and 10\(\times\)10\(\times\)10 voxels. c Particle sizes of 1-2 pixels achieve optimal PIV accuracies. The 2D examples show that large particle radii blur the intensity pattern inside the interrogation regions, reducing the pattern complexity. d Top, PIV accuracy under non-integer translations oscillates between 0.0 and 0.5. Bottom, with 3-point Gaussian interpolation, errors are reduced by an order of magnitude. The leftmost figures show a slight loss of accuracy due to out-of-frame loss as the translation strength increases. Adding a search margin greater than the translation strength completely compensates for this effect. e Left, execution times distribution of 1000 FFT computations on input images of \(40 \times 40\) pixels. Background processes sporadically slow down FFT execution. Right, comparison of 2D FFT performance between Julia, C++ and Python for increasing input sizes. Julia and C++ calls of FFTW are equally fast, while the FFT implementation in NumPy is approximately three times slower. The execution time of FFT spikes when the input sizes are prime numbers, e.g. 23, 29 or 43 QuickPIV on the embryogenesis of Tribolium castaneum To test the accuracy of quickPIV on biological data, we analyzed three 3D time-lapse data sets of the embryonic development of T. 
castaneum: (1) two embryos from a hemizygous transgenic line that ubiquitously expresses nuclear-localized mEmerald and (2) one embryo from a double hemizygous transgenic line that expresses nuclear-localized mRuby2 ubiquitously and actin-binding Lifeact-mEmerald only in the serosa [22]. Using LSFM, the embryos were recorded at intervals of (1) 30 minutes or (2) 20 minutes along 4 directions in rotation steps of 90\(^{\circ }\) around the anterior-posterior axis in (1) one or (2) two fluorescence channels [21]. The four directions were fused according to Preibisch et al. [51] to generate evenly illuminated volumes with isotropic resolution. The fused volumes were cropped to \(1000\times 600 \times 600\) voxels (height,width,depth), the embryos were manually placed in the center of the volumes and their anterior–posterior axis was manually aligned with the vertical axis. Three time points during gastrulation were analyzed with quickPIV in the two embryos of data set (i). Two time points of the double hemizygous transgenic line (ii) were analyzed in both channels, allowing to compare the vector fields obtained from the Lifeact-mEmerald actin signal with those from the nuclear-localized mRuby2 marker. The PIV analyses were performed on both data sets with NSQECC. The vector fields resulting from these analyses are shown in Figs. 3 and 4, post-processed with similarity-selective averaging with an averaging radius of 2 neighboring vectors and a similarity threshold of 0.5. The visualization of the embryo volumes and the computed vector fields has been done in Paraview 5.7.0. 3D PIV analysis on the embryogenesis of two T. castaneum embryos. Each vector field in a. 1–3 and b. 1-3 is plotted on top of the two volumes it was computed from, where the red signal corresponds to the initial time point and blue intensities belong to the consecutive time point. A few spurious vectors obtained on the background due to the fluorescence bleeding from the embryo were manually curated. Embryos are shown from their ventral and lateral sides. a-b.1 At the onset of gastrulation, serosa nuclei at the anterior end of both analyzed embryos collectively spread towards the dorsal side of the embryos. Moreover, the central and posterior regions on the ventral side undergo coordinated condensation movements that will later give rise to the internalizing germband. a-b.2 The wide-spread serosa cells over the anterior pole and the dorsal side engage in a highly coordinated movement of the tissue towards the posterior pole. Time points a-b.3 are characterized by a highly collective flow of serosa cells towards the ventral side, leading to the emergence and closing of the serosa window. Serosa cells at the anterior pole, dorsal side and the posterior pole collectively migrate clock-wise towards the ventral midline, giving rise to a cell migration pattern resembling a vortex. c Exemplary post-processing analyses applied to the vector field shown in a.1. From left to right: velocity map showing higher velocities in red, divergence(purple)/convergence(cyan) map, collectiveness map displaying higher local collectiveness in yellow, and pseudo-trajectories at the anterior pole of the embryo in a.1) over 10 time points (5 h) Validation of quickPIV on non-segmentable data sets. a PIV analyses were performed on the actin signal of a double hemizygous transgenic embryo before (top) and during (bottom) gastrulation. 
For each time point, the two consecutive volumes analyzed with PIV are shown in red and blue, next to the computed vector fields after similarity-selective spatial averaging. b PIV was also performed for the same time points on the nuclear signal, and the resulting similarity-selective averaged vector fields are shown next to the actin vector fields. c The orientation similarity between each pair of vectors in the two channels is computed through their normalized dot product. The Euclidean error between each pair of vectors is computed as well to measure the combined magnitude and direction differences between the vectors. The scatter plot of these two quantities shows that most vectors are clustered around a region of high normalized dot product and low euclidean error, indicating good agreement between the vector fields in (a) and (b). d Three patterns of cell migration can be distinguished in the T. castaneum data set (i): Segmentable and trackable (S/T), segmentable and non-trackable (S/NT) and non-segmentable and non-trackable (NS/NT) nuclei. The serosa consists of segmentable nuclei. While some regions are easily trackable, in others it is difficult to establish unambiguous correspondences of the nuclei between the two time points. High cell densities render nuclei in the gastrulating embryo non-segmentable, and therefore non-trackable. e Three-dimensional mapping of the height of the maximum peak of NSQECC at each interrogation area during the PIV analysis of the two volumes in (d). High values are achieved both in the segmentable and trackable and non-segmentable regions of the embryo, indicating that the interrogation and search patterns in these regions are well approximated by a translation and high PIV accuracies are expected The accuracy evaluation of quickPIV quantitatively reproduces the expected accuracies described in the PIV literature, attesting the correctness of our PIV implementation [52,53,54,55]. Our analysis shows a monotonic decrease of the total error (bias and random errors) with increasing interrogation sizes [55], reaching errors as low as \(0.02 \pm 0.01\) pixels/voxels (Fig. 2a). This is the expected behavior in our synthetic tests, since all simulated particles are subjected to the same translation. Our results also agree on the presence of optimal values for both particle density and particle size [52]. It can be appreciated from the 2D examples included in Fig. 2b and c that high particle densities and large particle sizes generate diffuse images that can not be unambiguously matched by cross-correlation. Without sub-pixel/voxel interpolation, the PIV analysis cannot capture the decimal components of the simulated translations, shown in the top row of Fig. 2d [52, 53]. As described in the literature, the 3-point Gaussian sub-pixel approximation reduces this error by one order of magnitude (bottom row in Fig. 2d) [56]. Moreover, search margins are needed to counteract the out-of-frame errors induced by increasing translation (Fig. 2d, left panel). A search margin of 4 pixels/voxels (Fig. 2d, middle and right panels) completely compensates this effect for all simulated translations. We performed an analogous accuracy analysis on the T. castaneum data set, where we quantified the accuracy of quickPIV in detecting know translations on one 3D volume in data set (i). We observed that diffuse and unspecific patterns in the embryo induce biases when using ZNCC. 
These biases are completely avoided by using NSQECC, which detects the underlying translation with 100% accuracy given a sufficiently large search margin (see Figure S1). We further analyzed the height distribution of the maximum cross-correlation peaks during the PIV analysis with NSQECC of two consecutive volumes of T. castaneum, shown in Fig. 4e. High peaks are found in the collectively migrating serosa cells at the anterior pole of the embryo, which we classify as segmentable and trackable (S/T), and in the non-segmentable and non-trackable (NS/NT) gastrulating embryo, Fig. 4d. These high peaks indicate that cellular migration in these regions is well approximated by a collective translation, and that the intensity patterns between the interrogation and search volumes are not deformed, rotated or sheared significantly. Non-collective migration of the serosa cells reduces the height of the NSQECC peaks in the central regions of the extraembryonic membranes, which we consider to be segmentable but not easily trackable (S/NT), since cell correspondences between the two time points can not unambiguously be assessed visually, Fig. 4d. A three-dimensional visualization of the maximum peak distribution in Fig. 4e is provided in Video S1. Performance-wise, calling the FFTW C-library is equally efficient from Julia and C++ (see Fig. 2e, right). In contrast, the performance of the NumPy implementation of the FFT algorithm is three times slower than the one provided in the FFTW library. This performance difference is translated to the complete 2D and 3D PIV analyses of the PIV packages, where the Python implementation in openPIV is consistently three times slower than both the C++ implementation (2D) and quickPIV (2D and 3D), see Table 1. Our results also show that 2D PIV analyses are performed faster with quickPIV than with the C++ implementation in openPIV. Since both packages share the same cross-correlation performance, this difference can only be explained by compiler optimizations brought by Julia's compilation pipeline, or by the ease of implementing good programming practices in Julia's high-level environment. For instance, quickPIV avoids bound checks when possible, minimizes memory allocations by using in-place operations, and leverages SIMD (single instruction, multiple data) operations exposed by the Julia programming language. Table 1 Performance evaluation of complete PIV analyses From a practical standpoint, we found that performance of PIV analyses can be dramatically increased by subsampling the input volumes and removing the background interrogation areas from the PIV analysis. For example, a PIV analysis of two volumes of T. castaneum with the following parameters (interSize of 60 voxels, searchMargin of 0 voxels, overlap of 30 voxels and multi-pass factor of 2), while skipping interrogation volumes with a maximum intensity lower than 100, takes 29 minutes. After applying subsampling by a factor of 3, the analogous analysis on the subsampled data (interSize of 20 voxels, searchMargin of 0 voxels, overlap of 10 voxels and multi-pass factor of 2) takes 55 s to complete. The results shown in Figs. 3 and 4, which were obtained after subsampling the input volumes by a factor of three in all dimensions, are in full agreement with the same analyses performed without subsampling (Fig. S2). The spatial resolution in this data set was very high, which is necessary to discern smaller structures. For motion analysis a lower image resolution is sufficient to obtain the same results. 
Before subsampling images, we, however, advise to test the agreement between the PIV vector fields in the original and a subsampled image. The application of quickPIV to the two T. castaneum embryos of data set (i) is shown in Fig. 3. The red and blue intensities correspond to the nuclear signal of the first and second input volumes, respectively, which aids in visualizing the underlying displacement of the nuclei between each pair of analyzed time points. The vector fields at the anterior regions of the embryos in Figs. 3a.1 and b.1 capture the underlying radially diverging pattern of cell migration towards the dorsal side of the embryo. Our PIV analyses also capture the coordinated condensation movement of the cells in the central and posterior regions, which will later give rise to the germband. These regions exhibit high cellular densities, challenging visual examination and rendering nuclei segmentation and tracking approaches unfeasible. Figure 3a.2 and b.2 are characterized by vastly coordinated movements of the wide-spread serosa cells over the anterior pole and along the dorsal side towards the posterior pole. Figures 3a.3 and b.3 depict a highly coordinated flow of serosa cells from the dorsal side over both, the posterior pole and the lateral equator, towards the ventral side, where they eventually give rise to the serosa window [57]. These observations are not only consistent with previous studies of collective cell migration during the gastrulation of T. castaneum, which were obtained through 1D PIV analyses [58] and manual 2D tracking of the extra-embryonic serosa cells [59, 60], but for the first time describe this process in 3D. Figure 3c illustrates the velocity, divergence/convergence and collectiveness mappings as well as some computed pseudo-trajectories on the anterior region of the embryo. Finally, the results from the analysis of the double hemizygous transgenic line, (ii), demonstrate the robustness of quickPIV on non-segmentable data. The agreement of the vector fields on the anterior pole of the embryo (which is non-segmentable in the actin signal, segmentable in the nuclear signal and exhibits high degrees of collective cell migration in both channels) indicates that PIV accuracies are independent of the segmentability of the input data sets, Figs. 4a and b. A quantitative comparison of the PIV vector fields between the nuclear and the actin stained volumes shows a high degree of similarity. This is illustrated in the scatter plot shown in Fig. 4c, exhibiting a high density in the area of large dot products and small Euclidean errors. The similarity of the actin and nuclear vector fields in highly dense non-segmentable regions further underlines the robustness of quickPIV regardless of the labeling scheme of the data sets. QuickPIV represents a free and open-source solution for performing efficient and robust quantification of collective cellular migration in the increasingly popular 3D dynamic data sets in life sciences. Our software includes several well established PIV features, such as multi-pass and sub-voxel peak approximation, as well as post-processing functions and visualization of the 3D vector fields in Paraview. To our knowledge, quickPIV is the only free PIV software that offers normalized squared error cross-correlation (NSQECC), which we found to be necessary for accurately describing collective cell and tissue migration in non-segmentable data sets. 
By using NSQECC, we could quantify collective cell migration from the non-segmentable and highly dynamic actin signal in a double hemizygous transgenic embryo of T. castaneum. The resulting vector fields were in complete agreement with previously published descriptions of the gastrulation movements in T. castaneum, and showed a strong correlation with the vector fields obtained from the nuclear signal of the same embryo. Moreover, the height distribution of the maximum cross-correlation peaks further indicates that NSQECC is robust to non-segmentable data. The performance evaluation of quickPIV shows that our software is three times faster in 2D and 3D analyses than the Python PIV implementation in openPIV, and also faster than the 2D implementation written in C++. This performance advantage is only possible because of the design of the Julia programming language and the optimization possibilities that it provides. By considering subsampling and excluding unnecessary regions of the input data (such as empty background), the quickPIV analysis of a pair of 3D volumes can be reduced to several seconds. These speeds are compatible with real-time PIV analyses, enabling the integration of PIV pipelines into smart microscopy techniques. For example, vector fields obtained with quickPIV could be used to automatically detect the onset of developmental events and adjust the acquisition parameters accordingly, e.g. laser power or acquisition interval. Overall, we believe that 3D PIV analyses will play an important role in understanding 3D biological processes as novel 3D imaging techniques are developed and adopted. For example, SVIM can already achieve up to 100\(\times\) higher recording speeds than standard LSFM. Such high temporal resolutions increase the accuracy of PIV and make PIV the ideal solution for reliable and automated pipelines for quantifying collective cellular migration. However, the computational demands required to analyze such temporally resolved data sets can only be met by further optimizations of quickPIV's performance. Therefore, future efforts will be directed towards adding multi-threading support and implementing our PIV analyses on a graphics card [61]. Availability and requirements Project name: quickPIV Project home page: https://github.com/Marc-3d/quickPIV Operating system(s): Platform independent Programming language: Julia Other requirements: Julia1.3.1 or higher License: MIT License Any restrictions to use by non-academics: None. The data sets used and/or analyzed during the current study can be accessed through the following Zenodo https://doi.org/10.5281/zenodo.5504076. Friedl P, Gilmour D. Collective cell migration in morphogenesis, regeneration and cancer. Nat Rev Mol Cell Biol. 2009;10(7):445–57. https://doi.org/10.1038/nrm2720. Chuai M, Hughes D, Weijer CJ. Collective epithelial and mesenchymal cell migration during gastrulation. Curr Genomics. 2012;13(4):267–77. https://doi.org/10.2174/138920212800793357. Szabó A, Mayor R. Mechanisms of neural crest migration. Annu Rev Genet. 2018;52(1):43–63. https://doi.org/10.1146/annurev-genet-120417-031559. Jiang J, Li L, He Y, Zhao M. Collective cell migration: implications for wound healing and cancer invasion. Burns Trauma. 2013;1(1):21. https://doi.org/10.4103/2321-3868.113331. Barriga EH, Franze K, Charras G, Mayor R. Tissue stiffening coordinates morphogenesis by triggering collective cell migration in vivo. Nature. 2018;554(7693):523–7. https://doi.org/10.1038/nature25742. 
Vedula SRK, Leong MC, Lai TL, Hersen P, Kabla AJ, Lim CT, Ladoux B. Emerging modes of collective cell migration induced by geometrical constraints. Proc Natl Acad Sci. 2012;109(32):12974–9. https://doi.org/10.1073/pnas.1119313109. Lin S-Z, Ye S, Xu G-K, Li B, Feng X-Q. Dynamic migration modes of collective cells. Biophys J. 2018;115(9):1826–35. https://doi.org/10.1016/j.bpj.2018.09.010. Santi PA. Light sheet fluorescence microscopy: a review. J Histochem Cytochem. 2011;59(2):129–38. https://doi.org/10.1369/0022155410394857. Power RM, Huisken J. A guide to light-sheet fluorescence microscopy for multiscale imaging. Nat Methods. 2017;14(4):360–73. https://doi.org/10.1038/nmeth.4224. Reynaud E, Krzic U, Greger K, Stelzer E. Light sheet-based fluorescence microscopy: more dimensions, more photons, and less photodamage. HFSP J. 2008;2:266–75. https://doi.org/10.2976/1.2974980. Kaufmann A, Mickoleit M, Weber M, Huisken J. Multilayer mounting enables long-term imaging of zebrafish development in a light sheet microscope. Development. 2012;139(17):3242–7. https://doi.org/10.1242/dev.082586. Huisken J. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science. 2004;305(5686):1007–9. https://doi.org/10.1126/science.1100035. Voleti V, Patel KB, Li W, Campos CP, Bharadwaj S, Yu H, Ford C, Casper MJ, Yan RW, Liang W, Wen C, Kimura KD, Targoff KL, Hillman EMC. Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0. Nat Methods. 2019;16(10):1054–62. https://doi.org/10.1038/s41592-019-0579-4. Truong TV, Holland DB, Madaan S, Andreev A, Keomanee-Dizon K, Troll JV, Koo DES, McFall-Ngai MJ, Fraser SE. High-contrast, synchronous volumetric imaging with selective volume illumination microscopy. Commun Biol. 2020;3(1). https://doi.org/10.1038/s42003-020-0787-6. ...Chen B-C, Legant WR, Wang K, Shao L, Milkie DE, Davidson MW, Janetopoulos C, Wu XS, Hammer JA, Liu Z, English BP, Mimori-Kiyosue Y, Romero DP, Ritter AT, Lippincott-Schwartz J, Fritz-Laylin L, Mullins RD, Mitchell DM, Bembenek JN, Reymann A-C, Böhme R, Grill SW, Wang JT, Seydoux G, Tulu US, Kiehart DP, Betzig E. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science. 2014;346(6208):1257998. https://doi.org/10.1126/science.1257998. Sveen JK. An introduction to MatPIV v. 1.6.1. University of Oslo, Department of Mathematics;2004. Thielicke W, Stamhuis EJ. PIVlab - towards user-friendly, affordable and accurate digital particle image velocimetry in MATLAB. Journal of Open Research Software. 2014;2. https://doi.org/10.5334/jors.bl. Liberzon A, Lasagna D, Aubert M, Bachant P, Käufer T, Jakirkham, Bauer A, Vodenicharski B, Dallas C, Borg J, Tomerast, Ranleu. OpenPIV/openpiv-python: OpenPIV - Python (v0.22.2) with a new extended search PIV grid option. Zenodo. 2020. https://doi.org/10.5281/ZENODO.3930343 JPIV. https://eguvep.github.io/jpiv/ Bezanson J, Karpinski S, Shah VB, Edelman A. Julia: A fast dynamic language for technical computing. CoRR abs/1209.5145 (2012). arxiv 1209.5145 Strobl F, Klees S, Stelzer EHK. Light sheet-based fluorescence microscopy of living or fixed and stained tribolium castaneum embryos (122). 2017. https://doi.org/10.3791/55629 Strobl F, Stelzer EHK. A deterministic genotyping workflow reduces waste of transgenic individuals by two-thirds. Sci Rep. 2021;11(1). https://doi.org/10.1038/s41598-021-94288-0 Adrian R. Twenty years of particle image velocimetry. Exp Fluids. 2005;39:159–69. 
https://doi.org/10.1007/s00348-005-0991-7. Gollin D, Brevis W, Bowman ET, Shepley P. Performance of PIV and PTV for granular flow measurements. Granular Matter 2017;19(3). https://doi.org/10.1007/s10035-017-0730-9. Ferrari S. Image analysis techniques for the study of turbulent flows. EPJ Web of Conferences. 2017;143:01001. https://doi.org/10.1051/epjconf/201714301001. Melling A. Tracer particles and seeding for particle image velocimetry. Meas Sci Technol. 1997;8(12):1406–16. https://doi.org/10.1088/0957-0233/8/12/005. Keane RD, Adrian RJ. Theory of cross-correlation analysis of PIV images. Appl Sci Res. 1992;49(3):191–215. https://doi.org/10.1007/bf00384623. Stichel D, Middleton AM, Müller BF, Depner S, Klingmüller U, Breuhahn K, Matthäus F. An individual-based model for collective cancer cell migration explains speed dynamics and phenotype variability in response to growth factors. NPJ Syst Biol Appl. 2017;3(1). https://doi.org/10.1038/s41540-017-0006-3. Weiger MC, Vedham V, Stuelten CH, Shou K, Herrera M, Sato M, Losert W, Parent CA. Real-time motion analysis reveals cell directionality as an indicator of breast cancer progression. PLoS ONE. 2013;8(3):58859. https://doi.org/10.1371/journal.pone.0058859. Müller B, Bovet M, Yin Y, Stichel D, Malz M, González-Vallinas M, Middleton A, Ehemann V, Schmitt J, Muley T, Meister M, Herpel E, Singer S, Warth A, Schirmacher P, Drasdo D, Matthäus F, Breuhahn K. Concomitant expression of far upstream element (FUSE ) binding protein (FBP ) interacting repressor (FIR) and its splice variants induce migration and invasion of non-small cell lung cancer (NSCLC) cells. J Pathol. 2015;237(3):390–401. https://doi.org/10.1002/path.4588. Glover JD, Wells KL, Matthäus F, Painter KJ, Ho W, Riddell J, Johansson JA, Ford MJ, Jahoda CAB, Klika V, Mort RL, Headon DJ. Hierarchical patterning modes orchestrate hair follicle morphogenesis. PLoS Biol. 2017;15(7):2002117. https://doi.org/10.1371/journal.pbio.2002117. Zhang Y, Xu G, Lee RM, Zhu Z, Wu J, Liao S, Zhang G, Sun Y, Mogilner A, Losert W, Pan T, Lin F, Xu Z, Zhao M. Collective cell migration has distinct directionality and speed dynamics. Cell Mol Life Sci. 2017;74(20):3841–50. https://doi.org/10.1007/s00018-017-2553-6. Zickus V, Taylor JM. 3D + time blood flow mapping using spim-micropiv in the developing zebrafish heart. Biomed Opt Express. 2018;9(5):2418–35. https://doi.org/10.1364/BOE.9.002418. Vennemann P, Lindken R, Hierck B, Westerweel J. Volumetric particle image velocimetry in the developing chicken heart. J Biomech. 2006;39. https://doi.org/10.1016/S0021-9290(06)85562-4. Cheng C-M, Chang Y-F, Wu C-M. Cross-correlation analysis for live-cell image trajectory. 2013;8911:89110. https://doi.org/10.1117/12.2034840. Cornwell JA, Li J, Mahadevan S, Draper JS, Joun GL, Zoellner H, Asli NS, Harvey RP, Nordon RE. Trackpad: Software for semi-automated single-cell tracking and lineage annotation. SoftwareX. 2020;11:100440. https://doi.org/10.1016/j.softx.2020.100440. Cooley JW, Lewis PAW, Welch PD. Historical notes on the fast fourier transform. Proc IEEE. 1967;55(10):1675–7. https://doi.org/10.1109/proc.1967.5959. Frigo M, Johnson SG. The design and implementation of fftw3. Proc IEEE. 2005;93:216–31. https://doi.org/10.1109/JPROC.2004.840301. Lewis JP. Fast normalized cross-correlation Ind Light Magic. 2001;10. Bradski, G.: The OpenCV Library. Dr. Dobbs Journal of Software Tools. 2000. Bastiaans R. Cross-correlation PIV; Theory. Faculty of Mechanical Engineering, Eindhoven: Implementation and Accuracy. 
Eindhoven University of Technology; 2000. Bromiley P. Products and convolutions of gaussian distributions. 2003. Scarano F. A super-resolution particle image velocimetry interrogation approach by means of velocity second derivatives correlation. Meas Sci Technol. 2004;15:475. https://doi.org/10.1088/0957-0233/15/2/023. Roth G, Katz J. Five techniques for increasing the speed and accuracy of PIV interrogation. Meas Sci Technol. 2001;12:238. https://doi.org/10.1088/0957-0233/12/3/302. Scarano F, Riethmuller ML. Iterative multigrid approach in PIV image processing with discrete window offset. Exp Fluids. 1999;26(6):513–23. https://doi.org/10.1007/s003480050318. Masullo A., Theunissen R: On dealing with multiple correlation peaks in PIV. Exp Fluids 2018;59(5). https://doi.org/10.1007/s00348-018-2542-z. Charonko JJ, Vlachos PP. Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio. Meas Sci Technol. 2013;24(6):065301. https://doi.org/10.1088/0957-0233/24/6/065301. Xue Z, Charonko JJ, Vlachos PP. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification. Meas Sci Technol. 2014;25(11):115301. https://doi.org/10.1088/0957-0233/25/11/115301. Raffel M, Willert CE, Scarano F, Kähler CJ, Wereley ST, Kompenhans, J.: PIV Uncertainty and Measurement Accuracy, pp. 203–241. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-68852-7_6. Chen, J., Revels, J.: Robust benchmarking in noisy environments. CoRR abs/1608.04295 (2016). arxiv1608.04295 Preibisch S, Amat F, Stamataki E, Sarov M, Singer RH, Myers E, Tomancak P. Efficient bayesian-based multiview deconvolution. Nat Methods. 2014;11(6):645–8. https://doi.org/10.1038/nmeth.2929. Thielicke, W.: The flapping flight of birds: Analysis and application. PhD thesis, University of Groningen (2014) Michaelis D, Neal DR, Wieneke B. Peak-locking reduction for particle image velocimetry. Meas Sci Technol. 2016;27(10):104005. https://doi.org/10.1088/0957-0233/27/10/104005. Nobach H, Bodenschatz E. Limitations of accuracy in PIV due to individual variations of particle image intensities. Exp Fluids. 2009;47(1):27–38. https://doi.org/10.1007/s00348-009-0627-4. Merzkirch W, Gui L. A comparative study of the MQD method and several correlation-based PIV evaluation algorithms. Exp Fluids. 2000;28(1):36–44. https://doi.org/10.1007/s003480050005. Forliti DJ, Strykowski PJ, Debatin K. Bias and precision errors of digital particle image velocimetry. Exp Fluids. 2000;28(5):436–47. https://doi.org/10.1007/s003480050403. Handel K, Grünfelder CG, Roth S, Sander K. Tribolium embryogenesis: a SEM study of cell shapes and movements from blastoderm to serosal closure. Dev Genes Evol. 2000;210(4):167–79. https://doi.org/10.1007/s004270050301. Münster S, Jain A, Mietke A, Pavlopoulos A, Grill SW, Tomancak P. Attachment of the blastoderm to the vitelline envelope affects gastrulation of insects. Nature. 2019;568(7752):395–9. https://doi.org/10.1038/s41586-019-1044-3. Benton MA, Akam M, Pavlopoulos A. Cell and tissue dynamics during tribolium embryogenesis revealed by versatile fluorescence labeling approaches. Development. 2013;140(15):3210–20. https://doi.org/10.1242/dev.096271. Jain, A., Ulman, V., Mukherjee, A., Prakash, M., Cuenca, M.B., Pimpale, L.G., Münster, S., Haase, R., Panfilio, K.A., Jug, F., Grill, S.W., Tomancak, P., Pavlopoulos, A.: Regionalized tissue fluidization is required for epithelial gap closure during insect gastrulation. 
Nature Communications 11(1) (2020). https://doi.org/10.1038/s41467-020-19356-x Liu Y, Zou Q, Luo S. GPU Accelerated Fourier Cross Correlation Computation and Its Application in Template Matching. 2011;163:484–91. https://doi.org/10.1007/978-3-642-25002-6_68. Open Access funding enabled and organized by Projekt DEAL. MP and FM acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—414985841. AD received funding from the LOEWE programme of the state of Hessen (DynaMem). FM is supported by the Giersch Foundation, Frankfurt am Main, and receives funding from the LOEWE programme of the state of Hessen (DynaMem, CMMS). FK received funding from the LOEWE programme of the state of Hessen (CMMS). EHKS was funded by the Cluster of Excellence—Frankfurt am Main for Macromolecular Complexes (CEF-MC, speaker Volker Dötsch) at the Buchmann Institute for Molecular Life Sciences (BMLS) at the Goethe Universität—Frankfurt am Main by the Deutsche Forschungsgemeinschaft (DFG, EXC 115). FS was funded by the Add-on Fellowship 2019 of the Joachim Herz Stiftung. EHKS and FS were funded by the Quantitative Structural Cell Biology Projects programme (Innovations- und Strukturentwicklungsinitiative 'Spitze aus der Breite') of the state of Hessen. The funding body did not play any role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. Equal contributor: Marc Pereyra and Armin Drusko Frankfurt Institute for Advanced Studies (FIAS) and Goethe Universität Frankfurt am Main, Ruth-Moufang-Straße 1, 60438, Frankfurt am Main, Germany Marc Pereyra & Franziska Matthäus Buchmann Institute for Molecular Life Sciences (BMLS), Max-von-Laue Straße 15, 60438, Frankfurt am Main, Germany Franziska Krämer, Frederic Strobl & Ernst H. K. Stelzer Heidelberg University Hospital, Im Neuenheimer Feld 410, 69120, Heidelberg, Germany Armin Drusko Marc Pereyra Franziska Krämer Frederic Strobl Ernst H. K. Stelzer Franziska Matthäus AD implemented the first quickPIV version, which included spatial and FFT cross-correlation, signal-to-noise measurements, multi-pass and sub-pixel approximation. MP added efficient ZNCC and NSQECC cross-correlation, implemented the accuracy and performance evaluations, performed the analysis on T. castaneum, prepared the figures for this paper, and maintains the current version of the package. FK generated the T. castaneum data sets (i and ii) analyzed in this paper. FS oversaw the recording of the T. castaneum embryos and provided the knowledge of the developmental processes of T. castaneum to contrast the PIV-computed vector fields. ES provided the microscopy setup and valuable insights on LSFM technology and T. castaneum development. FM conceptualised and supervised the implementation, as well as the performed analyses, and contributed in the figure preparation. MP, AD, FS and FM wrote the manuscript. FK and ES contributed to the preparation of the manuscript and proof-reading. All authors have read and approved the final manuscript. Correspondence to Marc Pereyra. Software and code The source code and instructions to run the evaluation scripts of quickPIV are available at Github: https://github.com/Marc-3d/quickPIV. The source code of the presented release of quickPIV, v1.0, can be accessed through: https://zenodo.org/record/5530573#.YVGyMX1CTBU. Additional file 1. 
This file contains a video animating the rotation and slicing of the three-dimensional distribution of the maximum peak heights at each interrogation area during the PIV analysis using NSQECC of two 3D volumes of data set (i). This file contains Figs. 1 and 2. Figure S1 reports the results form our synthetic accuracy evaluation on biological data, including the comparison between the accuracies obtained with ZNCC and NSQECC PIV analyses. Figure S2 shows the same PIV analyses as Fig. 3, with the difference that no subsampling of the input volume was used to obtain the results in Figure S2. Pereyra, M., Drusko, A., Krämer, F. et al. QuickPIV: Efficient 3D particle image velocimetry software applied to quantifying cellular migration during embryogenesis. BMC Bioinformatics 22, 579 (2021). https://doi.org/10.1186/s12859-021-04474-0 Light-sheet fluorescence microscopy Collective cell migration Tribolium castaneum
CommonCrawl
Лазерная связь 100Мб/с Лазерная связь 10Гб/c Лазерная связь 1Гб/с Лазерная дальняя связь 100км Антиснайпер БПЛА G8 для океана БПЛА- с грузом до 61кГ БПЛА R8 зенитный комплекс БПЛА U8 морской БПЛА X8 БПЛА Аэростат Радиосвязь 400МГц Радиосвязь 1,2ГГц Дальняя радиосвязь 20 - 100 км Оптоэлектронный генератор - наноструктурный - патент. Патент "Радиционно-стойкий волоконный световод, способ его изготовления " Изобретения патенты база сколько стоит патент на изобретение международный патент на изобретение купить патент на изобретение изобретения защищенные патентом получение патента на изобретение порядок продажа патента на изобретение номер патента на изобретение образец заявления на выдачу патента на изобретение Лазерные атмосферные системы связи Оборудование для БПЛА Радиосвязь 400 МГц Радиосвязь 1,2 ГГц Видеокамеры для БПЛА Инфракрасные системы Радиосвязь 60ГГц Радиосвязь 60ГГц 2 км Антенны - решетки Бортовая радиолокационная станция РЛС для БПЛА Беспилотные летательные аппараты БПЛА G8 БПЛА Z6 БПЛА R8 БПЛА U8 Оптоэлектронный генератор. Оптоэлектронный генератор. Принцип действия. Оптоэлектронный генератор. Характеристики. Оптоэлектронный генератор. Схемы построения. Основы теории оптоэлектронного генератора. Укороченные уравнения оптоэлектронного генератора Полуклассическая теория лазера и оптоэлектронный генератор Лазер для оптоэлектронного генератора Оптоэлектронный генератор с прямой модуляцией Оптоэлектронный генератор с модулятором Маха-Цендера Теория оптоэлектронного генератора с модулятором Маха-Цендера Долговременная нестабильность частоты оптоэлектронного генератора Шумы оптоэлектронного генератора с модулятором Маха-Цендера Флуктуационная модель оптоэлектронного генератора Естественная ширина линии оптоэлектронного генератора Фазовые шумы оптоэлектронного генератора с модулятором Маха-Цендера Оптоэлектронный генератор. Эксперимент. Оптоэлектронный генератор СВЧ Оптоэлектронный генератор СВЧ и КВЧ диапазонов Оптоэлектронный генератор в радиолокации Оптоэлектронный генератор для измерения фазовых шумов Оптоэлектронный генератор в магистральных защищенных ВОСП иформации Оптоэлектронный генератор - формирователь пикосекундных импульсов Оптоэлектронный генератор - датчик температуры и напряжения Оптоэлектронный генератор в связных системах 60-80ГГц для передачи информации 10ГГб/c Оптоэлектронный генератор в беспилотниках БПЛА Главные выводы из анализа оптоэлектронного генератора Оптоэлектронный генератор. Публикации Оптоэлектронный генератор патенты Оптоэлектронный генератор Устойчивость Влияние добротности резонатора лазера на радиочастотные фазовые шумы в оптоэлектронном генераторе Влияние ширины линии лазера на РЧ спектр оптоэлектронного генератора Оптоэлектроника и нанофотоника « Оптоэлектронные генераторы, наноструктурные, сверхмалошумящие СМ и ММ диапазонов Машина времени - локатор - на базе оптоэлектронного генератора Optoelectronic oscillator. Influence of width. Влияние добротности резонатора лазера Оптоэлектронный генератор с ФАП Видео Александра Баркова Новороссия в моем сердце Глава1 Новороссия в моем сердце Глава1(продолжение) Глава3 В Славянск Новороссия в моем сердце Глава 5 Новороссия в моем сердце. Глава 7. Новороссия в моем сердце Глава12 Комбриг Мозговой.Уроки революции. 
Новороссия видео оптоэлектронный генератор ОЭГ СВЧ Патенты2 патенты список 2 Патент оптоэлектронный генератор Наноструктурный оптоэлектронный генератор Защитные шлемы и каски Военная форма камуфляж Реальные путешествие по времени.МАШИНА ВРЕМЕНИ на ЭЛЕКТРОННЫХ КОМПОНЕНТАХ СССР Реальные путешествия по времени,Эксперименты по вращению микрочастиц и изменению спина фотона. Реальные путешествия по времени. Chronology protection conjecture Реальные путешествия во времени. Времени подобные области пространства и времени Реальные путешествия по времени.Может ли выбор в будущем влиять на прошлое Измерения в Итоге? Реальное путешествие по времени.Свобода воли и выбор состояния фотоном Реальные путешествия по времени. Квантовая телепортация фотона на расстояние более 100 км Квантовая модель оптоэлектронного генератора.Реальное путешествие по времени. Симулятор погоды, генератор погоды. Реальное путешествие по времени. Optoelectronic oscillator. Influence of width of the line of optical radiation of the laser of a rating on a radio-frequency range of the optoelectronic oscillator. "RADIOTEKHNIKA" , 2010, №2 UDC 625.371.526. Influence of width of the line of optical radiation of the laser of a rating on a radio-frequency range of the optoelectronic oscillator. Bortsov Alexander A., Ilyin Yuri B. Theoretical and experimental studying of the low-noise optoelectronic oscillator (OEO) with the dispersive fiber-optical line of a delay as systems of two oscillators of optical and radio-frequency ranges is carried out. One of oscillators - the laser serves as a rating for the second - the radio-frequency oscillator. It is shown that the OEO is the radio-frequency oscillator with the ultralow level of spectral density of phase noise (settlement size - 170db/Hz at a generation frequency 10 GHz when detuning 1 kHz). The optoelectronic oscillator (OEO) with the fiber-optical line of a delay (FOLD) [1-6] is a perspective source of the 4...100 GHz. The optoelectronic oscillator OEO can be used as the setting oscillator in devices of radio - and optical locational complexes [7], and also in systems of formation and processing of optical and electronic precision signals, for example, with impulse duration about a picosecond [8]. The OEO executed on the basis of optical microresonators [9] has small dimensions, weight and cost. Experimental results of measurements of spectral density of phase noise of such oscillator, equal-147 dB/Hz at a frequency of generation of 10 GHz at standard detuning of 1 kHz, allow to conclude that the OEO competes with low-noise microwave ovens generators with dielectric resonators on sapphire crystals at frequencies of generation of 8... 70 GHz [10]. Methods of phase stabilization of self-oscillations the OEO are developed [11,12]. Radio-frequency filters with on the basis of optical microresonators [13] which good quality on the microwave oven already makes more than 30000 are improved. Less than 100 cubic millimeters will be in the long term received super low-noise the OEO with overall dimensions. At the same time influence of width of the spectral line of optical radiation of the laser - basic element the OEO - on spectral characteristics of generation of radio-frequency fluctuations is a little studied. The purpose of work is establishment of dependence of spectral density of power of phase noise radio-frequency the OEO from width of the spectral line of optical radiation of the laser. 
We will analyse the OEO as system of two various generators - the optical quantum generator with a generation frequency approximately $\boldsymbol{\nu}_{0}$ =100...200 THz and the radio-frequency oscillator with a frequency of generation of $f { } _ {0} $ =4... 100 GHz. Thus the optical quantum generator is a rating source for the radio-frequency oscillator. In both generators their ranges are formed by the fluctuations having the different nature, and width of the spectral line of radio-frequency self-oscillations is defined by parameters of two self-oscillatory systems - the laser and the oscillator. The OEO is interesting feature that the range of radio-frequency fluctuations of generation is formed not only noise having the electronic nature but also the phase fluctuations of optical radiation of the laser which have the quantum nature and are defined by the spontaneous radiation of the laser. Consideration the OEO as sets of two self-oscillatory systems with dissipation gives the chance to analyse influence of characteristics of the laser on noise the OEO, to investigate further possibility of management of a signal the OEO by means of change of optical frequency and a range of the laser, to study synchronization the OEO an external optical source of radiation, etc. Lower for simplicity and identification of influence of noise of radiation of the laser on noise the OEO is considered that noise of the photodetector, noise of the electronic nonlinear amplifier are small and they can be neglected. 2. Device and principle of work OEO The optoelectronic oscillator consists as it was told, from two self-oscillatory systems - the laser (or the laser diode (LD - ЛД)) and the radio-frequency oscillator (fig. 1). The laser is a rating for the oscillator which is formed, consistently closed in a feedback ring by the electro-optical modulator of the Makh- Zander (MZ - МЦ), the fiber-optical system (FOS -ВОС), the photo diode (FD -ФД), the nonlinear amplifier (NA- НУ), the radio-frequency narrow-band filter (А-Ф ) and an connector (О). The OEO mathematical model with the quantum-dimensional laser diode is in detail studied in [14,15]. Fig. 1. Scheme (and) and photo experimental sample of the microwave oven low-noise laser oscillator of range. LD (LD-ЛД) - the laser diode, MTs (MZ- МЦ) – the electrooptical Makh-Zendera modulator, BOC (FO) - the fiber-optical light guide, FD(PD) - the photodetector, НУ - the nonlinear amplifier, F-A - the radio-frequency filter, O - the connector. Optical radiation (bearing) of the laser arrives on an entrance of the MTs optical modulator in which radiation is modulated by an electric signal of $u=u_{g} (t)$. Further optical radiation via the optical modulator, and fiber-optical system arrives on an entrance of the photodetector (FD). The radio-frequency fluctuations (subbearing) received at the photodetector exit take place through the nonlinear amplifier (NA), the frequency-selective filter (F - Ф) and go on this ring system through an connector to the microwave the MZ modulator entrance. In system the LOG when performing conditions of self-excitation in electronic part of such oscillator there are radio-frequency fluctuations of $u=u_{g} (t)$. 
Thus on an electronic entrance of MZ from an exit of the nonlinear amplifier through an connector in the course of generation of self-oscillations the radio-frequency signal, which instant tension arrives \begin{equation} \label{GrindEQ__1_} u_{g} (t)=U_{0} \cos (2\pi f_{0} t+\phi _{0e} ) {}, (1) \end{equation} where $U_{0} =U_{0M} =U_{0F} $, ${}_{ }$ - amplitude of self-oscillations on an entrance of the MTs modulator or an exit of the filter F, $ f_{0} $ - a radio frequency of self-oscillations, $\phi _{0e} $ - continuous phase shift. In a further statement we will consider the LOG in which the laser is a high-coherent source of optical radiation and line width $\Delta \nu _{L} \ll f_{0} $. The Makh-Zender modulator represents two strip optical wave guides connected on an entrance and an exit of Y - optical otvetvitel (fig. 1). When using in the OEO optoelectronic oscillator of the single fiber light guide (FLG), the difference in $T_{{\rm M20}} $ and $T_{{\rm M10}} $ in channels of the modulator makes \begin{equation} \label{GrindEQ__2_} {\rm \; }\Delta T_{{\rm M}} {\rm \; =\; }T_{{\rm M20}} {\rm -\; }T_{{\rm M10}}{}{},(2) \end{equation} The fiber optical line of a delay in the OEO is generally dispersive, that is delay time is function of optical frequency of the laser \[T_{{\rm BC}} =T_{{\rm BC}} {\rm (}\nu _{} {\rm )}{},{} . (3)\] Thus, the OEO optoelectronic generator represents the closed self-oscillatory system with dissipation which part the dispersive line of a delay is. 2. Mathematical model optoelectronic oscillator OEO For identification of the main mechanisms of formation of a radio-frequency range results of the analysis of systems of the differential equations with fluctuations for the laser and the optoelectronic oscillator are given in system the OEO below. 2.1 The laser in the optoelectronic oscillator We will give the fluctuation equations for it removed with use of the semi-classical theory [17, 18, 19] for reflection of the main spectral properties of the laser below. Dynamic character of noise characteristics of the laser can be described by means of the equations for intensity of electric field of $ E_{L} $ and an optical phase $\varphi _{L} $ the laser \begin{equation} \label{GrindEQ__4_} {\rm \; }dE_{L} {\rm /}dt{\rm \; =\; [}\alpha _{0L} -{\rm (2}\pi \nu _{0P} {\rm /}Q_{0L} )-\beta _{0n} E_{L} ^{2} ]E_{L} -\xi _{\beta AM},{}{}(4) \end{equation} \begin{equation} \label{GrindEQ__5_} {\rm \; }[2\pi\nu _{0} - \Omega _{L} +d\varphi_{L} {\rm /}dt{\rm \; ]E_{L}=\; [}\sigma _{0L}E_{L} +\rho _{0L} E_{L} ^{3} ]-\xi _{\beta FM},{}{}(5) \end{equation} where $\alpha _{0L} $ - optical strengthening of the active environment of the laser, $\nu _{0P}$ -- own frequency of the resonator of the laser, $Q_{0L} $ -- quality of the resonator of the laser, $\beta _{0n} $ -a factor of spontaneous radiation of the active environment, $\Omega _{L}$ --the angular optical frequency of the longitudinal generated fashion of the resonator of laser $\Omega _{L} =2\pi c(n_{L} L_{0L} )^{-1}$ , $n_{L}$ - index of refraction of material laser, $L_{0L}$ - the geometrical length of the resonator of the laser, $c$- velocity of light in vacuum, $\varepsilon _{{\rm 0}}$ - a dielectric constant , $\sigma _{{\rm 0}}$, $\rho _{0L}$ -constant coefficients, $\xi _{\beta AM} $, $\xi _{\beta FM}$ --inphase and quadrature components "lanzhevenovsky" of phase fluctuations . 
Their spectral density are equal according to $S_{\beta AM}$ to $ and $S_{\beta FM}$ which are defined by noise of a field of atoms of the active environment in the laser resonator. Taking into account generation of the laser in a limited strip of optical frequencies casual process of optical radiation of the laser can be considered stationary with zero average value. For such process spectral density of $S_{mL} {\rm (}\nu {\rm )}$ and $S_{\psi L} {\rm (}\nu {\rm )}$ fluctuations of the laser of $m_{L} (t)$ , $\psi _{m} (t)$ are at the solution of the truncated equations (4) and (5) taking into account fluctuation "lanzhevenovsky"of influences of $S_{\beta AM}$, $S_{\beta FM}$ and make a laser range. The spectral line of radiation of the laser can be described approximately Lorentz's function in which width the strip of radiation of the laser is equal $\Delta \nu$. From a laser exit linearly polarized optical radiation of $E_{L}^{} $ arrives on the MZ modulator entrance. Thus dependence on time of $t$ instant intensity of a field of radiation of $E_{L}^{} $ at the central frequency $\nu _{0} $ generation of the laser taking into account amplitude and phase fluctuations of the laser is defined by expression \[{\rm \; }E_{L} {\rm \; =\; (}E_{0L} {\rm +}m_{L} )\exp [j(2\pi \nu _{0} t-\varphi _{0L} -\psi _{m} )] {}{}, (8)\] where ${\rm \; }m_{L} {\rm \; =\; }m_{L} {\rm (}t,R)$, ${\rm \; }\psi _{m} {\rm \; =}\psi _{m} {\rm (}t,R)$ - amplitude and phase fluctuations of a field of $E_{L}^{} $ the laser determined by spectral density of expressions (6) and (7), respectively, $R$ - the index considering spatial dependence of optical radiation of the laser, ${\rm \; }E_{0L} {\rm \; =\; }E_{0L} {\rm (}R)$ - the partsialny amplitude of radiation of the laser of ${\rm \; }\varphi _{0L} {\rm \; =}\varphi _{0L} {\rm (}R)$ - partsialny phase attack of intensity of radiation of the laser. Fig. 2. The spectral density of power of optical radiation of the laser of a rating in the OEO depending on detuning V at various values differences of optical frequencies. V=$\Delta \nu _{}$=$\nu _{} - \nu _{0}$, and $\nu _{}$ and $\nu _{0}$ - the current and central optical frequency of radiation of the laser. Computer modeling was made at $ \frac{\Delta \nu _{} }{\nu _{0}}=10^{-6}$, $\Delta \nu _{}$ = 129 MHz, $\nu _{0}$ =$129 · 10^{12} $Hz $ \frac{\Delta \nu _{} }{\nu _{0}}$= $10^{-10}$ (a curve 1), $10^{-8} $(a curve 2); $10^{-7}$ (curve 3). 2.2 The modulator in the OEO Optical radiation of the laser passes the MZ modulator, the fiber light guide and arrives on a reception platform of the FD photo diode. 
On a platform of the FD photo diode there are two optical radiations which passed the MZ modulator for the first $ {\rm \; }E_{1L} {\rm \; =\; }E_{1L}(R)$ and second $ {\rm \; }E_{2L} {\rm \; =\; }E_{2L}(R)$ to optical channels \begin{equation} \label{GrindEQ__9_} {\rm \;} E_{1L} {\rm =\; }k_{01} \cdot {\rm (}E_{0L} {\rm +}m_{L} )\exp [j2\pi \nu _{0} (t+T_{M1} +T_{{\rm BC}} )-j\varphi _{0L} -j\psi _{m1}] (9)\end{equation} \begin{equation} \label{GrindEQ__10_} {\rm \;} E_{2L} {\rm =\; }k_{02} \cdot {\rm (}E_{0L} {\rm +}m_{L} )\exp [j2\pi \nu _{0} (t+T_{M2} +T_{{\rm BC}} )-j\varphi _{0L} -j\psi _{m2}] (10)\end{equation} With a small amplitude of tension of an entrance signal of modulation on MTs it is possible to use linearization of argument of ${\rm arg}_{12L} $ and for convenience in the coefficient of transfer of the MTs modulator of \[M_{z} {\rm =}k_{01} \cdot \{ 1-\cos {\rm \; }[2\pi \nu _{0} (T_{M20} -T_{M10} )]\}^{1/2}\]. 2.3 The spectral density of $S{}_{\boldsymbol{\mu} }( \boldsymbol{\omega})$ the photodetected fluctuations of the laser As a result of photodetection at the FD photo diode exit in the course of establishment of self-oscillations in the closed system the OEO is formed a harmonious radio-frequency signal at a frequency $f_{0} $ with noise. The radio-frequency noise in the OEO caused by noise of the laser can be interpreted as transformation of fluctuations. It is possible to consider that in optical part the LOG is formed by the MTs modulator and the light guide the interferometer with different geometrical lengths of shoulders. The interferometer together with the photo diode will transform frequency (or phase) noise of the laser to phase noise of photocurrent. Noise at FD exit in the opened system the OEO is defined by the conversion noise of the laser of pro-detected by AM-FM $S_{\mu AM-FM} $, FM noise of the laser of $S_{F} {\rm (}\omega {\rm )}$ and AM noise of $S_{\mu AM} $ the laser. AM, as a rule, can be neglected noise because of their trifle. As a result of the carried-out analysis the spectral density of fluctuation of currents determined by noise of the laser at the photo diode exit the LOG is equal in the opened system \[{ \ \; } S_{\mu}(\omega)= S_{\mu AM} {\rm \; }+S_{\mu AM-FM} +S_{F} \] \begin{equation} \label{GrindEQ__12_} {\rm \; \; }S_{\mu AM} {\rm (}\omega _{} {\rm )=\; }\frac{S_{\beta AM} D_{AM} \cdot U_{0M}^{2} }{(\omega _{} -\omega _{0} )^{2} T_{0L}^{2} \cdot 0,25+B_{L}^{2} } {},{} (11) {\rm \; \; }S_{\mu AM-FM} {\rm =\; \; }\frac{4\cdot S_{\beta _{} } D_{FA} \cdot U_{0}^{2} \cdot \sin [2\pi \nu _{0} (T_{M1} -T_{M2} )]\cdot G_{12} }{(\omega _{} -\omega _{0} )^{2} T_{0L}^{2} \cdot B_{L}^{2} }{}, (12)\end{equation} \begin{equation} \label{GrindEQ__13_} S_{F} {\rm (}\omega {\rm )\; }\approx \frac{4\cdot S_{\beta _{} } D_{FM} U_{0}^{2} \cdot G_{12} }{(\omega -\omega _{0} )^{2} T_{0L}^{2} \cdot B_{L}^{2} },{}{} (13)\end{equation} where $G_ {12}$ - coefficient of suppression of phase fluctuations of optical radiation of the laser which is defined as $G_{12} =1-[(\gamma _{1L\psi } -\gamma _{2L\psi } )+\exp (-\Delta T_{M} \cdot \Delta \nu _{L} )]^{} $, $\gamma _{1L\psi } $ and $\gamma _{2L\psi } $ - spatial constants of fluctuations of a phase of radiation of the laser at the exit of OK1 and OK2 channels of the Makha-Zendera modulator respectively. In practice to create ideal, identical optical channels in MTs directed the otvetvitelyakh it isn't possible and really coefficient of suppression of составляет $G_{12} \approx 10^{-1} -10^{-3} $ . 
The main mechanisms of formation of noise at the photo diode exit in the opened system the OEO with the Makha-Zender modulator are: transformation or conversion of phase noise of the laser in amplitude, suppression of conversion and phase noise with coefficient of $G_{12} $ due to coherent addition of fluctuations with different, but approximate equal delays at the photo diode exit. 2.4 The differential equations of the OEO optoelectronic generator with fluctuations The differential equation for the OEO taking into account the pro-detected fluctuations of the optical bearing is removed in the assumption that nonlinearity of the amplifier is defined by the average steepness of $S_{{\rm HY}} $, and filter parameters - own frequency of $f_ {F0} of $ and a constant of time of $T_{F} $ respectively. Thus amplitude of self-oscillations of $u(t)$ the LOG is defined as \begin{equation} \label{GrindEQ__14_} {\rm \; }\frac{d^{2} u}{dt^{2} } {\rm +}\frac{1}{T_{F} } \cdot \frac{d^{} u}{dt} {\rm +\; 2}\pi f_{F0} u{\rm =\; }S_{{\rm HY}} [E_{0L}^{2} K_{BLZ} \cdot u(t-T_{BC} )]+\Psi _{n}{} {}, {} (14) \end{equation} where $\Psi _{n} $ - the noise component of tension at the photo diode exit formed by the pro-detected fluctuations of a phase and amplitude of the optical U-0024 bearing the laser, and $\Psi _{n} =S_{{\rm HY}} \cdot K_{BLZ} \cdot \mu _{n} $. The differential fluctuation equation together with (4-5) for the laser forms full system of the equations with fluctuations. They allow to find solutions for radio-frequency ranges at different sizes of width of the line of laser $\Delta \nu _{} $ which are presented in fig. 3 - 6. 2.5 Range of radio-frequency fluctuations of the OEO Using expressions (11) - (13) and the equations (14) approximate expression for the spectral density of power of phase noise OEO ${\rm \; }S_{\Psi AG} $ from frequency detuning $\omega -\omega _{0} $: \begin{equation} \label{GrindEQ__15_} S_{\Psi AG} {\rm (}\omega _{} {\rm )=\; }\frac{4(\Delta \nu )^{2}S_{\beta }k_{E}D_{FM}G_{12} (1+\sin (\nu _{0} \cdot \Delta T_{M} )}{ (\omega _{} -\omega _{0} )^{2}T_{0L}^{2} \cdot \left|K_{BLZ} \right|\cdot E_{0L}^{2} \cdot [1+\cos (\omega -\omega _{0} )T_{BC} ]^{2} }.{}{}{} (15)\end{equation} Expression (15) establishes connection of ${\rm \; }S_{\Psi AG} $ with a width of the spectral line of radiation of laser $\Delta \nu $, a difference of delays in modulator MZ $\Delta T_{{\rm M}} $, a delay of VS of $T_{BC}^{} $ and capacities of the laser. Influence of width of the line of laser $\Delta \nu $ for spector $S_{\Psi AG} (\omega )$ especially is shown when detuning from carrier frequency $\omega -\omega _{0} \approx 2\pi \cdot \Delta \nu _{} $comparable with $\Delta \nu _{} $. At big detuning $\omega -\omega _{0} \gg 2\pi \cdot \Delta \nu _{} $ spector $S_{\Psi AG} (\omega )$ is defined by time of delay of $T_{BC} $ in the light guide. At a big difference of delays in channels of the modulator, for example, at $\Delta T_{{\rm M}} =10^{-4} c$, at $T_{BC}^{} =10^{-9} c$, $\Delta \nu _{} \approx 10^{3}$Hz from (15) follows, that \begin{equation} \label{GrindEQ__16_} {\rm \; }\Delta f_{0,5} \approx 3,8\cdot \Delta \nu _{} ^{3/4}{}{}.(16) \end{equation} In this case at $(\Delta T_{M} \cdot \Delta \nu _{} )\to 1$ with the help the OEO can perform measurements of width of the spectral line of laser \[\Delta \nu _{} ^{} {\rm \; }\approx [\Delta f_{0,5}^{4/3} ]/3,8^{4/3} \]. 
On the contrary, at a small difference of delays in channels of the modulator that is when work of $(\Delta T_{M} \cdot \Delta \nu )\to 0$, the LOG is the low-noise oscillator with extremely low level of phase noise. For example, at a small difference of delays in channels of modulator $\Delta T_{{\rm M}} =10^{-12} с $, ${}$ $\Delta \nu \approx 10^{3} $Hz, ${}$ $T_{BC}^{} =10\cdot 10^{-6} $ с, ${}$ $G_{R} =0,999$, ${}$ $S_{\beta _{} } \cdot D_{FM} =10^{-12} $ width of the OEO spectral line is about four orders less than width of the line of the laser \begin{equation} \label{GrindEQ__17_} {\rm \; }\Delta f_{0,.5} \approx 3,8\cdot 10^{-4} \cdot \Delta \nu _{} ^{3/4} {}(17). \end{equation} 3. Results of computer modeling OEO The solution of the equation (14) allowed to find ranges of tension of self-oscillations of $u_ {g} (t)$ of the OEO by means of numerical methods, using applied programs of a MATLAB package, for different values of width of the spectral line of the laser (fig. 3) and different sizes of delays of $T_ {BC} ^ { } $. In fig. 3 the spectral density of power of phase fluctuations of the laser taken at numerical modeling with laser radiation strip width $\Delta \nu _{} $= 10kHz ( curve 1) are presented, 1 MHz ( curve 2); and 10 MHz ( curve 3) at $\nu_{0} =1,29\cdot 10^{14} $${}^{ }$Hz. Thus spector were approximately described by Lorentz's function. Fig. 3. The spectral density of power of phase noise of radio-frequency self-oscillations the OEO from relative detuning for different sizes of rated time of supervision of transition process of the subbearing $t _{0}$ =$5, 14, 29, 35, 150$. $t_{0} $ is rated time of the beginning of development of of radio-frequency oscillations $t _{1}$ in the optoelectronic oscillator OEO. Rated time from a reference mark for the period of a delay in VOLZ $t _{0} = \frac{t _{1}}{ T_{BZ} }$. Computer modeling was made at values $\nu _{0}$ =$129 · 10^{12} $Hz. The question of formation of a range subbearing the LOG was studied and solutions of a range subbearing for various times of transition process (fig. 3) (for laser spector width $\Delta \nu _{} $=129 MHz, $\frac{\Delta \nu _{} }{\nu _{0}} $=$10^{-6}$, by $\Delta \nu _{} =1,29 \cdot 10^{14}$ Hz). In fig. 3 it is visible that process of formation and establishment of a range the LOG takes long time to multiple about 100 times of a delay in the light guide. In fig. 4 and fig. 5 results of numerical modeling of spectral density of phase noise for the OEO with not dispersive and dispersive line of a delay are shown. As a result of research conclusions are drawn that at small delays in the light guide the OEO with not dispersive line of a delay has considerable impact on a range the noise connected with phase and amplitude conversion (fig. 4). At increase in width of the line of the lazera $\Delta \nu _{} $ Delta in a spector of the OEO appear additional noise components. Fig. 4. The spectral density of power of phase noise of radio-frequency self-oscillations the OEO from not dispersive VOLZ from detuning on a radio frequency from width of spectral density of optical radiation of the laser subbearing 8,2 GHz at different values. Computer modeling on the basis of the fluctuation equations (18) OEO at $\Delta \nu _{} $ = 10kHz (a curve 1), 1 MHz (a curve 2); and 10 MHz (a curve 3) at $\nu _{0}$ =129 · 1012 Hz. The dispersive steepness of delay in the light guide made 10-7, length of the light guide-200 m. 
Dependences at $\Delta \nu _{} $ = 50 MHz (a curve 4) и при $\Delta \nu _{} $ =10 кГц (кривая 5) . Fig. 5. The Spectral density of power of phase noise from relative detuning (radio-frequency subbearing) self-oscillations the LOG with the dispersive fiber-optical line of a delay. The dispersive steepness of delay in the light guide, length of the light guide is $L=1000$ m. Computer modeling on the basis of the fluctuation equations (18) OEO ${}{}$at $\Delta \nu _{}$= 10 kHz ( curve 1), 1 MHz ( curve 2); and 100 MHz (curve 3). In the optoelectronic oscillator OEO with the dispersive line of a delay the increase in width of a range of the laser (fig. 5) more than 100 MHz is led due to dispersion of the fiber light guide to considerable expansion of a range by the LOG. Due to dispersion of the light guide the range has remarkable feature - edge structure - periodic dependence on detuning on a radio frequency. The period of this dependence is proportional to work of time of dispersion in the light guide on laser radiation strip width $DF\sim \Delta v\cdot \tau _{D} $ . When using highly dispersive light guides or low coherent lasers with a width of line more than 100 MHz in the spectral line of generation the LOG ostsillyation on detuning frequency are observed. Such edge structure of a radio-frequency range the LOG is similar to structure of optical spectra of laser diodes with highly dispersive active environments. 4. Experimental researches Some prototypes the the optoelectronic oscillator OEO of range of the microwave oven with different laser diodes of a rating radiating on lengths of waves 1310 nanometers or 1550nm with the maximum output power of optical radiation approximately from 10 to 20 mW were experimentally investigated. In fig. 1 (b) the photo of one of samples collected according to the scheme of fig. 1 (a) is presented . As the photodetector the FD photo diode on the basis of InGaAs was used. The radio-frequency filter represented the microwave dielectric resonator with the loaded good quality of $Q\approx 1000$ executed on ceramics and with an own frequency about 8,2 GHz. In the model it had took place broadband to $ 15$ GHz, modulation of laser radiation which was carried out by the Makh-Zender modulator of Hitachi firm. When carrying out experiments single-mode light guides with lengths from 60 m to 4640 m were used. Experimental amplitude and frequency adjusting curves are given in fig. 6. With different lengths in system the OEO it is received steady generation single-frequency self-oscillations at a frequency close to 8,2 GHz. In fig. 6 in in experimental frequency dependences on current of a rating of the laser diode it is possible to observe jumps of frequency (fig. 6 b) that is connected with reorganization of frequency on the next types of oscillations. In the OEO it is received short-term (on an interval of time of 1 min.) relative instability of frequency (mean square according to Allan) the generated $1\cdot 10^{-8} $ subbearing not worse at the room temperature. Level of phase noise when using various lasers of a rating made $S_ {c} \approx$ - 80 ...-140 dB/Hz on detuning of 1...10 kHz from the frequency generated by the microwave oven subbearing (fig. 4) and depended on width of the line of radiation of the laser. These experimental dependences will well be coordinated with settlement at the accounting of stabilization effect on lengths of the fiber light guide more than 50 m. 
Not compliance of settlement and experimental curve phase noise of $S_ {c} $ is explained by influence on them of polarization and heterogeneity on space of radiation of real laser sources which weren't considered at calculation. Fig. 6. Experimental dependences in the OEO of the microwave oven of range on current of shift and) capacities of the laser diode, b) amplitudes of tension of self-oscillations at an average radio frequency of self-oscillations of =8,2 GHz, c) radio frequencies of self-oscillations for various lengths of light guides 60m (1) and 70 m (2). 5. Conclusions. The carried-out analysis showed that a spector of the OEO optoelectronic oscillator the pro-detected conversion phase and amplitude noise and phase fluctuations define noise of optical radiation of the laser. The size of spectral density of power the OEO is proportional to a square of width of the line of optical radiation of the laser. The increase in width of the spectral line of the laser of a rating leads the OEO to expansion of a radio-frequency range. On condition of respectively small and big difference of delays in channels of the OEO with MZ modulator it is possible to use respectively as the low-noise oscillator with record-breaking small levels of phase noise or \textit { the measuring instrument of small width of the spectral line of radiation of lasers less 10 kHz. 6. Anknowlegements. Authors express gratitude for the shown interest and participation in discussions to professor Udalov N. N. and professor Kapranov M. V. 7. Literature: 1.Nakazawa M., Nakashima T., Tokuda M. An optoelectronic self-oscillatory circuit with an optical fiber delayed feedback and its injection locking technique.//J/Lightwave Technol. - 1984. - V.2, No. 5, - P. 719-730. 2. Grigoryants V. V., Dvornikov A.A., Ilyin Yu.B. and Konstantinov V. N. Prokofiev V. A. Generation of radio signals in system ''the laser - the optical line of a delay''.//Quantum. electron. - 1984. - T.11, No. 4. - Page 766-775. 3. Grigor'yants V.V., Il'in YU.B. Laser optical fiber heterodyne interferometer with frequency indicating of the phase shift of a light signal in an optical waveguide.//Optical and quantum electronics.-1989.-№ 21. - P.423-427. 4. Bortsov A. A., Grigoryants V. V., Ilyin Yu. B. Influence of efficiency of excitement of light guides on oscillator frequency with the differential fiber-optical line of a delay//Radio engineering. - 1989 - No. 7. - S.84-89. 5. А.с.№1538265 USSR, MKI3 H03K 9/00A. Device of functional transformation to A. A. frequencies / Fighters, Ilyin Yu. B., etc. (USSR). - 9 with.-1989 g. 6. X. S. Yao and L. Maleki, ''Optoelectronic microwave oscillator,'' \textit { J. Opt.Soc. Amer. B, Opt. Phys. }, vol. 13, no. 8, pp. 1725 - 1735, 1996. 7. Bortsov A. A., Ilyin Yu. B. The microwave oven differential optoelectronic oscillator with the lowest level of phase noise//Radio optical technologies in instrument making: Tez.dokl. The II-nd научн. - техн. конф. 14 - On September 21, 2004 - Sochi, 2004 - S.84-86. 8. J. J. McFerran, E. N. Ivanov, A. Bartels, G. Wilpers, C. W. Oates, S. A. Diddams, and Hollberg, ''Low-noise synthesis of microwave signals from an optical source,'' Electron. Lett. 41, 650-651 (2005). 9. Savchenkov, A. A. Ilchenko et al. Low Threshold optical oscillations in a whispering gallery mode CaF2 resonator. Physical Review Letters 93, 243905 (2004). 10. 
Tsarapkin D.P - Methods of generation of the microwave oven of fluctuations with a minimum level of phase noise: The thesis on competition of the Doctor of Engineering. - M, 2004. - 413 pages. 11. Patent for the invention No. 2282302 RU, MPK 3 7 H03 C3/00. The shaper of a frequency-modulated signal / A. A. Fighters, Ilyin Yu. B. - 10 pages of 2004. 12. Patent No. 44902 RU, МПК$ { } ^ {3} $7 H03 C3/00. The shaper of a frequency-modulated signal / A. A. Fighters, Ilyin Yu. B. - 10 pages of 2004. 13. A. A. Savchenkov, A. B. Matsko, V. S. Ilchenko, and L.Maleki, ''Optical resonators with ten million finesse,'' Opt. Express \15, 6768-6773 (2007). 14. A. A. Bortsov. Management of frequency in the laser oscillator with the compound fiber-optical line of a delay//Avtoref. edging. yew. on соиск. уч. step. Cand.Tech.Sci. - M.:MEI. - 2005. - Page. 15. Bortsov A. A. Fazochastotnaya and amplitude-frequency characteristics of the mezapoloskovy quantum-dimensional laser diode with a strip of frequencies of modulation to 12 GHz//Radio engineering - 2006 g2006 - Page 43 - 47 16. Lex M. Fluctuations and coherent phenomena. M.:Мир, 1974 17. Lebedev A.K. Theory of the laser M.: MEI, 1998 18. Zhalud V., Kuleshov V. of N. Noise in semiconductor devices. Under the general edition of A. K. Naryshkin. - M.: Sovetstky radio, 1977 of-416 pages. Article came to editorial office of the Radiotekhnika magazine in July 8, 2008. RADIOTEKHNIKA magazine, 2010,№2 {gallery}publication/Radote2010{/gallery}
CommonCrawl
Source location and timing of energy release for triggered event Possible source mechanism inferred from single-station data Source location and mechanism analysis of an earthquake triggered by the 2016 Kumamoto, southwestern Japan, earthquake Takeshi Nakamura1Email authorView ORCID ID profile and Shin Aoi1 Earth, Planets and Space201769:6 © The Author(s) 2017 Received: 1 August 2016 Accepted: 9 December 2016 Published: 3 January 2017 The 2016 Kumamoto earthquake (Mw 7.0) occurred in the central part of Kyushu Island, southwestern Japan, on April 16, 2016. The mainshock triggered an event of maximum acceleration 700 gal that caused severe damage to infrastructure and thousands of homes. We investigate the source location of the triggered event, and the timing of large energy release, by employing the back-projection method for strong-motion network data. The optimal location is estimated to be [33.2750°, 131.3575°] (latitude, longitude) at a depth of 5 km, which is 80 km northeast of the epicenter of the mainshock. The timing is 33.5 s after the origin time of the mainshock. We also investigate the source mechanism by reproducing observed displacement waveforms at a near-source station. The waveforms at smaller-sized events, convolved with the source time function of a pulse width 1 s, are similar to the signature of the observed waveforms of the triggered event. The observations are also reproduced by synthetic waveforms for a normal-fault mechanism and a normal-fault with strike-slip components at the estimated locations. Although our approach does not constrain the strike direction well, our waveform analysis indicates that the triggered earthquake occurred near the station that observed the strong motions, primarily via a normal-fault mechanism or a normal-fault with strike-slip components. 2016 Kumamoto earthquake Triggered earthquake Strong motion Back-projection method Green's function Hypocenter determination Source mechanism Two destructive earthquakes (named the 2016 Kumamoto earthquake) occurred in the central part of Kyushu Island, southwestern Japan (Fig. 1a), on April 14 and 15, 2016. The Japan Meteorological Agency (JMA) reported the moment magnitude and the source depth of the first and second mainshocks as 6.2 and 11.4 km, and 7.0 and 12.4 km, respectively. The mainshocks occurred in two active faults (Futagawa and Hinagu faults), predominantly via strike-slip mechanism with E–W compression (Fig. 1a). Location map of the 2016 Kumamoto earthquake and strong-motion records. a Epicenter (yellow star) and source mechanism (beach ball) of the 2016 Kumamoto earthquake occurred on April 15, 2016, and aftershocks (gray dots) occurred within 24 h. White circles and triangles indicate K-NET and KiK-net strong-motion stations and stations used for the back-projection analysis, respectively. Circles, which are located in the direction of N45°E from the epicenter of the mainshock, indicate stations of strong-motion waveforms shown in (b) and (c). The Japanese Islands and the focal area are shown in the top-left inset. b North–south component of acceleration waveforms. Black and red traces show waveforms at K-NET and KiK-net stations and K-NET station OIT009, respectively. Station codes are shown in the right side of each trace. Thick gray lines indicate an identified seismic phase different from phases of the mainshock, and S-wave propagation of apparent velocity 3.5 km/s from station OIT009. c Vertical component of 20 Hz high-pass filtered acceleration waveforms. 
Thick gray lines indicate an identified seismic phase different from phases of the mainshock, and P-wave propagation of apparent velocity 6.3 km/s from station OIT009 The fault model of the Mw 7.0 second mainshock (hereafter "mainshock"), evaluated from strong ground-motion data, showed rupture propagation in the northeast direction and maximum slip of 3.8 m at an epicentral distance of approximately 10–30 km (Kubo et al. 2016). Interferometric synthetic aperture radar (InSAR) data also showed large surface displacements at an epicentral distance of approximately 5–30 km in the northeast area (Ozawa et al. 2016). Hypocenter analysis of aftershocks indicated a linear distribution with total length of 50 km in the northeast direction from the epicenter of the mainshock (Yano and Matsubara 2016). In the off-fault area about 80 km northeast from the epicenter, the hypocenter distribution showed isolated seismic activities that are not directly continued to the main fault segments (Fig. 1a). Strong ground motions of more than 700 gal, which caused severe damage to infrastructure and thousands of homes (Oita prefecture, http://www.pref.oita.jp/site/bosaiportal/280414jisin.html, last accessed on July 19, 2016), were observed in the area of the isolated activities (Aoi et al. 2016). The observed acceleration is inconsistently large compared with that estimated from the empirical relationship as a function of the hypocentral distance. In Fig. 1a, the colors indicate the anomalously large amplitude of acceleration in the isolated activity area. We show seismic waveforms observed at strong-motion network stations of K-NET and KiK-net (Okada et al. 2004) operated by the National Research Institute for Earth Science and Disaster Resilience (NIED) in and around the source areas at the mainshock in Fig. 1b. S-wave propagation with apparent velocity of 3.5 km/s is found at stations located off the source fault of the mainshock after S-wave propagation from the mainshock. In the high-frequency range >20 Hz to suppress S- and surface waves from the mainshock, the propagation of P-waves with apparent velocity of 6.3 km/s is found at these stations (Fig. 1c). These phases start to propagate at 30–40 s after the origin time of the mainshock and differ from the coda phases of the mainshock. These observations suggest that an event might be triggered after the mainshock, possibly by external perturbations associated with the mainshock rupture such as stress changes to the area. In this study, we analyze the source location and timing of seismic energy release for the triggered event by employing the back-projection method (Spudich and Cranswick 1984). In the waveform data, contaminations of coda waves from the mainshock into the onset of body waves from the triggered event are found at most stations, making it difficult to identify the onset. The back-projection method determines the event location and timing by evaluating coherent signals in stacked waveform and does not use the onset data. It is anticipated that the method may provide an alternative approach to estimating the source location and timing for such event data. We also reproduce observed waveforms of the triggered event by using waveforms of smaller-sized events as Green's functions and infer the source mechanism of the triggered event. We verify the estimated source location and mechanism by calculating synthetic waveforms. 
These source data investigated in this study would contribute to quantitatively studying the causes of observed strong motions, seismic activity around the triggered event, and stress transfer from the mainshock to the triggered event. We employ the back-projection method (Spudich and Cranswick 1984) to investigate the hypocenter location and the timing of large energy release of the triggered event. For the investigation, we select 17 strong-motion stations of K-NET and KiK-net (white triangles in Fig. 1a; Additional file 1: Table S1) that record clear signals from the triggered event. Data from stations located in the northeastern and southwestern areas, which correspond to areas along the rupture direction, are not used. This is because there is significant contamination of mainshock coda waves within the frequency range of interest, resulting in significantly low signal-to-noise ratio. The back-projection method has often been applied to investigate the rupture process using coherent phases stacked from high-frequency seismic waveforms observed in seismic array, without assuming source mechanisms and dimensions and without calculating synthetic waveforms (e.g., Ishii et al. 2005). The method has been also applied to investigate the hypocenter determination of events such as tremor activities (e.g., Kao and Shan 2004) and micro-events (e.g., Vlček et al. 2015). By enhancing the coherency of stacked waveforms, the method might be appropriate for analyzing waveform data contaminated by coda waves of the mainshock as the present case, in which it is difficult to identify the onset of body waves. We also suppose that the estimates obtained via this approach will complement those of the conventional hypocenter determination approach. We apply the method to the S-wave component of envelope waveforms. We integrate acceleration data of the horizontal component within the frequency range 3–8 Hz and produce the vector sum of horizontal data from the envelope of each component using the Hilbert transform. We then calculate the mean amplitude every 0.1 s within a moving time window of 0.2 s. The obtained smoothed envelope waveform is normalized by the maximum amplitude of the phase of the triggered event at each trace. The reason for the frequency range of 3–8 Hz is that we aim to suppress coda waves from the mainshock, by high-pass filtering in a corner frequency of 3 Hz, and to avoid using complex waveforms including P-wave and other high-frequency components from the triggered event, by low-pass filtering in a corner frequency of 8 Hz. We stack the envelope waveforms for an assumed source location and timing of energy release using the following equation, which basically replicates that proposed by Kao and Shan (2004): $$S_{i} \left( t \right) = \frac{1}{N}\mathop \sum \limits_{n = 1}^{N} \left\{ {\frac{1}{2M}\mathop \sum \limits_{m = - M}^{M - 1} A_{n} \left( {t + T_{i,n} + m\Delta t} \right)} \right\},$$ where S i is the stacked waveform for the source grid i as a function of the time t for large energy release of the triggered event, A n is the envelope waveform at station n, Δt is the time interval of data sampling, T i,n is the travel time of S-wave from the source grid i to station n, M is the number of time points within the half-length of the time window, and N is the number of stations. For travel-time calculations, we use the one-dimensional (1D) velocity structure model that is routinely used by Kyushu University to determine hypocenters in this area (Fig. 2). 
We incorporate station corrections into the calculated times to correct the travel times induced by lateral variations in the three-dimensional (3D) velocity structure. The corrections are obtained from travel-time residuals that are estimated from the hypocenter determinations for 76 events that occurred within our study area (Step 1 in Fig. 3a; Table 1) since 1996. The travel-time calculations do not utilize apparent velocity, because of the difficulty, at near-source stations, in assuming incident plane waves from the horizontal direction. We identify the optimal grid point in time and space from the largest amplitude of the stacked waveform. By the stacking analysis of using multiple station data, we suppress the local site effect at a station for the estimation. Velocity structure models used in this study. Gray thick lines indicate P- and S-wave structure models used for the back-projection analysis. The model is used by Kyushu University for routine analysis of hypocenter determination in this area. Red and blue lines indicate P- and S-wave structure models used for calculating synthetic waveforms. The model is composed of the velocity structure obtained by discretization of the structure model by Kyushu University, and the velocity structure of sediment layers around station OIT009 by Koketsu et al. (2008) Hypocenter and origin time of the triggered event estimated by the back-projection method. a Areas used for the multi-scale analysis. A yellow star indicates the epicenter of the mainshock. White triangles indicate stations used in the analysis. b Epicenter of the triggered event obtained from multi-scale analysis. Color at each grid indicates the maximum stacked amplitude within the time range in Table 1. A red and a dashed red circles indicate the optimal and the suboptimal result for the epicenter of the triggered event, respectively. Panels show results for the distribution of the stacked amplitude for spatial grid intervals of 0.1°, 0.05°, 0.025°, and 0.0125°. Each panel shows the distribution at the depth of the optimal result. c Distribution of the stacked amplitude at an elapsed time of 33.3–34.0 s from the origin time of the mainshock. The distribution shows the result for the grid interval of 0.0125° at the depth of 5 km. d Distribution of the maximum stacked amplitude within the time range for the grid interval of 0.0125° at each depth Spatial grid size and search area and time for the back-projection analysis Step number Grid size in horizontal direction (°) Area for grid search (latitude/longitude) Grid size in vertical direction (km) Depth range for grid search (km) Time range for grid search (s) N32.9750–33.4750/E131.0825–131.5825 N33.1500– 33.4000/E131.2075–131.4575 The grid sizes and search regions used in this study are summarized in Table 1. We conduct multi-scale analysis to efficiently obtain the optimal solution: We first search for the optimal solution for a rough grid size across a wide area in Step 1 (see Fig. 3a; Table 1) and then search for the next solution for a finer grid size in a smaller area in Step 2 (see Fig. 3a; Table 1). The smallest grid in the horizontal direction for the search is 0.0125° (Step 4 in Fig. 3a; Table 1), which is comparable to the maximum wavelength for our use of waveforms. We show estimated results for the triggered event by the back-projection method in Fig. 3b–d. Since the stations used are distributed in a NW–SE direction, Fig. 
The results estimated for the triggered event by the back-projection method are shown in Fig. 3b–d. Since the stations used are distributed along a NW–SE direction, Fig. 3b–d shows large stacked amplitudes in a spatially linear distribution along the NE–SW direction, indicating that the resolution of our analysis in this direction is lower than in the NW–SE direction. The optimal solution for the epicenter and depth of the large energy release of the triggered event is latitude 33.2750°, longitude 131.3575°, and a depth of 5 km. The best-fitting source is located northeast of K-NET station OIT009. The timing of the energy release is 33.5 s after the origin time of the mainshock (16:25:05 UTC, April 15, 2016). The second-best solution is found north of station OIT009 (dashed red circle in Fig. 3b–d); its hypocenter is located at latitude 33.2875°, longitude 131.3325°, at a depth of 5 km, and its timing is 33.8 s. We checked the performance of our analysis using data from well-relocated events that occurred in the area of Step 4 and were used for analyzing the travel-time residuals, and we evaluated the estimation error by comparing the estimated hypocenters with those listed in the JMA hypocenter catalogue. The standard errors are 4.6 and 4.7 km in the horizontal and depth directions, respectively, and the error in time is 1.4 s. We emphasize that the results obtained in time and space from high-frequency waveform data do not correspond to the origin time and the initial break point of the event, but instead indicate the timing and the area of large high-frequency energy release. Studies of high-frequency seismic radiation (e.g., Spudich and Frazer 1984) showed that the radiated energy is generated by changes in slip and/or rupture speed and by the initiation of large slip. Following these results, the estimates from our analysis may correspond to the source of large high-frequency energy release in and around a region of large slip. Figure 4 shows the observed waveforms, sorted by epicentral distance based on the estimated epicentral location of the triggered event, together with the stacked waveform; it confirms the agreement of the predicted arrival times with the phase of the triggered event at the stations (Additional file 1: Figure S1) and shows that the coherency is enhanced by appropriately stacking this phase. The amplitude of the stacked waveform for the phase of the triggered event is 0.68. In Fig. 4, phases with large amplitude before 30 s are from the mainshock.

Fig. 4 Stacked and envelope waveforms. Red traces in the upper panel show the waveform stacked from the envelope waveforms using the travel times of the optimal result for the source location of the triggered event. The time t = 0 on the horizontal axis is the origin time of the mainshock. Phases with large amplitudes before 30 s are the stacked phases of the mainshock. Black traces in the lower panels show envelope waveforms at the strong-motion stations, sorted by epicentral distance based on the optimal result for the source location. Red squares indicate arrival times estimated from the optimal solution for the hypocenter of the triggered event.

From the F-net moment tensor (MT) database operated by NIED, we find that normal-fault events have occurred in and around the source area of the triggered event, which is consistent with geological survey findings that active faults in this area are primarily of the normal-fault type (e.g., Kamata 1989).
However, the mechanism of the triggered event has not been determined, because source mechanism analyses (such as analyses of P-waveform similarity between events, the polarity of the initial P-wave, the particle motion of the initial S-wave, or waveform inversion of surface waves) are difficult owing to the contamination by coda waves from the mainshock at most stations. The iterative deconvolution method (Kikuchi and Kanamori 1991), which can decompose the phases of multiple subevents and estimate their mechanisms, is also difficult to apply in the present case because of the large difference in low-frequency signal level between the mainshock and the triggered event. We therefore reproduce the waveform of the triggered event from those of smaller-sized events (Mw 4.1–5.1, as listed in Table 2) around station OIT009 in order to infer the mechanism. We treat the waveforms observed for the smaller events as Green's functions for the triggered event. We convolve these waveforms with a cosine-type source time function and compare the convolved waveforms with those of the triggered event, an approach similar to reproducing the waveforms of a large earthquake using empirical Green's functions (Hartzell 1978).

Table 2 Source location and double-couple mechanism of smaller-sized events around station OIT009 from the F-net MT database (columns: origin time in UTC, yyyy/mm/dd, hh:mm:ss; latitude in °; longitude in °; strike, dip, and rake in °); the earliest event listed occurred on 1999/01/01 at 14:30:04.

We use the horizontal components of displacement waveforms, high-pass filtered with a corner frequency of 0.3 Hz, for the convolution. One reason for focusing on this frequency range is that the spectrum of the phase from the triggered event at station OIT009 shows a dominant frequency of approximately 0.3–2 Hz, which may indicate the dominant frequency of the source spectrum. Another reason is that, at frequencies >0.3 Hz, the phase from the triggered event has larger amplitudes than the "noise" formed by coda waves from the mainshock. At the other stations, the phase from the triggered event is either not clearly found in the displacement within this frequency range or is significantly disturbed by mainshock coda. The waveforms used as Green's functions are those of events that occurred at shallow depths (<20 km) with epicenters within 10 km of station OIT009. The source mechanisms, magnitudes, and epicenter locations of the five smaller events used for the analysis, as listed in the F-net MT database, are shown in Table 2 and Fig. 5. The source mechanisms of all five events are of normal-fault type or normal-fault type with a strike-slip component. The strike of the events on January 1, 1999 (Event 1) and April 29, 2016 (Event 5) is N81°E and N117°E, respectively, and that of the other events is approximately N90°E. The magnitudes are Mw 4.1–5.1. Four of the events occurred within 2 weeks after the triggered event, northeast of station OIT009 near the estimated epicenter of the triggered event, and may be aftershocks of the triggered event. We use the cosine-type source time function for the convolution of the waveforms of these five smaller events; by trial and error (comparing the waveform of the triggered event with the convolved waveform), we obtain 1 s as the optimal pulse width of the function.
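The convolution step can be summarized in the short sketch below. This is illustrative only and assumes a raised-cosine pulse as the "cosine-type" source time function; the exact functional form, the helper names, and the filter design are assumptions introduced here, not taken from the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def cosine_stf(width, fs):
    """Raised-cosine source time function of total duration 'width' (s), unit area."""
    t = np.arange(int(round(width * fs))) / fs
    stf = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / width))
    return stf / (stf.sum() / fs)

def synthesize_from_egf(disp_small_event, fs, width=1.0, fc=0.3):
    """Convolve a small-event displacement trace (empirical Green's function) with the
    source time function, then high-pass filter at fc (0.3 Hz in the text)."""
    conv = np.convolve(disp_small_event, cosine_stf(width, fs))[:len(disp_small_event)] / fs
    b, a = butter(2, fc, btype="highpass", fs=fs)
    return filtfilt(b, a, conv)
```

The convolved trace is then compared with the observed waveform of the triggered event, and the pulse width is adjusted by trial and error (1 s in this study).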
Fig. 5 Source location and mechanism of smaller-sized events around station OIT009. A red circle and a dashed red circle indicate the optimal and the suboptimal result for the epicenter of the triggered event, respectively. White triangles indicate the locations of K-NET and KiK-net stations. Green stars indicate the source locations of the smaller events. Double-couple mechanisms, origin times, and magnitudes of the smaller events are shown at the bottom.

The waveform comparisons are shown in Fig. 6. For the vertical component, the waveform amplitude of the triggered event is small compared with that of the horizontal components and is contaminated by coda waves from the mainshock. The vertical-to-horizontal amplitude ratio of the convolved waveforms for the five smaller events is relatively large and does not reproduce the ratio observed for the triggered event. The amplitude ratio at a station very close to the source can change significantly with slight differences in mechanism and location between two events, which may account for the discrepancy found between the components.

Fig. 6 Displacement waveforms at station OIT009 modeled using waveforms of smaller-sized events. Black and purple traces show the waveforms of the triggered event and those of the smaller events convolved with the source time function with a pulse width of 1 s, respectively. Both waveforms are high-pass filtered with a corner frequency of 0.3 Hz. The waveform of the triggered event is time-shifted to match the S-wave phase of the convolved waveforms. The time t = 0 on the horizontal axis is the origin time of the smaller events.

For the north–south component, we find a slight time shift of the initial S-phase between the triggered event and Events 1 and 4. Another difference, between the triggered event and Event 3, is found in the amplification of later phases after the initial S-phase; these phases may be amplified not by source or path effects but by shallow sediment layers, since they contain monochromatic signals. For the east–west component, the initial S-phase with large amplitudes is reproduced well by the convolved waveforms of the five smaller events. For Events 2–5, the second phase of the triggered event (appearing 2 s after the initial phase) is also reproduced. The reproduction is further verified with convolved waveforms from a small Mw 3.7 event of strike-slip type with a slight normal-fault component (Additional file 1: Figure S2), which was rapidly determined by waveform analysis and listed in the AQUA CMT database (Matsumura et al. 2006), indicating that the use of intermediate-sized earthquakes (Mw 4.1–5.1) as Green's functions in this analysis is acceptable. The convolved waveform of Event 1 is the only one that does not reproduce the second phase. Of the five smaller-sized events, the horizontal-component phase of the convolved waveform for Event 5 appears to match that of the triggered event most closely. Although we cannot determine the fault parameters from these waveform comparisons, we speculate that the source may be located in the northeastern area, near station OIT009, and may be of normal-fault type or normal-fault type with strike-slip components. The maximum amplitude of the high-pass filtered (corner frequency 0.3 Hz) waveform of the triggered event is 13 cm, which is 17–340 times larger than that of the convolved waveforms for the smaller events. By referring to the moment magnitudes of the smaller events listed in the F-net MT database, the magnitude of the triggered event is then roughly estimated to be Mw 5.3 when using Event 5 and Mw 5.7–5.9 when using the other events.
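As a rough consistency check of this scaling, the amplitude ratio can be translated into a magnitude increment under the assumption, introduced here for illustration and not stated explicitly in the text, that peak displacement amplitude scales linearly with seismic moment when the same source time function is used; the moment-magnitude definition Mw = (2/3)(log10 M0 − 9.1) then gives ΔMw = (2/3)·log10(amplitude ratio).

```python
import numpy as np

def mw_from_amplitude_ratio(mw_reference, amp_ratio):
    """Assumes amplitude proportional to seismic moment, so dMw = 2/3 * log10(ratio)."""
    return mw_reference + (2.0 / 3.0) * np.log10(amp_ratio)

# Amplitude ratios of 17-340 relative to reference events of Mw 4.1-5.1 (values from
# the text) correspond to increments of roughly 0.8-1.7 magnitude units, consistent
# with the quoted estimates of about Mw 5.3-5.9.
print(mw_from_amplitude_ratio(4.1, 340))   # ~5.8
print(mw_from_amplitude_ratio(5.1, 17))    # ~5.9
```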
However, we note that this estimate assumes only minor differences in source mechanism and location between the triggered event and the smaller-sized events. Furthermore, the estimate depends significantly on the pulse width of the source time function used for the convolution; with a pulse width of 2 s, the estimated magnitude could range from Mw 5.5 to 6.1. Our back-projection and convolution analyses indicate that the source is located near station OIT009 and that the mechanism is of normal-fault type or normal-fault type with strike-slip components. However, the smaller events all have nearly the same mechanism. We therefore calculate synthetic waveforms for the source location estimated by the back-projection method using a wide range of mechanisms in a theoretical approach, and investigate whether other types of source mechanism are possible. We employ the discrete wavenumber method (Nakamura and Takenaka 2006) for the waveform calculation, using a 1D velocity model (red and blue lines in Fig. 2) composed of the velocity structure obtained by discretization of the model used in the back-projection analysis (gray thick line in Fig. 2) and the velocity structure of the sediment layers in this area by Koketsu et al. (2008). We use the same cosine-type source time function with a pulse width of 1 s as in the previous section. The seismic moment used is M0 = 10^13 Nm. Assuming a double-couple source, we calculate waveforms for the source location estimated in the back-projection analysis using source mechanisms with fault parameters in the ranges strike 0°–360°, dip 0°–90°, and rake −180° to 180°; varying these parameters every 10° within the given ranges yields 13,690 cases. We also calculate waveforms for the source locations and mechanisms of Events 1–5 (Fig. 5). We evaluate the fit between synthetic and observed waveforms by the variance reduction VR (%), which is commonly used in moment tensor analysis (e.g., Matsumura et al. 2006):
$$\text{VR} = \left(1 - \frac{\sum_{i}\int \left(s_{i}(t) - o_{i}(t)\right)^{2}\,\mathrm{d}t}{\sum_{i}\int \left(o_{i}(t)\right)^{2}\,\mathrm{d}t}\right) \times 100,$$
where s_i and o_i are the synthetic and observed waveforms of the ith component, respectively. A variance reduction of VR = 100% indicates a complete fit between the waveforms. We use amplitudes normalized by the maximum amplitude of the horizontal components, because the moment magnitude and the seismic moment of the triggered event, which would be required for calculating absolute synthetic amplitudes, are unknown and are not estimated in this step. We use the three-component waveforms to calculate the variance reduction; for the vertical component, the waveform amplitude is multiplied by a low weighting factor because its evaluation is hampered by significant contamination by coda waves from the mainshock. Both the observed and synthetic waveforms are displacements, high-pass filtered with a corner frequency of 0.3 Hz. The length of the time window is 4 s from the S-wave arrival time. Because arrival times are affected by three-dimensional heterogeneous structure, the synthetic waveforms are time-shifted within a range of −0.5 to 0.5 s, and the maximum variance reduction over this range is taken as the estimate.
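The variance-reduction evaluation with the allowed time shift can be written compactly as below. This is a schematic illustration: sample-based sums replace the integrals (the sampling interval cancels in the ratio), a circular shift is used for brevity, and all names are chosen here rather than taken from the authors' code.

```python
import numpy as np

def variance_reduction(obs, syn):
    """VR (%) = (1 - sum_i sum_t (s_i - o_i)^2 / sum_i sum_t o_i^2) * 100,
    with obs and syn given as lists of equal-length component arrays."""
    num = sum(np.sum((s - o) ** 2) for o, s in zip(obs, syn))
    den = sum(np.sum(o ** 2) for o in obs)
    return (1.0 - num / den) * 100.0

def best_vr_with_shift(obs, syn, fs, max_shift=0.5):
    """Shift the synthetics within +/- max_shift seconds and keep the largest VR,
    mimicking the allowance for 3-D travel-time effects described in the text."""
    n = int(round(max_shift * fs))
    best = -np.inf
    for k in range(-n, n + 1):
        shifted = [np.roll(s, k) for s in syn]   # edge effects ignored in this sketch
        best = max(best, variance_reduction(obs, shifted))
    return best
```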
In this source analysis, we assume that the hypocenter estimated by the back-projection method is located near the region of large slip. By simplifying the source to a point source, we infer the mechanism type of the triggered event as a first-order approximation. Figure 7 compares the observed waveform of the triggered event with the synthetic waveforms. As in Fig. 6, the horizontal-component phase of the synthetic waveform for Event 1 is shifted relative to the observation, a shift that is not found for Events 2–5; the second phase of the triggered event on the east–west component is also not reproduced. Because of this, a low variance reduction of −65% is obtained for Event 1. The synthetic waveforms of the horizontal components for the other events appear to explain the observation. A grid search over mechanisms for the source location estimated by the back-projection analysis yields an optimal solution of (strike 200°, dip 30°, rake −100°); note that these angles may not be uniquely constrained, owing to errors caused by the point-source assumption and to the limitation of using data from only a single station in this analysis. The synthetic waveform for the optimal mechanism determined by evaluating the variance reduction is shown at the bottom right of Fig. 7; it explains the observations with a variance reduction of 57%, indicating good agreement between the waveforms. The magnitude estimated by comparing the synthetic and observed waveform amplitudes is Mw 5.6, although this value may be an overestimate because the observed waveform is amplified by coda waves from the mainshock.

Fig. 7 Displacement waveforms at station OIT009 modeled by the discrete wavenumber method. Black and red traces show the observed and synthetic waveforms of the triggered event, respectively. The synthetic waveforms are calculated using the source time function with a pulse width of 1 s and a seismic moment of M0 = 10^13 Nm for the 1D velocity structure shown in Fig. 2. Both waveforms are high-pass filtered with a corner frequency of 0.3 Hz. The time t = 0 on the horizontal axis is the origin time of the mainshock. Shaded areas indicate the time window used to calculate the variance reduction. The synthetic waveform for the mechanism obtained by the grid search for the variance reduction is shown at the bottom right.

We plot the variance reductions estimated for the source location from the back-projection analysis, for the various mechanisms, on the triangle diagram of focal mechanisms (Frohlich 1992) in Fig. 8a, which shows the 100 mechanisms with the highest variance reduction out of the 13,690 cases. The distribution of source mechanisms indicates that normal faulting is a probable mechanism to explain the observations, although we cannot constrain the strike direction: the variance reduction is nearly invariant with respect to the strike angle, as shown in Additional file 1: Figure S3, which compares the distributions of variance reduction as a function of the fault parameters. We also plot the variance reductions estimated for the hypocenter of the second-best solution (Fig. 3b), located north of station OIT009, indicating that a normal-fault type with strike-slip components is a probable mechanism.
In contrast, the opposite style of faulting, i.e., the compressional (reverse-fault) type, does not show high variance reduction for either hypocenter (Fig. 8a, b) and is unlikely to explain the observations with synthetic waveforms. These investigations suggest that the triggered event may have occurred near station OIT009 with a normal-fault mechanism or a normal-fault mechanism with strike-slip components.

Fig. 8 Variance reduction plotted on a triangle diagram of the focal mechanism. a Mechanisms for the top 100 cases of variance reduction for the hypocenter located in the northeastern area. A red circle indicates the optimal solution for the mechanism. b Mechanisms for the top 100 cases of variance reduction for the hypocenter in the northern area. A dashed red circle indicates the optimal solution for the mechanism.

We investigated the source location and mechanism of an event triggered during the 2016 Kumamoto earthquake (Mw 7.0) on April 16, 2016. We applied the back-projection method to velocity waveforms within the frequency range 3–8 Hz. The optimal location is estimated to be [33.2750°, 131.3575°] (latitude, longitude) at a depth of 5 km, very close to the station that recorded a peak acceleration of 700 gal during the event. The timing of large energy release of the triggered event is estimated to be 33.5 s after the origin time of the mainshock. For the mechanism analysis, we reproduced the observed displacement waveforms at the station, high-pass filtered with a corner frequency of 0.3 Hz, using waveforms of smaller-sized events convolved with the source time function. The convolved waveforms for the smaller events agree with the waveform of the triggered event. Synthetic waveforms for a normal-fault mechanism and a normal-fault mechanism with strike-slip components at the estimated source locations also explain the observed waveforms. Although our approach does not constrain the strike direction well, the results of our waveform analysis indicate that the triggered earthquake occurred near the station that observed the strong motions, primarily with a normal-fault mechanism or a normal-fault mechanism with strike-slip components.

Abbreviations: AQUA, accurate and quick analysis system for source parameters; CMT, centroid moment tensor; JMA, Japan Meteorological Agency; JSPS, Japan Society for the Promotion of Science; MT, moment tensor; NIED, National Research Institute for Earth Science and Disaster Resilience.

TN conducted the data analysis and the calculation of seismic waveforms and wrote the paper. SA coordinated the strong-motion network data and contributed to the data analysis and the calculations. Both authors read and approved the final manuscript. Discussions with Hisahiko Kubo, Wataru Suzuki, and Nelson Pulido were very fruitful. Constructive comments and suggestions from two anonymous reviewers were very helpful. We used the hypocenter catalogue provided by the Japan Meteorological Agency (JMA). We used data from K-NET and KiK-net, operated by the National Research Institute for Earth Science and Disaster Resilience (NIED). This work was supported by JSPS KAKENHI Grant Number 26282105.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Additional file 1 (40623_2016_588_MOESM1_ESM.pdf): additional figures (Figures S1–S3) and a table (Table S1).

National Research Institute for Earth Science and Disaster Resilience, 3-1 Tennodai, Tsukuba 305-0006, Japan

Aoi S, Kunugi T, Suzuki W, Kubo H, Morikawa N, Fujiwara H (2016) Strong motion and source processes of the 2016 Kumamoto earthquake sequence. In: 2016 Japan Geoscience Union Meeting MIS34-06
Frohlich C (1992) Triangle diagrams: ternary graphs to display similarity and diversity of earthquake focal mechanisms. Phys Earth Planet Int 75(1–3):193–198. doi:10.1016/0031-9201(92)90130-N
Hartzell SH (1978) Earthquake aftershock as Green's function. Geophys Res Lett 5:1–4. doi:10.1029/GL005i001p00001
Ishii M, Shearer PM, Houston H, Vidale JE (2005) Extent, duration and speed of the 2004 Sumatra–Andaman earthquake imaged by the Hi-net array. Nature 435:933–936. doi:10.1038/nature03675
Kamata H (1989) Volcanic and structural history of the Hohi volcanic zone, central Kyushu, Japan. Bull Volcanol 51(5):315–332. doi:10.1007/BF01056894
Kao H, Shan SJ (2004) The source-scanning algorithm: mapping the distribution of seismic sources in time and space. Geophys J Int 157(2):589–594. doi:10.1111/j.1365-246X.2004.02276.x
Kikuchi M, Kanamori H (1991) Inversion of complex body waves—III. Bull Seismol Soc Am 81(6):2335–2350
Koketsu K, Miyake H, Fujiwara H, Hashimoto T (2008) Progress towards a Japan integrated velocity structure model and long-period ground motion hazard map. In: Proceedings 14th World Conference Earthquake Engineering S10-038
Kubo H, Suzuki W, Aoi S, Sekiguchi H (2016) Source rupture process of the 2016 Kumamoto, Japan, earthquakes estimated from strong-motion waveforms. Earth Planets Space 68:161. doi:10.1186/s40623-016-0536-8
Matsumura M, Ito Y, Kimura H, Obara K, Sekiguchi S, Hori S, Kasahara K (2006) Development of accurate and quick analysis system for source parameters (AQUA). Zisin 2(59):167–184
Nakamura T, Takenaka H (2006) A numerical analysis of seismic waves for an anisotropic fault zone. Earth Planets Space 58:569–582. doi:10.1186/BF03351954
Okada Y, Kasahara K, Hori S, Obara K, Sekiguchi S, Fujiwara H, Yamamoto A (2004) Recent progress of seismic observation networks in Japan—Hi-net, F-net, K-NET and KiK-net. Earth Planets Space 56:XV–XXVIII. doi:10.1186/BF03353076
Ozawa T, Fujita E, Ueda H (2016) Crustal deformation associated with the 2016 Kumamoto Earthquake and its effect on the magma system of Aso volcano. Earth Planets Space 68:186. doi:10.1186/s40623-016-0563-5
Spudich P, Cranswick E (1984) Direct observation of rupture propagation during the 1979 Imperial Valley earthquake using a short baseline accelerometer array. Bull Seismol Soc Am 74(6):2083–2114
Spudich P, Frazer LN (1984) Use of ray theory to calculate high-frequency radiation from earthquake sources having spatially variable rupture velocity and stress drop. Bull Seismol Soc Am 74(6):2061–2082
Vlček J, Fischer T, Vilhelm J (2015) Back-projection stacking of P- and S-waves to determine location and focal mechanism of microseismic events recorded by a surface array. Geophys Prospect 64(6):1428–1440. doi:10.1111/1365-2478.12349
Yano TE, Matsubara M (2016) The significance of seismicity after the 2016 Kumamoto Earthquake sequence. In: 2016 Japan Geoscience Union Meeting MIS34-P05
Optimized photo-stimulation of halorhodopsin for long-term neuronal inhibition. Chuanqiang Zhang, Shang Yang, Tom Flossmann, Shiqiang Gao, Otto W. Witte, Georg Nagel, Knut Holthoff & Knut Kirmse. BMC Biology, volume 17, Article number: 95 (2019).

Optogenetic silencing techniques have expanded the causal understanding of the functions of diverse neuronal cell types in both the healthy and diseased brain. A widely used inhibitory optogenetic actuator is eNpHR3.0, an improved version of the light-driven chloride pump halorhodopsin derived from Natronomonas pharaonis. A major drawback of eNpHR3.0 is its pronounced inactivation on a time-scale of seconds, which renders it unsuited for applications that require long-lasting silencing. Using transgenic mice and Xenopus laevis oocytes expressing an eNpHR3.0-EYFP fusion protein, we here report optimized photo-stimulation techniques that profoundly increase the stability of eNpHR3.0-mediated currents during long-term photo-stimulation. We demonstrate that optimized photo-stimulation enables prolonged hyperpolarization and suppression of action potential discharge on a time-scale of minutes. Collectively, our findings extend the utility of eNpHR3.0 to the long-lasting inhibition of excitable cells, thus facilitating the optogenetic dissection of neural circuits.

Within a decade, optogenetic tools for reversible silencing of neurons became an integral component of the neuroscientific repertoire. They facilitate analyzing how distinct neuronal populations causally contribute to brain dynamics at the cellular, network, and behavioral level and, in addition, promise substantial therapeutic potential in diverse clinical contexts [1,2,3]. Optogenetic tools for neuronal inhibition are molecularly diverse, including light-activated chloride channels [4,5,6], potassium-specific cyclic nucleotide-gated channels fused to a photo-activated adenylyl cyclase [7, 8], G protein-coupled receptors [9, 10], and ion pumps [11,12,13,14]. All actuators developed so far have specific biophysical constraints that are of practical interest when designing experimental studies and interpreting their data [15]. For example, light-gated chloride channels (e.g., GtACR1) enable divisive inhibition by shunting excitatory currents, but the direction of ion flow entirely depends on the existing electrochemical chloride gradient and, consequently, may also depolarize rather than hyperpolarize cells [16,17,18]. Light-activated G protein-coupled receptors operate on slower time-scales and modulate canonical signaling cascades that, in addition to reducing excitability, could lead to undesired off-target effects, e.g., changes in gene expression [19, 20]. In contrast, light-driven ion pumps exhibit on-/off-kinetics in the millisecond range and employ subtractive inhibition, which renders them virtually independent of existing electrochemical gradients [21]. A member of the latter class of actuators is eNpHR3.0 [12], an improved version of the light-driven chloride pump halorhodopsin derived from Natronomonas pharaonis [11]. Although eNpHR3.0 ranks amongst the most widely used inhibitory optogenetic tools, its most critical constraint results from its prominent inactivation, i.e., a decline in photo-current amplitude during continuous illumination [11, 22,23,24,25].
Inactivation has a time constant in the range of seconds implying limited usability of eNpHR3.0 in experimental settings that require long-lasting (> 10 s) inhibition (for review see [15]). Based on data obtained from structurally related halorhodopsins, inactivation is thought to result from a branched photo-cycle with an accumulation of intermediates containing a deprotonated Schiff base in the 13-cis-retinal configuration [26]. The return to the initial state, which involves thermal reversion to all-trans-retinal, is slow, and few published data suggest that it may be accelerated by blue light [11, 22]. Using both transgenic mice and Xenopus laevis oocytes, we here systematically explore as to which extent this property could be exploited to increase the temporal stability of eNpHR3.0-mediated photo-currents. We provide and biophysically characterize optimized photo-stimulation protocols that greatly reduce inactivation even for prolonged illumination periods. Our findings thus extend the suitability of eNpHR3.0 to various experimental paradigms, including situations when long-lasting inhibition of neuronal activity is required. Blue light accelerates the recovery of eNpHR3.0-mediated currents from inactivation in a duration- and power-dependent manner To explore the potential benefits of alternative eNpHR3.0 photo-stimulation paradigms, we expressed an eNpHR3.0-EYFP fusion protein in glutamatergic hippocampal neurons of mice using a transgenic approach (Emx1IREScre:eNpHR3.0-EYFPLSL mice) [27]. Whole-cell voltage-clamp recordings from identified EYFP+ CA1 pyramidal cells were performed in the continuous presence of antagonists of voltage-gated Na+ channels (0.5 μM TTX) and ionotropic glutamate and GABA receptors (10 μM DNQX, 50 μM APV, 10 μM bicuculline) to abolish recurrent excitation and minimize synaptic noise. In agreement with published data, photo-stimulation using yellow light (594 nm, 5 mW at the tip of the optical fiber) induced outward currents that rapidly decayed to 34.2 ± 3.0% of the initial peak amplitude within 10 s of continuous light exposure (Ipeak 62.0 ± 5.8 pA, n = 15 cells; Fig. 1a, b). We probed the recovery from inactivation by an additional 594-nm test pulse at variable time delays (Δt) and found that it was slow under control conditions (time constant of a mono-exponential fit, 54.1 ± 2.6 s, n = 7 cells; Fig. 1c). Recovery from inactivation was significantly enhanced by a brief pulse of blue light (488 nm, 500 ms, 5 mW) in a Δt-dependent manner [interaction (control/rescue × Δt): F = 17.8, df = 4, P = 2.9 × 10−9, n = 7/8 cells (control/rescue), mixed-model ANOVA; Δt = 15 s: t(13) = − 12.3, P = 1.5 × 10−8, Δt = 30 s: t(8.2) = 16.1, P = 1.6 × 10−7, Δt = 45 s: t(13) = − 12.3, P = 1.6 × 10−8, Δt = 60 s: t(13) = 12.1, P = 1.9 × 10−8, Δt = 75 s: t(13) = − 5.1, P = 2.1 × 10−4, two-sample t tests; Fig. 1b, c]. At the population level, the recovery time constant was not significantly correlated to the degree of inactivation induced by the initial photo-stimulation at 594 nm (Spearman's rank correlation coefficient = − 0.38, P = 0.16, n = 15 cells; Fig. 1d). We next addressed the time and power requirements of blue-light rescue stimulation. In a first set of experiments, we systematically varied the duration of the 488-nm light pulse while keeping the applied power constant (5 mW). 
We found that the recovery from inactivation monotonically increased with increasing duration of the blue-light pulse (F = 575, df = 3.2, P = 3.4 × 10−28, n = 11 cells, one-way repeated-measures ANOVA, Huynh-Feldt correction; Fig. 1e, f). In a second set of experiments, we systematically varied the power of the 488-nm light pulse at a constant duration of 1 s. The recovery from inactivation significantly depended on the power of blue light (F = 529, df = 7, P = 2.3 × 10−62, n = 12 cells, one-way repeated-measures ANOVA; Fig. 1g, h), but saturated at close to 3 mW. Blue light accelerates the recovery of eNpHR3.0-mediated currents from inactivation in a duration- and power-dependent manner. a Sample voltage-clamp recording illustrating that prolonged (10 s) photo-stimulation at 594 nm (5 mW) induces pronounced inactivation of eNpHR3.0-mediated currents. Note that the recovery from inactivation is slow (test pulse at Δt = 15 s). b Sample trace from another cell demonstrating that blue light (500 ms, 488 nm, 5 mW) accelerates the recovery from inactivation. Also note the outward current induced by blue light. c Recovery of eNpHR3.0-mediated currents is enhanced by blue light. Inset, recovery is defined as the ratio of current amplitudes induced by the test (at Δt) versus initial pulse, measured relative to Ilate (i.e., recovery = A2/A1). Dotted lines represent mono-exponential fits to population data. Each cell was tested for all values of Δt either without (Control, n = 7 cells) or with (Rescue, n = 8 cells) an intervening photo-stimulation at 488 nm (500 ms). In a and b, current responses to − 10-mV voltage steps used to monitor access resistance are clipped for clarity (#). d Independent of the degree of inactivation (1 − Ilate/Ipeak), time constants of recovery are lower for rescue as compared to control trials. Each symbol represents a single cell. e Recovery from inactivation depends on the duration of the 488-nm rescue pulse (blue lines). All traces are from a single cell. f Quantification. g Recovery from inactivation depends on the power of the 488-nm rescue pulse at a constant duration of 1 s. All traces are from a single cell. h Quantification. Data are presented as mean ± SEM Collectively, our data demonstrate that blue light accelerates the recovery of eNpHR3.0-mediated currents from inactivation in a duration- and power-dependent manner. Blue light attenuates the inactivation of eNpHR3.0-mediated currents during prolonged photo-stimulation in a mean power-dependent manner We next assessed whether blue light may be similarly used to prevent the inactivation of eNpHR3.0-mediated currents when co-applied with photo-stimulation at 594 nm. To this end, we photo-stimulated cells with a constant power of 594-nm light (5 mW) and systematically varied the power of 488-nm excitation (Fig. 2a, b). We quantified this effect by determining the remaining current at the end of photo-stimulation (Ilate) as a fraction of the peak eNpHR3.0-mediated current (i.e., Ilate/Ipeak). We found that Ilate/Ipeak strongly depended on the power of blue light (F = 226, df = 3.7, P = 3.1 × 10−15, n = 6 cells, one-way repeated-measures ANOVA, Huynh-Feldt correction; Fig. 2b) reaching an apparent saturation at about 3 mW. Co-stimulation at 594 nm and 488 nm attenuates the inactivation of eNpHR3.0-mediated currents during prolonged photo-stimulation in a mean power-dependent manner. 
a Sample voltage-clamp recording from a single cell illustrating eNpHR3.0-mediated currents in response to photo-stimulation at 594 nm (5 mW) alone (top) or in combination with 488 nm at variable power levels (middle and bottom). Power levels indicated refer to 488-nm light. b Dependence of inactivation on the power of 488-nm light. c The rescue effect of 488-nm light on the inactivation of eNpHR3.0-dependent currents depends on its mean, rather than peak, power. Top: continuous 488-nm stimulation (left) is equally effective in attenuating inactivation as compared to pulsed (1 kHz, 20/80% on/off) stimulation at constant mean power (right). Bottom: continuous 488-nm stimulation (left) is more effective in attenuating inactivation as compared to pulsed (1 kHz, 20/80% on/off) stimulation at constant peak power (right). d For quantification, Ilate measured during pulsed stimulation was normalized to Ilate obtained for the respective continuous-stimulation trials. Each symbol represents a single cell. Data are presented as mean ± SEM. **P < 0.01 We further explored the possibility to minimize the total power of blue light delivered by employing a high-frequency ("pulsed," 1 kHz) stimulation with a 20/80% (on/off) duty cycle by means of an acousto-optic tunable filter (see the "Methods" section). In a first set of experiments, we compared the effects of continuous versus pulsed photo-stimulation at a constant mean power of either 3 mW (Fig. 2c, top) or 0.2 mW by compensatorily increasing the peak 488-nm light power in pulsed stimulation trials. At both power levels, normalized Ilate amplitudes during pulsed stimulation did not significantly differ from those obtained during continuous stimulation (0.2 mW: 100.7 ± 2.1% of control, t(4) = − 0.63, P = 0.56, n = 5 cells, paired t test; 3 mW: 102.6 ± 1.4% of control, t(3) = − 1.98, P = 0.14, n = 4 cells, paired t test; Fig. 2d). In an independent set of experiments, continuous and pulsed photo-stimulation were compared at a constant peak 488-nm light power of either 3 mW (Fig. 2c, bottom) or 0.2 mW, which effectively reduced the mean power in pulsed stimulation trials to 20%. In line with the above data, pulsed stimulation at constant peak power was significantly less effective in preventing inactivation as compared to continuous excitation, reflected in lower values of Ilate (0.2 mW: 63.5 ± 4.0% of control, t(4) = 7.11, P = 2.1 × 10−3, n = 5 cells, paired t test; 3 mW: 66.9 ± 5.7% of control, t(3) = 7.14, P = 5.7 × 10−3, n = 4 cells, paired t test; Fig. 2d). Based on the same rationale and taking into consideration that the deactivation kinetics of eNpHR3.0-mediated currents is in the range of several milliseconds, we next investigated a potential benefit of pulsed [1 kHz, 20/80% (on/off) duty cycle] versus continuous photo-stimulation at 594 nm on the background of a continuous, constant-power (5 mW) blue-light excitation. We found that Ilate amplitudes did not significantly differ between the two regimes if the mean 594-nm light power was kept constant at 3 mW by compensatorily increasing the peak power in pulsed stimulation trials (normalized Ilate 96.0 ± 1.6%). In contrast, Ilate was significantly reduced to 81.4 ± 1.1% if the peak 594-nm light power was unchanged (continuous/mean power = 3 mW vs. pulsed/mean power = 3 mW: P = 0.21, continuous/mean power = 3 mW vs. pulsed/mean power = 0.6 mW: P = 7.4 × 10−4, post hoc t tests with Bonferroni correction; F = 46.1, df = 2, P = 2.3 × 10−6, n = 7 cells, one-way repeated-measures ANOVA). 
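The comparison between the "constant mean power" and "constant peak power" conditions rests on the simple relation mean power = peak power × on-fraction of the duty cycle. The minimal sketch below, with values taken from the text, makes the two matching strategies explicit; it is purely illustrative.

```python
def mean_power(peak_mw, on_fraction):
    """Mean optical power of a pulsed train = peak power x on-fraction of the duty cycle."""
    return peak_mw * on_fraction

# 1-kHz pulsing with a 20/80% (on/off) duty cycle:
# matching a 3-mW mean power requires raising the peak to 15 mW,
print(mean_power(15.0, 0.2))   # -> 3.0 mW (constant mean power)
# whereas keeping the peak at 3 mW reduces the mean power to 0.6 mW.
print(mean_power(3.0, 0.2))    # -> 0.6 mW (constant peak power)
```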
We further examined whether the effect of blue light reflects an inherent property of eNpHR3.0 by performing additional experiments on somatostatin-expressing (SOM) GABAergic interneurons (SOMIREScre:eNpHR3.0-EYFPLSL mice) [28]. When continuously stimulated at 594 nm (5 mW), eNpHR3.0-mediated photo-currents decayed substantially within seconds (Additional file 1: Figure S1A). In contrast, alternating photo-stimulation with yellow and blue light (1 kHz, 50/50% duty cycle, each 5 mW) substantially increased Ilate (t(4) = − 3.54, P = 0.024), while Ipeak was moderately reduced (t(4) = 5.94, P = 4.0 × 10−3), resulting in a profound increase of Ilate/Ipeak (t(4) = − 29.0, P = 8.4 × 10−6, n = 5 cells, paired t tests; Additional file 1: Figure S1A–C). In sum, our data demonstrate that blue light attenuates the inactivation of eNpHR3.0-mediated currents during prolonged photo-stimulation at 594 nm in a mean power-dependent, rather than peak power-dependent, manner. Analogously, Ilate is largely determined by the average, rather than the peak, 594-nm light power delivered. Blue light alone enables efficient and stable long-term photo-stimulation of eNpHR3.0 While the previous experiments provide a strategy to enhance the temporal stability of eNpHR3.0-mediated currents, our initial data employing rescue pulses of blue light (Fig. 1b) already revealed that photo-stimulation at 488 nm alone is capable of inducing outwards currents in eNpHR3.0-EYFP+ cells. We therefore set out to systematically investigate the properties of blue-light-evoked photo-stimulation of eNpHR3.0. To this end, cells were photo-stimulated at either 594 nm or 488 nm at power levels ranging from 1 to 5 mW (Fig. 3a). At each power level examined, Ipeak values were significantly higher for yellow-light as compared to blue-light stimulation (1 mW: t(6) = − 6.62, P = 5.7 × 10−4; 3 mW: t(6) = − 7.35, P = 3.2 × 10−4; 5 mW: t(6) = − 6.57, P = 5.9 × 10−4; n = 7 cells; paired t tests; Fig. 3a, c). Strikingly, however, current responses evoked by blue-light photo-stimulation displayed an extraordinary temporal stability. We quantified this effect by determining Ilate/Ipeak (Fig. 3d), which we found to be considerably higher for photo-stimulation at 488 nm as compared to 594 nm (1 mW: t(6) = 11.94, P = 2.1 × 10−5; 3 mW: t(6) = 19.69, P = 1.1 × 10−6; 5 mW: t(6) = 30.21, P = 8.7 × 10−8; n = 7 cells, paired t tests; Fig. 3a, d). In addition, whereas the ratio Ilate/Ipeak strongly declined with increasing power levels for 594-nm light, this dependency was considerably weaker in case of photo-stimulation with blue light (Fig. 3a, d). As a result of this behavior, absolute Ilate amplitudes evoked at 594 nm versus 488 nm diverged in a power-dependent manner [interaction (594/488 nm × power): F = 42.3, df = 1.06, P = 4.6 × 10−4, two-way repeated-measures ANOVA, Huynh-Feldt correction; Fig. 3b]. We next sought to determine as to which extent combinations of blue and yellow light, at constant total power (5 mW), could further increase Ilate. Strikingly, all tested combinations of 488/594 nm as well as 488 nm alone clearly outperformed photo-stimulation with pure yellow light as reflected in higher values of Ilate (594 nm versus all other groups: P = 5.1 × 10−3 or lower, post hoc t tests with Bonferroni correction; F = 71.4, df = 1.3, P = 2.1 × 10−6, n = 9 cells, one-way repeated-measures ANOVA, Huynh-Feldt correction; Fig. 3e, f). 
The highest values of Ilate were found for combinations with a blue-light fraction of 40–60%, which moderately exceeded Ilate values evoked by 488 nm alone (Fig. 3f). 488-nm light alone enables efficient and stable long-term photo-stimulation of eNpHR3.0. a Sample voltage-clamp recordings from a single cell illustrating the power-dependence of HR-mediated currents evoked by photo-stimulation at 594 nm (top) or 488 nm (bottom), delivered at 1 mW (left), 3 mW (middle), or 5 mW (right). Note that photo-currents evoked at 488 nm display lower peak amplitudes, but high temporal stability across the entire power range examined. At the end of each trial, 488-nm light (5 mW) was used to accelerate the recovery from inactivation (note the difference in onset kinetics of evoked currents depending on the degree of previous inactivation). Current responses to − 10-mV voltage steps used to monitor access resistance are clipped for clarity (#). b–d Late (Ilate, b) and peak (Ipeak, c) current amplitudes as well as the ratio of Ilate versus Ipeak (d) normalized to the respective values at 594 nm and 1 mW (n = 7 cells). e Sample traces from a single cell photo-stimulated at 594 nm and/or 488 nm and a constant total light power of 5 mW. f Quantification of Ilate measured during the photo-stimulation regimes indicated normalized to Ilate obtained by photo-stimulation at 594 nm (5 mW) alone. Note that each combination of 594 nm plus 488 nm tested (at constant total power) considerably outperformed photo-stimulation at 594 nm alone (dotted line). Each symbol represents a single cell. Data are presented as mean ± SEM. **P < 0.01, ***P < 0.001 Collectively, our data unexpectedly reveal that photo-stimulation at 488 nm either alone or combined with 594-nm light substantially enhances non-inactivating eNpHR3.0-mediated currents and profoundly improves their temporal stability. Wavelength dependency of eNpHR3.0-mediated photo-currents measured in X. laevis oocytes To further quantify the wavelength dependency of eNpHR3.0 and its generalizability to other expression systems, we next performed two-electrode voltage-clamp recordings from X. laevis oocytes expressing eNpHR3.0. We first compared eNpHR3.0 inactivation during long-term (60 s) illumination at three different wavelengths: 590 nm, 532 nm, and 473 nm. We found that 590-nm light could induce the highest initial photo-current amplitudes, but also showed the strongest inactivation (Fig. 4a–c) as compared to 532-nm or 473-nm light of the same intensity (2.6 mW/mm2; F = 345, df = 2, P = 6.4 × 10−7, n = 4 cells, one-way repeated-measures ANOVA, Fig. 4c). We next confirmed that inactivation of eNpHR3.0 is light power-dependent. Photo-current inactivation became more prominent with increasing power at 590 nm or 532 nm (590 nm: F = 194, df = 2.05, P = 1.1 × 10−7, n = 5 cells, one-way repeated-measures ANOVA with Greenhouse-Geisser correction; 532 nm: F = 21.3, df = 1.11, P = 0.007, n = 5 cells, one-way repeated-measures ANOVA with Greenhouse-Geisser correction; Fig. 4d). In agreement with our results obtained in mice, no obvious inactivation was observed for 473-nm light at powers up to 6.6 mW/mm2 (F = 3.74, df = 1.92, P = 0.074, n = 5 cells, one-way repeated-measures ANOVA with Greenhouse-Geisser correction; Fig. 4d). Wavelength-dependent inactivation and recovery of eNpHR3.0 in X. laevis oocytes. a Sample photo-current traces of eNpHR3.0 upon stimulation for 60 s at 590 nm, 532 nm, or 473 nm at constant intensity (2.6 mW/mm2). 
b, c Quantification of the initial peak current (Ipeak), the remaining current at the end of illumination (Ilate), and the ratio Ilate/Ipeak. d Ilate/Ipeak upon 60-s-long illumination at 590 nm, 532 nm, or 473 nm at different light intensities (n = 5 cells). e Sample photo-current trace of eNpHR3.0. Inactivation was induced by a 60-s light pulse at 590 nm (2.6 mW/mm2). Recovery was probed by 10-ms light pulses (590 nm, 2.6 mW/mm2) at 1, 5, 10, 20, 40, 60, 120, and 300 s after the initial 60-s illumination. f Quantification of eNpHR3.0 recovery (n = 8 cells). g Peak-scaled sample traces from three different cells demonstrating that blue (473 nm) or violet (400 nm) light (2 s, 1 mW/mm2) accelerates the recovery from inactivation (at 5 s). Note the outward current induced by blue light. h Quantification of recovery as in g. i Peak-scaled sample traces from one oocyte illustrating eNpHR3.0 photo-currents induced by illumination at 590 nm alone (2.6 mW/mm2) or by co-illumination with either 473 nm (1 mW/mm2) or 400 nm (1 mW/mm2). j Quantification of Ilate/Ipeak as in i. k Sample traces from one oocyte illustrating eNpHR3.0 photo-currents induced by illumination at 473 nm alone (6.6 mW/mm2), by co-illumination at 590 nm (2.6 mW/mm2) and 400 nm (1 mW/mm2) or by co-illumination at 532 nm (6.6 mW/mm2) and 400 nm (1 mW/mm2). l, m Quantification of Ipeak, Ilate, and Ilate/Ipeak as in k. All measurements were performed in Ringer's solution (pH 7.6) at a holding potential of − 40 mV. Data are presented as mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. c, h, j, m Asterisks indicate significance levels of post hoc t tests with Bonferroni correction following one-way ANOVA (h) or one-way repeated-measures ANOVA (c, j, m) Following 60-s illumination at 590 nm, eNpHR3.0 slowly recovered from inactivation with a weighted time constant of 200 ± 45 s in the dark (F = 231, df = 1.29, P = 5.3 × 10−8, n = 8 cells, one-way repeated-measures ANOVA, Greenhouse-Geisser correction, Fig. 4e–f). We therefore addressed whether recovery can be accelerated by short-wavelength light. While light pulses (2 s, 1 mW/mm2) at either 473 nm or 400 nm accelerated recovery from inactivation, violet light was found to be significantly more effective (control vs. 473 nm: P = 0.047; control vs. 400 nm: P = 1.4 × 10−3; 473 nm vs. 400 nm: P = 0.037; post hoc t tests with Bonferroni correction; F = 23.6, df = 2, P = 1.4 × 10−3, n = 3 cells, one-way ANOVA, Fig. 4g, h). We next examined if short-wavelength light could alleviate eNpHR3.0 inactivation when co-applied with yellow light. We found that a combination of 590-nm light (2.6 mW/mm2) with either violet (400 nm, 1 mW/mm2) or blue (473 nm, 1 mW/mm2) light significantly increased the Ilate/Ipeak ratio, with violet light being more effective (590 nm vs. 590 + 473 nm: P = 0.021; 590 nm vs. 590 + 400 nm: P = 1.1 × 10− 3; 590 + 473 nm vs. 590 + 400 nm: P = 0.012; post hoc t tests with Bonferroni correction; F = 61.3, df = 2, P = 9.9 × 10−4, one-way repeated-measures ANOVA, Fig. 4i, j). Violet light per se evoked very small, if any, photo-currents (I400/I473 = 6.5 ± 1.6%, n = 4 cells). Based on the above results, we next explored the possibility to further increase the amplitude of stable photo-currents by examining a combination of green and violet light. Here, the power of 532-nm light was set to 6.6 mW/mm2, which led to a similar inactivation as compared to 590-nm light at 2.6 mW/mm2 when applied separately (Fig. 4d). 
Interestingly, the combination of 1-mW/mm2 400-nm light with 6.6-mW/mm2 532-nm light considerably outperformed the combination with 2.6-mW/mm2 590-nm light for its higher photo-current amplitudes and the eliminated inactivation (Ilate/Ipeak for 590 + 400 nm vs. 532 + 400 nm: P = 2.3 × 10−5, post hoc t test with Bonferroni correction; F = 126, df = 2, P = 1.2 × 10−5, one-way repeated-measures ANOVA; Fig. 4k–m). In addition, whereas 6.6-mW/mm2 473-nm light alone also induced stable photo-currents with negligible inactivation over 60 s (Ilate vs. Ipeak: t(3) = 2.75, P = 0.071, n = 4 cells, paired t test; Fig. 4l), its photo-current amplitude was only about one third of the green-violet combination (Fig. 4k–m). Inactivation of eNpHR3.0 is pH- and chloride-dependent Deprotonation of the Schiff base was suggested to underlie the inactivation of NpHR [11, 22, 26]. We therefore investigated the effect of extracellular pH (pHout) on eNpHR3.0 inactivation. At higher pHout, the Schiff base is expected to lose the proton more easily. Indeed, increased inactivation was observed when pHout was increased (F = 321, df = 2, P = 8.5 × 10−10, n = 6 cells, one-way repeated-measures ANOVA, Fig. 5a). Moreover, as the pKa of the Schiff base is strongly dependent on the occupancy of the chloride ion at binding site I of NpHR [29], chloride binding to NpHR could stabilize the protonated Schiff base. In agreement with this, lower extracellular chloride concentration ([Cl−]out) caused stronger inactivation of eNpHR3.0 (F = 201, df = 1.4, P = 9.7 × 10−7, n = 6 cells, one-way repeated-measures ANOVA, Greenhouse-Geisser correction; Fig. 5b). To gain mechanistic insight into the recovery of eNpHR3.0 from inactivation (i.e., reprotonation of the Schiff base), we systematically investigated the effects of pHout, [Cl−]out as well as membrane potential on the recovery time constant of eNpHR3.0 in the dark. The recovery of eNpHR3.0 from inactivation could be accelerated by either a decrease of pHout [tests for pHout effects: F = 61, df = 2, P = 2.9 × 10−6; interaction (pHout × Δt): F = 16.9, df = 2.66, P = 3.3 × 10−5; mixed-model ANOVA, Greenhouse-Geisser correction; Fig. 5c] or an increase of [Cl−]out [tests for [Cl−]out effects: F = 176, df = 3, P = 9.9 × 10−11; interaction ([Cl−]out × Δt): F = 49.6, df = 4.97, P = 1.5 × 10−13; mixed-model ANOVA, Greenhouse-Geisser correction; Fig. 5d]. This indicates that the proton for reprotonation of the Schiff base comes from the extracellular space. Interestingly, no significant difference of the recovery time of eNpHR3.0 at different membrane potentials was observed when [Cl−]out was 121 mM at pH 7.6 [tests for membrane potential effects: F = 3.63, df = 2, P = 0.077; interaction (membrane potential × Δt), F = 2.14, df = 2.85, P = 0.13; mixed-model ANOVA, Greenhouse-Geisser correction; Fig. 5e)]. Taken together, the data suggest that the proton for the reprotonation of the Schiff base originates from the extracellular side, and its uptake is always facilitated by the binding of chloride, and vice versa. Mechanistic insight of the inactivation and recovery of eNpHR3.0. a Decreasing extracellular proton concentration enhances inactivation of eNpHR3.0. Currents were measured in the same oocyte in Ringer's solution at pH 5.6, pH 7.6, or pH 9.6. b Increasing extracellular chloride concentration reduces inactivation of eNpHR3.0. Currents were measured in the same oocyte at different chloride concentrations. 
Buffers with different chloride concentrations were achieved by mixing Ringer's solution (pH 7.6) and NMG-Asp solution (pH 7.6) at different ratio. c Recovery of eNpHR3.0-mediated photo-currents in Ringer's solution at pH 5.6 (n = 4 cells), pH 7.6 (n = 8 cells), or pH 9.6 (n = 4 cells) at a holding potential of − 40 mV. d Recovery of eNpHR3.0-mediated photo-currents at an extracellular chloride concentration of 6 mM (n = 5 cells), 16 mM (n = 6 cells), 60 mM (n = 6 cells), or 121 mM (n = 5 cells). pH was set to 7.6 and holding potential to − 40 mV. Dotted lines in c and d represent bi-exponential fits to population data. e Recovery of eNpHR3.0 (pH 7.6) at holding potentials of − 100 mV (n = 7 cells), − 40 mV (n = 5 cells), or + 20 mV (n = 5 cells). Five hundred ninety-nanometer light at an intensity of 2.6 mW/mm2 was applied for 60 s in a and b, while in c–e, additional 10-ms 590-nm light pluses at the same intensity were delivered at 1, 5, 10, 20, 40, 60, 120, and 300 s after the initial 60-s illumination, as in Fig. 4e. Data are presented as mean ± SEM. *P < 0.05, ***P < 0.001. a, b Asterisks indicate significance levels of post hoc t tests with Bonferroni correction following one-way repeated-measures ANOVA Blue-light-induced photo-stimulation of eNpHR3.0 enables efficient long-term hyperpolarization and inhibition We next investigated whether the superior properties of photo-stimulation using short-wavelength light found in voltage-clamp experiments translate into a higher efficiency of long-term neuronal inhibition. We focused on blue light as the use of multiple wavelengths might potentially represent a complicating factor in in vivo applications (see the "Discussion" section)—in spite of lower photo-current amplitudes as compared to the green-violet combination (Fig. 4). Using current-clamp measurements from CA1 pyramidal cells in acute brain slices of mice, repetitive 1-s current injections via the patch pipette were used to evoke action potential firing in the presence of ionotropic glutamate and GABA receptor antagonists (Fig. 6a). Photo-stimulation of eNpHR3.0 at 594 nm (5 mW) for 1 min suppressed action potential discharge in a highly time-dependent manner: whereas firing in response to the first test pulse (500 ms after the onset of light stimulation) was virtually abolished, the inhibitory effect almost disappeared for the second as well as all subsequent test pulses (Fig. 6a, b). In contrast, photo-stimulation at 488 nm (5 mW) reliably inhibited action potential discharge during the entire 1-min illumination period [interaction (control/488 nm/594 nm × test pulse number): F = 13.2, df = 4.6, P = 1.7 × 10−7, n = 10 cells, two-way repeated-measures ANOVA, Huynh-Feldt correction; Fig. 6a, b]. Furthermore, photo-stimulation with yellow light resulted in a pronounced, but transient hyperpolarization, whereas photo-stimulation at 488 nm induced a highly stable hyperpolarizing response in all cells analyzed. Accordingly, a two-way repeated-measures ANOVA yielded a highly significant interaction between the independent variables (F = 71.2, df = 5.0, P = 3.1 × 10−20, n = 10 cells, Huynh-Feldt correction; Fig. 6a, c). Photo-stimulation of eNpHR3.0 at 488 nm enables efficient long-term hyperpolarization and inhibition. a Sample current-clamp measurements from a single cell (biased to about − 65 mV at rest) repetitively challenged with an inward current of constant amplitude either without (5 mW, left) or with photo-stimulation at 594 nm (5 mW, middle) or 488 nm (right), respectively. 
Note that yellow-light stimulation initially abolished action potential firing (#), which recovered during prolonged photo-stimulation periods. Also note that blue-light stimulation suppressed firing for the entire 1-min period on the background of a stable hyperpolarization. Insets: current responses to the last test pulse at higher temporal magnification (scale bars, 25 mV, 0.5 s). b Number of action potentials (AP) as a function of the test-pulse number. c Time-course of membrane potential (gray—period of photo-stimulation). Data are presented as mean ± SEM Wavelength and duration dependence of chloride loading during photo-stimulation of eNpHR3.0 As eNpHR3.0 mediates chloride uptake, prolonged activation of eNpHR3.0 may affect [Cl−]in and, hence, shift the reversal potential of GABAA receptor-dependent currents (EGABA) into the positive direction [30]. We therefore quantified the wavelength and duration dependence of photo-stimulation-induced changes in EGABA using gramicidin perforated-patch current-clamp recordings from CA1 pyramidal cells. In the presence of antagonists of ionotropic glutamate receptors (10 μM DNQX, 50 μM APV) and voltage-gated Na+ (0.5 μM TTX) and Ca2+ (100 μM CdCl2) channels, cells were challenged with a saturating puff of the GABAA receptor agonist isoguvacine (100 μM, 2 s). As demonstrated before, the peak membrane potential (Vpeak) approximates EGABA under these conditions [31]. We found that a 30-s-long photo-stimulation at 488 nm (5 mW) slightly, but significantly, shifted Vpeak by + 1.9 ± 0.6 mV (t(6) = − 3.15, P = 0.020, n = 7 cells, paired t test; Fig. 7a, b). This effect was dose-dependent, since a more pronounced shift in Vpeak by + 7.2 ± 0.7 mV (t(12) = − 9.77, P = 4.6 × 10−7, n = 13 cells, paired t test; Fig. 7c, d) was observed for a photo-stimulation period of 120 s (488 nm, 5 mW). In contrast, yellow-light photo-stimulation for 120 s (594 nm, 5 mW) did not significantly affect Vpeak (+ 0.0 ± 0.7 mV (t(9) = 0.027, P = 0.98, n = 10 cells, paired t test; Fig. 7e, f)). Importantly, resting membrane potential (Vrest) was unaffected by either photo-stimulation paradigm (P > 0.1, paired t tests; Fig. 7a–f), arguing against a major photo-toxic effect. In summary, our data demonstrate that photo-stimulation of eNpHR3.0 with blue light not only increases steady-state currents (Fig. 3) and membrane potential changes (Fig. 6), but also enhances the eNpHR3.0-mediated increase in [Cl−]in. Wavelength and duration dependence of chloride loading due to photo-stimulation of eNpHR3.0. a Sample gramicidin perforated-patch recording of membrane potential in response to puff application of isoguvacine (Iso, 100 μM, 2 s) before (Control) and after photo-stimulation at 488 nm for 30 s (5 mW). Dotted lines indicate resting (Vrest) and peak (Vpeak) membrane potential measured before photo-stimulation. Note that Vpeak approximates EGABA under our recording conditions. b Quantification of Vrest and Vpeak before and after photo-stimulation. c, d As in a and b, but photo-stimulation was performed for 120 s at 488 nm (5 mW). e, f As in a and b, but photo-stimulation was performed for 120 s at 594 nm (5 mW). Experiments were performed at P4–10. Data are presented as mean ± SEM. n.s. not significant, **P < 0.01, ***P < 0.001 Consequently, alterations in network dynamics resulting from EGABA changes represent a potential experimental constraint related to the use of eNpHR3.0. 
We examined this possibility in hippocampal slices obtained from neonatal mice (P3–6), i.e., at a developmental stage when depolarizing GABAergic transmission drives synchronized network activity [28, 32]. In agreement with published data [33], pharmacological inhibition of the chloride co-transporter NKCC1 using bumetanide (10 μM) largely abolished bursts of spontaneous postsynaptic currents (PSCs), confirming that synchronized network activity is strongly [Cl−]in-dependent at this age (Additional file 2: Figure S2A,B). We predicted that, in the continuous presence of bumetanide, photo-stimulation of Emx1+ CA1 pyramidal cells would rescue PSC bursts by elevating [Cl−]in. Indeed, following the offset of blue-light illumination (60 s, 5 mW), PSC bursts transiently reappeared (PSC burst count per 20-s bin: before stim 0.35 ± 0.19, after stim 4.25 ± 0.66, t(3) = − 6.57, n = 4 cells, P = 7.2 × 10−3, paired t test; Additional file 2: Figure S2A–C). Thus, these data provide proof-of-principle evidence that activation of eNpHR3.0 may alter neuronal population dynamics due to a shift in EGABA. We further reasoned that such effects may be less pronounced in slices obtained at a later developmental stage (P11–12), when KCC2-dependent chloride extrusion is more effective [34], synchronized network events are virtually absent and spontaneous excitatory PSCs (EPSCs) mainly reflect miniature release. In line with this prediction, long-lasting (5 min) photo-stimulation at 488 nm (5 mW) failed to alter EPSC frequency after stimulation offset, and EPSC frequency was stable for the following 10 min (F = 0.99, df = 2, P = 0.41, n = 6 cells, one-way repeated-measures ANOVA; Additional file 2: Figure S2D–F). The latter observation also argues against a major photo-toxic effect on synaptic release due to extended photo-stimulation with blue light.

Up to now, the chloride pump eNpHR3.0 has been one of the most popular optogenetic tools for hyperpolarization of excitable cells. As with many other rhodopsin-based optogenetic tools, inactivation upon light illumination is a main obstacle hindering the application of eNpHR3.0 [24]. Here, we systematically studied the inactivation of eNpHR3.0 caused by long-term illumination in both murine hippocampal neurons and X. laevis oocytes. We expect that congruent findings made in these two different expression systems reflect the properties of the protein itself, largely eliminating the influence of possible protein-protein interactions in the host system. Inactivation upon long-term illumination, slow recovery by thermal decay in the dark, and the accelerated recovery by blue-light illumination of eNpHR3.0 were similarly observed in both host cells. In addition, in both systems, the temporal stability of eNpHR3.0 is improved under optimized photo-stimulation conditions, such as co-application of yellow and blue light, green and violet light, or blue light alone.

Biophysical mechanism of eNpHR3.0 inactivation and recovery

Inactivation of the homologue protein HsHR from Halobacterium salinarum was proven to be the consequence of accumulation of an M-like intermediate from a branched photo-cycle with a deprotonated Schiff base [26, 35, 36, 37]. Therefore, inactivation of NpHR was naturally attributed to a similar mechanism, but not further investigated [11, 22]. To gain additional insights into the mechanism of inactivation of eNpHR3.0, we characterized the effects of different illumination and extracellular ionic conditions.
Light power and wavelength dependences of inactivation may result from different photochemical processes underlying different light stimuli. The proton and chloride dependences of inactivation strongly support the Schiff base deprotonation hypothesis. Unlike bacteriorhodopsin (BR), where the positive charge of the protonated Schiff base is counterbalanced by its aspartate D85, in NpHR, the counterbalancing negative charge is provided by the binding of a chloride ion [38]. Accordingly, a decrease of either extracellular proton or chloride concentration will increase the chance of deprotonation of the Schiff base, although the Schiff base of NpHR was suggested to be never deprotonated during the chloride pumping cycle [39,40,41]. To effectively transport chloride, the Schiff base also needs to be protonated to facilitate chloride binding. Therefore, NpHR intermediates with a deprotonated Schiff base are stable and non-pumping. Indeed, the slow kinetics of recovery from inactivation of eNpHR3.0 is also consistent with the stable and non-pumping feature of this M-like intermediate. Collectively, our data indicate that eNpHR3.0 inactivation following long-term illumination is due to formation of an M-like intermediate with a deprotonated Schiff base (Fig. 8).

Proposed photo-cycle of NpHR. An extracellular chloride ion is bound to the Schiff base lysine of NpHR at resting state, with Km = 16 mM [11]. Photon absorption (with maximum at 580 nm) triggers the isomerization of retinal and starts the photo-cycle, containing intermediates K (omitted here), L, N, and O. The chloride ion is released into the cytosol during the transition from N to O, and uptake of a chloride ion from the extracellular side takes place in the recovery from O to the initial state. HR without a bound chloride ion is prone to deprotonation of the Schiff base in the L state (indicated by dashed line), leading to formation of M. This intermediate is long-lived and absorbs similarly to HR410 (or M412 in BR) from Halobacterium salinarum. The uptake of the proton for reprotonation of the M intermediate is very slow in the dark (open arrow) but fast after absorption of a blue photon (blue arrow). Our data support deprotonation of the chloride-free L state (indicated by broken line).

We observed that the recovery time of eNpHR3.0 strongly depends on pHout, indicating that the proton for reprotonation of the deprotonated Schiff base comes from the extracellular side. In addition, extracellular chloride could also affect the reprotonation of the Schiff base by regulating the pKa through binding to the binding site I [29]. In keeping with this, we found that lowering [Cl−]out slowed down the recovery of eNpHR3.0. Changes in membrane potential will always cause opposite effects on proton or chloride binding to the Schiff base. Accordingly, no difference in recovery time was observed at different membrane potentials. It is intriguing to ask what the proton source is under the blue-light-induced recovery scenario. We argue that the proton is also from the extracellular side: First, in the structurally related HsHR, proton uptake has been experimentally proven to occur from the extracellular channel upon restoration of H410 to the initial state by blue-light absorption [26]. Second, in BR, pump activity can be inhibited by additional blue light, in which blue-light absorption decays the M412 intermediate to the initial BR568 state by reprotonation of the Schiff base from the extracellular side [42,43,44] (Fig. 8).
Third, a crystallography study proposed that the reprotonation of the Schiff base occurs after retinal isomerization when the cytosolic interhelical space is already closed, suggesting that the proton is from the extracellular side [45]. Beyond that, our findings may have broader utility via their application to training statistical models for the computational design of optimized NpHR variants [46].

Short-wavelength light enables optimized spatiotemporal control of eNpHR3.0

The inactivation of NpHR3.0 during continuous illumination [11, 22] limits its utility for long-lasting (> 10 s) neuronal inhibition (see also [15]). However, prolonged silencing of neuronal populations is typically a critical requirement for analyzing their involvement in network oscillations and behaviors. We here confirm that inactivation increases with increasing green or yellow light power (Fig. 3), which could be particularly problematic if expression levels are low, as is the case in many transgenic models. Importantly, the present study reveals that, independent of expression system, inactivation is highly wavelength-dependent, being profoundly reduced for blue as compared to green or yellow light (Figs. 3 and 4). This may be of great practical importance as, within the tissue, light power declines with increasing distance from the fiber tip [15]. Consequently, when using yellow light, cells that lie close to the light source (i.e., that are exposed to a comparatively high light power) will not only exhibit higher peak photo-current amplitudes, but also a more pronounced inactivation than those at larger distances. In other words, in addition to increasing the temporal stability of eNpHR3.0-mediated currents within individual cells, blue light is expected to minimize differences in inactivation between spatially distributed cells. Comparing continuous with high-frequency photo-stimulation regimes revealed that blue light attenuates the yellow-light-induced eNpHR3.0 inactivation in a mean power-dependent manner (Fig. 2). This finding justifies the use of continuous blue-light illumination, which can be delivered using simpler hardware solutions. As compared to photo-stimulation with blue light alone, co-illumination with yellow light produced non-inactivating currents of even higher amplitude. The largest photo-currents were found for combinations with a blue-light fraction of 40–60% (Fig. 3). The temporal stability of eNpHR3.0-mediated currents, however, was maximal for pure 488-nm illumination and not further enhanced by co-illumination with yellow light (Fig. 3). In oocytes, similarly stable eNpHR3.0-mediated photo-currents were obtained by combining green with violet light, while violet light per se evoked negligible photo-currents when applied in isolation (Fig. 4). In addition, the corresponding steady-state amplitudes exceeded those evoked by photo-stimulation with pure blue light (Fig. 4). This may render the combination of green and violet light the preferred photo-stimulation regime, if high-amplitude photo-currents are required. However, it should be considered that violet and green light exhibit a differential distance-dependent power attenuation in brain tissue, which may result in larger spatial inhomogeneities as outlined above. In sum, in combination with the fast intrinsic on-/off-kinetics of eNpHR3.0 in the millisecond range, the protocols described here provide for an optimized spatiotemporal control of eNpHR3.0 photo-activation for flexible neuronal inhibition.
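The spatial argument made above, namely that yellow-light inactivation should vary more strongly with distance from the fiber tip than blue-light inactivation, can be illustrated with a toy calculation. Everything in the sketch below is an assumption chosen for illustration only: the exponential attenuation model, the penetration depth, and the saturating power dependence of inactivation and its maxima are placeholders, not values derived from the data.

import numpy as np

# Toy model only: power decays exponentially with distance from the fiber
# tip, and the inactivated fraction saturates with the local power.
def local_power(p0_mW, distance_mm, penetration_mm=0.5):
    return p0_mW * np.exp(-distance_mm / penetration_mm)

def inactivated_fraction(power_mW, p_half_mW=1.0, max_fraction=0.8):
    return max_fraction * power_mW / (power_mW + p_half_mW)

distances = np.linspace(0.0, 1.0, 6)               # mm from the fiber tip
power = local_power(5.0, distances)                # assumed 5 mW at the tip
yellow = inactivated_fraction(power, max_fraction=0.8)   # assumed strong inactivation
blue = inactivated_fraction(power, max_fraction=0.1)     # assumed weak inactivation

for d, fy, fb in zip(distances, yellow, blue):
    print(f"{d:.1f} mm: yellow {fy:.2f}, blue {fb:.2f}")

Under these assumptions, the inactivated fraction under yellow light changes steeply across the illuminated volume, whereas under blue light it stays small and nearly uniform, which is the qualitative point made in the preceding paragraph.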
Light-driven chloride pumps use subtractive inhibition (i.e., hyperpolarization) and, consequently, operate independently of the electrochemical chloride gradient [21]. This is a potential advantage over chloride-conducting channelrhodopsins (e.g., GtACR1), which act in a [Cl−]in-dependent manner and, for instance, can depolarize presynaptic terminals and evoke neurotransmitter release [18]. However, a general constraint of using ion pumps for neuronal silencing results from changes in reversal potential of the ionic species transported. For example, the proton pump archaerhodopsin (eArch3.0) was shown to induce a pH-dependent Ca2+ influx that enhanced spontaneous vesicular release [18]. Analogously, the chloride pump eNpHR3.0 can increase [Cl−]in, which shifts EGABA and may facilitate depolarizing GABAergic/glycinergic signaling that could counteract the eNpHR3.0-mediated hyperpolarization [25, 30]. We here show that EGABA shifts are larger for photo-stimulation with blue as compared to yellow light (Fig. 7), in agreement with the increase in charge transfer for prolonged stimulation periods (Fig. 3). While manipulations of EGABA may be useful, e.g., for dissecting the contribution of GABAergic signaling to network dynamics under physiological or pathophysiological conditions [25], they also impose an experimental constraint on the use of eNpHR3.0 for neuronal silencing. We provide proof-of-principle evidence that EGABA shifts may transiently induce aberrant network activity in the post-stimulation period at P3–6 (Additional file 2: Figure S2A–C), whereas such effects were not observed at a later developmental stage (Additional file 2: Figure S2D–F), when chloride extrusion has been shown to be more efficient [34]. In the general case, the magnitude of EGABA shifts can hardly be predicted, as it depends on several parameters (chloride extrusion capacity, resting chloride conductance, etc.), and should be examined for a given experimental setting. Additionally, due to the fast deactivation kinetics of eNpHR3.0, abrupt termination of photo-stimulation can lead to rebound depolarization and action potential firing [30, 47]. This effect is due to network-based mechanisms [28], chloride loading, and/or activation of voltage-gated conductances (e.g., H-current, T-type Ca2+ current) and can be readily attenuated by replacing a step-like termination of photo-stimulation with a more gradual decrease in light power [18]. Finally, while our data do not provide direct evidence for photo-toxicity induced by prolonged illumination with blue light (Fig. 7 and Additional file 2: Figure S2), photo-stimulation invariably heats tissue and could thus affect a number of temperature-dependent physiological processes. Indeed, thermal constraints of optogenetics are well documented [48], further underscoring the need for well-designed control experiments. Taken together, our study provides a novel approach for long-term optogenetic silencing that is based on an optimization of photo-stimulation, rather than protein engineering. For short-term optogenetic inhibition, yellow light remains the preferred choice for its capability to induce large photo-currents and its favorable tissue penetration properties. However, when prolonged inhibition is required, photo-stimulation with blue light (either alone or in combination with yellow light) is advantageous due to its superior temporal stability.
Besides, our study also provides alternative photo-stimulation schemes for long-term inhibition, as we observed in oocytes that a green-violet combination outperformed blue light in terms of photo-current amplitudes without any obvious inactivation. In sum, our study provides easy-to-implement photo-stimulation approaches for the light-driven chloride pump eNpHR3.0 that are associated with an extraordinary temporal stability of pump currents and thus render eNpHR3.0 suitable for long-term neuronal inhibition.

All experimental procedures were carried out with approval from the local government and complied with European Union norms (Directive 2010/63/EU). Experiments were performed on acute brain slices prepared from mice of both sexes at postnatal day (P) 4–13. Pyramidal cell-specific expression of an eNpHR3.0-EYFP fusion protein was achieved by crossing homozygous female Emx1IREScre mice (The Jackson Laboratory, stock no. 005628) [49] to homozygous male mice of the Ai39 cre-reporting strain (The Jackson Laboratory, stock no. 014539) [50]. For experiments on SOM interneurons, homozygous SOMIREScre mice (The Jackson Laboratory, stock no. 013044) [51] were crossed to homozygous Ai39 mice. Animals were housed in standard cages with 12-h light/12-h dark cycles.

Preparation of brain slices

Animals were decapitated under deep isoflurane anesthesia. The brain was removed quickly and transferred into ice-cold saline containing (in mM) 125 NaCl, 4 KCl, 10 glucose, 1.25 NaH2PO4, 25 NaHCO3, 0.5 CaCl2, and 2.5 MgCl2, bubbled with 5% CO2/95% O2 (pH 7.4). Horizontal brain slices containing the hippocampus (350 μm) were cut on a vibratome and stored at room temperature for at least 1 h before use in artificial cerebrospinal fluid (ACSF) containing (in mM) 125 NaCl, 4 KCl, 10 glucose, 1.25 NaH2PO4, 25 NaHCO3, 2 CaCl2, and 1 MgCl2, bubbled with 5% CO2/95% O2 (pH 7.4). For recordings, slices were placed into a submerged-type recording chamber on the microscope stage (Nikon Eclipse FN1, Nikon Instruments Inc.) equipped with near-infrared differential interference contrast optics (ACSF flow rate ~ 3 ml min−1). Experiments were performed at near physiological temperature (32–34 °C).

Electrophysiology in brain slices

Electrophysiological signals were acquired using a Multiclamp 700B amplifier, a 16-bit AD/DA board (Digidata 1550A) and the software pClamp 10 (Molecular Devices). Signals were low-pass filtered at 3 kHz and sampled at 10 kHz. For patch-clamp recordings of photo-currents from CA1 pyramidal cells, glass pipettes (4–7 MΩ) were filled with the following solution (in mM): 40 KCl, 100 K+-gluconate, 1 CaCl2, 11 EGTA, 10 HEPES, 2 Mg2+-ATP, and 0.3 Na+-GTP (pH adjusted to 7.25 with KOH). Whole-cell voltage-clamp recordings were performed at a holding potential of − 70 mV. In whole-cell current-clamp measurements, the resting membrane potential was manually biased to about − 65 mV via current injection. Voltages were not corrected for liquid junction potential (LJP). Except for experiments illustrated in Fig. 1a–d, a blue-light pulse (488 nm, 3 s, 5 mW) was routinely applied at the end of each stimulation trial to accelerate the recovery of eNpHR3.0-mediated currents from inactivation (for an example, see Fig. 3a). The recovery from inactivation of eNpHR3.0-mediated currents (Fig. 1c) was fitted, separately for each cell, by a mono-exponential function of the following form: $$ \mathrm{recovery}=a\times e^{\frac{-\Delta t}{\tau }}+c $$ where Δt is the latency of the test pulse onset.
The offset c was constrained to 100%. Due to a high intercellular variability of input resistances, the amplitude of injected currents used to evoke action potential firing in current-clamp experiments (see Fig. 4) was separately set for each cell and kept constant throughout the recording. Using a series of repetitive current injections (1 s, 5-pA increments), the amplitude was determined from the largest current step that failed to induce action potentials under brief (2 s, 5 mW) photo-stimulation at 488 nm. For gramicidin perforated-patch current-clamp recordings from CA1 pyramidal cells, glass pipettes were filled with the following solution (in mM): 140 K+-gluconate, 1 CaCl2, 11 EGTA, 1 MgCl2, and 10 HEPES (pH adjusted to 7.3 with KOH), additionally supplemented with 50 μg/ml gramicidin. Here, measured voltages were offline-corrected for LJP (16.5 mV). Recordings were performed at zero current. Whole-cell voltage-clamp recordings of spontaneous PSCs were performed at a holding potential of − 70 mV without correction for LJP. At P3–6, glass pipettes were filled with (in mM) 40 KCl, 100 K+-gluconate, 1 CaCl2, 11 EGTA, 10 HEPES, 2 Mg2+-ATP, and 0.3 Na+-GTP (pH adjusted to 7.25 with KOH). Bursts of spontaneous PSCs were visually detected using the following criteria: (I) duration > 400 ms and (II) amplitude > 200 pA. For measurement of EPSCs at P11–12, glass pipettes were filled with (in mM) 8 KCl, 140 K+-gluconate, 1 CaCl2, 11 EGTA, 10 HEPES, 2 Mg2+-ATP, and 0.3 Na+-GTP (pH adjusted to 7.25 with KOH). EPSCs were detected using a template-matching algorithm implemented in pClamp 10. eNpHR3.0 DNA was cloned into oocyte expression vectors, based on the plasmid pGEMHE 22, a derivative of pGEM3z (Promega). NheI-linearized plasmid DNA was used for the in vitro generation of cRNA with the AmpliCap-MaxT7 High Yield Message Maker Kit (Epicentre Biotechnologies). Electrophysiology in oocytes X. laevis oocytes were injected with 30 ng eNpHR3.0 cRNA and incubated in medium containing 10 μM all-trans-retinal for 2 or 3 days before measurement. Two-electrode voltage-clamp recordings of photo-currents were made in Ringer's solution with different pH (110 mM NaCl, 5 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 5 mM HEPES, pH 7.6; 110 mM NaCl, 5 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 5 mM MES, pH 5.6; and 110 mM NaCl, 5 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 10 mM CAPSO, pH 9.6) or in NMG-Asp solution (110 mM NMG, 2 mM CaCl2, 1 mM MgCl2, 5 mM HEPES, pH adjusted to 7.6 by aspartate) at a holding potential of − 100, − 40, or 20 mV. Solutions with different chloride concentrations were prepared by mixing of Ringer's solution with NMG-Asp solution at different ratios. The recovery from inactivation of eNpHR3.0-mediated currents (Fig. 4f and Fig. 5c, d) was fitted, separately for each cell, by a bi-exponential function of the following form: $$ \mathrm{recovery}=a1\times {e}^{\frac{-\Delta t}{\tau 1}}+a2\times {e}^{\frac{-\Delta t}{\tau 2}}+c $$ where Δt is the latency of the test pulse onset. The offset c was constrained to 100%. The weighted time constant τw was computed as follows: $$ {\tau}_w=\tau 1\times \frac{a1}{a1+a2}+\tau 2\times \frac{a2}{a1+a2} $$ Optogenetic stimulation For measurements in brain slices, excitation was provided by a 488-nm diode laser (Cobolt MLD 488) and a 594-nm solid-state laser (Cobolt Mambo), intensity-modulated by an acousto-optic tunable filter (GH18A, Gooch & Housego) and coupled into a multimode 0.22 NA optical fiber with a core diameter of 200 μm (FG200LCC, Thorlabs GmbH). 
The tip of the fiber was positioned at an axial distance of ~ 0.5 mm to the surface of CA1. All power levels indicated were calibrated at the fiber tip, separately for each wavelength (LabMax-TO and OP-2 VIS sensor, Coherent). Photo-stimulation of eNpHR3.0 was performed using either continuous or high-frequency pulse-like (1 kHz, on/off 20/80%) stimulation patterns. The on/off time constant of the acousto-optic tunable filter was ≤ 6 μs, as quantified using a fast photodiode (PDA100A, Thorlabs). For oocyte measurements, a 532-nm laser, a 473-nm laser (Changchun New Industries Optoelectronics Tech), a 400-nm LED (ProLight Opto Technology), and a 590-nm LED (WINGER) were used as light sources. The light intensities at different wavelengths were measured with a Laser Check optical power meter (Coherent Inc.). Chemicals were obtained from Sigma (bicuculline methiodide), Tocris [DL-2-amino-5-phosphonopentanoic acid (APV), 6,7-dinitroquinoxaline-2,3(1H,4H)-dione (DNQX)], and Biotrend [tetrodotoxin (TTX)].

Experimental design and statistical analysis

Data were analyzed using pClamp 10, WinWCP, Microsoft Excel, and Matlab 2010a/2016a. Statistical analyses were performed using OriginPro 2018, Prism 7 and SPSS Statistics 22/24. All data are reported as mean ± standard error of the mean (SEM) (Additional file 3). Exact sample sizes for each experiment are given in the "Results" section. The Kolmogorov–Smirnov test was used to test for the normality of data. Parametric testing procedures were applied for normally distributed data; otherwise, non-parametric tests were used. In case of two-sample t tests and unequal group variances, Welch's correction was applied. In case of multiple comparisons, analysis of variance (ANOVA) was used (post hoc tests indicated in the "Results" section). P values (two-tailed tests) < 0.05 were considered statistically significant. All data generated or analyzed during this study are included in this article and its supplementary information.

Klapper SD, Swiersy A, Bamberg E, Busskamp V. Biophysical properties of optogenetic tools and their application for vision restoration approaches. Front Syst Neurosci. 2016;10:74. Moser T. Optogenetic stimulation of the auditory pathway for research and future prosthetics. Curr Opin Neurobiol. 2015;34:29–36. Bui AD, Alexander A, Soltesz I. Seizing control: from current treatments to optogenetic interventions in epilepsy. Neuroscientist. 2017;23(1):68–81. Govorunova EG, Sineshchekov OA, Janz R, Liu X, Spudich JL. Natural light-gated anion channels: a family of microbial rhodopsins for advanced optogenetics. Science. 2015;349(6248):647–650. Wietek J, Beltramo R, Scanziani M, Hegemann P, Oertner TG, Wiegert JS. An improved chloride-conducting channelrhodopsin for light-induced inhibition of neuronal activity in vivo. Sci Rep. 2015;5:14807. Berndt A, Lee SY, Wietek J, Ramakrishnan C, Steinberg EE, Rashid AJ, Kim H, Park S, Santoro A, Frankland PW, et al. Structural foundations of optogenetics: determinants of channelrhodopsin ion selectivity. Proc Natl Acad Sci U S A. 2016;113(4):822–9. Beck S, Yu-Strzelczyk J, Pauls D, Constantin OM, Gee CE, Ehmann N, Kittel RJ, Nagel G, Gao S. Synthetic light-activated ion channels for optogenetic activation and inhibition. Front Neurosci. 2018;12:643.
Bernal Sierra YA, Rost BR, Pofahl M, Fernandes AM, Kopton RA, Moser S, Holtkamp D, Masala N, Beed P, Tukker JJ, et al. Potassium channel-based optogenetic silencing. Nat Commun. 2018;9(1):4611. Siuda ER, McCall JG, Al-Hasani R, Shin G, Il Park S, Schmidt MJ, Anderson SL, Planer WJ, Rogers JA, Bruchas MR. Optodynamic simulation of beta-adrenergic receptor signalling. Nat Commun. 2015;6:8480. Masseck OA, Spoida K, Dalkara D, Maejima T, Rubelowski JM, Wallhorn L, Deneris ES, Herlitze S. Vertebrate cone opsins enable sustained and highly sensitive rapid control of Gi/o signaling in anxiety circuitry. Neuron. 2014;81(6):1263–73. Zhang F, Wang LP, Brauner M, Liewald JF, Kay K, Watzke N, Wood PG, Bamberg E, Nagel G, Gottschalk A, et al. Multimodal fast optical interrogation of neural circuitry. Nature. 2007;446(7136):633–9. Gradinaru V, Zhang F, Ramakrishnan C, Mattis J, Prakash R, Diester I, Goshen I, Thompson KR, Deisseroth K. Molecular and cellular approaches for diversifying and extending optogenetics. Cell. 2010;141(1):154–65. Chow BY, Han X, Dobry AS, Qian X, Chuong AS, Li M, Henninger MA, Belfort GM, Lin Y, Monahan PE, et al. High-performance genetically targetable optical neural silencing by light-driven proton pumps. Nature. 2010;463(7277):98–102. Chuong AS, Miri ML, Busskamp V, Matthews GA, Acker LC, Sorensen AT, Young A, Klapoetke NC, Henninger MA, Kodandaramaiah SB, et al. Noninvasive optical inhibition with a red-shifted microbial rhodopsin. Nat Neurosci. 2014;17(8):1123–9. Wiegert JS, Mahn M, Prigge M, Printz Y, Yizhar O. Silencing neurons: tools, applications, and experimental constraints. Neuron. 2017;95(3):504–29. Szabadics J, Varga C, Molnar G, Olah S, Barzo P, Tamas G. Excitatory effect of GABAergic axo-axonic cells in cortical microcircuits. Science. 2006;311(5758):233–5. Price GD, Trussell LO. Estimate of the chloride concentration in a central glutamatergic terminal: a gramicidin perforated-patch study on the calyx of Held. J Neurosci. 2006;26(44):11432–6. Mahn M, Prigge M, Ron S, Levy R, Yizhar O. Biophysical constraints of optogenetic inhibition at presynaptic terminals. Nat Neurosci. 2016;19(4):554–6. Kim JM, Hwa J, Garriga P, Reeves PJ, RajBhandary UL, Khorana HG. Light-driven activation of beta 2-adrenergic receptor signaling by a chimeric rhodopsin containing the beta 2-adrenergic receptor cytoplasmic loops. Biochemistry. 2005;44(7):2284–92. Airan RD, Thompson KR, Fenno LE, Bernstein H, Deisseroth K. Temporally precise in vivo control of intracellular signalling. Nature. 2009;458(7241):1025–9. Zhang F, Vierock J, Yizhar O, Fenno LE, Tsunoda S, Kianianmomeni A, Prigge M, Berndt A, Cushman J, Polle J, et al. The microbial opsin family of optogenetic tools. Cell. 2011;147(7):1446–57. Han X, Boyden ES. Multiple-color optical activation, silencing, and desynchronization of neural activity, with single-spike temporal resolution. PLoS One. 2007;2(3):e299. Tonnesen J, Sorensen AT, Deisseroth K, Lundberg C, Kokaia M. Optogenetic control of epileptiform activity. Proc Natl Acad Sci U S A. 2009;106(29):12162–7. Mattis J, Tye KM, Ferenczi EA, Ramakrishnan C, O'Shea DJ, Prakash R, Gunaydin LA, Hyun M, Fenno LE, Gradinaru V, et al. Principles for applying optogenetic tools derived from direct comparative analysis of microbial opsins. Nat Methods. 2011;9(2):159–72. Alfonsa H, Merricks EM, Codadu NK, Cunningham MO, Deisseroth K, Racca C, Trevelyan AJ. The contribution of raised intraneuronal chloride to epileptic network activity.
J Neurosci. 2015;35(20):7715–26. Bamberg E, Tittor J, Oesterhelt D. Light-driven proton or chloride pumping by halorhodopsin. Proc Natl Acad Sci U S A. 1993;90(2):639–43. Kummer M, Kirmse K, Witte OW, Holthoff K. Reliable in vivo identification of both GABAergic and glutamatergic neurons using Emx1-Cre driven fluorescent reporter expression. Cell Calcium. 2012;52(2):182–9. Flossmann T, Kaas T, Rahmati V, Kiebel SJ, Witte OW, Holthoff K, Kirmse K. Somatostatin interneurons promote neuronal synchrony in the neonatal hippocampus. Cell Rep. 2019;26(12):3173–82. Kanada S, Takeguchi Y, Murakami M, Ihara K, Kouyama T. Crystal structures of an O-like blue form and an anion-free yellow form of pharaonis halorhodopsin. J Mol Biol. 2011;413(1):162–76. Raimondo JV, Kay L, Ellender TJ, Akerman CJ. Optogenetic silencing strategies differ in their effects on inhibitory synaptic transmission. Nat Neurosci. 2012;15(8):1102–4. Zhu L, Polley N, Mathews GC, Delpire E. NKCC1 and KCC2 prevent hyperexcitability in the mouse hippocampus. Epilepsy Res. 2008;79(2–3):201–12. Ben-Ari Y, Cherubini E, Corradetti R, Gaiarsa JL. Giant synaptic potentials in immature rat CA3 hippocampal neurones. J Physiol. 1989;416:303–25. Dzhala VI, Talos DM, Sdrulla DA, Brumback AC, Mathews GC, Benke TA, Delpire E, Jensen FE, Staley KJ. NKCC1 transporter facilitates seizures in the developing brain. Nat Med. 2005;11(11):1205–13. Spoljaric A, Seja P, Spoljaric I, Virtanen MA, Lindfors J, Uvarov P, Summanen M, Crow AK, Hsueh B, Puskarjov M, et al. Vasopressin excites interneurons to suppress hippocampal network activity across a broad span of brain maturity at birth. Proc Natl Acad Sci U S A. 2017;114(50):E10819–28. Hegemann P, Oesterbelt D, Steiner M. The photocycle of the chloride pump halorhodopsin. I: azide-catalyzed deprotonation of the chromophore is a side reaction of photocycle intermediates inactivating the pump. EMBO J. 1985;4(9):2347–50. Lanyi JK. Mechanism of base-catalyzed Schiff base deprotonation in halorhodopsin. Biochemistry. 1986;25(21):6706–11. Steiner M, Oesterhelt D. Isolation and properties of the native chromoprotein halorhodopsin. EMBO J. 1983;2(8):1379–85. Kouyama T, Kanada S, Takeguchi Y, Narusawa A, Murakami M, Ihara K. Crystal structure of the light-driven chloride pump halorhodopsin from Natronomonas pharaonis. J Mol Biol. 2010;396(3):564–79. Varo G, Brown LS, Sasaki J, Kandori H, Maeda A, Needleman R, Lanyi JK. Light-driven chloride ion transport by halorhodopsin from Natronobacterium pharaonis. 1. The photochemical cycle. Biochemistry. 1995;34(44):14490–9. Chizhov I, Engelhard M. Temperature and halide dependence of the photocycle of halorhodopsin from Natronobacterium pharaonis. Biophys J. 2001;81(3):1600–12. Mevorat-Kaplan K, Brumfeld V, Engelhard M, Sheves M. Effect of anions on the photocycle of halorhodopsin. Substitution of chloride with formate anion. Biochemistry. 2005;44(43):14231–7. Oesterhelt D, Hess B. Reversible photolysis of the purple complex in the purple membrane of Halobacterium halobium. Eur J Biochem. 1973;37(2):316–26. Ormos P, Dancshazy Z, Karvaly B. Mechanism of generation and regulation of photopotential by bacteriorhodopsin in bimolecular lipid membrane. Biochim Biophys Acta. 1978;503(2):304–15. Nagel G, Mockel B, Buldt G, Bamberg E. Functional expression of bacteriorhodopsin in oocytes allows direct measurement of voltage dependence of light induced H+ pumping. FEBS Lett. 1995;377(2):263–6. Kouyama T, Kawaguchi H, Nakanishi T, Kubo H, Murakami M. 
Crystal structures of the L1, L2, N, and O states of pharaonis halorhodopsin. Biophys J. 2015;108(11):2680–90. Bedbrook CN, Yang KK, Robinson JE, Gradinaru V, Arnold FH. Machine learning-guided channelrhodopsin engineering enables minimally-invasive optogenetics. Nat Methods. 2019. Epub ahead of print. https://0-doi-org.brum.beds.ac.uk/10.1038/s41592-019-0583-8. Arrenberg AB, Del Bene F, Baier H. Optical control of zebrafish behavior with halorhodopsin. Proc Natl Acad Sci U S A. 2009;106(42):17968–73. Owen SF, Liu MH, Kreitzer AC. Thermal constraints on in vivo optogenetic manipulations. Nat Neurosci. 2019;22(7):1061–5. Gorski JA, Talley T, Qiu M, Puelles L, Rubenstein JL, Jones KR. Cortical excitatory neurons and glia, but not GABAergic neurons, are produced in the Emx1-expressing lineage. J Neurosci. 2002;22(15):6309–14. Madisen L, Mao T, Koch H, Zhuo JM, Berenyi A, Fujisawa S, Hsu YW, Garcia AJ 3rd, Gu X, Zanella S, et al. A toolbox of Cre-dependent optogenetic transgenic mice for light-induced activation and silencing. Nat Neurosci. 2012;15(5):793–802. Taniguchi H, He M, Wu P, Kim S, Paik R, Sugino K, Kvitsiani D, Fu Y, Lu J, Lin Y, et al. A resource of Cre driver lines for genetic targeting of GABAergic neurons in cerebral cortex. Neuron. 2011;71(6):995–1013. We thank Ina Ingrisch for the technical assistance. This work was supported by the Priority Program 1665 (HO 2156/3–2 to KH, KI 1816/1–2 to KK), the Collaborative Research Center/Transregio 166 (B3 to KH, KK, A3 to GN) and the Research Unit 3004 (KI 1816/5-1 to KK) of the German Research Foundation, and the Interdisciplinary Centre for Clinical Research Jena (KK, KH). Chuanqiang Zhang, Shang Yang, and Tom Flossmann contributed equally to this work. Knut Holthoff and Knut Kirmse jointly supervised this work. Hans-Berger Department of Neurology, Jena University Hospital, Am Klinikum 1, 07747, Jena, Germany Chuanqiang Zhang, Tom Flossmann, Otto W. Witte, Knut Holthoff & Knut Kirmse Present Address: Laboratory of Sensory Processing, Brain Mind Institute, Faculty of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015, Lausanne, Switzerland Chuanqiang Zhang Institute for Molecular Plant Physiology and Biophysics, Biocenter, & Institute of Physiology – Neurophysiology, Julius-Maximilians-University of Würzburg, 97070, Würzburg, Germany Shang Yang, Shiqiang Gao & Georg Nagel Present Address: Centre for Discovery Brain Sciences, Biomedical Sciences, University of Edinburgh, Edinburgh, EH8 9XD, UK Tom Flossmann Shang Yang Shiqiang Gao Otto W. Witte Georg Nagel Knut Holthoff Knut Kirmse KK, KH, CZ, and GN conceived the study and designed the experiments. CZ, SY, and TF performed the experiments. CZ, SY, SG, TF, and KK analyzed the data. All authors contributed to the data interpretation and manuscript preparation. All authors read and approved the final manuscript. Correspondence to Knut Kirmse. Co-stimulation at 594 nm and 488 nm attenuates the inactivation of eNpHR3.0-mediated currents in somatostatin (SOM) interneurons. A, Sample voltage-clamp recordings from an individual EYFP+ SOM interneuron in an acute slice obtained from a SOMIREScre:eNpHR3.0-EYFPLSL mouse in the presence of TTX (0.5 μM). The cell was stimulated for 30 s either continuously at 594 nm (5 mW at fiber tip) or in an alternating manner at 488/594 nm (1 kHz, 50/50% duty cycle, 5 mW each at fiber tip). B, Quantification of Ipeak and Ilate. C, Co-stimulation with blue light substantially reduced inactivation of eNpHR3.0-mediated photo-currents (P3–4). 
Data are presented as mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. The data set was obtained from cells included in [28]. Differential effects of eNpHR3.0-mediated chloride loading on network activity in acute hippocampal slices. A, Time-course of bursts of spontaneous postsynaptic currents (PSCs) at P3–6. PSC bursts were virtually absent in the presence of the NKCC1 inhibitor bumetanide (10 μM). Subsequent blue-light photo-stimulation (60 s, 488 nm, 5 mW) of Emx1+ pyramidal cells led to a transient reappearance of PSC bursts. B, Sample voltage-clamp recordings from an individual cell showing PSC bursts (time points as indicated in A). Note that PSC bursts reappear following photo-stimulation (bottom trace). C, Quantification of burst count per 20-s time bin. D, Sample voltage-clamp recording of spontaneous EPSCs isolated by reversal potential before photo-stimulation (top), immediately after the offset of photo-stimulation (488 nm, 5 min, 5 mW; middle) and ~ 10 min after photo-stimulation offset (bottom). E, Time-course of EPSC frequency at P11–12 (normalized to the mean of the pre-stimulation period). Note the long-term stability of EPSC frequency after photo-stimulation. Brief interruptions of recordings used to monitor access resistance are not depicted for clarity. F, Absolute EPSC frequencies before and after photo-stimulation. Data are presented as mean ± SEM. n.s. – not significant, **P < 0.01. Additional file 3. Raw data values used for statistical comparisons. Zhang, C., Yang, S., Flossmann, T. et al. Optimized photo-stimulation of halorhodopsin for long-term neuronal inhibition. BMC Biol 17, 95 (2019). https://0-doi-org.brum.beds.ac.uk/10.1186/s12915-019-0717-6 Accepted: 30 October 2019 Halorhodopsin eNpHR3.0 Optogenetic
11.2 Electrical machines - generators and motors (ESCQ4) We have seen that when a conductor is moved in a magnetic field or when a magnet is moved near a conductor, a current flows in the conductor. The amount of current depends on: the speed at which the conductor experiences a changing magnetic field, the number of coils that make up the conductor, and the position of the plane of the conductor with respect to the magnetic field. The effect of the orientation of the conductor with respect to the magnetic field is illustrated in Figure 11.1. Figure 11.1: Series of figures showing that the magnetic flux through a conductor is dependent on the angle that the plane of the conductor makes with the magnetic field. The greatest flux passes through the conductor when the plane of the conductor is perpendicular to the magnetic field lines as in Figure 11.1 (a). The number of field lines passing through the conductor decreases as the conductor rotates, until it is parallel to the magnetic field, as in Figure 11.1 (c). If the emf induced and the current in the conductor were plotted as a function of the angle between the plane of the conductor and the magnetic field for a conductor that has a constant speed of rotation, then the induced emf and current would vary as shown in Figure 11.2. The current alternates around zero and is known as an alternating current (abbreviated AC). Figure 11.2: Variation of induced emf and current as the angle between the plane of a conductor and the magnetic field changes. The angle changes as a function of time so the above plots can be mapped onto the time axis as well. Recall Faraday's Law, which you learnt about in Grade 11: Faraday's Law The emf, \(\mathcal{E}\), induced around a single loop of conductor is proportional to the rate of change of the magnetic flux, \(\phi\), through the area, \(A\), of the loop. This can be stated mathematically as: \[\mathcal{E} =-N\frac{\Delta \phi }{\Delta t}\] where \(\phi =B·A\cos\theta\) and \(B\) is the strength of the magnetic field. Faraday's Law relates induced emf to the rate of change of magnetic flux, which is the product of the magnetic field strength and the cross-sectional area the field lines pass through. The cross-sectional area changes as the loop of the conductor rotates, which gives rise to the \(\cos\theta\) factor. \(\theta\) is the angle between the normal to the surface area of the loop of the conductor and the magnetic field. As the closed loop conductor changes orientation with respect to the magnetic field, the amount of magnetic flux through the area of the loop changes and an emf is induced in the conducting loop.

Electrical generators (ESCQ5) AC generator (ESCQ6) The principle of rotating a conductor in a magnetic field to generate current is used in electrical generators. A generator converts mechanical energy (motion) into electrical energy. A generator is a device that converts mechanical energy into electrical energy. The layout of a simple AC generator is shown in Figure 11.3. The conductor is formed of a coil of wire, placed inside a magnetic field. The conductor is manually rotated within the magnetic field. This generates an alternating emf. The alternating current needs to be transmitted from the conductor to the load, which is the system requiring the electrical energy to function. The load and the conductor are connected by a slip ring. A slip ring is a connector which is able to transmit electricity between rotating portions of a machine.
It is made up of a ring and brushes, one of which is stationary with respect to the other. Here, the ring attaches to the conductor and the brushes are attached to the load. Current is generated in the rotating conductor and passes into the slip rings, which rotate against the brushes. The current is transmitted through the brushes into the load, and the system is thus powered. Figure 11.3: Layout of an alternating current generator. The direction of the current changes with every half turn of the coil. As one side of the loop moves to the other pole of the magnetic field, the current in the loop changes direction. This type of current which changes direction is known as alternating current and Figure 11.4 shows how it comes about as the conductor rotates. Figure 11.4: The red (solid) dots represent current coming out of the page and the crosses show current going into the page. AC generators are also known as alternators. They are found in motor cars to charge the car battery.

DC generator (ESCQ7) A simple DC generator is constructed the same way as an AC generator except that there is one slip ring which is split into two pieces, called a commutator, so the current in the external circuit does not change direction. The layout of a DC generator is shown in Figure 11.5. The split-ring commutator accommodates the change in direction of the current in the loop, thus creating direct current (DC) going through the brushes and out to the circuit. The current in the loop does reverse direction but if you look carefully at the 2D image you will see that the section of the split-ring commutator also changes which side of the circuit it is touching. If the current changes direction at the same time that the commutator swaps sides, the external circuit will always have current going in the same direction. Figure 11.5: Layout of a direct current generator. The shape of the emf from a DC generator is shown in Figure 11.6. The emf is not steady but is the absolute value of a sine/cosine wave. Figure 11.6: Variation of emf in a DC generator.

AC versus DC generators (ESCQ8) The problems involved with making and breaking electrical contact with a moving coil are sparking and heat, especially if the generator is turning at high speed. If the atmosphere surrounding the machine contains flammable or explosive vapours, the practical problems of spark-producing brush contacts are even greater. If the magnetic field, rather than the coil/conductor, is rotated, then brushes are not needed in an AC generator (alternator), so an alternator will not have the same problems as DC generators. The same benefits of AC over DC for generator design also apply to electric motors. While DC motors need brushes to make electrical contact with moving coils of wire, AC motors do not. In fact, AC and DC motor designs are very similar to their generator counterparts. The AC motor depends on the reversing magnetic field produced by alternating current through its stationary coils of wire to make the magnet rotate. The DC motor depends on the brush contacts making and breaking connections to reverse current through the rotating coil every 1/2 rotation (180 degrees).

Electric motors (ESCQ9) The basic principles of operation for an electric motor are the same as those of a generator, except that a motor converts electrical energy into mechanical energy (motion). An electric motor is a device that converts electrical energy into mechanical energy.
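Before turning to the force picture used for motors, a short worked example links Faraday's Law to the alternating emf of the generator described above. All numerical values here are chosen purely for illustration and are not taken from the figures. For a coil of \(N\) turns rotating at a constant angular speed \(\omega\) in a uniform field \(B\), the flux through the coil is \(\phi = BA\cos(\omega t)\), so the instantaneous rate of change of the flux gives \[\mathcal{E} = NBA\omega \sin(\omega t),\] a sinusoidal emf with peak value \(\mathcal{E}_{max} = NBA\omega\). Taking, for example, \(N = 100\), \(B = \text{0.2 T}\), \(A = \text{0.01 m$^2$}\) and a rotation rate of \(\text{50 Hz}\) (so \(\omega = 2\pi \times 50 \approx \text{314 rad·s$^{-1}$}\)): \[\mathcal{E}_{max} = (100)(\text{0.2})(\text{0.01})(314) \approx \text{63 V}.\] This is why the output in Figure 11.2 alternates sinusoidally: the emf follows \(\sin(\omega t)\) as the coil turns.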
If one were to place a moving charged particle in a magnetic field, it would experience a force called the Lorentz force. The Lorentz Force The Lorentz force is the force experienced by a moving charged particle in an electric and magnetic field. The magnetic component is: \[F=qvB\] where \(F\) is the force (in newtons, N), \(q\) is the electric charge (in coulombs, C), \(v\) is the velocity of the charged particle (in \(\text{m·s$^{-1}$}\)) and \(B\) is the magnetic field strength (in teslas, T). In this diagram a positive charge is shown moving between two opposite poles of magnets. The direction of the charge's motion is indicated by the orange arrow. It will experience a Lorentz force which will be in the direction of the green arrow. A current-carrying conductor, where the current is in the direction of the orange arrow, will also experience a magnetic force, the green arrow, due to the Lorentz force on the individual charges moving in the current flow. If the direction of the current is reversed, for the same magnetic field direction, then the direction of the magnetic force will also be reversed, as indicated in this diagram. We can see that if there are two parallel conductors with currents in opposite directions, they will experience magnetic forces in opposite directions. An electric motor works by using a source of emf to make a current flow in a loop of conductor such that the Lorentz forces on opposite sides of the loop are in opposite directions, which can cause the loop to rotate about a central axis. The force on a current-carrying conductor due to a magnetic field is described by Ampere's law. The direction of the magnetic force is perpendicular to both the direction of the flow of current and the direction of the magnetic field and can be found using the Right Hand Rule as shown in the picture below. Use your right hand; your first finger points in the direction of the current, your second finger in the direction of the magnetic field and your thumb will then point in the direction of the force. Both motors and generators can be explained in terms of a coil that rotates in a magnetic field. In a generator the coil is attached to an external circuit and is turned, resulting in a changing flux that induces an emf. In a motor, a current-carrying coil in a magnetic field experiences a force on both sides of the coil, creating a twisting force (called a torque, pronounced like 'talk') which makes it turn. If the current is AC, the two slip rings are required to create an AC motor. An AC motor is shown in Figure 11.7. Figure 11.7: Layout of an alternating current motor. If the current is DC, split-ring commutators are required to create a DC motor. This is shown in Figure 11.8. Figure 11.8: Layout of a direct current motor.

Real-life applications (ESCQB) A car contains an alternator. When the car's engine is running the alternator charges its battery and powers the car's electrical system. Try to find out the different current values produced by alternators for different types of machines. Compare these to understand what numbers make sense in the real world. You will find different values for cars, trucks, buses, boats etc. Try to find out what other machines might have alternators. A car also contains a DC electric motor, the starter motor, to turn over the engine to start it. A starter motor consists of a very powerful DC electric motor and a starter solenoid that is attached to the motor.
A starter motor requires a very high current to crank the engine and is connected to the battery with large cables to carry the large current. In order to produce electricity for mass distribution (to homes, offices, factories and so forth), AC generators are usually used. The electricity produced by massive power plants usually has a low voltage which is converted to high voltage. It is more efficient to distribute electricity over long distances in the form of high voltage power lines. The high voltages are then converted to 240 V for consumption in homes and offices. This is usually done within a few kilometres of where it will be used. Figure 11.9: AC generators are used at power plants (all types, hydro- and coal-plants shown) to generate electricity.

Generators and motors State the difference between a generator and a motor. An electrical generator is a mechanical device to convert energy from a source into electrical energy. An electrical motor is a mechanical device to convert electrical energy from a source into another form of energy. Use Faraday's Law to explain why a current is induced in a coil that is rotated in a magnetic field. Faraday's law says that a changing magnetic flux can induce an emf; when the coil rotates in a magnetic field, the rotation can change the flux, thereby inducing an emf. If the rotation of the coil is such that the flux doesn't change, i.e. the surface of the coil remains parallel to the magnetic field, then there will be no induced emf. Explain the basic principle of an AC generator in which a coil is mechanically rotated in a magnetic field. Draw a diagram to support your answer. Solution not yet available Explain how a DC generator works. Draw a diagram to support your answer. Also, describe how a DC generator differs from an AC generator. Explain why a current-carrying coil placed in a magnetic field (but not parallel to the field) will turn. Refer to the force exerted on moving charges by a magnetic field and the torque on the coil. A current-carrying coil in a magnetic field experiences a force on both sides of the coil that are not parallel to the magnetic field, creating a twisting force (called a torque) which makes it turn. Any coil carrying current can feel a force in a magnetic field. The force is due to the magnetic component of the Lorentz force on the moving charges in the conductor, as described by Ampere's Law. The force on opposite sides of the coil will be in opposite directions because the charges are moving in opposite directions. Explain the basic principle of an electric motor. Draw a diagram to support your answer. Give examples of the use of AC and DC generators. Cars (both AC and DC), electricity generation (AC only), anywhere where a power supply is needed. Give examples of the uses of motors. Pumps, fans, appliances, power tools, household appliances, office equipment.
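As an illustrative calculation related to the question on the force and torque above, all values here being chosen arbitrarily for illustration: the magnitude of the magnetic component of the Lorentz force on a single charge is \(F = qvB\). For an electron with \(q = \text{1.6} \times 10^{-19}\text{ C}\) moving at \(v = 2 \times 10^{6}\text{ m·s$^{-1}$}\) perpendicular to a field of \(B = \text{0.5 T}\): \[F = (\text{1.6} \times 10^{-19})(2 \times 10^{6})(\text{0.5}) = \text{1.6} \times 10^{-13}\text{ N}.\] Although this is tiny for one charge, a current-carrying side of a motor coil contains an enormous number of moving charges, so the combined force on each side of the coil is large enough to produce the torque that turns the motor.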
Selective fluorescence labeling: time-lapse enzyme visualization during sugarcane hydrolysis Makiko Imai1, Asako Mihashi2, Tomoya Imai1, Satoshi Kimura1,3, Tomohiko Matsuzawa4, Katsuro Yaoi4, Nozomu Shibata5, Hiroshi Kakeshita5, Kazuaki Igarashi5, Yoshinori Kobayashi2 & Junji Sugiyama ORCID: orcid.org/0000-0002-5388-49251,6 Enzymatic biomass saccharification is an important process for bioethanol production. Hitherto, numerous cellulase cocktails (crude enzyme) have been developed to improve enzymatic activity. For this purpose, the synergy of incorporating hydrolase functionality within a cellulase cocktail is a key function. However, such synergistic action, by potentially numerous different enzyme types, on biomass tissue has not been considered despite the importance toward the realistic case of biomass saccharification. This study aims to visualize the behavior of each of the key cellulase components on biomass tissue during saccharification. Time-lapse fluorescence microscopy observations were conducted during saccharification of a thin transverse sugarcane section to monitor enzymes modified with a fluorescence dye. Statistical image analysis successfully demonstrated a unique adsorption/desorption behavior of each enzyme component. Particularly, the behavior of endoxylanase10 (Xyn10), which was recently discovered from Penicillium sp. as a high-performance xylanase, displayed remarkable adsorption on tissues of sugarcane, which accounts for the superior activity of the cellulase mixture with Xyn10. Development of new alternative sources of energy is a global hot topic, and recently a wealth of interdependent research has focused on the use of inedible biomass such as lignocellulose. Among the numerous biomass materials, sugarcane has received significant attention at the industrial scale as one of the lignocelluloses for bioethanol production, particularly, with bagasse (fiber remaining after extracting sugarcane juice) being a focus of attention. As a first step to produce bioethanol from lignocellulose, it is necessary to hydrolyze cellulose and hemicellulose polymers into their respective monomers. Investigations have recently focused on enzymatic hydrolysis as the method of hydrolysis because of the positive environmental assessment received, despite several problems in this process. For example, the yield of saccharification reaches the limit by low concentrations of cellulase cocktail [1]. The mechanism of this phenomenon still requires elucidation; however, studies have suggested a complicated process of nonproductive and nonspecific adsorption to substrates [2, 3], deactivation of enzymes [4], product inhibition [5], etc. To circumvent the limitation in hydrolysis activity, large amounts of enzymes are required. Furthermore, the enzyme cost is another factor to consider when commercializing [6, 7]. Therefore, there is a demand to develop new strategies that allow efficient hydrolysis using minimum amounts of enzymes. The native crude cellulase is composed of multiple enzyme components, each of which offers individual function and specific activity. The three main components are cellobiohydrolase (CBH), endoglucanase (EG), and β-glucosidase (BGL). Numerous reports have detailed the synergistic effect among these components [8,9,10]. Enzyme–substrate specificity has received significant attention that includes the development of visually observing their interactions. 
Classically, gold-labeled individual enzymes were visualized by electron microscopy [11, 12], and thereafter, cellulose-binding modules (CBMs) labeled with fluorescein isothiocyanate were observed [13, 14]. The single-molecule motion of a green fluorescent protein-tagged CBM on a cellulose crystal of Valonia ventricosa was analyzed [15]. Furthermore, a fluorescence resonance energy-transfer technique [16] demonstrated that cellulases were located only a few nm from each other on the surface of a cellulose microfibril. More impressively, by high-speed atomic force microscopy, the running motion of CBH I particles on a cellulose microfibril from the cell wall of Cladophora sp. was directly visualized [17]. The authors investigated further and observed a type of "traffic jam" on the cellulose surface, and by theoretical analysis, elucidated a possible mechanism of the enzyme–substrate interaction [18]. As described above, the observation of 'individual' cellulase components has been successfully reported. Hitherto, however, such observations have not been directed toward the visualization of each enzyme component in a cellulase mixture working synergistically with real biomass tissue. Therefore, the focus of this study was to visualize the pattern of adsorption and desorption of each enzyme component acting on a section of sugarcane by using fluorescence microscopy combined with a newly developed image analysis technique. Furthermore, this study also focused on xylanases as important enzymes for the hydrolysis of biomass. Two xylanases were selected for the visualization experiment: endoxylanase III (Xyn III) from Trichoderma reesei, well known to be efficient; and endoxylanase10 (Xyn10) from Penicillium sp., a recently discovered high-performance enzyme.

Enzyme preparation

The enzymes used in this study were CBH I (TrCel7A), CBH II (TrCel6A), EG I (TrCel7B), EG II (TrCel5A), EG IV (TrCel61A), Xyn III, β-xylosidase (TrXyl3A, BXL), BGL I, and Xyn10 (PspXyn10). CBH I, CBH II, EG I, EG II, EG IV, Xyn III and BXL were derived from the T. reesei strain PC-3-7 [19] and BGL I was derived from Aspergillus aculeatus [20]. These eight enzymes were heterologously expressed in A. oryzae (Ozeki Co. Ltd., Hyogo, Japan) following the method reported by Kawai et al. [21]. Each enzyme was purified from the culture supernatant of A. oryzae cells by hydrophobic chromatography (TOYOPEARL® Butyl-650) followed by anion exchange chromatography (TOYOPEARL® DEAE-650). To prepare Xyn10 from Penicillium sp., the pspxyn10 gene was amplified from pUC-Pcbh1-pspxyn10-amdS plasmid [22] by a polymerase chain reaction and expressed in A. oryzae cells under the control of an improved enoA promoter (PenoA142f) [23] that harbored 12 tandem repeats of the cis-acting element (region III) of the agdA promoter [24]. A. oryzae cells expressing the pspxyn10 gene were cultured in a DP medium (2% dextrin hydrate, 1% peptone) containing 0.5% potassium dihydrogen phosphate, 0.05% magnesium sulfate, 0.187% l-glutamic acid monosodium salt, and 0.003% l-methionine at 30 °C, 105 rpm for 3 days. After cultivation, the A. oryzae cells were removed by filtration (0.45 μm) and Xyn10 was purified as described above.

Enzyme labeling with a fluorescent dye

Herein, CBH I, CBH II, EG I, EG II, Xyn III, and Xyn10 were labeled with a fluorescent dye as these enzymes are the main components of a wild-type cellulase and have important functions.
Each enzyme was labeled with Alexa Fluor® 546 NHS Ester (Invitrogen, California, USA) in accordance with an attached instruction. The fluorescent molecule forms a covalent bond with a primary amine group in enzyme. Determination of the protein concentration was performed using a Quick Start™ Bradford protein assay (Bio-Rad, California, USA), and a gamma-globulin standard was used throughout this study. Bio-Gel® P-4Gel fine (Bio-Rad, wet bead size 45–90 µm, molecular weight exclusion limit of > 4000) was used to separate the labeled enzyme from the free dye. The absorbance at 554 nm of the labeled enzyme solution was measured to calculate the degree of labeling (DL), using the following formula: $${\text{DL}} = \, \left( {A_{ 5 5 4} \times k} \right) \, / \, \left( {\mu_{\text{ext}} \times C_{\text{protein}} } \right),$$ where DL is the amount of moles of dye per mole of protein, A554 is the absorbance at 554 nm, k is a dilution factor, µext is the molar extinction coefficient of Alexa Fluor® 546 NHS Ester at 554 nm (104,000 cm−1M−1), and Cprotein is the protein concentration (M). Assessment of labeled xylanases To compare the enzymatic activity of two xylanases, a saccharification test was investigated in the presence of cellulase mixture including Xyn III or Xyn10. The substrate, sugarcane (Saccharum officinarum) bagasse powder, was sieved through a 1-mm mesh and pretreated by autoclaving in 1% sodium hydroxide solution at 120 °C for 20 min. The composition was estimated as: cellulose (63%), hemicellulose (18%), lignin (7.6%), and ash (4.0%). The bagasse (10 mg on a dry matter basis) was hydrolyzed in 0.1 M acetate buffer (pH 5.0) at 50 °C, shaking at 150 rpm. The total solution volume was 1 mL and the enzyme concentration was 3 mg/g of substrate. The enzyme composition is the same as that described later in the microscopy section. The experiments were performed for both nonlabeled and labeled xylanases. The supernatant was collected at 5, 24, 48, and 96 h to measure d-glucose yield using a CII Test Wako kit (Wako, Osaka, Japan), and d-xylose yield using a d-xylose kit (Megazyme, Wicklow, Ireland). Fluorescence microscopy of a selectively labeled enzyme in a cellulase cocktail Transverse sections (30-µm-thick) were cut from a stem of sugarcane harvested in Okinawa, Japan, by a microtome equipped with a freezing stage. The sections were treated in 0.5% sodium hydroxide using an oil bath at 100 °C for 1 h. After washing thoroughly, the treated sections were used as the substrate. The enzyme mixture comprised purified components of CBH I 35 wt%, CBH II 20 wt%, EG I 15 wt%, EG II 5 wt%, EG IV 5 wt%, BGL I 5 wt%, BXL 5 wt%, and Xyn III 10 wt%. One of the enzymes was replaced with a fluorescent-labeled enzyme at a fixed ratio of 5 wt% (total enzyme basis); for example, a system comprising labeled CBH I at 5 wt% and nonlabeled CBH I at 30 wt%. For comparison purpose, Xyn III was replaced with Xyn10 to visualize the functional difference between the two xylanases. A pretreated section was mounted on a glass slide together with a 25-μL aliquot of an enzyme mixture (~ 40 mg/g biomass). A cover slip was then placed on top of the specimen and sealed with nail polish to prevent water evaporating during the reaction. The preparation was performed on a thermostage, maintained at 50 °C, with an inverted fluorescent microscope (IX71, Olympus, Tokyo, Japan) under a constant illumination flux from a super-high pressure mercury lamp and a 4× objective lens (UPlanFLN, NA: 0.13, Olympus). 
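As a brief aside, the degree-of-labeling formula introduced above is simple enough to check with a short script. The sketch below is illustrative only; the absorbance, dilution factor, and protein concentration are made-up values, not measurements from this study.

```python
# Degree of labeling (DL): moles of dye per mole of protein.
# DL = (A554 * k) / (mu_ext * C_protein)

MU_EXT = 104_000.0  # molar extinction coefficient of Alexa Fluor 546 at 554 nm, cm^-1 M^-1

def degree_of_labeling(a554, dilution_factor, protein_conc_molar):
    """Return moles of dye per mole of protein."""
    return (a554 * dilution_factor) / (MU_EXT * protein_conc_molar)

# Purely illustrative numbers (not data from the study):
dl = degree_of_labeling(a554=0.25, dilution_factor=2.0, protein_conc_molar=5e-6)
print(f"DL = {dl:.2f} dye molecules per protein")
```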
Images (1600 × 1200 pixels, 8 bit RGB) were recorded every 5 min for 360 min in fluorescent mode with a charge-coupled device camera having an exposure time set at 500 ms (DP 73, Olympus). As the filter set, TRIRC-B (Semrock, N.Y., USA) was used comprising a bandpass excitation filter (543 nm/22 nm), a dichroic mirror (> 562 nm), and a bandpass emission filter (593 nm/40 nm). No auto-fluorescence was detected in the presence of a nonfluorescent-labeled enzyme subjected to the same conditions. Relationship between the number of labeled enzymes and fluorescence image intensity Prior to image data interpretation, the relationship between the fluorescence intensity of the microscopic image and the labeled enzyme concentration was determined. A series of 2-μL enzyme-labeled solutions, at various concentrations, were placed in a circle (φ = 8 mm) surrounded by water-resistant fluororesin on a glass slide, TF0808 (MATSUNAMI, Osaka, Japan). Thereafter, a cover slip was placed on the solution and observations made using fluorescent microscopy under the same conditions as previously described. The average fluorescence intensity was calculated from five different positions using ImageJ software, and plotted against the corresponding enzyme concentration. Time-lapse fluorescence profiles from two-dimensional images A stack of fluorescent images (at 5-min intervals) were carefully aligned by the registration algorithm proposed by Thévenaz et al. [25] as a plugin for ImageJ. A 500 × 500 pixel region was cropped, wherein one complete vascular bundle (VB) from the inner part of the stem was recorded. After conversion to a gray-scale, and noise reduction by median filtering, the intensity profiles from each pixel in an image stack were taken as a function of time. The 250,000 time-dependent intensity profiles were then classified into eight representative profiles by the k-Means algorithm, and the corresponding regions associated with the eight profiles were contour-mapped into two-dimensional images. The number of clusters was chosen to be slightly larger than the number of cell types in the region of interest: phloem, bundle sheath, metaxylem and parenchyma, which were expected to show different susceptibilities to the enzymatic attack. All calculations were performed in python 3.6 using the scikit-learn v0.19.2 [26] data mining tool. Xylanase activity: effect of labeling As xylanase is known to be a key enzyme for the saccharification of biomass, the investigation herein studied two endo-β-xylanases, Xyn III and Xyn10, both of which belong to the glycoside hydrolase family GH-10 in the CAZy database. Xyn III was observed by Xu et al. [27] to be a highly active xylanase enzyme, while Xyn10, developed by Kao Corporation, Tokyo, Japan [22], exhibits an even higher activity. Xyn III has no carbohydrate-binding module (CBM) and shows a high affinity to soluble xylan [28], while Xyn10 is an endo-type xylanase with a CBM 1 that demonstrates a high affinity to the surface of crystalline cellulose [22]. As previously reported [22], Xyn10 demonstrated a higher activity than Xyn III against alkali-pretreated sugarcane bagasse powder (Fig. 1), reconfirming the excellent performance of Xyn10. Saccharification in the presence of labeled and nonlabeled enzymes. Glucose yield was calculated by measuring glucose amount divided by the total glucose amount expected from cellulose. Xylose yield was calculated in the same way. a Glucose and xylose yields in the presence of labeled or nonlabeled Xyn III. 
b Glucose and xylose yields in the presence of labeled or nonlabeled Xyn10. Open circle: glucose by nonlabeled xylanase, filled circle: glucose by labeled xylanase, open square: xylose by nonlabeled xylanase, filled square: xylose by labeled xylanase. The error bars indicate the standard deviation of the measured values.
Furthermore, a remarkable finding in this study for Xyn10 is that the production of xylose precedes that of glucose, until 48 h of treatment (Fig. 1b). As part of the xylan structure is tightly bound to cellulose fibrils [29], effective xylanases may remove xylan from the surface of cellulose, which subsequently allows cellulose to become accessible to cellulase. Finally, the reaction progress appears to be similar for fluorescent-labeled and nonlabeled enzyme systems (Fig. 1), indicating that the enzyme labeling does not influence xylanase hydrolysis performance. Additionally, there was no significant influence on hydrolysis performance for the other enzymes analyzed in this study (data not shown). Hence, enzyme labeling does not influence data interpretation derived from fluorescence microscopy in this study. Fluorescence intensity of the labeled enzyme solution at various concentrations is plotted in Fig. 2a. Excellent linearity between the fluorescence microscopy intensities and the applied dose of individual enzymes was demonstrated. The intensity per mole of protein (the slope of each line) was calculated in Fig. 2a. When the slope was plotted against the degree of enzyme labeling for each enzyme (Fig. 2b), a linear relationship was also observed, indicating that fluorescence intensity is proportional to the number of fluorescence dye molecules regardless of the enzyme tagged. Therefore, the fluorescence intensity of each enzyme can be quantitatively compared by normalizing to the DL of the corresponding enzyme. a Relationship between image intensity and enzyme concentration. b Relationship between intensity per mole of protein and degree of labeling
Changes to morphology and enzyme adsorption during hydrolysis
The typical appearance of the VB of sugarcane during hydrolysis is presented in Fig. 3. The images were taken in both normal brightfield mode (Fig. 3a–c) and fluorescence mode (Fig. 3d–f). As shown in the brightfield image (Fig. 3c), the parenchyma cell wall substances distant from the VB are more susceptible to hydrolysis, and the image contrast was almost lost after 360 min of treatment. The thick-walled bundle sheath of the VB remained but became notably thinner. Initial adsorption of CBH I occurred at the parenchyma cells distant from the VB (Fig. 3d) and became more concentrated toward the VB outer areas (Fig. 3e) after 100 min, which are composed of smaller-sized parenchyma cells and thinner-walled VB fibers. Thereafter, CBH I was observed only at highly lignified areas such as the bundle sheath. As such, it was possible to visualize the substrate degradation pattern and the corresponding enzyme distribution. Typical microscopy images during hydrolysis in the presence of mixed enzymes containing labeled cellobiohydrolase (CBH) I. Hydrolysis time: a, d 0 min, b 90 min, e 100 min, c, f 360 min; a–c brightfield microscopy images; d–f fluorescent microscopy images. bs bundle sheath, p phloem, pc parenchyma, mv metaxylem vessel
Time-lapse analysis of individual enzymes
As a negative control, free Alexa Fluor® 546 NHS Ester was tested under the same experimental conditions. No fluorescence of dyes on sugarcane sections was observed.
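The profile-clustering step described in the methods section (per-pixel intensity traces grouped by k-means and mapped back onto the section) can be sketched in a few lines of scikit-learn code. This is a schematic reconstruction based on the description above, not the authors' script; the file name and array dimensions are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# stack: time-lapse fluorescence images, shape (n_frames, height, width),
# already registered, cropped, converted to gray scale and median-filtered.
stack = np.load("registered_stack.npy")  # placeholder file name
n_frames, height, width = stack.shape

# One intensity-vs-time profile per pixel.
profiles = stack.reshape(n_frames, height * width).T  # (n_pixels, n_frames)

# Group the profiles into eight representative classes, as in the study.
km = KMeans(n_clusters=8, random_state=0).fit(profiles)

# Representative (center) profiles and the 2D map of cluster labels.
center_profiles = km.cluster_centers_          # (8, n_frames)
label_map = km.labels_.reshape(height, width)  # contour-map the corresponding regions
```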
A concept of the analysis is given in Fig. 4. A stack of images were carefully aligned, as shown in Fig. 4a. From a set of images, 250,000 intensity profiles were obtained, with some intensity profiles being shown in Fig. 4b. Several profile patterns are clearly observed; flat profiles (background), constant increasing or decreasing profiles, profiles with maximum peak, and so on. The typical center profiles obtained by the common method of vector quantization (k-means clustering) are represented in Fig. 4c. The number of clusters was set at eight. Finally, all pixels were colored corresponding to the profiles (Fig. 4d). In this way, the adsorption/desorption behavior was analyzed in a 2D image for each labeled enzyme. A time-lapse flow-diagram observation of enzyme localization in a sugarcane section. A stack of image slices was registered and carefully aligned (a). Intensity at each pixel point in the stack as a function of time. Up to 300 profiles randomly extracted from 250,000 profiles in total were exemplified in b. The eight representative profiles obtained by k-means clustering (c) together with a typical contour map of the corresponding eight regions (d) Figure 5 shows the intensity profiles of each enzyme together with the 2D distribution image. To compare intensity as a function of enzyme concentration, the intensity profiles were divided by DL. Thereafter, the labeled enzyme was fixed at 5 wt%, based on the total number of enzymes present in the system. Therefore, the intensity was converted to the actual enzyme concentration; for example, the intensity was multiplied by seven in the case of CBH I. The patterns from CBH I, CBH II, EG I, EG II, and Xyn10, were somehow similar to one another, with a pattern displaying two types of profiles, one having a maximum peak and the other simply increasing as a function of time. The profile having maximum peak derives from the area of parenchyma cell walls of which are thin and contain little lignin, or parenchyma cells of a relatively small diameter, adjacent to the VB. The simply increasing profile derives from the area of bundle sheath cell walls of which are thick and contain significant amount of lignin, and is relatively enhanced at the outer VB area. Combining the profiles and the 2D enzyme distributions, it was concluded that all enzymes, except for Xyn III, were significantly adsorbed at the parenchyma cell walls distant from the VB during the initial stage of hydrolysis. The enzymes gradually moved toward smaller parenchyma cells adjacent to the VB. Thereafter, the enzymes desorbed from the hydrolyzed parenchyma and first re-adsorbed on areas exhibiting less lignin content before finally moving to areas of the VB cell walls containing the highest degree of lignin. Conversely, the behavior of Xyn III was significantly unique. Throughout the hydrolysis, the degree of Xyn III adsorption was relatively small, especially almost zero at the parenchyma cells (Fig. 5e), which demonstrates a remarkably different behavior from Xyn10, which also possesses a CBM similar to the other enzymes. The different behaviors between the two xylanases tested in this study are proposed to arise from the possession of CBMs. Xyn III of T. reesei was reported to be initially devoid of a CBM and Xyn III, expressed with a xylan-binding domain from Streptomyces olivaceoviridis E-86, showed higher adsorption toward insoluble xylan by a factor of two [28]. 
Furthermore, when Xyn10 that initially possessed a CBM [22] was modified to be devoid of a CBM, Xyn10 was no longer observed to adsorb onto the substrates (data not shown). Therefore, the CBM of xylanase was found to be critical for the interaction with xylan molecules that are closely associated with cellulose molecules at the substrate surface, which may improve the total activity of the cellulase system toward biomass. A time-lapse movie of the enzyme adsorption and desorption observed by fluorescence microscopy can be seen in the animation (Additional file 1: Online Resource S1). Center adsorption/desorption profiles of each enzyme, together with the corresponding contour map in the 2D image. Inserts are enlarged profiles up to 30 min Time-lapse analyses of enzyme activity in specific anatomic areas Taking the maximum amount of adsorbed CBH I to be one, all the profiles shown in Fig. 5 were recalculated, and the relative amounts of each enzyme in the specific anatomic area were reproduced in Fig. 6. The specific anatomic areas are parenchyma cells, parenchyma cells near the VB, and the outermost cells of the VB. Relative enzyme adsorption/desorption profiles in each cell type. Inserts are enlarged graphs. a Parenchyma, b Parenchyma cells near the vascular bundle, and c the outermost cells of the vascular bundle As shown in Fig. 6, the major enzyme that adsorbed onto the surface of the substrate was unambiguously CBH I. Additionally, the profiles from CBH I, EG II, and Xyn10 were somehow similar in that these systems worked rapidly and moved to other regions that were more difficult to be hydrolyzed. Particularly, EG II and Xyn10 appeared to adsorb simultaneously; however, the desorption rate of Xyn10 was observed to proceed slightly quicker than EG II. Conversely, the degree of CBH II adsorption, which is known to work synergistically with CBH I [9, 30], was as low as one-seventh of CBH I, and furthermore, enzyme adsorption maxima occurred ~ 1 h later than that of CBH I. This observation may explain partly why both these CBHs are considered to be processive enzymes; however, CBH II exhibits more of a less-processive nature as postulated earlier [31]. As for EG I and EG II, significant differences in the enzyme–substrate interactions are clearly demonstrated. As EG II was adsorbed to a significantly higher degree, it is suggested that EG II functions more efficiently near the cellulosic surface, while the adsorption of EG I to parenchyma was significantly hindered (Fig. 6a, b) and preferred molecules dissociated from the substrate surface. Additionally, pulp viscosity was reported to decrease to a greater extent in the presence of EG II when compared with EG I [32]. Furthermore, Horikawa et al. [33] reported that the difference in enzymatic activity between EG I and EG II toward water-soluble carboxymethyl cellulose, was not distinguishable, whereas EG II significantly decreased the degree of polymerization of cellulose in a microfibrillar form than EG I. These reports agree well with our binding experiment. From the time-lapse analyses of the transverse sections of sugarcane stems, it is clear that CBH I is the most important and indispensable enzyme, which adsorbed onto the substrate significantly more than any other enzyme, displaying a rapid adsorption/desorption action. Conversely, even though CBH II is a processive enzyme, similar to CBH I, the observed adsorption was less than expected. 
EG I and EG II display different modes of action toward the microstructure of the substrate: EG II directly attacks the surface, while EG I exhibits a preference toward disentangled molecules. As for xylanases, while Xyn10, in the presence of CBM, adsorbed at the initial stage of hydrolysis and desorbed soon after, the degree of Xyn III adsorption on any tissue of sugarcane was minimal. The visualization of individual enzymes is important to elucidate the orchestrated interactions between the enzyme and the complex biomass. CBH: cellobiohydrolase endoglucanase BGL: β-glucosidase Xyn: endoxylanase BXL: β-xylosidase degree of labeling VB: CBM: carbohydrate-binding module Horikawa Y, Konakahara N, Imai T, Abe K, Kobayashi Y, Sugiyama J (2013) The structural changes in crystalline cellulose and effects on enzymatic digestibility. Polym Degrad Stab 98:2351–2356. https://doi.org/10.1016/j.polymdegradstab.2013.08.004 Lou H, Wang M, Lai H, Lin X, Zhou M, Yang D, Qiu X (2013) Reducing non-productive adsorption of cellulase and enhancing enzymatic hydrolysis of lignocelluloses by noncovalent modification of lignin with lignosulfonate. Bioresour Technol 146:478–484. https://doi.org/10.1016/j.biortech.2013.07.115 Palonen H, Tjerneld F, Zacchi G, Tenkanen M (2004) Adsorption of Trichoderma reesei CBH I and EG II and their catalytic domains on steam pretreated softwood and isolated lignin. J Biotechnol 107:65–72. https://doi.org/10.1016/j.jbiotec.2003.09.011 Gunjikar TP, Sawant SB, Joshi JB (2001) Shear deactivation of cellulase, exoglucanase, endoglucanase, and beta-glucosidase in a mechanically agitated reactor. Biotechnol Prog 17:1166–1168. https://doi.org/10.1021/bp010114u Atreya ME, Strobel KL, Clark DS (2016) Alleviating product inhibition in cellulase enzyme Cel7A. Biotechnol Bioeng 113:330–338. https://doi.org/10.1002/bit.25809 Cherry JR, Fidantsef AL (2003) Directed evolution of industrial enzymes: an update. Curr Opin Biotechnol 14:438–443 Liu G, Zhang J, Bao J (2016) Cost evaluation of cellulase enzyme for industrial-scale cellulosic ethanol production based on rigorous Aspen Plus modeling. Bioprocess Biosyst Eng 39:133–140. https://doi.org/10.1007/s00449-015-1497-1 Beldman G, Voragen AG, Rombouts FM, Pilnik W (1988) Synergism in cellulose hydrolysis by endoglucanases and exoglucanases purified from Trichoderma viride. Biotechnol Bioeng 31:173–178. https://doi.org/10.1002/bit.260310211 Henrissat B, Driguez H, Viet C, Schülein M (1985) Synergism of cellulases from trichoderma reesei in the degradation of cellulose. Nat Biotechnol 3:722–726. https://doi.org/10.1038/nbt0885-722 Wood TM, McCrae SI, Bhat KM (1989) The mechanism of fungal cellulase action. Synergism between enzyme components of Penicillium pinophilum cellulase in solubilizing hydrogen bond-ordered cellulose. Biochem J 260:37–43 Chanzy H, Henrissat B, Vuong R (1984) Colloidal gold labelling of l,4-β-d-glucan cellobiohydrolase adsorbed on cellulose substrates. FEBS Lett 172:193–197. https://doi.org/10.1016/0014-5793(84)81124-2 White AR, Brown RM (1981) Enzymatic hydrolysis of cellulose: visual characterization of the process. Proc Natl Acad Sci USA 78:1047–1051 Jervis EJ, Haynes CA, Kilburn DG (1997) Surface diffusion of cellulases and their isolated binding domains on cellulose. J Biol Chem 272:24016–24023. https://doi.org/10.1074/jbc.272.38.24016 Pinto R, Carvalho J, Mota M, Gama M (2006) Large-scale production of cellulose-binding domains. Adsorption studies using CBD-FITC conjugates. Cellulose 13:557–569. 
https://doi.org/10.1007/s10570-006-9060-5 Liu Y-S, Luo Y, Baker JO, Zeng Y, Himmel ME, Smith S, Ding S-Y (2010) A single molecule study of cellulase hydrolysis of crystalline cellulose. In: Single Molecule Spectroscopy and Imaging III. International Society for Optics and Photonics, p 757103 Wang L, Wang Y, Ragauskas AJ (2012) Determination of cellulase colocalization on cellulose fiber with quantitative FRET measured by acceptor photobleaching and spectrally unmixing fluorescence microscopy. Analyst 137:1319–1324. https://doi.org/10.1039/c2an15938d Igarashi K, Koivula A, Wada M, Kimura S, Penttilä M, Samejima M (2009) High speed atomic force microscopy visualizes processive movement of Trichoderma reesei cellobiohydrolase I on crystalline cellulose. J Biol Chem 284:36186–36190. https://doi.org/10.1074/jbc.M109.034611 Igarashi K, Uchihashi T, Koivula A, Wada M, Kimura S, Okamoto T, Penttilä M, Ando T, Samejima M (2011) Traffic jams reduce hydrolytic efficiency of cellulase on cellulose surface. Science 333:1279–1282. https://doi.org/10.1126/science.1208386 Kawamori M, Ado Y, Takasawa S (1986) Preparation and application of Trichoderma reesei mutants with enhanced β-glucosidase. Agric Biol Chem 50:2477–2482. https://doi.org/10.1080/00021369.1986.10867787 Kawaguchi T, Enoki T, Tsurumaki S, Sumitani J, Ueda M, Ooi T, Arai M (1996) Cloning and sequencing of the cDNA encoding β-glucosidase 1 from Aspergillus aculeatus. Gene 173:287–288. https://doi.org/10.1016/0378-1119(96)00179-5 Kawai T, Nakazawa H, Ida N, Okada H, Tani S, Sumitani J, Kawaguchi T, Ogasawara W, Morikawa Y, Kobayashi Y (2012) Analysis of the saccharification capability of high-functional cellulase JN11 for various pretreated biomasses through a comparison with commercially available counterparts. J Ind Microbiol Biotechnol 39:1741–1749. https://doi.org/10.1007/s10295-012-1195-9 Shibata N, Suetsugu M, Kakeshita H, Igarashi K, Hagihara H, Takimura Y (2017) A novel GH10 xylanase from Penicillium sp. accelerates saccharification of alkaline-pretreated bagasse by an enzyme from recombinant Trichoderma reesei expressing Aspergillus β-glucosidase. Biotechnol Biofuels 10:278. https://doi.org/10.1186/s13068-017-0970-2 Tsuboi H, Koda A, Toda T, Minetoki T, Hirotsune M, Machida M (2005) Improvement of the Aspergillus oryzae enolase promoter (P-enoA) by the introduction of cis-element repeats. Biosci Biotechnol Biochem 69:206–208 Minetoki T, Kumagai C, Gomi K, Kitamoto K, Takahashi K (1998) Improvement of promoter activity by the introduction of multiple copies of the conserved region III sequence, involved in the efficient expression of Aspergillus oryzae amylase-encoding genes. Appl Microbiol Biotechnol 50:459–467 Thevenaz P, Ruttimann UE, Unser M (1998) A pyramid approach to subpixel registration based on intensity. IEEE Trans Image Process 7:27–41. https://doi.org/10.1109/83.650848 Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Wiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay É (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830 Xu J, Takakuwa N, Nogawa M, Okada H, Morikawa Y (1998) A third xylanase from Trichoderma reesei PC-3-7. Appl Microbiol Biotechnol 49:718–724. https://doi.org/10.1007/s002530051237 Matsuzawa T, Kaneko S, Yaoi K (2016) Improvement of thermostability and activity of Trichoderma reesei endo-xylanase Xyn III on insoluble substrates. Appl Microbiol Biotechnol 100:8043–8051. 
https://doi.org/10.1007/s00253-016-7563-z Penttilä PA, Várnai A, Pere J, Tammelin T, Salmén L, Siika-aho M, Viikari L, Serimaa R (2013) Xylan as limiting factor in enzymatic hydrolysis of nanocellulose. Bioresour Technol 129:135–141. https://doi.org/10.1016/j.biortech.2012.11.017 Fägerstam LG, Pettersson LG (1980) The 1.4-β-glucan cellobiohydrolases of Trichoderma reesei QM 9414: a new type of cellulolytic synergism. FEBS Lett 119:97–100. https://doi.org/10.1016/0014-5793(80)81006-4 Boisset C, Fraschini C, Schülein M, Henrissat B, Chanzy H (2000) Imaging the enzymatic digestion of bacterial cellulose ribbons reveals the endo character of the cellobiohydrolase Cel6A from Humicola insolens and its mode of synergy with cellobiohydrolase Cel7A. Appl Environ Microbiol 66:1444–1452 Rahkamo L, Siika-Aho M, Vehviläinen M, Dolk M, Viikari L, Nousiainen P, Buchert J (1996) Modification of hardwood dissolving pulp with purified Trichoderma reesei cellulases. Cellulose 3:153–163. https://doi.org/10.1007/BF02228798 Horikawa Y, Imai T, Abe K, Sakakibara K, Tsujii Y, Mihashi A, Kobayashi Y, Sugiyama J (2016) Assessment of endoglucanase activity by analyzing the degree of cellulose polymerization and high-throughput analysis by near-infrared spectroscopy. Cellulose 23:1565–1572. https://doi.org/10.1007/s10570-016-0927-9 MI, TI, and JS designed the study. MI observed bagasse sections and interpreted the action and interaction of enzymes, and was a major contributor in writing the manuscript. AM and YK pretreated bagasse powder and mixed enzyme components for saccharification. TI selected fluorescence dye and instructed a labeling method. SK set up the condition of fluorescence microscope for the time-lapse observation. TM, KY, NS, HK, and KI made and purified Xyn10. JS performed image analysis and contributed to writing the manuscript. All authors read and approved the final manuscript. The study was conducted within the framework of RISH Cooperative Research (ADAM), and RISH Mission Research II. The transverse sections investigated in this study were prepared by Dr. Miho Kojima and Prof. Keiji Takabe, Kyoto University, which are sincerely acknowledged. We thank Edanz Group (http://www.edanzediting.com/ac) for editing a draft of this manuscript. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. Sample preparation and fluorescence microscopy were supported by the New Energy and Industrial Technology Development Organization (NEDO) project entitled: "Construction of Innovative Saccharifying Enzyme-producing Microorganism and Development of Manufacturing of the Enzyme for the Biofuel Commercialization". Computational analysis was supported by Grant-in-Aid for Scientific Research (Grant Numbers 25252033, 18H05485). 
Research Institute for Sustainable Humanosphere, Kyoto University, Gokasho, Uji, Kyoto, 611-0011, Japan Makiko Imai, Tomoya Imai, Satoshi Kimura & Junji Sugiyama Tsukuba Research Laboratory, Japan Bioindustry Association, Tsukuba, Ibaraki, 305-8566, Japan Asako Mihashi & Yoshinori Kobayashi Graduate School of Agricultural and Life Science, The University of Tokyo, Bunkyo-ku, Tokyo, 113-8657, Japan Satoshi Kimura Bioproduction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba Central 6, 1-1-1 Higashi, Ibaraki, 305-8566, Japan Tomohiko Matsuzawa & Katsuro Yaoi Biological Science Research, Kao Corporation, 1334 Minato, Wakayama, Wakayama, 640-8580, Japan Nozomu Shibata, Hiroshi Kakeshita & Kazuaki Igarashi College of Materials Science and Engineering, Nanjing Forestry University, Nanjing, 210037, China Junji Sugiyama Makiko Imai Asako Mihashi Tomoya Imai Tomohiko Matsuzawa Katsuro Yaoi Nozomu Shibata Hiroshi Kakeshita Kazuaki Igarashi Yoshinori Kobayashi Correspondence to Junji Sugiyama. Additional file 1: Online Resource S1. Time-lapse images during saccharification by fluorescence microscopy in the presence of labeled enzyme. Imai, M., Mihashi, A., Imai, T. et al. Selective fluorescence labeling: time-lapse enzyme visualization during sugarcane hydrolysis. J Wood Sci 65, 17 (2019). https://doi.org/10.1186/s10086-019-1798-0 Sugarcane tissue Statistical image analysis
The Physics Behind Stopping a Car
Copyright © 2018, P. Lutus
Braking Distance | Stopping Distance Equation | Stopping Distance Tables | Calculator | Common Misconceptions | Conclusion | Reader Feedback
Braking Distance
Question: if a car going 20 miles per hour (MPH) requires 20 feet to stop, how much distance is required at 40 MPH? The answer, which surprises nearly everyone, is 80 feet (on dry, level pavement and neglecting driver reaction distance). This is because the energy of a moving car is proportional to its mass times the square of its velocity, based on the kinetic energy equation from physics: \begin{equation} \displaystyle E_k = \frac{1}{2} m v^2 \end{equation} Where: $E_k$ = Kinetic energy, joules; $m$ = Mass, kilograms; $v$ = Velocity, meters/second. It turns out that a car's braking distance is proportional to its kinetic energy. The energy is dissipated as heat in the brakes, in the tires and on the road surface — more energy requires more braking distance. This explains why braking distance increases as the square of a car's speed.
Stopping Distance Equation
We can use the kinetic energy idea, and a knowledge of driver reaction times, to write an equation that predicts car stopping distances ("stopping" distance is the sum of reaction and braking distance). Here is the equation's canonical form: \begin{equation} d = r v \frac{10}{36} + \frac{v^2}{b} \end{equation} Where: $d$ = Total stopping distance (reaction + braking), meters; $v$ = Vehicle speed, kilometers/hour; $r$ = Driver reaction time, seconds; $b$ = Braking coefficient factor. The first term ($r v \frac{10}{36}$) converts the driver's reaction time into distance traveled during that time. The second term ($\frac{v^2}{b}$) computes braking distance by applying a braking coefficient factor ($b$) to the square of the car's velocity. Assuming dry, level pavement, a typical value for $b$ would be 170, but this is an empirical factor — it's derived from field measurements. This equation may be rewritten for non-metric measurement units, but it's simpler and more reliable to convert its arguments and results to/from metric units: to convert input velocities from miles per hour (MPH) to KPH, multiply by 1.609344; to convert output distances from meters to feet, multiply by 3.28084. Remember that this equation provides the car's total stopping distance — the sum of driver reaction distance (first term) and braking distance (second term). To compute these values independently, evaluate the terms separately (driver reaction distance = $r v \frac{10}{36}$, braking distance = $\frac{v^2}{b}$).
Stopping Distance Tables
Here are tables of typical values generated using the above equation; they agree closely with data published by public safety organizations.
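Such a table can also be reproduced with a short script. The sketch below is a minimal Python rendering of the equation and unit conversions just described (not the page's own calculator code), using the same assumptions as the tables: a 1.5-second reaction time and a braking coefficient of 170.

```python
def stopping_distance_m(speed_kph, reaction_s=1.5, braking_coeff=170.0):
    """Total stopping distance in meters: reaction distance + braking distance."""
    reaction_m = reaction_s * speed_kph * 10.0 / 36.0  # r * v * 10/36
    braking_m = speed_kph ** 2 / braking_coeff         # v^2 / b
    return reaction_m + braking_m

MPH_TO_KPH = 1.609344  # convert input speeds
M_TO_FT = 3.28084      # convert output distances

# Small imperial-units table, similar in spirit to the tables below.
print("MPH   stopping distance (ft)")
for mph in range(10, 81, 10):
    d_m = stopping_distance_m(mph * MPH_TO_KPH)
    print(f"{mph:3d}   {d_m * M_TO_FT:6.1f}")
```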
Tables of typical values in metric units (KPH, meters) and imperial units (MPH, feet) appear at this point in the original page. These tables assume dry, level pavement and a driver reaction time of 1.5 seconds. It turns out that, within broad limits and because of the physics of tire friction, the size of one's tires and their loading (from vehicle mass) don't significantly change the outcome for most vehicles (details below in the "Common Misconceptions" section), so the above tables provide reasonably accurate stopping-distance predictions — but the equation provided earlier is more flexible and useful than these tables.
Calculator
This calculator provides results for user-entered speeds, driver reaction times and braking coefficients. Choose input and output units and enter values in those units.
Common Misconceptions
Vehicle Mass
For a fixed tire size and within reasonable limits, increasing a vehicle's mass shouldn't increase its braking distance. The reason is that the heavier vehicle's tires apply more force to the road — braking effectiveness results from a combination of surface area and force. The increased inertia of the heavier vehicle is balanced by its increased surface force.
Tire Surface Area
At first glance, one might think increasing the size and road-contact surface area of a tire should improve its braking performance — after all, there's more rubber contacting the road. But as it turns out, for a given vehicle mass, each square meter of a larger tire's surface presses against the road with less force, and (as explained above) braking effectiveness results from a combination of surface area and force. This is why we don't see gigantic tires on the vehicles of safety-conscious drivers — it just doesn't work. Moving in the other direction, if we make tires too small, the energy of braking would melt their surfaces, destroying their effectiveness. Also, small tires tend to wear out more quickly in normal operation, so there's a lower practical limit to tire size.
Truck Braking Distance
Large truck operators often claim that a large truck must have more braking distance, because stopping a greater mass requires more distance. This is false, and I'm going to prove it below. Once you've read the proof, you will realize the big-truck braking-distance argument makes no sense. Here we go: Imagine a sport-utility vehicle (SUV) that weighs four tons and has four tires. Its braking distance can be accurately predicted using the stopping distance equation provided earlier. Compare the SUV with a big truck that weighs 20 tons and has 20 tires. Can this big, heavy truck — five times more massive than the SUV — stop in the same distance? Yes, it must be so — read on. Now imagine five four-ton SUVs driving close together, almost touching. If they all apply their brakes at once, each SUV will stop in the same distance as when separated*. Now imagine the five SUVs are connected together by metal bars, so they become one vehicle — a vehicle that weighs 20 tons and has 20 tires. What has changed? Each driver applies his brakes in the same way, therefore the connected assembly of SUVs stops in the same distance that the individual SUVs do when separated. By being connected, the five individual four-ton SUVs have become a vehicle that weighs 20 tons, has 20 tires, and stops in the same distance as one SUV.
Q.E.D.* It's true that, in present-day reality, big trucks do require more stopping distance than small cars, but the reason is economics, not physics. In principle, big trucks could be designed to stop in the same distance as small cars, if we wanted to pay for the engineering improvements. Here are this article's "take-homes": A car's braking distance increases as the square of its speed (disregarding reaction time). Twice as fast, four times the stopping distance. Heavy vehicles with adequate brakes should stop in the same distance as light vehicles, because the heavy vehicle's tires are either more numerous or are pressing down on the road with more force. Ordinarily, not knowing physics and math is only inconvenient, but for car stopping problems it can get you killed. Truck Stopping Distances | Vehicle Stopping Distance in Inclement Conditions | Braking Distance on a Slope | Stopping Distance Without Brakes | Sloping terrain, wet pavement, leaves on the road — Help! | Tractor-Trailer Braking Distance Truck Stopping Distances Thank you for your explanation of vehicular braking characteristics. It was interesting to read. However, I would like to refute your assertion that "big trucks" stop in the same distance as an SUV. I look forward to a refutation that understands and acknowledges the underlying physics. Your Comparison: (Compare the SUV with a big truck that weighs 20 tons and has 20 tires. Can this big, heavy truck — five times more massive than the SUV — stop in the same distance? Yes, it must be so — read on.) A US Commercial Hauling truck (aka, tractor-trailer) is a vehicle with a combined Maximum GVW of 80,000 lbs. Typically they are loaded to 50,000 - 70,000 lbs GVW. There are special permits that can be obtained to exceed this weight with unmodified equipment. The truck and trailer can have a significant amount of weight variation. The braking systems are typically set up to be most effective at a mean value. Additionally, in your assertion, you state that the commercial truck has 20 tires on the ground. In fact, most have only 18. Yes, and each of those 18 tires presses on the pavement with proportionally more force than one with 20 tires, so if the truck has adequate brakes, the braking distance is the same. If the truck is loaded light or empty, the truck will tend to break traction more easily and cause an extended distance stop. Wait ... so are you saying if the truck is lightly loaded, it requires more stopping distance, not less? Surely you see the contradiction in your argument — that if a truck is heavily loaded, it takes more distance to stop, but if it's lightly loaded, it also takes more distance to stop? If the truck is loaded heavier than the adjust level, the greater energy will take longer to dissipate. No, the higher kinetic energy is dissipated over the same distance because the tire pressure on the asphalt is proportionally greater — more heat is generated along the truck's path, but the stopping distance is the same. It all comes out in the physics — if the braking system is properly designed and the tires don't melt under high loads, the braking distance on dry, level pavement is the same. In my article I make this point with some number N of SUVs, but if you prefer I can add SUVs to equal the mass of any imaginable truck with any number of wheels. Here's the take-home: If you increase a vehicle's mass with the same number of tires, each tire presses on the road with more force, so the stopping distance remains the same. 
If you decrease the vehicle's mass, the tires press down with less force, so the stopping distance remains the same. For the underlying physics and math, see my reference list at the bottom of this message. So, the vehicle comparisons are not apples to apples. If you had understood the key points in my article, you would realize that, for properly designed brakes, adequately sized tires and the same surface, all vehicles require the same stopping distance. Basically, your 5 SUVs would be towing one extra SUV and minus a pair of wheels and trying to stop in the same distance. Think about what you're saying. If I double the number of SUVs in my example, the braking distance is the same. If I instead load each SUV with more mass, their tires press on the pavement with more force, so they stop in the same distance. Link: Does the braking distance of a car depend on weight of the car? (ResearchGate) Quote: "The above equation shows that braking distance is independent of mass of vehicle." Link: Stopping Distance for Auto (HyperPhysics) Quote: "Note that this [equation] implies a stopping distance independent of vehicle mass." And so on, for hundreds of references. Surely you don't think I made this up, do you? That would be incredibly irresponsible, and I could be held to account for the consequences. I hope this helps, and thanks for writing. Vehicle Stopping Distance in Inclement Conditions Thank you for providing such clear explanation on stopping distance. It will definitely inform my writing. Are aware of common additional factors that would be included in the case of rain or snow while driving? Certainly there are a large number of variables that cannot be easily tabulated outside of test conditions. I'm trying to discover if there is a general maxim that could be suggested about stopping in certain conditions. Example - if it takes approximately 200 feet to stop an average car on clear, flat, dry pavement using average braking power, could we establish a corollary that generally describes the stopping distance for other conditions like "Because of the XYZ variables, driving in wet conditions requires 1.8 times the stopping distance as in dry"? One cannot reliably do this. Consider the variables: The notorious combination of a pea-gravel surface and anti-lock brakes, the latter of which will glide over the gravel and apply almost no braking force, wrongly calculating that traction has been lost. This combination of factors must be experienced to be believed. The first season's rainfall over pavement coated with a full season of oil buildup from past traffic. New snow on top of a layer of old snow. When this happens in steep terrain, it leads to avalanches. When it happens on roadways, it leads to a false sense of security because the top snow layer looks fresh and pliable, but it hides a slick surface beneath. Black ice, very dangerous and often appearing when the air temperature is well above freezing because the pavement radiates its heat directly into space, disregarding the intervening air's temperature (in physics, radiation is much more efficient than convection). Irregular surfaces with patches of water and hydroplaning effects. No, these conditions and others mean one cannot say with any certainty what a stopping distance will be on a surface other than dry, level pavement. Braking Distance on a Slope Thank you for the information on the mechanics of stopping distances for an average vehicle on a "level", dry surface, tires of average condition etc. 
But... how does the math/physics change if the surface isn't level but has a slope? Let's say 10%. The mass of the vehicle isn't changed. How about the frictional forces? First, for a slope s expressed as a percentage, the angle in degrees is equal to tan⁻¹(s / 100), so for a 10% slope it's 5.71 degrees — call this θ. The vertical component of the mass (that bears down on the tires and road surface) is on average m cos(θ) (m = vehicle mass), so for the 10% slope case, the effective frictional mass is 99.5% of the level mass. But the vehicle's inertial mass (working to prevent a change in velocity) remains the same. Therefore we already have a factor in the vertical dimension that works against an efficient stop. Added to this is the effect of the slope. A force proportional to m sin(θ) is added to — or subtracted from — the forces acting on the vehicle and its tires. For 5.71 degrees this is roughly equal to 0.1 m. So for a downhill direction, the effective stopping distance from this factor alone is increased by 10%. I emphasize that this factor can't be evaluated independently of the prior factor ("vertical component"), which has the effect of reducing the vehicle's effective braking mass but without changing its inertial mass. More formally, for intermediate angles between zero and 90 degrees, the math becomes very tricky because it also depends on the behavior of the car's suspension and its center of mass. The above equations are only applicable — and only approximately — for angles near zero. All the above becomes practically unworkable if we try to calculate the specific effect on the four individual tires for a vehicle with a high center of mass (the tires nearer the center of mass get greater loading, those farther from the center of mass get less). Going to the extreme, if a vehicle is in free fall (in a vacuum), there are no frictional forces, so at that point it is removed from the equation completely, right? Yes. At that point it's a classic falling object on a ballistic trajectory, no braking force any more. Interestingly, in an environment with less gravitational acceleration like the moon, masses can be lifted against gravity more easily, but they have the same inertia, so getting an object moving (applying an acceleration) on a level frictionless surface requires the same amount of force as on earth. The Apollo astronauts found it surprisingly difficult to adjust to having much less weight but the same inertial mass — some just fell over. So how does the physics change for a 10% slope? As above. In summary, a simple answer is not available. After computing the above test case, this isn't something I would dream of making a definitive statement about. Consider a top-heavy vehicle or a vehicle that's leaning to the side as well as uphill or downhill — that would prevent any realistic advance estimate of stopping distance.
Stopping Distance Without Brakes
At 300 mph how long would it take to stop without brakes? I'm thinking about building a 1/4 mile drag strip that can handle any speed safely, so I'm trying to figure out, without brakes, the distance one would need to stop at 300 mph. Without brakes? You've left out some important information. If I assume a perfect car, transmission in neutral, with ideal bearings and a racetrack on the moon (or anywhere without air resistance), the car would never stop. That's never, jamais, noch nie, numquam. It would roll on forever.
You need to understand that a moving car has kinetic energy, and for the car to slow down, that energy must be converted into a different form. Wind resistance is one source of energy dissipation, brakes are another. Add imperfect bearings, tire rolling resistance and a few others. But without knowing whether or where the car's energy of motion will be dissipated, no estimate can be given. Without any energy loss, Newton's First Law rules: "An object will remain at rest or in uniform motion in a straight line unless acted upon by an external force." Thank You kindly. You're welcome. Sloping terrain, wet pavement, leaves on the road — Help! I need your assistance. With a 3 1/2% downhill grade and a speed of 40 miles an hour what will the stopping distance be a 4000 pound car on an asphalt road and dry weather, In wet weather, And with wet leaves. Using the same parameters 3 1/2% downhill grade 40 miles an hour what would the similar stopping distance before an 18 wheeler Weighing 70,000 pounds. Making the assumptions that all of the brakes operates correctly all of the tires have adequate tread. These questions cannot be answered with any reliability. There are too many independent factors. No professional would think of assigning firm numbers to a problem with this number of confounding factors. But here are some general principles: Point one: The mass of the vehicle should not matter, nor the number of wheels and tires, assuming the vehicle is designed correctly so the tires and brakes don't simply melt or burn under load. To understand why, imagine five identical vehicles each weighing 7 tons (14,000 pounds) and having four tires. Test their independent stopping distances — take careful notes. Now imagine that all five vehicles are moving close together at the same speed in a demonstration, and all of them stop at once. This will give the same stopping distance as when the vehicles stopped independently. Now — read carefully — imagine that the five vehicles are connected together with rigid steel bars, so they move as a unit. Test their stopping distance. It will be the same as when the vehicles were not connected together, but by connecting them, you have assembled a vehicle that weighs 35 tons (70,000 pounds), has 20 tires, but that stops in the same time and distance as when they were separated. The conclusion from this thought experiment must be that the size and mass of the vehicle doesn't matter, only that it have adequate brakes and tires. Point two: If you compare two vehicles with the same mass, but one has fewer or smaller tires than the other, this also doesn't matter. The reason is the vehicle with fewer tires presses those tires onto the road with greater force, and braking friction is proportional to force times area — more force, less area. So we can assert two points: (1) vehicle size and mass doesn't matter, and (2) number of wheels/tires doesn't matter (assuming the tires don't melt under load). As to sloping terrain, wet pavement, leaves on the road, your problem statement is not sufficient to draw a reliable conclusion. Even the dry-sloping-pavement condition is complicated by the fact that two vehicles with different centers of mass will have different stopping distances because the sloping surface places the center of mass over the tires differently, and the greater the slope, the greater the difference. So it is not possible to predict the outcome with any reliability. A careful researcher would ask, "What kind of leaves? From what kind of tree? 
Does the tree exude oil in its leaves? Can one leaf ever be on top of another leaf? And how much water — damp conditions, freezing conditions, hydroplaning conditions?" That hypothetical researcher would finally say, "It is not possible to say what the stopping distance will be unless we measure it." Vehicle stopping distance tables only work for dry, level pavement and mechanically sound vehicles, and even then there are confounding factors like temperature. Because this looks like homework, I strongly recommend that you simply say there is not enough information for a reliable answer. If it turns out your instructor expects specific stopping distances given the stated conditions, I suggest that you change instructors. Thank you very much. You're most welcome. Tractor-Trailer Braking Distance Your explanation of stopping distance is interesting. I have a few comments though. Tractor trailers (semis) DO take longer to stop than your 5 SUV example. You are comparing apples to oranges. If you wanted to do an accurate comparison you would need to compare an SUV towing a trailer to the semi. No, you would want to compare two SUVs in a row, but I already covered that case. But to address your example directly, imagine a tractor and a trailer, one following the other, not attached. The trailer is magically accelerated to a high speed, then its anti-lock braking system is activated. Does the fact that it has no tractor at the front adversely affect its stopping distance? No, not if the braking action is properly carried out with no chance for a jackknife. This means that the tractor, and the trailer, cannot — indeed, must not — have any longitudinal force between them. Were this not the case, they would either break apart or jackknife. In a properly designed rig, the trailer must not ever push the tractor, because that risks a jackknife. The reason SUVs/cars will stop quicker is because of the weight transfer to the front wheels which transfers weight to the front wheels adding to the traction of those tires. That is true for any vehicle, with any number of wheels. It is not true — and cannot be true — for a trailer behind a tractor, for obvious safety reasons. The trailer cannot push against the tractor, that would be very dangerous. The trailer behind the semi does not transfer any weight to the front wheels to add in braking in fact the trailer tries to continue in a straight line (Newton; a body in motion tends to stay in motion). The trailer brakes slow the momentum of the trailer but if they lock up the tendency will be for the trailer to try and pass the power unit which is able to brake better than the trailer again due to that weight transfer. This is false and dangerous. Modern highway rigs are designed so this cannot happen, because if while navigating a curve the operator was required to apply the brakes in an emergency (or while traveling downhill, an everyday occurrence), in your imagined scenario the trailer would jackknife the tractor with catastrophic consequences. Think about your claim. Imagine a truck braking as it descends a steep, winding road. If the trailer must rely on the tractor for part of its braking effectiveness, what happens? Remember that the trailer is much more massive than the tractor. Modern braking systems are designed so that the tractor and trailer brake independently, with no net longitudinal force between them. All the hitch does is keep the trailer obediently behind the tractor, nothing else.
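A quick numerical way to see the mass cancellation that runs through these answers is to use the textbook friction model rather than the empirical equation above: with braking force F = μ m g, the deceleration is a = μ g, and the braking distance d = v² / (2 μ g) contains no mass term. The sketch below assumes an illustrative friction coefficient of 0.7; it is a schematic check, not a model of any particular vehicle.

```python
G = 9.81  # gravitational acceleration, m/s^2
MU = 0.7  # assumed tire-road friction coefficient (illustrative)

def braking_distance_m(speed_kph, mass_kg):
    """Braking distance from the simple friction model F = mu * m * g."""
    v = speed_kph / 3.6       # km/h -> m/s
    force = MU * mass_kg * G  # total braking force, N
    decel = force / mass_kg   # a = F/m = mu*g -- the mass cancels here
    return v ** 2 / (2 * decel)

# A 2-tonne SUV and a 35-tonne truck, both from 90 km/h:
print(braking_distance_m(90, 2_000))   # ~45.5 m
print(braking_distance_m(90, 35_000))  # ~45.5 m -- identical
```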
Gradient blow-up in Zygmund spaces for the very weak solution of a linear elliptic equation Persistence of Hölder continuity for non-local integro-differential equations May 2013, 33(5): 1773-1807. doi: 10.3934/dcds.2013.33.1773 Formal Poincaré-Dulac renormalization for holomorphic germs Marco Abate 1, and Jasmin Raissy 2, Dipartimento di Matematica, Università di Pisa, Largo Pontecorvo 5, 56127 Pisa, Italy Dipartimento di Matematica e Applicazioni, Università degli Studi di Milano Bicocca, Via Cozzi 53, 20125 Milano, Italy Received April 2012 Revised July 2012 Published December 2012 We shall describe an alternative approach to a general renormalization procedure for formal self-maps, originally suggested by Chen-Della Dora and Wang-Zheng-Peng, giving formal normal forms simpler than the classical Poincaré-Dulac normal form. As example of application we shall compute a complete list of normal forms for bi-dimensional superattracting germs with non-vanishing quadratic term; in most cases, our normal forms will be the simplest possible ones (in the sense of Wang-Zheng-Peng). We shall also discuss a few examples of renormalization of germs tangent to the identity, revealing interesting second-order resonance phenomena. Keywords: Poincaré-Dulac normal form, superattracting germs, renormalization, tangent to the identity maps., formal transformation. Mathematics Subject Classification: Primary: 37G05, 37F99, 32H5. Citation: Marco Abate, Jasmin Raissy. Formal Poincaré-Dulac renormalization for holomorphic germs. Discrete & Continuous Dynamical Systems, 2013, 33 (5) : 1773-1807. doi: 10.3934/dcds.2013.33.1773 M. Abate, Holomorphic classification of $2$-dimensional quadratic maps tangent to the identity, Sūkikenkyūsho Kōkyūroku, 1447 (2005), 1-14. Google Scholar M. Abate, Discrete holomorphic local dynamical systems, in "Holomorphic Dynamical Systems" (G.Gentili, J. Guénot and G. Patrizio, eds.), Lect. Notes in Math. 1998, Springer, Berlin, 2010, pp. 1-55. Google Scholar M. Abate and F. Tovena, Formal classification of holomorphic maps tangent to the identity, Discrete Contin. Dyn. Syst. Suppl (2005), 1-10. Google Scholar M. Abate and F. Tovena, Poincaré-Bendixson theorems for meromorphic connections and holomorphic homogeneous vector fields, J. Differential Equations, 251 (2011), 2612-2684. doi: 10.1016/j.jde.2011.05.031. Google Scholar A. Algaba, E. Freire and E. Gamero, Hypernormal forms for equilibria of vector fields. Codimension one linear degeneracies, Rocky Mountain J. Math. 29 (1999), 13-45. doi: 10.1216/rmjm/1181071677. Google Scholar A. Algaba, E. Freire, E. Gamero and C. Garcia, Quasi-homogeneous normal forms, J. Comput. Appl. Math. 150 (2003), 193-216. doi: 10.1016/S0377-0427(02)00660-X. Google Scholar V. I. Arnold, "Geometrical Methods In The Theory Of Ordinary Differential Equations," Springer Verlag, New York, 1988. Google Scholar A. Baider, Unique normal forms for vector fields and Hamiltonians, J. Differential Equations, 78 (1989), 33-52. doi: 10.1016/0022-0396(89)90074-0. Google Scholar A. Baider and R. Churchill, Unique normal forms for planar vector fields,, Math. Z., 199 (): 303. Google Scholar A. Baider and J. Sanders, Further reduction of the Takens-Bogdanov normal form, J. Differential Equations, 99 (1992), 205-244. doi: 10.1016/0022-0396(92)90022-F. Google Scholar G. R. Belitskii, Invariant normal forms of formal series, Functional Anal. Appl., 13 (1979), 46-67. Google Scholar G. R. Belitskii, Normal forms relative to a filtering action of a group, Trans. Moscow Math. 
Soc., 40 (1979), 3-46. Google Scholar F. Bracci and D. Zaitsev, Dynamics of one-resonant biholomorphisms,, J. Eur. Math. Soc. , (). Google Scholar A. D. Brjuno, Analytic form of differential equations. I, Trans. Moscow Math. Soc. 25 (1971), 131-288. Google Scholar A. D. Brjuno, Analytic form of differential equations. II, Trans. Moscow Math. Soc. 26 (1972), 199-239. Google Scholar H. Broer, Formal normal form theorems for vector fields and some consequences for bifurcations in the volume preserving case, in "Dynamical systems and turbulence, Warwick 1980 (Coventry, 1979/1980)", Lecture Notes in Math. 898, Springer, Berlin, 1981, pp. 54-74. Google Scholar H. Cartan, "Cours de calcul différentiel," Hermann, Paris, 1977. Google Scholar G. T. Chen and J. Della Dora, Normal forms for differentiable maps near a fixed point, Numer. Algorithms, 22 (1999), 213-230. Google Scholar G. T. Chen and J. Della Dora, Further reductions of normal forms for dynamical systems, J. Differential Equations, 166 (2000), 79-106. Google Scholar J. Écalle, "Les Fonctions Résurgentes. Tome III: L'Équation Du Pont Et La Classification Analytique Des Objects Locaux," Publ. Math. Orsay, 85-05, Université de Paris-Sud, Orsay, 1985. Google Scholar J. Écalle, Iteration and analytic classification of local diffeomorphisms of $\mathbbC^v$, in "Iteration Theory And Its Functional Equations (Lochau, 1984)", Lect. Notes in Math., 1163, Springer-Verlag, Berlin, 1985, pp. 41-48. Google Scholar E. Fischer, Über die differentiationsprozesse der algebra, J. für Math., 148 (1917), 1-78. Google Scholar G. Gaeta, Further reduction of Poincaré-Dulac normal forms in symmetric systems, Cubo, 9 (2007), 1-11. Google Scholar A. Giorgilli and A. Posilicano, Estimates for normal forms of differential equations near an equilibrium point, Z. Angew. Math. Phys., 39 (1988), 713-732. doi: 10.1007/BF00948732. Google Scholar F. Ichikawa, On finite determinacy of formal vector fields,, Invent. Math. 70 (1982/83), 70 (): 45. Google Scholar F. Ichikawa, Classification of finitely determined singularities of formal vector fields on the plane, Tokyo J. Math. 8 (1985), 463-472. doi: 10.3836/tjm/1270151227. Google Scholar H. Kokubu, H. Oka and D. Wang, Linear grading function and further reduction of normal forms, J. Differential Equations, 132 (1996), 293-318. doi: 10.1006/jdeq.1996.0181. Google Scholar E. Lombardi and L. Stolovitch, Normal forms of analytic perturbations of quasihomogeneous vector fields: Rigidity, invariant analytic sets and exponentially small approximation, Ann. Sci. Éc. Norm. Supér. 43 (2010), 659-718. Google Scholar D. Malonza and J. Murdock, An improved theory of asymptotic unfoldings, J. Differential Equations, 247 (2009), 685-709. Google Scholar J. Murdock, "Normal Forms And Unfoldings For Local Dynamical Systems," Springer Verlag, Berlin, 2003. Google Scholar J. Murdock, Hypernormal form theory: Foundations and algorithms, J. Differential Equations, 205 (2004), 424-465. Google Scholar J. Murdock and J. A. Sanders, A new transvectant algorithm for nilpotent normal forms, J. Differential Equations, 238 (2007), 234-256. Google Scholar J. Raissy, Torus actions in the normalization problem, J. Geom. Anal. 20 (2010), 472-524. Google Scholar J. Raissy, Brjuno conditions for linearization in presence of resonances, in "Asymptotics In Dynamics, Geometry And PDE's; Generalized Borel Summation, Vol. I" (O. Costin, F. Fauvet, F. Menous and D. Sauzin, eds.), Edizioni Della Normale, Pisa, 2010, pp. 201-218. Google Scholar H. 
H. Rüssmann, Stability of elliptic fixed points of analytic area-preserving mappings under the Bruno condition, Ergodic Theory Dynam. Systems, 22 (2002), 1551-1573.
J. A. Sanders, Normal form theory and spectral sequences, J. Differential Equations, 192 (2003), 536-552.
D. Wang, M. Zheng and J. Peng, Further reduction of normal forms of formal maps, C. R. Math. Acad. Sci. Paris, 343 (2006), 657-660.
D. Wang, M. Zheng and J. Peng, Further reduction of normal forms and unique normal forms of smooth maps, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 18 (2008), 803-825. doi: 10.1142/S0218127408020665.
P. Yu and Y. Yuan, The simplest normal form for the singularity of a pure imaginary pair and a zero eigenvalue, Dyn. Contin. Discrete Impuls. Syst. Ser. B Appl. Algorithms, 8 (2001), 219-249.
CommonCrawl
Recent questions tagged calculate Solve for \(x: 2 x^{\dfrac{4}{3}}=32\) asked 44 minutes ago in Mathematics by ♦Gauss Diamond (71,743 points) | 2 views The roots of a quadratic equation are given as \(x=\dfrac{-2 \pm \sqrt{1-3 k}}{4}\). Solve for \(x\) in \(2 x^2 \geq 18\) Solve for \(x\): \(2-x=\sqrt{x-2}\) asked 1 day ago in Mathematics by ♦Gauss Diamond (71,743 points) | 12 views How do you find the equation of a parabola given its vertex and focus? How do you find the slope of a line given two points? What is the Gram-Schmidt process? asked 2 days ago in Mathematics by ♦Gauss Diamond (71,743 points) | 14 views gram-schmidt What is a definite integral? What is the formula for the volume of a cone? What is the formula for the volume of a rectangular prism? What is the formula for the volume of a cylinder? What is the sum of the interior angles of a hexagon? kcse What is the value of $\lim_{x \to \infty} \frac{3x^2}{4x+1}$? What is the equation of the parabola with vertex at $(3,-2)$ and directrix $y=-1$? What is the equation of the line that is parallel to the line $y = 2x+1$ and passes through the point $(3,5)$? What is the p-value associated with a z-score of 2.5 in a standard normal distribution? standard-deviation What is the slope of the line that passes through the points $(3,4)$ and $(6,8)$? Why do people visit MathsGee? asked Jan 25 in General Knowledge by ♦Gauss Diamond (71,743 points) | 17 views mathsgee Simplify the following without using a calculator: Calculate the following without using a calculator: asked Jan 24 in Mathematics by ♦Gauss Diamond (71,743 points) | 11 views Calculate the following without the use of a calculator: Calculate each of the following without the use of a calculator: Without using a calculator, calculate the value of each of the following: If \(\tan 22^{\circ}=t\) write the following in terms of \(t\). In eight years' time a person wishes to pay cash for a car. He will require R350 000. He opens an investment account and earns 14% per annum compounded monthly. Jennifer bought a house for R620 000. She paid \(35 \%\) cash and the balance was paid through a bank loan. The interest paid on the loan was \(15 \%\) per annum compounded monthly. Vanessa borrows R1 250000 from the bank in order to buy a new house. The interest rate is \(14,4 \%\) per annum compounded monthly. Timothy buys furniture to the value of \(R 10000\). He borrows the money on 1 February 2010 from a financial institution that charges interest at a rate of 9,5% per annum compounded monthly. \(\mathrm{Mr}\) Brown has just finished paying off his twenty-year home loan which was R400 000. During the first five years the interest rate was \(24 \%\) per annum compounded monthly. Brenda takes out a twenty year loan of R400 000 . She repays the loan by means of equal monthly payments starting one month after the granting of the loan. A father decided to buy a house for his family for \(\mathrm{R} 800000\). He agreed to pay monthly instalments of R10 000 on a loan which incurred interest at a rate of \(14 \%\) per annum compounded monthly Kevin takes out a bank loan for R250 000 . The interest rate charged by the bank is \(18,5 \%\) per annum compounded monthly. Josephine opened a savings account with a single deposit of R1 000 on the 1st April 2012. She then makes 18 monthly deposits of R700 at the end of every month. Lindiwe receives a bursary of R80 000 for her studies at university. She invests the money at a rate of \(13,75 \%\) per annum compounded yearly. 
Ernest takes out a twenty year loan of \(\mathrm{R} 250000\). He repays the loan by means of equal monthly payments starting four months after the granting of the loan. Andrew wants to borrow money to buy a motorbike that costs R55 000 and plans to repay the full amount over a period of 4 years in monthly instalments. He is presented with TWO options: Thembi takes out a loan of R150 000 for home improvements. The loan is taken over six years at an interest rate of \(12 \%\) per annum compounded half-yearly. Daphne wants to buy a house for R700 000 . She puts down a deposit of R50 000 and takes out a loan for the balance at a rate of \(18 \%\) per annum compounded monthly. How much can Belinda borrow from a bank if she repays the loan by means of equal monthly payments of R3 500, starting in one months' time? Michael takes out a twenty-year retirement annuity. He makes monthly payments of R1 800 into the fund and the payments start immediately. It is the 31st December 2012. Anna decides to start saving money and wants to save R700 000 in five years' time by paying equal monthly amounts of \(\mathrm{R} x\), starting in one months time Pat starts a five year savings plan. At the beginning of the month he deposits R1 400 into the account and makes a further deposit of R1 400 at the end of that month. R6 000 is invested at \(9,6 \%\) per annum interest compounded quarterly. After how many years will the investment be worth R40 000? Edgar's motor car costing R230 000 depreciated at a rate of \(7 \%\) per annum on the reducing-balance method. Calculate how long it took for the car to depreciate to a value of R100 000 under these conditions. Tumelo buys a car for R100 000. He drives the car for four years and then decides to sell the car. Suppose that after four years of depreciation, the car is worth one quarter of its original value. Due to load shedding, a restaurant buys a large generator for R227 851. It depreciates at \(23 \%\) per annum on a reducing-balance. Sandy takes out a loan of R120 000 for home improvements. The loan is taken over four years at an interest rate of \(12 \%\) per annum compounded monthly. Lerato takes out a bank loan and repays the loan by means of equal quarterly payments of \(\mathrm{R} 2000\), starting in three months time.
CommonCrawl
floor function desmos

In mathematics and computer science, the floor function is the function that takes as input a real number x and gives as output the greatest integer less than or equal to x, denoted floor(x) or ⌊x⌋. Similarly, the ceiling function maps x to the least integer greater than or equal to x, denoted ceil(x) or ⌈x⌉. For example, ⌊2.4⌋ = 2 and ⌈2.4⌉ = 3, while ⌊2⌋ = ⌈2⌉ = 2. In other words, FLOOR(x) rounds the number x down: floor(2.1) = ⌊2.1⌋ = 2, floor(3) = ⌊3⌋ = 3, FLOOR(1.6) equals 1 and FLOOR(-1.2) equals -2. The floor of a real number x, denoted by ⌊x⌋, is defined to be the largest integer no larger than x. An online calculator can evaluate the floor and ceiling functions for a given value of the input x, and a free floor/ceiling equation calculator solves equations containing floor/ceil values and expressions step by step.

Question: I ran into a problem with Desmos after playing around with floor functions, and I want to know whether the error is in my thinking or whether the problem lies with Desmos. In the screenshot you can see the function floor(x) - floor(y) = 0. I tested the equation at the point (1.2, 1.4) manually and it was true, but the point is not part of the graph.

Comment: I think Desmos is trying to graph a curve when it ought to be a region, and it is confused because of the equal sign.

Answer: It is not difficult to show that $\lfloor x\rfloor=\lfloor y\rfloor$ if and only if $(x-\lfloor y\rfloor)(y-\lfloor x\rfloor)\ge0$. The unshaded region below is the graph of your equation: all of the squares $[n,n+1]\times[n,n+1]$ satisfy this condition. Desmos does not show it accurately on the boundaries, however. It is also the case that $\lfloor y\rfloor=\lfloor x\rfloor$ if and only if $\vert\lfloor y\rfloor-\lfloor x\rfloor\vert<1$, but Desmos errs once again on the boundary. (A quick numerical check of the first equivalence is sketched at the end of this page.)

Desmos tips: You can use "floor" for the Greatest Integer Function and "ceil" for the Least Integer Function; you can find both of these functions under the Misc tab. If you click the FUNCTIONS key on the Desmos keypad, you'll find more supported functions in the tabs, including Midpoint, Distance, Polygon, Floor / Greatest Integer, Modular Arithmetic (mod) and Absolute Values. The ceiling function is entered as ceil(), the floor function as floor(), and the sign function as sign(); the absolute value symbol is entered with | (i.e., Shift + \ on US keyboards). For example, mod(6,4) will show the remainder of 6 divided by 4 and output 2 in the expression list as the answer; you can also use just numbers with the mod function. To write a piecewise function, use the following syntax: y = {condition: value, condition: value, etc.}. The flooring function rounds any number down to the nearest integer and the ceiling function rounds any number up to the nearest integer, so the expression f(x) = floor(x) graphs the greatest integer function. For more functions, check out the Desmos keyboard, and use function notation to make meaningful connections between expressions, tables, and other mathematical objects.

Related questions: Wondering why Desmos doesn't show a hollow circle on the right side of each segment of the floor(x) graph? How do I change it to round outputs to the nearest integer, instead of rounding either up or down? What should be the graph of $[y]=[\sin x]$? Can floor functions be used to "cycle" between values? Is there a rule of thumb for finding the error in floor terms, or an alternative for finding the area under the floor function (i.e. the integral of floor(x))? Evaluate $\int_0^\infty \lfloor x\rfloor e^{-x}\,dx$. Definite integrals and sums involving the floor function are quite common in problems and applications; the best strategy is to break up the interval of integration (or summation) into pieces on which the floor function is constant.
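The equivalence used in the answer above is easy to verify numerically. The following minimal Python sketch (my own illustration, not part of the original question and unrelated to Desmos itself) samples random points and checks that ⌊x⌋ = ⌊y⌋ holds exactly when (x − ⌊y⌋)(y − ⌊x⌋) ≥ 0; the boundary trouble reported in the question is a rendering issue in Desmos, not a gap in the equivalence.

```python
import math
import random

def same_floor(x: float, y: float) -> bool:
    """Direct check: do x and y have the same integer part (floor)?"""
    return math.floor(x) == math.floor(y)

def region_test(x: float, y: float) -> bool:
    """Equivalent condition from the answer: (x - floor(y)) * (y - floor(x)) >= 0."""
    return (x - math.floor(y)) * (y - math.floor(x)) >= 0

random.seed(0)
mismatches = 0
for _ in range(100_000):
    x = random.uniform(-5, 5)
    y = random.uniform(-5, 5)
    if same_floor(x, y) != region_test(x, y):
        mismatches += 1

# Expect 0 mismatches: the two conditions agree everywhere, including boundaries.
print("mismatches:", mismatches)
print("point from the question (1.2, 1.4):", same_floor(1.2, 1.4), region_test(1.2, 1.4))
```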
CommonCrawl
The determinants of individual health care expenditures in prison: evidence from Switzerland Karine Moschetti1,2, Véra Zabrodina1,3, Tenzin Wangmo3, Alberto Holly4, Jean-Blaise Wasserfallen2, Bernice S. Elger3,5 & Bruno Gravier6 Prison health systems are subject to increasing pressures given the specific health needs of a growing and aging prison population. Identifying the drivers of medical spending among incarcerated individuals is therefore key for health care governance in prisons. This study assesses the determinants of individual health care expenditures within the prisons of the canton of Vaud, a large region of Switzerland. We use a unique dataset linking demographic and prison stay characteristics as well as objective measures of morbidity to detailed medical invoice data. We adopt a multivariate regression approach to model total, somatic and psychiatric outpatient health care expenditures. We find that chronic infectious, musculoskeletal and skin diseases are strong predictors of total and somatic costs. Schizophrenia, neurotic and personality disorders as well as the abuse of illicit drugs and pharmaceuticals drive total and psychiatric costs. Furthermore, cumulating psychiatric and somatic comorbidities has an incremental effect on costs. By identifying the characteristics associated with health care expenditures in prison, this study constitutes a key step towards a more efficient use of medical resources in prison. Health care expenditures represent an increasing share of public spending and constitute a pressing concern in many developed countries. In correctional systems, these trends are likely to be exacerbated given the growth, aging, and specific health needs of the prison population [1, 2]. In Switzerland, the share of the population in prison has remained relatively stable at 85 individuals per 100,000 inhabitants in 2014, which is lower than the OECD average of about 150 [3]. However, the absolute number of incarcerated individuals increased by 25% between 2000 and 2014 [4]. As in most European countries, the principle of equivalence of care applies, meaning that all individuals in prison have the right to access the same standards of quality of health care as the general population [5,6,7]. Hence, prison systems are required to meet the health needs of individuals in prison with limited resources and while facing important organizational and ethical challenges. The prison population has a high prevalence of chronic somatic and psychiatric conditions as well as substance abuse problems and infectious diseases [8,9,10,11]. Unmet needs prevail due to low socioeconomic status, precarious life experience, and limited access to health care prior to incarceration [12]. These factors contribute to explaining why health care utilization has been found to be greater in prison than the general population [13,14,15,16,17,18,19]. Furthermore, developed countries are witnessing an increase in the share of elderly prisoners [20, 21], which stems from harsher sentencing patterns with longer incarceration times [22, 23]. Older individuals in prison naturally have a higher prevalence of chronic health conditions, serious life-limiting illnesses and comorbidity rates [8, 11, 24]. Aging and chronic conditions are known to have major economic consequences on the health care system as a whole [25, 26]. 
In particular, non-communicable chronic diseases account for roughly 80% of total annual health care expenditures in Switzerland [27], with cardiovascular, musculoskeletal and psychiatric diseases being the largest burdens. Meanwhile, research lacks to understand the drivers of health care expenditures in prison. Empirical analyses have been hampered by the lack of individual-level data linking expenditures and clinical diagnoses. In light of the demographic trends and needs outlined above, understanding which factors are associated with individual health care expenditures represents a key step towards a more efficient use of resources and higher standards of care in prison. This study investigates the determinants of outpatient health care expenditures in prisons. To this end, we estimate regression models for costs using a unique individual-level dataset from the canton of Vaud, a large region of Switzerland. We merge administrative data collected by prison medical staff with insurance invoice data for incarcerated individuals. The administrative data contain information on demographics and prison stay as well as a detailed profile of chronic diseases diagnosed by physicians. The invoice data capture all medical resources consumed by the individual within on-site prison outpatient clinics in 2011. This analysis complements previous studies that examine health care utilization in prisons [14, 17, 18, 28,29,30,31,32], none of which however consider costs. Our results show that costs are significantly associated with chronic infectious, musculoskeletal and skin diseases as well as schizophrenia, neurotic and personality disorders, and the abuse of illicit drugs and pharmaceuticals. Finally, we show that cumulating somatic and psychiatric comorbidities leads to a disproportional increase in costs. This section presents the data, the cost outcomes of interest and the explanatory variables used to model them in the multivariate regression analyses. Methodological details on these analyses are provided in the next section. Merged administrative and invoice data Our analysis links two datasets. First, we use a cross-sectional dataset on 1664 adult individuals who were in a closed prison in the canton of Vaud at any point during the year 2011. The canton of Vaud has one of the largest prison systems in Switzerland with 641 spaces, that is about 9% of the total prison capacity in the country during that period [33]. The administrative data incorporates information on demographics, prison stay characteristics (e.g. mode of detention, length of prison stay), health care utilization, as well as a comprehensive set of objectively diagnosed chronic somatic and mental health conditions (see [11] for further details on this data). Each of the four studied prisons of the canton of Vaud has an on-site outpatient primary care clinic with nurses, generalist practitioners (GPs), psychiatrists and other visiting specialists, e.g. gynaecologists and physiotherapists. These clinics are the main points of provision of health care to individuals in prison and are operated by the Service of Correctional Medicine and Psychiatry (SMPP) of the University Hospital of Lausanne (CHUV). These data were systematically collected and updated by the medical staff of the SMPP. The Swiss legislation requires an examination by a GP within 3 weeks of being incarcerated to identify existing conditions and health needs. 
The data exclude individuals who did not have this routine medical examination upon incarceration and thus have missing health data. They also exclude observations with missing values for any of the variables used in the empirical analysis. Second, to obtain health care costs, invoice data were extracted from the accounting system of the CHUV, which centralizes all the bills sent to the health insurance companies for outpatient health care provided in prisons by the SMPP. In Switzerland, outpatient care is reimbursed through a fee-for-service system, so that invoices contain information on all health services provided and corresponding costs. The data also include expenses for all medications delivered on site. These two datasets are matched to combine individual information with invoices. While non-matches are excluded, we retain 106 individuals who are reported not to have consumed health care in the administrative data and impute them as zero costs. These individuals either had brief stays, or entered prison before the beginning of the year and then did not use health care. Conversely, several factors may explain why some individuals who were reported to consume health care in the administrative data do not match with invoice data: the two datasets are collected independently, possibly leading to discrepancies and some bills not being issued. Sample selection issues related to the matching procedure are discussed below. The final analysis sample includes 1107 individuals. Outcome variables: Total, somatic and psychiatric health care costs As outcome variables, our regression analyses model three categories of outpatient health care costs obtained from invoice data. First, total costs include all expenditures from medical services used in the on-site outpatient prison clinics. Costs are cumulated when individuals have multiple prison stays in 2011. Second, we split total costs into somatic and psychiatric costs, corresponding to the two main types of health care provided in prison. This distinction allows us to grasp the channel through which total costs are affected. The fee-for-service codes in invoice data are used to identify specific medical services and allocate them across somatic and psychiatric categories. Somatic costs comprise GP consultations, somatic care by nurses, somatic medication as well as physiotherapy, occupational therapy, and gynaecological consultations. Medical supplies and diagnostic tests are also assigned to somatic care. Psychiatric costs include consultations with psychiatrists, psychiatric care by nurses, ambulatory stays at the day psychiatric clinic and psychotropic drugs. We classify drugs as psychotropic if their Anatomical Therapeutic Chemical (ATC) classification code begins with N05 or N06, and as somatic otherwise [34]. Explanatory variables The administrative data allow us to use a rich set of variables to model individual health care costs. Our regressions include binary indicators for chronic somatic diseases, mental health conditions and substance abuse problems (Table 1), categorized into groups according to the International Classification of Diseases, version 10 (ICD-10). This allows us to compare the magnitude and the significance of the influence of each disease group on costs.
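To illustrate the cost-splitting rule described above, the following sketch allocates invoice lines to the somatic or psychiatric category and sums them per individual. The column names (person_id, service_type, atc_code, amount_chf) and the service labels are hypothetical placeholders, not the actual variable names of the CHUV accounting extract; only the N05/N06 rule for psychotropic drugs comes from the text.

```python
import pandas as pd

def is_psychotropic(atc_code) -> bool:
    """Psychotropic drugs are identified by ATC codes starting with N05 or N06 [34]."""
    return isinstance(atc_code, str) and atc_code.upper().startswith(("N05", "N06"))

def classify_invoice_line(row: pd.Series) -> str:
    """Assign one billed service or dispensed drug to the psychiatric or somatic category."""
    if row["service_type"] == "drug":
        return "psychiatric" if is_psychotropic(row["atc_code"]) else "somatic"
    psychiatric_services = {"psychiatrist", "psychiatric_nursing", "day_clinic"}
    return "psychiatric" if row["service_type"] in psychiatric_services else "somatic"

# Hypothetical invoice extract: one row per billed item.
invoices = pd.DataFrame({
    "person_id":    [1, 1, 2, 2],
    "service_type": ["gp_consultation", "drug", "psychiatrist", "drug"],
    "atc_code":     [None, "N05BA01", None, "M01AE01"],
    "amount_chf":   [85.0, 12.5, 150.0, 7.2],
})

invoices["category"] = invoices.apply(classify_invoice_line, axis=1)
costs = (invoices
         .pivot_table(index="person_id", columns="category",
                      values="amount_chf", aggfunc="sum", fill_value=0.0)
         .assign(total=lambda d: d.sum(axis=1)))
print(costs)
```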
While some disease groups may drive up costs by requiring costly treatment with few contacts with physicians, others may remain relatively inexpensive despite requiring regular monitoring. Somatic diseases are diagnosed and reported by a GP, and include infectious diseases, skin problems, and diseases of the musculoskeletal, digestive, circulatory, endocrine, respiratory, and nervous systems. A psychiatrist reports psychiatric conditions and substance abuse problems. The former include schizophrenia, mood disorders, neurotic disorders, behavioural syndromes, personality disorders and mental retardation. The latter encompass illicit drug, pharmaceuticals and alcohol abuse. Regression models also control for sex, age group, marital status and Swiss origin. A binary indicator for having health insurance acts as a proxy for socioeconomic status and prior access to health care. Basic health insurance is mandatory in Switzerland, so that all residents have to conclude contracts with private health insurance companies. These contracts are highly regulated and cover a wide range of medical services as determined by federal law (Footnote 1). Contracts involve the payment of monthly premiums, and partial or full means-tested state subsidies exist to support low-income individuals. In particular, the insurance covers the costs of all health services provided in prison. For individuals with an existing health insurance contract, the premiums are financed using private funds whenever possible, or by state subsidies and contributions of the prison administration. However, more than half of the individuals in our sample are uninsured, mostly due to an illegal residence status or a highly vulnerable socioeconomic situation (e.g. migrants, marginalized individuals). This points to inequalities in access to medical care prior to incarceration for uninsured individuals, to whom prisons may offer an opportunity to access health care [35]. For these uninsured individuals, the prison administration directly bears the medical costs, or purchases basic health insurance on their behalf in case of longer sentences. Furthermore, in the canton of Vaud, the prison administration is completely legally and hierarchically separate from the prison health services, and does not intervene in medical decisions, with the exception of court-mandated psychiatric therapies. Prison health services do not pay medical costs but rather get reimbursed for all the care they provide, either by the prison administration or the health insurance company. Hence, there should be no incentives for the prison medical staff to discriminate against individuals without health insurance, namely to make different treatment decisions based on insurance status. Moreover, healthcare service provision in Swiss prisons is governed by the medical and ethical guidelines of the Swiss Academy of Medical Sciences [36], under the principle of equivalence of care established by international norms [6, 37]. We capture prison stay characteristics by including indicators for the type of crime (sexual, drug-related or violent, with other as the baseline, e.g. traffic or fraud), number of stays in 2011 and an indicator for detention regime (preventive or convicted). We control for length of stay in 2011 to account for the fact that individuals with shorter stays have fewer opportunities to consume health care. Furthermore, total length of stay (including time spent in prison before 2011) may impact costs, since anxiety, isolation or withdrawal symptoms evolve over the incarceration time.
Individuals with a shorter prison experience may consume health care more intensively than those who have stayed longer and had time to adapt. Concurrently, individuals with longer stays may need more care due to the psychological burden of long-term sentences, or conditions acquired in prison [18, 38]. We also include binary indicators for two specific types of sentences. First, individuals whose crime is related to severe mental disorders and high risks of recidivism are mandated psychiatric treatment under the Swiss Criminal Code (SCC, article 59) [39]. This measure can be extended, for up to 5 years at a time, an unlimited number of times. Second, indefinite incarceration may also be ordered for serious offenders who are highly prone to recidivism, or are deemed untreatable (SCC, article 64). Finally, prison facilities differ by types of detention regimes, capacity, social environment, and organization of medical care provision. To capture these differences, we include binary indicators equal to one if the individual stayed in a given prison and zero otherwise, considering that individuals may stay in more than one facility in a given year (e.g. transfers or multiple stays). Modelling health care costs As outlined above, we use a multivariate regression approach to assess the determinants of the health care cost outcomes described in Section 2.2, using the explanatory variables presented in Section 2.3. Modelling health care costs poses several challenges due to the skewness, heavy tails, and excess zeroes of their distribution. The performance of alternative regression methods has been widely tested in the econometric literature [40,41,42,43,44,45,46], but no single approach emerges as the optimal one [47, 48]. However, generalized linear models (GLMs) offer several advantages in this context. They avoid retransformation issues by estimating directly on the raw scale, thus accommodating zero costs and providing interpretable estimates (Footnote 2). GLMs based on a linear exponential family density such as Poisson or Gamma can be estimated via pseudo-maximum likelihood, and provide consistent estimates as long as the link function (conditional mean) is correctly specified, even if the true density does not belong to the linear exponential family [49, 50]. Given this, the choice of the density only affects efficiency. However, as the link is likely to be misspecified to some extent, the fit will not be equally good over the whole range of predicted values [41]. GLMs may suffer from loss of precision with extremely heavy tails [40], as they impose restrictions on the whole distribution and do not allow higher-order conditional moments to be modelled flexibly [45, 51]. To select appropriate densities and link functions for our GLMs, we perform modified Hosmer-Lemeshow, Pregibon link, Pearson correlation, and modified Park tests [40, 47]. These tests work with raw-scale residuals and may be sensitive to extreme values. We also conduct 50-fold cross-validation and compute mean prediction error (MPE), root mean square error (RMSE) and mean absolute prediction error, which measure the accuracy of individual predictions [44]. For conciseness, we only present results of selected tests at the bottom of Table 3. All models include the length of stay in 2011 as an exposure variable with a coefficient constrained to 1 to account for differences in opportunities to use care. Finally, robust sandwich standard errors are estimated to shield us against misspecifications of the variance [52, 53].
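As a rough illustration of this estimation strategy, the sketch below fits a Poisson GLM with a log link, length of stay as an exposure term, and heteroskedasticity-robust (sandwich) standard errors using statsmodels. The data frame and variable names are invented for the example and do not correspond to the study's actual dataset; this is a minimal sketch of the approach described in the text, not the authors' original code, and link-class naming may differ slightly across statsmodels versions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis file: one row per individual.
df = pd.DataFrame({
    "total_cost":    [0.0, 420.5, 1310.0, 95.0, 5230.8, 780.2],
    "los_2011":      [12, 180, 365, 30, 240, 90],   # days incarcerated in 2011
    "infectious":    [0, 1, 0, 0, 1, 0],            # chronic disease indicators
    "schizophrenia": [0, 0, 1, 0, 1, 0],
    "age_50_plus":   [0, 0, 1, 0, 0, 1],
    "female":        [0, 0, 0, 1, 0, 0],
})

y = df["total_cost"]
X = sm.add_constant(df[["infectious", "schizophrenia", "age_50_plus", "female"]])

# Poisson pseudo-maximum likelihood with the (default) log link; the exposure
# term enters with a coefficient constrained to 1, and cov_type="HC1" requests
# robust sandwich standard errors.
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson(),
                     exposure=df["los_2011"]).fit(cov_type="HC1")

# The Gamma alternative would use an explicit log link, e.g.
#   sm.families.Gamma(link=sm.families.links.Log())
# and requires strictly positive costs.

# Exponentiated coefficients are multiplicative effects on expected costs.
print(np.exp(poisson_fit.params))
```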
Average partial effects (APEs) of chronic health conditions and comorbidities The next step is to determine the magnitude of the influence of specific chronic diseases and their combinations on costs in monetary terms. To this end, we use the GLM estimates to compute APEs using the method of recycled predictions. For each chronic disease c indicator, the APE is computed as $$ {APE}_c=\frac{1}{N}{\sum}_i\left\{{\hat{\mu}}_i\left(\boldsymbol{x},1\right)-{\hat{\mu}}_i\left(\boldsymbol{x},0\right)\right\} $$ where \( {\hat{\mu}}_i\left(\boldsymbol{x},c\right) \) is the predicted conditional mean of health care costs for individual i holding all other explanatory variables x constant. Specifically, we first predict costs for all individuals in the sample with the disease indicator switched off (equal to 0). Second, we predict costs for all individuals with the disease indicator switched on (equal to 1). Third, we average the differences in predictions across individuals. This approach provides an estimate of the average difference in costs from having the particular disease or not across all individuals and thus avoids covariate imbalance. We also explore whether comorbidities have a mutually reinforcing influence on costs. In other words, we test whether the costs associated with having a given pair of conditions are greater than the sum of costs associated with having each disease separately. In particular, we focus on pairs of somatic and psychiatric diseases. Similarly to above, the additional cost of a comorbidity pair (s, p) is calculated as the average difference in APEs resulting from switching disease indicators on and off: $$ {APE}_{sp}=\frac{1}{N}{\sum}_i\left\{{\hat{\mu}}_i\left(\boldsymbol{x},1,1\right)-{\hat{\mu}}_i\left(\boldsymbol{x},1,0\right)-\left[{\hat{\mu}}_i\left(\boldsymbol{x},0,1\right)-{\hat{\mu}}_i\left(\boldsymbol{x},0,0\right)\right]\right\} $$ where \( {\hat{\mu}}_i\left(\boldsymbol{x},s,p\right) \) is now estimated for combinations of the somatic condition indicator s and the psychiatric condition indicator p, holding other explanatory variables x constant (Footnote 3). Sample selection As outlined in Section 2.1, some individuals do not match across datasets. Hence, our estimates may be subject to sample selection bias if unobservable factors that influence the probability of matching are correlated with health care expenditures. We use the fully robust test proposed by JM Wooldridge [54] to investigate sample selection bias in regression models with log link. We refer the reader to the reference for further details on this test (p. 666). Results Sample and descriptive statistics In the data merging procedure, out of the 1664 individuals present in the administrative data, 1107 (67%) individuals were matched to invoice data and enter our final sample, while 557 (33%) were not matched. However, the test for sample selection does not provide strong evidence for bias from dropping non-matched individuals (total costs p-value = 0.051; somatic costs p-value = 0.300; psychiatric costs p-value = 0.099). Table 1 presents descriptive statistics of the explanatory variables for individuals in our final sample. The most prevalent chronic somatic conditions are infectious and musculoskeletal diseases. The prevalence of hepatitis B and C, HIV and tuberculosis is known to be high among incarcerated individuals [10, 11]. Musculoskeletal diseases (mostly back pain) may develop due to the uncomfortable conditions and lack of physical activity.
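Continuing the hypothetical statsmodels example from above, the two APE formulas can be implemented by recycled predictions: force the indicator(s) to 0 and 1 for everyone, predict costs, and average the differences. The helper functions below are only a sketch under the same invented variable names, not the authors' implementation.

```python
def ape_single(fit, X, exposure, indicator: str) -> float:
    """Average partial effect of one disease indicator via recycled predictions."""
    X0, X1 = X.copy(), X.copy()
    X0[indicator], X1[indicator] = 0, 1
    mu0 = fit.predict(X0, exposure=exposure)
    mu1 = fit.predict(X1, exposure=exposure)
    return float((mu1 - mu0).mean())

def ape_comorbidity(fit, X, exposure, somatic: str, psychiatric: str) -> float:
    """Incremental cost of the (somatic, psychiatric) pair beyond the two separate effects."""
    preds = {}
    for s in (0, 1):
        for p in (0, 1):
            Xsp = X.copy()
            Xsp[somatic], Xsp[psychiatric] = s, p
            preds[(s, p)] = fit.predict(Xsp, exposure=exposure)
    incremental = (preds[(1, 1)] - preds[(1, 0)]) - (preds[(0, 1)] - preds[(0, 0)])
    return float(incremental.mean())

# Using the toy Poisson fit and data frame from the previous sketch:
print(ape_single(poisson_fit, X, df["los_2011"], "infectious"))
print(ape_comorbidity(poisson_fit, X, df["los_2011"], "infectious", "schizophrenia"))
```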
Skin problems are also widespread in prison due to the confined environment [55]. Among psychiatric conditions, neurotic and personality disorders are most common. In terms of demographics, 47% of the sample consists of male individuals under 30 years old. Total incarceration length displays large variation and ranges from 1 day to 11 years in our sample. Table 1 Descriptive statistics of chronic health conditions, demographics and prison stay characteristics Table 2 provides descriptive statistics of total health care expenditures across several subsamples. Somatic care represents roughly 30% of total costs, with almost 40% of individuals having at least one chronic somatic condition. Psychiatric costs account for 70% of total costs and the proportion of individuals with mental health conditions is 40%. Individuals aged 50 and older are usually defined as elderly in prison, since they are in worse health than individuals of the same age in the general population [24]. Although these older individuals have higher somatic costs on average, they have lower total and psychiatric costs than younger individuals. Women cost more than men in all categories. Individuals with chronic somatic and psychiatric conditions have substantially higher expenditures in all categories. The average total cost for on-site outpatient care is CHF 29 per individual per day of incarceration. These figures show the high degree of variation in individual costs. However, they are descriptive, and the observed relationships may be influenced by confounding factors, such as length of stay or health profile. The regression analysis in the next section provides evidence on the specific characteristics associated with costs. Table 2 Descriptive statistics of health care costs by subsamples Results of cost regression models Table 3 reports the exponentiated coefficients of Poisson and Gamma GLMs with log link for each cost outcome. These densities performed better overall in our selection tests, and are commonly used to model health care expenditures [42, 43]. The log link enables a direct interpretation of exponentiated coefficients as the multiplicative effect on the outcome of a unit change in the explanatory variable. Comparing these two models for the three cost categories allows us to test the sensitivity of results to the choice of the density. The modified Park test and the predictive accuracy measures favour the Poisson model for total and psychiatric costs, which also performs better than the Gamma GLMs in the other tests. The choice is less clear for somatic costs. The Hosmer-Lemeshow test has significant p-values in most cases, and the negative cross-sample MPE suggests that the models over-estimate expenditures on average, particularly in the upper end of the cost distribution (Footnote 4). Table 3 Results of generalized linear models of total, somatic and psychiatric health care costs Chronic somatic conditions Chronic infectious, skin and musculoskeletal diseases are associated with increased total costs in the Poisson model, and increased somatic costs in both GLMs. Having an infectious disease multiplies total costs by a factor of roughly 2.5, which is related to treatments being particularly expensive for these conditions. Circulatory diseases significantly increase somatic costs, while endocrine diseases are significant in the Gamma model only. Respiratory, digestive and nervous system diseases are not significant.
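For readers unfamiliar with the modified Park test mentioned in the model-selection discussion above, one common implementation regresses the squared raw-scale residuals on the log of the predicted costs; the slope suggests the variance family (roughly 0 for Gaussian, 1 for Poisson, 2 for Gamma, 3 for inverse Gaussian). The snippet below, which reuses the hypothetical fit from the earlier sketches, is only an illustration of that generic procedure and not necessarily the exact variant the authors applied.

```python
import numpy as np
import statsmodels.api as sm

# Modified Park test sketch: GLM of squared residuals on log predicted costs.
yhat = poisson_fit.predict(X, exposure=df["los_2011"])
resid_sq = np.asarray((df["total_cost"] - yhat) ** 2)
park_exog = sm.add_constant(np.asarray(np.log(yhat)))
park_fit = sm.GLM(resid_sq, park_exog,
                  family=sm.families.Poisson()).fit(cov_type="HC1")

# The slope (second parameter) approximates the Park-test lambda.
print("Park test lambda:", park_fit.params[1])
```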
Interestingly, we also find a positive association between somatic diseases and psychiatric costs, which suggests that they partly capture the effect of psychiatric conditions or substance abuse problems. For example, individuals with psychiatric disorders and those who inject drugs may display more behavioural risk factors and comorbidity [35], making them prone to infectious diseases [56]. Out of the 102 individuals with infectious diseases in our sample, 59 have a substance addiction, and 31 have a comorbid psychiatric disorder. These results point to an incremental cost of having both somatic and psychiatric diseases, which we explore further below. Mental health conditions and substance abuse problems Schizophrenia is a strong predictor of psychiatric and total expenditures, along with neurotic and personality disorders. Conversely, mood and behavioural disorders, which include diseases such as depression or anxiety, have coefficients that are below one or not significant for psychiatric and total costs. Behavioural syndromes are associated with significantly lower somatic costs. Psychiatric therapies are expensive, as they require face-to-face sessions with a specialist and costly psychotropic medication. The coefficients on illicit drugs and pharmaceuticals abuse are highly significantly positive for psychiatric and total costs. Substance addicts are often enrolled in methadone maintenance treatment programs managed by psychiatrists. Illicit drug abuse diminishes somatic costs in the Poisson model. We also find positive associations between psychiatric disorders and somatic costs, which may have several explanations. Individuals with psychiatric disorders or substance abuse problems may be less autonomous in health self-management and particularly susceptible to violence and self-harm, thus increasing the need for consultations with GPs or nurses. Also, the use of psychotropic medication may induce metabolic side effects requiring somatic surveillance [57]. Demographic characteristics Age is a strong predictor of costs in Gamma GLMs. Being aged 50 or older is associated with doubled total and somatic costs, and tripled psychiatric costs relative to the 18–29 year-old baseline. The pattern is less clear in Poisson models, which predict higher costs for individuals aged 30–39 and 40–49, but not for those aged 50 or older. Age is known to be positively associated with morbidity in both the general and the prison population [11]. Hence, the chronic disease indicators partially capture the influence of age on costs. All else being equal, women have significantly lower total and psychiatric expenditures in Poisson models, despite their high prevalence of mental disorders and infectious diseases [11]. For somatic costs, the coefficients are above one but not significant. Previous studies find mixed evidence for women seeking more care relative to men in prison [14, 15, 28, 31]. However, they have underlined the specific needs of women in prison, which prison medical services are often not designed to satisfy. Marital status is not significant, while being of Swiss origin is associated with higher psychiatric and total costs in Gamma models. A possible explanation is that these individuals are more likely to have followed a therapy prior to incarceration, so that their conditions are more easily identified. Also, 44% of drug abusers and 63% of individuals with mandated psychiatric treatment are Swiss in our sample, so that these characteristics are correlated.
These groups are prone to have a worse health status. Insurance status is not a significant predictor of expenditures. This suggests that there is no discrimination in health care access across individuals in prison in terms of insurance status, and that having had easier access to health care prior to incarceration appears to have no direct impact on health care expenditures in prisons. Prison stay characteristics The total length of stay in prison is significantly negative, with an additional day in prison being associated with costs lower by 0.1–0.3%. This suggests that expenditures accumulate more slowly as the individual adapts to the prison environment. The number of stays does not affect costs. Having committed a violent crime displays no strong association, while crimes of a sexual nature are associated with larger total and psychiatric costs, as sexual offenders often follow a psychiatric therapy. Drug-related offences are negatively related to expenditures, but are significant in Gamma models only. Note that the characteristics of drug dealers may differ from those of consumers. Being mandated psychiatric treatment is associated with significantly higher total and psychiatric expenditures, while being incarcerated indefinitely displays no association. Individuals under preventive detention have higher costs, which may be explained by the shock of incarceration or the stress related to on-going judiciary proceedings. Their health status may also deteriorate due to them spending almost all day in their cell. Finally, estimations show that prison indicators are significant, which points to heterogeneity across facilities being correlated with health care costs. Average partial effects Table 4 shows the APEs of chronic conditions on total costs expressed in Swiss francs (CHF), the scale of interest. Among the somatic disease groups, infectious diseases have the greatest APE and are associated with increases in total costs of about CHF 3200 to 4500. Schizophrenia is the costliest psychiatric disorder and induces expenditures between CHF 3100 and 4400. APEs vary across GLMs, with greater differences for diseases with large coefficients. Table 4 Average partial effects of chronic health conditions on total costs (in CHF) Table 5 displays the additional expenditures generated by selected comorbidity pairs of chronic somatic and psychiatric diseases. Specifically, it presents the difference in the APE of each disease associated with also suffering from the other condition in the pair. The results suggest that it costs approximately CHF 4600 more to treat infectious diseases for individuals with schizophrenia compared to those without schizophrenia. Infectious diseases also have significant comorbidity costs in combination with the other psychiatric disorders. Neurotic disorders and pharmaceuticals abuse induce additional costs when cumulated with skin or musculoskeletal diseases. Circulatory diseases do not generate comorbidity costs with psychiatric conditions. These results are in line with previous evidence for the general population showing that the incremental cost of an additional chronic disease increases with the number of existing conditions [58]. Table 5 Average partial effects of selected comorbidities on total costs (in CHF) This study explores the drivers of individual health care expenditures for outpatient care within the prisons of the canton of Vaud, Switzerland. 
We construct a unique individual-level dataset by combining detailed invoice data with a wide set of objective measures of morbidity as well as demographic and prison stay information. We run regression models to assess the magnitude and significance of the associations between these characteristics and total, somatic and psychiatric health care expenditures. Furthermore, calculating the APEs allows the estimation of the economic impact of chronic conditions as well as the additional costs of cumulating psychiatric and somatic comorbidities. This paper adds to the scarce literature on the health profile and health care utilization of incarcerated individuals. The results provide key insights into the costs of medical resources provided in prison, and inform the allocation of scarce financial resources in prisons by identifying individual characteristics and health conditions that predict higher expenditures. A more efficient identification of individuals who present these characteristics may significantly improve their management and outcomes, and subsequently lead to lower expenditures. In our sample, fewer than 100 individuals induce more than half of the total on-site outpatient costs. From the perspective of the prison administration, the results provide relevant information to identify those individuals, for whom health insurance should be purchased rapidly. Among somatic diseases, we identify chronic infectious diseases, musculoskeletal and skin problems as important predictors of health care expenditures within prisons. Infectious diseases in particular remain a serious issue in correctional facilities and an important target for prevention. Schizophrenia, neurotic and personality disorders as well as illicit drugs and pharmaceuticals abuse are significantly associated with increased total and psychiatric costs. Furthermore, individuals cumulating somatic and psychiatric conditions induce additional costs directly related to the comorbidity itself. Hence, the high predisposition of individuals with psychiatric and substance abuse disorders towards risky health behaviours makes them a particularly pertinent target group. Psychotropic treatments may also cause somatic side effects that are difficult to manage in prisons. These considerations underscore the importance of coordination between psychiatric and somatic health care professionals. More generally, the incremental costs of comorbidities are an increasingly relevant issue both in prisons and in the general population. Prison infrastructures and health systems are often not designed to manage somatic chronic conditions efficiently, which may exacerbate their impact [59]. Regular follow-up consultations with nurses and GPs are complicated by security and logistic concerns, adding indirect costs to those of medical care. The need for consultations is increased in prison since, unlike the general population, incarcerated individuals cannot benefit from informal health care contacts, e.g. with relatives or pharmacists [15, 16]. In this context, prevention can help avoid the development or worsening of chronic illnesses, along with their costly complications and comorbidities, and contain costs. In particular, fostering health literacy and informing individuals in prison on how to manage their chronic somatic conditions may increase treatment adherence and outcomes, while reducing the need for medical contacts.
Additionally, ensuring continuity of treatment both outside of the prison and throughout the incarceration period may favour reintegration and socioeconomic stability upon release, as well as decrease the probability of costly acute complications. An interesting feature of the environment under study is the absence of inequalities in access to health care. Indeed, the medical services offered in prison are relatively standardized, and the institutional setting eliminates the direct role of financial enabling factors, since incarcerated individuals incur no out-of-pocket costs. This differs from the general population, which typically faces inequalities in access to and quality of care related to socioeconomic status. However, individuals in prison have specific needs and therefore are not a representative subgroup of the general population. For example, individuals in prison may consult medical staff because of difficulties adjusting to the correctional environment, a wish to relieve boredom, or the hope of obtaining psychotropic drugs, which have resale value within prisons. Organizational constraints also exist that may create a wedge between the desirable and actual levels of health care utilization. Finally, the absence of provider choice in prison and of competition mechanisms could limit incentives to improve quality of care and contain costs. Limitations and strengths This analysis has several limitations. First, with regard to captured costs, our data do not include costs generated by off-site health care services, namely specialized outpatient care unavailable on-site, emergency admissions or inpatient care. Complete cost data were not available for these services since providers other than the CHUV may supply them. Off-site care is typically costly, primarily because the individuals concerned require emergency admissions or hospital stays and are usually more severe cases. Off-site transfers also involve complex security measures and generate further non-medical expenses. However, off-site care complements rather than substitutes for regular outpatient care in prison clinics, since prison medical staff act as gatekeepers for off-site care and are responsible for subsequent follow-ups. Not accounting for these off-site services is thus unlikely to deflate (or shift) the on-site costs. Furthermore, on-site outpatient care represents more than 95% of the total number of outpatient consultations, making this study highly relevant for prison health care governance. Further research aiming at evaluating individual risk factors for off-site health care utilization would be pertinent. As for the on-site outpatient care under study, it is possible that some services were not entered into the accounting system, so that costs may be underestimated. In particular, short visits to nurses for routine blood pressure checks or medication administration may have been overlooked. These are likely to represent low costs, and there is no reason to believe that billing depends on any individual characteristics so as to bias our estimates. Another limitation of our data and hence our models is that we do not include acute episodes such as influenza or hunger strikes. However, chronic conditions generate a substantial part of these acute complications (e.g. blood sugar drops for individuals with diabetes, or self-harm for individuals suffering from depression). Finally, data are available only for one region of Switzerland, which limits external validity.
Prisons play a crucial role in addressing the medical needs of incarcerated individuals, whose access to health care prior to detention is often limited. Most of them are eventually released, so that poorly managed health problems may additionally burden the health care system outside the prison [12]. In light of the high disease prevalence even in young prison populations, the pressures on prison health systems are bound to increase as the population grows and ages. While this study provides insights into the patterns of individual health care expenditures in prison, further evidence on the cost-effectiveness and organization of prison health care provision is key to ensure adequate quality of care for incarcerated individuals as well as working conditions for prison staff. Data availability in correctional settings remains a challenge to progress in this area. For further details on the Swiss health and insurance systems, see e.g., the review published by the OECD and the WHO [60]. Ordinary least squares of log-transformed expenditures are commonly used and may be efficient with heavy-tailed data [40]. However, this method poses the retransformation problem to obtain interpretable results, especially in the presence of heteroscedasticity, and is inappropriate for zero data. Breusch-Pagan and White tests detect complex heteroscedasticity of log-scale residuals, and the Shapiro-Wilk test rejects normality (all p-values < 0.000). We experimented with models including interaction terms between disease indicators, but these led to severe collinearity issues. Plotting the mean residuals by deciles of costs indicates that the models consistently over-predict in the upper decile. The Gamma model in particular downweighs the errors at the high end of costs compared to the Poisson model, which gives equal weight to errors across the whole range. Therefore, the Gamma fits high-cost observations worse [41]. APE: Average partial effect ATC: Anatomical Therapeutic Chemical Classification CHUV: University Hospital of Lausanne (Centre Hospitalier Universitaire Vaudois) GLM: Generalized linear model ICD: MPE: Mean prediction error RMSE: Root mean squared error SCC: Swiss Criminal Code SMPP: Service of Psychiatry and Correctional Medicine (Service de Psychiatrie et Médecine Pénitentiaire) Penal Reform International. Global prison trends: Penal Reform International; 2015. https://cdn.penalreform.org/wp-content/uploads/2015/04/PRI-Prisons-global-trends-report-LR.pdf. Walmsley R. World prison population list (eleventh edition). Int Center Pris Stud. 2015. http://www.prisonstudies.org/sites/default/files/resources/downloads/world_prison_population_list_11th_edition_0.pdf. OECD. Society at a Glance 2016: OECD Social Indicators; 2016. https://doi.org/10.1787/9789264261488-en. Swiss Federal Statistics Office: Swiss Federal Statistics Office, crime and criminal justice.; 2017. Lehtmets A, Pont J. Prison health care and medical ethics: a manual for health-care workers and other prison staff with responsibility for prisoners' well-being: Council of Europe; 2014. https://rm.coe.int/publications-healthcare-manual-web-a5-e/16806ab9b5. Elger BS. Towards equivalent health care of prisoners: European soft law and public health policy in Geneva. J Public Health Policy. 2008;29(2):192–206. Elger BS. Prison medicine, public health policy and ethics: the Geneva experience. Swiss Med Wkly. 2011;141:w13273. Harzke AJ, Baillargeon JG, Pruitt SL, Pulvino JS, Paar DP, Kelley MF. 
Prevalence of chronic medical conditions among inmates in the Texas prison system. J Urban Health. 2010;87(3):486–503. Fazel S, Baillargeon J. The health of prisoners. Lancet. 2011;377(9769):956–65. Wolff H, Sebo P, Haller DM, Eytan A, Niveau G, Bertrand D, Getaz L, Cerutti B. Health problems among detainees in Switzerland: a study using the ICPC-2 classification. BMC Public Health. 2011;11:245. Moschetti K, Stadelmann P, Wangmo T, Holly A, Bodenmann P, Wasserfallen JB, Elger BS, Gravier B. Disease profiles of detainees in the Canton of Vaud in Switzerland: gender and age differences in substance abuse, mental health and chronic health conditions. BMC Public Health. 2015;15:872. Wilper AP, Woolhandler S, Boyd JW, Lasser KE, McCormick D, Bor DH, Himmelstein DU. The health and health care of US prisoners: results of a nationwide survey. Am J Public Health. 2009;99(4):666–72. Twaddle AC. Utilization of medical services by a captive population: an analysis of sick call in a state prison. J Health Soc Behav. 1976;17(3):236–48. Lindquist CH, Lindquist CA. Health behind bars: utilization and evaluation of medical care among jail inmates. J Community Health. 1999;24(4):285–303. Marshall T, Simpson S, Stevens A. Use of health services by prison inmates: comparisons with the community. J Epidemiol Community Health. 2001;55(5):364–5. Feron JM, Paulus D, Tonglet R, Lorant V, Pestiaux D. Substantial use of primary health care by prisoners: epidemiological description and possible explanations. J Epidemiol Community Health. 2005;59(8):651–5. Nobile CG, Flotta D, Nicotera G, Pileggi C, Angelillo IF. Self-reported health status and access to health services in a sample of prisoners in Italy. BMC Public Health. 2011;11:529. Wangmo T, Meyer AH, Handtke V, Bretschneider W, Page J, Sommer J, Stuckelberger A, Aebi MF, Elger BS. Aging prisoners in Switzerland: an analysis of their health care utilization. J Aging Health. 2016;28(3):481–502. Elger BS, Goehring C, Revaz SA, Morabia A. Prescription of hypnotics and tranquilisers at the Geneva prison's outpatient service in comparison to an urban outpatient medical service. Soz Praventivmed. 2002;47(1):39–43. Ginn S. Elderly prisoners. BMJ. 2012;345:e6263. Human Rights Watch: Old behind bars: The aging prison population in the United States.; 2012. Dawes J. Ageing prisoners: issues for social work. Aust Soc Work. 2009;62(2):258–71. Kerbs J, Jolley J. A commentary on age segregation for older prisoners: philosophical and pragmatic considerations for correctional systems. Criminal Justice Review. 2009;34(1):119–39. Wangmo T, Meyer AH, Bretschneider W, Handtke V, Kressig RW, Gravier B, Bula C, Elger BS. Ageing prisoners' disease burden: is being old a better predictor than time served in prison? Gerontology. 2015;61(2):116–23. Busse R, Blümel M, Scheller-Kreinsen D, Zentner A. Tackling chronic disease in Europe: strategies, interventions and challenges.: observatory studies series. Edited by policies EOoHSa. UK: World Health Organization; 2010. Suhrcke M, Nugent RA, Stuckler D, Rocco L. Chronic disease: an economic perspective. London: Oxford Health Alliance 2006; 2006. Wieser S, Tomonaga Y, Riguzzi M, Fischer B, Telser H, Pletscher M, Eichler K, Trost M, Schwenkglenks M: Die Kosten der nichtübertragbaren Krankheiten in der Schweiz (in German). 2014. Moschetti K, Zabrodina V, Stadelmann P, Wangmo T, Holly A, Wasserfallen JB, Elger BS, Gravier B. Exploring differences in healthcare utilization of prisoners in the Canton of Vaud, Switzerland. PLoS One. 
2017;12(10):e0187255. Garrity TF, Hiller ML, Staton M, Webster JM, Leukefeld CG. Factors predicting illness and health services use among male Kentucky prisoners with a history of drug abuse. Prison J. 2002;82:295. Nowotny KM. Social factors related to the utilization of health care among prison inmates. J Correctional Health Care. 2016;22(2):129–38. Nesset MB, Rustad AB, Kjelsberg E, Almvik R, Bjorngaard JH. Health care help seeking behaviour among prisoners in Norway. BMC Health Serv Res. 2011;11:301. Gonçalves LC, Gonçalves RA, Martins C, Dirkzwager A. Predicting infractions and health care utilization in prison: a meta-analysis. Crim Justice Behav. 2014;41(8):921–42. Swiss Statistics: Etablissements de privation de liberté et nombre officiel de places (in French). 2015. World Health Organization: Guidelines for ATC classification and DDD assignment.; 2012. Williams BA, Goodwin JS, Baillargeon J, Ahalt C, Walter LC. Addressing the aging crisis in U.S. criminal justice health care. J Am Geriatr Soc. 2012;60(6):1150–6. Swiss Academy of Medical Sciences: Medical practice in respect of detained persons.; 2015. Jotterand F, Wangmo T. The principle of equivalence reconsidered: assessing the relevance of the principle of equivalence in prison medicine. Am J Bioeth. 2014;14(7):4–12. Handtke V, Wangmo T. Ageing prisoners' views on death and dying: contemplating end-of-life in prison. J Bioeth Inq. 2014;11(3):373–86. Swiss Criminal Code. https://www.admin.ch/opc/en/classified-compilation/19370083/201801010000/311.0.pdf. Manning WJ, Mullahy J. Estimating log models: to transform or not to transform? J Health Econ. 2001;20:461–94. Buntin MB, Zaslavsky AM. Too much ado about two-part models and transformation? Comparing methods of modeling Medicare expenditures. J Health Econ. 2004;23:525–42. Basu A, Arondekar BV, Rathouz PJ. Scale of interest versus scale of estimation: comparing alternative estimators for the incremental costs of a comorbidity. Health Econ. 2006;15:1091–107. Manning WJ, Basu A, Mullahy J. Generalized modeling approaches to risk adjustment of skewed outcomes data. J Health Econ. 2005;24:465–88. Jones AM, Lomas J, Moore P, Rice N. A quasi-Monte Carlo comparison of developments in parametric and semi-parametric regression methods for heavy tailed and non-normal data: with an application to healthcare costs: University of York HEDG WP 13/30; 2013. https://www.york.ac.uk/media/economics/documents/hedg/workingpapers/13_30.pdf. Jones AM, Lomas J, Rice N. Healthcare cost regressions: going beyong the mean to estimate the full distribution. Health Econ. 2015; https://doi.org/10.1002/hec.3178. Hill SC, Miller GE. Health expenditure estimation and functional form: applications of the generalized gamma and extended estimating equations models. Health Econ. 2010;19:608–927. Basu A, Manning WG. Issues for the next generation of health care cost analyses. Med Care. 2009;47(7):109–14. Mihaylova B, Briggs A, O'Hagan A, Thompson SG. Review of statistical methods for analysing healthcare resources and costs. Health Econ. 2011;20:897–916. Gourieroux C, Monfort A, Trognon A. Pseudo Maximum likelihood methods: theory. Econometrica. 1984;52(3):681–700. Cameron CA, Trivedi PK. Econometric models based on count data: comparisons and applications of some estimators and tests. J Appl Econ. 1986;1(1):29–53. Holly A, Monfort A, Rockinger M. Fourth order pseudo maximum likelihood methods. J Econ. 2011;162:278–93. Wooldridge JM. In: Pesaran MH, Schmidt P, editors. 
Quasi-likelihood methods for count data: Handbook of Applied Econometrics Volume II: Microeconomics edn. Malden, MA: Blackwell Publishers; 1999. Cameron CA, Trivedi PK. Microeconometrics using Stata, revised edition: Stata Press; 2010. ISBN-10:1-59718-073-4. Wooldridge JM. Econometric analysis of cross section and panel data: MIT Press; 2002. ISBN: 978-0-262-23258-6. Coury C, Kelly B. Prison dermatology: experience in the Texas Department of Criminal Justice dermatology clinic. J Correct Health Care. 2012;18(4):302–8. Wandeler G, Dufour JF, Bruggmann P, Rauch A. Hepatitis C: a changing epidemic. Swiss Med Wkly. 2015;145:w14093. Tschoner A, Engl J, Laimer M, Kaser S, Rettenbacher M, Fleischhacker WW, Patsch JR, EC F. Metabolic side effects of antipsychotic medication. Int J Clin Pract. 2007;61(8):1356–70. Orueta JF, Garcia-Alvarez A, Garcia-Goni M, Paolucci F, Nuno-Solinis R. Prevalence and costs of multimorbidity by deprivation levels in the basque country: a population based study using health administrative databases. PLoS One. 2014;9(2):e89787. Handtke V, Bretschneider W, Wangmo T, Elger BS. Facing the challenges of an increasingly ageing prison population in Switzerland: in search of ethically acceptable solutions. Bioethica Forum. 2012;5(4):134–41. OECD, World Health Organization: OECD reviews of health systems: Switzerland 2011. Technical report 2011. The authors thank the staff of the Service of Correctional Medicine and Psychiatry (SMPP) for the efforts in collecting the data. They are grateful to a reviewer and the editor of this journal for valuable comments. This research was undertaken as part of a larger project funded by the Swiss National Science Foundation (SNF) and thus the authors acknowledge the financial support received from the SNF (CR13I1_135035/1). The data supporting the findings of this study were obtained from the Service of Correctional Medicine and Psychiatry (SMPP) and the University Hospital of Lausanne (CHUV). Restrictions apply to the use of these data that contain sensitive and confidential judiciary and medical information on individuals in prison. The data were obtained under a specific authorization for the present study and cannot be made publicly available. Institute of Social and Preventive Medicine, University of Lausanne and University Hospital of Lausanne (CHUV), Route de la Corniche 10, 1010, Lausanne, Switzerland Karine Moschetti & Véra Zabrodina Technology Assessment Unit, University Hospital of Lausanne (CHUV), Lausanne, Switzerland Karine Moschetti & Jean-Blaise Wasserfallen Institute for Biomedical Ethics, University of Basel, Basel, Switzerland Véra Zabrodina, Tenzin Wangmo & Bernice S. Elger Institute of Health Economics and Management, HEC Lausanne, University of Lausanne, Lausanne, Switzerland Alberto Holly University Centre of Legal Medicine, University of Geneva, Geneva, Switzerland Bernice S. Elger Service of Correctional Medicine and Psychiatry, University Hospital of Lausanne (CHUV), Lausanne, Switzerland Bruno Gravier Karine Moschetti Véra Zabrodina Tenzin Wangmo Jean-Blaise Wasserfallen KM is responsible for the research proposal. VZ conducted the statistical analysis. KM and VZ interpreted the data and drafted the manuscript. BE, TW, AH and JBW participated in the initial conception of the work and made substantial contributions to improve the manuscript. BG elaborated and coordinated data collection and participated in the interpretation of the data; he revised the intellectual content of the manuscript. 
All authors gave critical contributions to the manuscript. They also approved the final version. Correspondence to Karine Moschetti. The study was approved by the ethical commission of the canton of Vaud, Switzerland (Protocol No. 388/12). As the study was retrospective and used only anonymized and aggregated data, individual agreement was not required. Moschetti, K., Zabrodina, V., Wangmo, T. et al. The determinants of individual health care expenditures in prison: evidence from Switzerland. BMC Health Serv Res 18, 160 (2018). https://doi.org/10.1186/s12913-018-2962-8 Keywords: Health care expenditures; Utilization, expenditure, economics and financing systems
Secant varieties of ${\mathbb {P}^1}\times \cdots \times {\mathbb {P}^1}$ ($n$-times) are NOT defective for $n \geq 5$
Authors: Maria Virginia Catalisano, Anthony V. Geramita and Alessandro Gimigliano
Journal: J. Algebraic Geom. 20 (2011), 295-327 (Online ISSN 1534-7486; Print ISSN 1056-3911)
Published electronically: March 25, 2010
Abstract: Let $V_n$ be the Segre embedding of ${\mathbb {P}^1}\times \cdots \times {\mathbb {P}^1}$ ($n$ times). We prove that the higher secant varieties $\sigma _s(V_n)$ always have the expected dimension, except for $\sigma _3(V_4)$, which is of dimension 1 less than expected.
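As a quick check of the exceptional case quoted in the abstract above, one can write out the standard expected-dimension count for secant varieties of this Segre product. The formula below is the usual naive parameter count and is a reader's sketch, not taken from the paper itself: for $V_n \subset \mathbb{P}^{2^n-1}$ with $\dim V_n = n$,
$$\operatorname{expdim}\sigma_s(V_n) = \min\bigl(s(n+1)-1,\; 2^n-1\bigr),$$
so for $n=4$, $s=3$ one gets $\min(3\cdot 5 - 1,\, 2^4 - 1) = \min(14, 15) = 14$, while the abstract states that the actual dimension is one less, namely 13.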
Maria Virginia Catalisano. Affiliation: DIPTEM - Dipartimento di Ingegneria della Produzione, Termoenergetica e Modelli Matematici, Piazzale Kennedy, pad. D 16129 Genoa, Italy. Email: [email protected]
Anthony V. Geramita. Affiliation: Department of Mathematics and Statistics, Queen's University, Kingston, Ontario, Canada, and Dipartimento di Matematica, Università di Genova, Genoa, Italy. MR Author ID: 72575. Email: [email protected]
Alessandro Gimigliano. Affiliation: Dipartimento di Matematica and CIRAM, Università di Bologna, 40126 Bologna, Italy. Email: [email protected]
Received by editor(s): September 27, 2008
Received by editor(s) in revised form: March 12, 2009
Probability Distributions Calculator
Enter a probability distribution table and this calculator will find the mean, standard deviation and variance. The calculator will generate a step-by-step explanation along with a graphic representation of the distribution.
1. Enter the numbers separated by comma (,), colon (:), semicolon (;) or blank space.
2. You can enter either integers (10), decimal numbers (10.12) or fractions (10/3).
Enter values for X and P(X) (use the data grid to input x and y values), then choose to find the mean (expectation), the standard deviation or the variance of the distribution, with an optional step-by-step explanation.
How to use this calculator
Example 1. A company tested a new product and found that the number of errors per 100 products had the following probability distribution: $$ \begin{array}{c|ccccc} \text{number of errors } (X) & 2 & 3 & 4 & 5 & 6 \\ P(X) & 0.02 & 0.25 & 0.4 & 0.3 & 0.03 \end{array} $$ Find the variance of the number of errors per 100 products.
Example 2. The discrete probability distribution of X is given by: $$ \begin{array}{c|ccccc} X & 0 & 2 & 5 & 7/3 & 5 \\ P(X) & 0.1 & 0.2 & 1/3 & 1/6 & 0.3 \end{array} $$ Find the mean of the distribution.
Related calculators: Standard Deviation Calculator, Z Score Calculator, Normal Distribution Calculator, Correlation and Regression Calculator
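The computation the calculator performs can be sketched in a few lines of Python. The snippet below works through Example 1 (the error-count distribution); it is an illustrative sketch, not code taken from the calculator itself.

```python
# Discrete distribution from Example 1: number of errors per 100 products.
x = [2, 3, 4, 5, 6]
p = [0.02, 0.25, 0.4, 0.3, 0.03]

assert abs(sum(p) - 1.0) < 1e-9  # probabilities must sum to 1

mean = sum(xi * pi for xi, pi in zip(x, p))                    # E[X]
variance = sum((xi - mean) ** 2 * pi for xi, pi in zip(x, p))  # Var[X] = E[(X - E[X])^2]
std_dev = variance ** 0.5

print(mean, variance, std_dev)  # 4.07, ~0.745, ~0.863
```

For this table the mean is E[X] = 4.07, the variance is about 0.745, and the standard deviation is therefore roughly 0.86.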
Full paper | Open | Published: 23 April 2019 Longitudinal differences in the statistical characteristics of ionospheric spread-F occurrences at midlatitude in Eastern Asia Ning Wang ORCID: orcid.org/0000-0001-9512-50141,2,3, Lixin Guo1, Zonghua Ding3, Zhenwei Zhao3, Zhengwen Xu3, Tong Xu3 & Yanli Hu3 Earth, Planets and Spacevolume 71, Article number: 47 (2019) | Download Citation Spread-F is known as the electron density inhomogeneous structures in F layer of ionosphere and can usually be classified as frequency spread-F (FSF) and range spread-F (RSF). Few studies have reported on the statistical characteristics of spread-F occurrences at midlatitudes in Eastern Asia, particularly the comparison of spread-F occurrences between China and Japan. In this paper, we used spread-F data recorded by ten ionosondes located between 25°N and 45°N from 1997 to 2016, to investigate the longitudinal differences in the statistical characteristics of spread-F occurrence and the probable mechanism for its occurrence at midlatitudes in Eastern Asia. Variations in the spread-F occurrences with the solar and geomagnetic activities, season and local time are presented. The main conclusions are as follows: (a) the occurrence percentage of FSF is higher than that of RSF, of which the former is anti-correlated with the solar or geomagnetic activities; (b) higher FSF occurrence percentages mostly appeared during summer, while RSF occurred more frequently in winter near 45°N latitude such as Urumqi, Changchun and Wakkanai; (c) the maximum of the FSF occurrence percentages mostly appeared between 01:00 and 02:00 LT approximately, whereas that of RSF appeared near 00:00 LT; (d) the spread-F occurrence percentages in the coastal or marine areas are higher than those in the inland region between 35°N and 45°N latitudes; however, this phenomenon is not obvious at lower latitudes; and (e) both the mean occurrences of FSF and RSF reach the minimum around 31°N latitude. These above results are helpful for understanding variations in spread-F occurrence at midlatitudes in Eastern Asia. Spread-F has been widely studied since it was first defined on the ionogram in the early 1930s (Booker and Wells 1938). Several observations since then revealed the main morphological features of spread-F occurrence, including its dependences on solar and magnetic activities, season, longitude, latitude, local time and the background ionosphere (Aarons et al. 1980; Abdu et al. 1981, 1983, 1998; Fukao et al. 2004; Li et al. 2013; Lynn et al. 2011; Wang et al. 2007, 2018; Xu et al. 2010). Previous studies have focused on low latitudes especially for the equatorial region. In order to understand the global characteristics of spread-F, the variations at midlatitudes have attracted great interest of many researchers since 1980s. Bowman (1984) analyzed the data recorded at Brisbane (36.4°S, 226.6°E) in Australia and showed a reduction in spread-F occurrence following an increased geomagnetic activity. Furthermore, data from two midlatitude ionosonde stations (Lannion in France and Canberra in Australia) of similar geomagnetic latitudes showed the latent correlations between spread-F occurrence and geomagnetic activity (Abdu et al. 1983; Bowman 1994; Hoang et al. 2010). Huang et al. (2011) compared the spread-F occurrence percentages using the data of two Chinese ionosonde stations located at Changchun (43.84°N, 125.28°E) and Urumqi (43.75°N, 87.64°E). 
The results showed that the spread-F occurrence at Changchun was higher than that at Urumqi especially during the low solar activity years and anti-correlations between the spread-F occurrence and the solar 10.7 fluxes. Bhaneja et al. (2018) investigated the seasonal and solar cycle variations in midlatitude spread-F at five different North American sites spanning between Puerto Rico (18.5°N, 67.1°W) and California (34.8°N, 120.5°W). They found that spread-F events occurred more frequently during solar minimum years for all five stations. The minimum spread-F occurrence percentages happened near spring equinox for all the sites except Vandenberg. Meanwhile, they found that the influence of geomagnetic on the spread-F occurrence was weak. Most scholars did not classify spread-F when they discussed the statistical variations on the spread-F occurrence. In detailed studies, spread-F can be classified as the frequency spread-F (FSF) and range spread-F (RSF). The echo of FSF distributes spreading along the frequency axis close to the critical frequencies of the ordinary and extraordinary traces of the ionograms; hence, they are associated with irregularities nearby the F region peak; meanwhile, the echo of RSF distributes spreading along the vertical height axis. Some scholars have analyzed the differences between FSF and RSF occurrence. Chandra et al. (2003) analyzed the spread-F occurrences at Ahmadabad (23°N, 72.4°E) in the Indian zone and Cachoeira Paulista (22.5°S, 45°W) in Brazil. The RSF occurrences at Cachoeira Paulista always showed a maximum in summer during low-sunspot years, and similar phenomenon had been found at Ahmadabad. Hajkowicz (2007) studied the ionosonde data obtained over a wide range of southern latitudes (in latitude range: 23°S–52°S). The spread-F at low midlatitude (23°S–36°S) and midlatitude regions (44°S–48°S) were characterized by a strong peak in RSF occurrence in winter, but an enhanced RSF activity was observed in local summer in Japan. Bhaneja et al. (2009) conducted a study using ionosonde data over a full solar cycle at Virginia (37.95°N, 284.53°E). They found that the spread-F events occurred more frequently during the late fall or early winter particularly during solar minimum. RSF was more prevalent during solar minimum, whereas FSF was prevalent in solar maximum conditions. Chen et al. (2011) employed a digisonde at Wuhan (30.5°N, 114.4°E), China, to observe the spread-F. They found that the FSF events were highly active in summer; meanwhile, the RSF showed very low occurrence during the whole night. Paul et al. (2018) investigated the occurrence rate of spread-F using the ionosonde data in the European longitude sectors (Nicosia, Athens and Pruhonice) during 2009, 2015 and 2016. He found that RSF in nighttime occurred more frequently in lower midlatitude regions during high solar activity, whereas FSF played the dominant role at higher midlatitudes. The gravitational Rayleigh–Taylor instability (R–T) theory was the primary mechanism to explain the formation of irregularities in the equatorial region. Spread-F occurrences at midlatitudes are affected by many factors (Booker 1979; Bowman 1990; Fukao et al. 2004; Perkins 1973). One of them is the acoustic gravity wave (AGW). AGW was considered to be a seeding mechanism that creates density perturbations in the ionosphere, leading to spread-F at midlatitude regions. 
Since most of the AGWs in the ionosphere originate from the lower atmosphere, there should be some regional features of spread-F occurrences due to the different local meteorological or ground conditions. Booker (1979) discussed the role of AGWs in the generation of spread-F and concluded that AGWs play a key role in the formation of spread-F. Bowman (1990) studied the data at Bribie Island (27.05°S, 153.16°E) and Moggill (60 km south of the Bribie Island) in Australia. They found that the F2 region combined with the observation of TIDs (traveling ionospheric disturbances) associated with the electron density depletions was fully consistent with the involvement AGWs. Many scholars found that the behavior of the AGWs in the midlatitude F region is related to the spread-F occurrence in the nighttime ionosphere (Bowman 1994, 1996, 1998; Bowman and Mortimer 2000, 2003; Chen et al. 2011; Dyson et al. 1995; Fukao et al. 2004). Xiao et al. (2009) presented an observational evidence for the AGWs' seeding of the ionospheric plasma instability by revealing the observational linkage between spread-F and AGWs based on the HF Doppler frequency shift measurements at Peking University. These observational facts showed the close relation between AGWs and spread-F and were regarded as the evidence of the seeding role of the AGWs in the spread-F formation. Statistical results showed that the AGWs were not the only factor in triggering spread-F. Nighttime spread-F structures have been found following the Perkins-type instability processes (Perkins 1973). It is shown that such layers are linearly unstable at night, if the field-aligned integrated Hall conductivity exceeds the field line-integrated Pedersen conductivity. The instability involves the growth of an altitude perturbation, shaped like a plane wave, which is accompanied by large polarization electric fields. In addition, the spread-F has also been correlated with geomagnetic storms through the excitation of TIDs and subsequent F region uplifts. A series of complicated physical processes may lead to the generation of spread-F events. The Perkins instability theory is accepted as the most reasonable explanation of the spread-F phenomenon at midlatitudes. The midlatitude region of Eastern Asia covers typical geomorphological features such as continents, oceans and land–sea junctions, the longitude span is large, and the longitude of geomagnetic field varies greatly. Therefore, a comparative statistical characteristic of the spread-F at different longitude regions in Eastern Asia is of great scientific significance. Due to insufficient data, few studies have been carried out on the longitudinal and latitudinal differences in spread-F occurrences at stations with similar geographical latitude in Eastern Asia. For a more detailed study, various types of spread-F occurrences should be examined separately. This paper aims to present the statistical comparison of FSF and RSF occurrence percentages using ionosonde data from six Chinese stations and four Japanese stations at midlatitudes. These stations are roughly distributed over four latitude chains. In particular, obvious differences among the 10 sites in terms of ground conditions, geographical environment and geomagnetic dip are observed where the sites are distributed within the mainland, on the coast and in island. 
The analysis focused on (1) the difference in FSF and RSF occurrence percentages at different sites located at nearly the same latitude, particularly on the variation in occurrence probability with solar and geomagnetic activities, season and local time; (2) the difference in FSF and RSF occurrence percentages between China and Japan between 25°N and 45°N at midlatitudes; and (3) the possible mechanism for the difference in spread-F occurrence percentages. In "Data set and method of analysis" section, the source of the data and the method of analysis of spread-F occurrence are described briefly. "Statistical results and discussion" section presents the statistical results and discussion on the spread-F occurrences at the 10 stations. The probable mechanism of longitudinal and latitudinal differences is discussed in "Probable mechanism of longitudinal and latitudinal differences in spread-F occurrences" section. We summarize the conclusions in "Summary and conclusions" section.
Data set and method of analysis
China Research Institute of Radio-wave Propagation (CRIRP) built a long-term network of ionospheric observation stations covering the mainland of China. In this study, we extracted spread-F data recorded between 1997 and 2016 from six digital ionosondes located at Kunming (25.64°N, 103.72°E), Suzhou (31.34°N, 120.41°E), Lanzhou (36.06°N, 103.87°E), Qingdao (36.24°N, 120.42°E), Urumqi (43.75°N, 87.64°E) and Changchun (43.84°N, 125.28°E). The ionosonde data from the four stations at Okinawa (26.68°N, 128.15°E), Yamagawa (31.2°N, 130.62°E), Kokubunji (35.71°N, 139.49°E) and Wakkanai (45.16°N, 141.75°E) can be downloaded from the Web site: http://wdc.nict.go.jp/. Details of the geographical coordinates and data set of the stations are presented in Fig. 1 and Table 1. The ionosonde data of Suzhou and Kunming used for this study were obtained from June 2009 and August 2007, respectively. For the convenience of comparison, the data used at the Yamagawa and Okinawa stations in Japan were also taken from 2009 and 2007, respectively.
Fig. 1 Details of the digital ionosonde sites used in the investigation; these stations are located roughly at latitudes of 25°N, 31°N, 35°N and 45°N
Table 1 Data set of 10 sites used in this study
The terrain or geographical location of the 10 stations is different. Urumqi is in the center of the Europe–Asia continent, whereas Changchun and Lanzhou are inland cities. Kunming is located in the middle-to-low-latitude transitional area. This region is in the west of the Yunnan–Guizhou Plateau, near the special terrain of the Tibetan plateau where the ionosphere exhibits complex characteristics. Qingdao and Suzhou are situated in a typical coastal area in the midlatitude region. Wakkanai, Kokubunji, Yamagawa and Okinawa in Japan are all located in coastal areas. In this study, we used the spread-F occurrence percentage to describe the spread-F statistical characteristic, which is defined as follows: $$p(y,m,h) = \frac{n(y,m,h)}{N(y,m,h)} \times 100\%$$ where y, m and h represent the year, month and local time (LT), respectively; n is the number of FSF or RSF occurrences that appear at the same local time but on different days of a single month; and N is the total number of observations in local time for a given year.
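A minimal sketch of how this occurrence percentage can be computed from hourly ionogram scalings is given below. The data layout (one boolean spread-F flag per station, day and hour) and the column names are assumptions made for illustration; they are not the authors' actual processing code.

```python
import pandas as pd

# Hypothetical hourly scaling table: one row per ionogram, with a boolean flag
# indicating whether FSF (or RSF) was observed on that ionogram.
records = pd.DataFrame({
    "year":  [2010, 2010, 2010, 2010],
    "month": [6, 6, 6, 6],
    "hour":  [1, 1, 1, 1],          # local time
    "fsf":   [True, False, True, True],
})

# p(y, m, h) = n(y, m, h) / N(y, m, h) * 100%
grouped = records.groupby(["year", "month", "hour"])["fsf"]
occurrence_pct = grouped.sum() / grouped.count() * 100.0
print(occurrence_pct)   # 75.0% for June 2010, 01 LT in this toy example
```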
To examine their seasonal variations, the data were grouped into the following four seasons: summer (May, June, July and August), spring equinox (March and April), autumn equinox (September and October) and winter (January, February, November and December). Since spread-F is mostly a nighttime phenomenon, only data between 6:00 PM and 6:00 AM local time were considered in this study. Meanwhile, solar and geomagnetic activities are the two main factors that influence ionospheric activity. The data set covered the entire Solar Cycle 23 and more than half of Solar Cycle 24. Solar Cycle 24 was abnormal and different from the preceding solar cycles (Chen et al. 2011), and its maximum solar activity index was 50% lower than that of the former one. We used the F10.7 and Ap indices to identify the solar and geomagnetic activities, respectively. In order to obtain the relationship between spread-F and the geomagnetic and solar activities, the spread-F events are divided into three or two groups based on the F10.7 or Ap index, respectively, to conduct statistical studies (Abdu et al. 2003). The classification is listed as follows:
$$\left\{ \begin{array}{ll} \text{F10.7} \le 100 & \text{low solar activity level} \\ 100 < \text{F10.7} < 180 & \text{medium solar activity level} \\ \text{F10.7} \ge 180 & \text{high solar activity level} \end{array} \right. \quad \text{and} \quad \left\{ \begin{array}{ll} \text{Ap} < 12 & \text{quiet day} \\ \text{Ap} \ge 12 & \text{disturbed day} \end{array} \right.$$
In order to analyze the correlation between the spread-F occurrence rates and the F10.7 or Ap index, the normalized probability was used. The normalized spread-F occurrence rate is defined as follows:
$$p_{i} = \frac{m_{i}}{\sum_{i} m_{i}}, \qquad \sum_{i} p_{i} = 1$$
where \(p_i\) is the normalized FSF or RSF occurrence rate and \(m_i\) is the number of FSF or RSF event occurrences when the F10.7 or Ap index is within a certain level.
Statistical results and discussion
Figure 2 shows the variation in daily F10.7, Ap index and monthly mean of FSF occurrence percentages with local time. Figure 3 shows how the daily F10.7, Ap index and total amount of FSF vary from year to year. The following common features could be found: (1) the FSF occurrences frequently occurred after midnight; (2) when the solar 10.7 fluxes and Ap index increase, the FSF occurrences decrease at the 10 stations, meaning a negative correlation between the FSF occurrences and solar and geomagnetic activity; (3) FSF occurred mostly during the summer; (4) there is no significant difference in the total number of FSF among the 10 stations between 2004 and 2009. However, the total numbers of FSF at LZ, QD, KOK, SZ, YAM, KM and OKI are obviously larger than those at the other three stations between 2010 and 2015. Most of the features are nearly in agreement with the results given by other authors (such as Bowman 1996; Huang et al. 2011; Igarashi and Kato 1993; Niranjan et al. 2003). But there are some differences in this result. A large difference was found in the FSF occurrence percentages among the stations at nearly the same latitude. The FSF occurrence percentages at OKI were higher than those at KM. The maximum FSF occurrence percentage was approximately 80% and occurred in June 2010 and in July 2016 at OKI.
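The activity-level grouping and the normalized occurrence rate defined earlier in this section are straightforward to implement. The short sketch below classifies days by F10.7 and normalizes the spread-F event counts per level; the thresholds follow the classification given above, while the input arrays are invented for illustration only.

```python
import numpy as np

def solar_level(f107):
    # Thresholds as defined above (F10.7 <= 100 low, 100-180 medium, >= 180 high).
    if f107 <= 100:
        return "low"
    elif f107 < 180:
        return "medium"
    return "high"

def geomagnetic_level(ap):
    # Ap < 12 quiet day, Ap >= 12 disturbed day.
    return "quiet" if ap < 12 else "disturbed"

# Invented daily F10.7 values and spread-F event counts, for illustration only.
f107 = np.array([70, 95, 150, 210, 80])
events = np.array([12, 9, 4, 1, 10])        # FSF events counted on each day

levels = np.array([solar_level(v) for v in f107])
counts = {lvl: events[levels == lvl].sum() for lvl in ("low", "medium", "high")}

total = sum(counts.values())
normalized = {lvl: c / total for lvl, c in counts.items()}   # p_i = m_i / sum_i m_i
print(normalized)   # the p_i values sum to 1 by construction
```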
However, the FSF occurrences at YAM were remarkably lower than those at SZ, especially between 2015 and 2016, when the FSF occurrences at YAM were less than 15%. The FSF occurrences at LZ and QD were much larger than those at KOK, but the difference in FSF occurrences among UR, CC and WAK was not obvious. The peak occurrence percentages of FSF at these 10 stations can reach approximately 88%. Another noteworthy phenomenon is that the FSF occurrence percentages in winter become higher with increasing latitude; therefore, the results at UR, CC and WAK showed a more complex situation. Figures 4 and 5 reveal that the normalized FSF occurrence percentages varied with local time for all three solar activity levels and both geomagnetic activity levels. It can be seen that when F10.7 is less than or equal to 100, the normalized FSF occurrence percentages are the largest. In sunrise or sunset periods at the SZ, KM, KOK and OKI stations, the FSF occurrence percentages at the medium solar activity level are sometimes even higher than those at the low solar activity level. When the solar activity is high, there are almost no FSF events at many stations. The normalized FSF occurrence percentages are much greater when Ap is less than 12. The results of Figs. 4 and 5 give us a clearer understanding of the effect of solar or geomagnetic activity on the FSF occurrences.
Fig. 2 Variations in daily F10.7, Ap index and monthly mean of FSF occurrence percentages at the 10 stations; the x-axis is the year and the y-axis is the local time
Fig. 3 Variations in daily F10.7, Ap index and FSF total amount at the 10 stations; the x-axis is the year and the y-axis is the FSF total amount
Fig. 4 Variations in normalized FSF occurrence percentages on different F10.7 index levels; magenta lines represent the high solar activity, blue lines represent the medium solar activity, and green lines represent the low solar activity
Fig. 5 Variations in normalized FSF occurrence percentages on different Ap index levels; magenta lines represent the low geomagnetic activity and blue lines represent the high geomagnetic activity
Figures 6 and 7 show the variations in daily F10.7, Ap index, monthly mean of RSF occurrence percentages and RSF total amount for the 10 stations. The variation in RSF occurrences differed from that of FSF in the following ways: (1) the RSF occurrences at these 10 stations are much lower than the FSF occurrences; (2) RSF frequently occurs near midnight; and (3) there are longitudinal changes in the total amount of RSF between China and Japan. It seems that the RSF has a negative correlation with the solar or geomagnetic activity at LZ and QD. Meanwhile, no remarkable difference is observed in the RSF occurrences between SZ and YAM. Figures 8 and 9 show that the normalized RSF occurrence percentages vary with local time for all three solar activity levels and both geomagnetic activity levels. The normalized RSF occurrence percentages are the largest at the low solar activity level, except at KOK. Moreover, the RSF occurrence percentages at the medium solar activity level are higher than those at the low solar activity level at some times at some stations.
Fig. 6 Variations in daily F10.7, Ap index and monthly mean of RSF occurrence percentages at the 10 stations; the x-axis is the year and the y-axis is the local time
Fig. 7 Variations in daily F10.7, Ap index and RSF total amount at the 10 stations; the x-axis is the year and the y-axis is the RSF total amount
Fig. 8 Variations in normalized RSF occurrence percentages on different F10.7 index levels; magenta lines represent the high solar activity, blue lines represent the medium solar activity, and green lines represent the low solar activity
Fig. 9 Variations in normalized RSF occurrence percentages on different Ap index levels; magenta lines represent the low geomagnetic activity and blue lines represent the high geomagnetic activity
The seasonal variations in the FSF occurrence percentages observed at the 10 sites are shown in Fig. 10. The FSF occurrences were higher during summer than during other seasons at LZ, QD, KOK, SZ, YAM, KM and OKI. Nevertheless, FSF occurred mostly during summer and winter at UR, CC and WAK. The FSF occurrence during winter is even higher than that during summer at WAK. Another obvious phenomenon is that the difference in FSF occurrence percentages among the four seasons at LZ, QD, KOK, SZ, YAM, KM and OKI is greater than that at UR, CC and WAK. FSF occurrence percentages were higher during the autumn equinox than during the spring one at UR, LZ and QD. However, the difference between these two seasons is not obvious at the other sites. Figure 11 reveals the seasonal RSF occurrence percentage variations at the 10 sites. RSF occurred mostly in the winter months at UR, CC and WAK, which is different from FSF. The RSF occurrences were higher during summer at SZ, LZ, QD, KOK, KM and OKI. However, RSF occurred mostly during the vernal equinox at YAM. Figure 12 shows the local time variations in the monthly mean of FSF and RSF occurrence percentages. Statistically speaking, FSF started at 21:00 LT and lasted approximately until 05:00 LT. Meanwhile, the peak value of FSF occurrence percentages appeared at about 02:00 LT at CC, LZ and QD. At the four sites in Japan, the peak occurrences were observed at 01:00 LT. In contrast, the peak of RSF occurrence percentages appeared at approximately 00:00 LT, except at YAM. The FSF occurrence percentages decrease with increasing longitude at latitudes from 31°N to 45°N. However, the RSF occurrence percentages increase with increasing longitude near 45°N latitude. This difference still indicates a very strong longitudinal effect for the stations located at almost the same latitudes. Some details will be discussed in the next section.
Fig. 10 Seasonal variations in FSF mean occurrence percentages; green, red, blue and magenta lines represent winter, vernal equinox, autumn equinox and summer, respectively
Fig. 11 Seasonal variations in RSF mean occurrence percentages; green, red, blue and magenta lines represent winter, vernal equinox, autumn equinox and summer, respectively
Fig. 12 Nocturnal variations in FSF and RSF mean occurrence percentages; the x-axis is the local time and the y-axis is the FSF or RSF mean occurrence
The main features of the spread-F occurrences from our statistics are in agreement with the midlatitude observations in other sectors reported by other scholars (such as Bowman 1984, 1994, 1996; Deminov et al. 2005, 2009; Hanumath Sastri 1977; Ossakow 1981; Rao Rama et al. 2004).
For example, the FSF occurrences have a negative correlation with the solar and geomagnetic activities, where the FSF occurrences reached maxima mostly during the summer months. The RSF occurred mostly during summer and winter at UR, CC and WAK, but only during summer at the other seven stations. The peak values of FSF occurrence appeared during the post-midnight period. This variation is the same as that pointed out by Igarashi and Kato (1993). By using the data of Far East stations, Igarashi and Kato (1993) have pointed out the same result where the spread-F occurrence peaks appeared from June to July in summer and from December to January in winter. Huang et al. (2011) investigated the spread-F occurrences at UR and CC stations; they found that (1) spread-F occurrence showed a high value during low solar activity and (2) spread-F occurred mostly in winter and summer. These features are the same as our findings as the data used at the same stations. A comprehensive statistical study of midlatitude spread-F in the North American sector was presented using the ionosonde data observed at Puerto Rico (18.5°N, 67,1°W), Virginia (37.95°N, 75.5°W), Texas (32.4°N, 99.8°W), Colorado (40°N, 105.3°W) and California (34.8°N, 120.5°W) (Bhaneja et al. 2018). Each station had a maximum spread-F occurrence during different seasons. The spread-F mostly occurred during winter in Texas. The annual maxima of spread-F occurrence over the lower midlatitude region (< 50°N) are found near summer solstices and independent of solar activity (Paul et al. 2018). And they also found that there was a clear inverse correlation between the spread-F occurrences with solar activity. These results are also consistent with our conclusions. Abdu et al. (2003) showed that the RSF events are associated with developed or developing plasma bubble events, and the FSF events are associated with narrow-spectrum irregularities that occur near the peak of the F layer. The difference in the mechanism possibly results in the difference in FSF and RSF occurrence percentages. Although differences were observed in the statistics for the RSF and FSF in each station, both types showed pronounced minima in occurrence percentages near solar maximum, and the RSF occurrences at these 10 stations in Eastern Asia were far lower than the FSF occurrences. Our conclusion agrees with the former results (Bhaneja et al. 2009; Hajkowicz 2007) where the maximum of the spread-F activity was present during the local summer in Japan. Our findings agree with the above statistical results, except the WAK station. Sinno and Kan (1980) and Huang et al. (1994) reported similar variations in the spread-F occurrences in Eastern Asia (Japan and Taiwan). Chen et al. (2011) found that the FSF and RSF have a minor occurrence peak in winter at Wuhan (30.5°N, 114.4°E). The variations on the spread-F occurrence in Wuhan are similar to that in Suzhou since both sites are located at similar latitude. Probable mechanism of longitudinal and latitudinal differences in spread-F occurrences The longitudinal and latitudinal effects of the spread-F occurrences at midlatitude have been discussed by many authors (Huang et al. 2011; Kherani et al. 2009; Wang et al. 2018; Perkins 1973; Yakoyama et al. 2008; Zhou et al. 2005). They argued that AGW and Perkins instability may be the main mechanism for the occurrence and evolution of spread-F at midlatitude. 
In this case study, 10 stations at exactly four longitudes from approximately 25°N–45°N were selected to further investigate the FSF and RSF occurrences. These 10 stations are obviously different in geographical environment. Some stations are located in typical inland cities, other sites are in coastal areas and the rest are surrounded by oceans. The most striking fact is that both FSF and RSF occurrences at ocean or ocean–land junction regions are higher than those at inland regions at 35°N and 45°N latitude. The possible impact of AGW on the spread-F occurrence percentages Earlier studies showed that the midlatitude spread-F occurrence was influenced by several factors such as electron density gradient, neutral wind, electric field, local geomagnetic field and AGW. Although the role of AGW in the spread-F evolution is still not proved yet, many discussions have been conducted on the role of the AGW in triggering spread-F both theoretically and observationally (Booker 1979; Bowman 1990, 1994, 1996, 1998, 2001; Kherani et al. 2009; Huang and Kelly 1997; Xiao et al. 2009). Direct measurements of the AGWs near the ground surface are difficult, considering that their amplitude is extremely small in lower atmosphere. However, AGWs are still considered one of the important driving mechanisms for midlatitude spread-F. It is well known that most of AGWs in the ionosphere originate from lower atmosphere and ground conditions are one of the determining factors in generating AGWs. Huang et al. (2011) found that the AGW play an important role among the factors of the spread-F occurrences due to the different ground conditions by using the ionosonde data of UR and CC in China. The two stations have remarkable discrepancies of ground meteorological conditions. CC station is located near the coast, whereas UR is located in the very center of the Europe–Asia continent. The results showed that the spread-F occurrences at CC was always much higher than those at UR. This result has aroused our great interest. In this study, QD and CC lies near the coast, whereas LZ and UR is located in the Europe–Asia continent. The FSF and RSF occurrences at QD and CC were higher than those at LZ and UR. This seems that the spread-F occurrence near the sea is greater than that in inland stations. But the spread-F occurrences at many other sites are not so. The RSF occurrences at WAK and YAM are greater than those at other sites at near the same latitude. Meanwhile, the FSF occurrence at OKI is greater than that at KM. This result indicates that the generation of spread-F is not only influenced by AGW, but also by other factors. It is certain that different locations provide different source conditions for AGW generation, which in turn present different influences on the spread-F occurrence. For the ten stations, although some of them have near the same geographical latitude, there is large disparity in their terrain topography. This is important because they give different source conditions of AGW's generation which gives different influences on the spread-F occurrence. Our result also verifies the correctness of the conclusion obtained by Huang et al. (2011). On the other hand, Huang et al. (2011) pointed that the spread-F occurrences at CC and UR tended to be higher in summer and winter seasons than in equinox. Our findings also confirm this conclusion. Kherani et al. (2009) examined the influences of AGWs on the evolutions of plasma bubbles deduced from the observations in Brazil. 
These findings clearly indicated the impact of AGWs seeding on the growth of plasma bubbles, thereby influencing the spread-F occurrence. Abdu et al. (2009) also discussed the role of AGW in the equatorial region and pointed out that the polarization electric field in an instability development can enhance under the action of AGWs. Observationally, the role of AGW as a seeding has been discussed by many authors (Nicolls and Kelley 2005; Xiao and Zhang 2001; Xiao et al. 2009; Pietralla et al. 2017; Kherani et al. 2009). Xiao et al. (2009) revealed a close relation between the AGWs and the midlatitude spread-F by analyzing 6-year HF Doppler records. Now our work provides observational facts on the large difference of spread-F occurrence percentages at ten stations that are approximately at similar geographical latitudes. However, for the stations in each group, there is a large disparity in their terrain topography. Considering the seeding role of AGW in excitation of the spread-F events, it could be reasonably assumed that levels of AGW activities are different in the F regions over the ten stations. This may be very important because they give different source conditions of AGW's generation which may lead to variations in differences in spread-F occurrences. Wang et al. (2018) used the ionosonde data of Haikou, Guangzhou, Beijing and Changchun to investigate spread-F occurrence percentages, the possible threshold of foF2 for FSF occurrence and the relationship between hʹF and RSF occurrence. They pointed out that the difference in foF2 and hʹF at different longitudes brought the variation in the spread-F occurrences. Additionally, we also checked the foF2 and hʹF data at the 10 sites in this paper, but the results showed that there were no obvious differences of foF2 and hʹF between the stations at nearly the same latitudes. Furthermore, it is well known that the variation in foF2 is connected with the heights of F2 region, so it may also affect spread-F occurrence percentages, but not the key factor in determining the difference of spread-F occurrences here. Meanwhile, even under favorable background conditions, certain triggering factors like AGW, electric field and so on are often needed for the spread-F occurrences. Therefore, our results indicated that the AGW was an important factor for spread-F event, but not the only one. The possible impact of Perkins instability on the spread-F occurrence percentages In contrast to the rich observational history for low-latitude spread-F, observations of spread-F at midlatitudes are relatively poor. Despite this brief history, a consistent understanding of climatological behavior of the spread-F in longitudinal occurrence has emerged from a variety of observational techniques. Perkins (1973) had proposed a system including three coupled nonlinear partial differential equations that could provide a basis for an instability process consistent with the spread-F event in the midlatitude ionosphere when appropriately solved. Hence, Perkins instability is now accepted as the most reasonable explanation of the spread-F phenomenon at midlatitudes (Kelley and Fukao 1991; Miller 1997; Zhou et al. 2005). The nighttime ionosphere at midlatitudes is typically dynamically stable as the upward \(E \times B\) drift due to eastward electric field and/or equatorward neutral wind supports the F region plasma against downward, gravity-driven diffusion (Perkins 1973). 
This equilibrium can be written by using an effective electric field \(E_{0}^{*} = E + U \times B\), where E is electric field, U is neutral wind and B is magnetic field (Kelley et al. 2003): $$\frac{{\left| {E_{0}^{*} } \right|\cos \theta }}{B}\cos D = \frac{g}{{\left\langle {\nu_{in} } \right\rangle }}\sin^{2} D$$ where \(\theta\) is an angle between \(E_{0}^{*}\) and geomagnetic east, D is the dip angle of the magnetic field, \(g\) is the magnitude of the gravitational acceleration at the F layer height and \(\nu_{in}\) is the density-weighted collision frequency. The left and right sides are balanced in the steady state. The simple form of the linear growth rate for the Perkins instability is given as (Perkins 1973): $$\begin{aligned} \gamma & = \frac{{E_{0}^{*} \cos D}}{BH}\sin \left( {\theta - \alpha } \right)\sin \alpha \\ & = \frac{{g\sin^{2} D}}{{\left\langle \nu \right\rangle_{0} H}}\frac{{\sin \left( {\theta - \alpha } \right)\sin \alpha }}{\cos \theta } \\ \end{aligned}$$ where \(\alpha\) is the angle between geomagnetic east and the wave vector, \(\left\langle \nu \right\rangle_{0}\) is the background of \(\left\langle {\nu_{in} } \right\rangle\), H is the neutral scale height and \(\gamma\) is proportional to the effective electric fields and \(\sin \left( {\theta - \alpha } \right)\sin \alpha\). Since U blows southeastward during the nighttime in the northern hemisphere due to a diurnal time, \(E_{0}^{*}\) would typically be northeastward. The dip angle of the magnetic field over the 10 sites in this study is very different. The dip angle of the magnetic field decreases with the increase in longitude at the same latitude whenever under high or low solar activity. Therefore, the dip angle of the magnetic field at UR is the largest, and the dip angle of the magnetic field at OKI is the smallest. The largest dip angle is about 64°, and the smallest one is about 38°. This will probably cause the differences between \(E_{0}^{ *}\) and \(\sin \left( {\theta - \alpha } \right)\sin \alpha\) at different sites and then cause the difference in growth rate of Perkins instability which may lead to the variation in spread-F occurrence percentages. Considerable effort is currently being made to increase the understanding of the spread-F occurred at midlatitude and the suggested Perkins instability process. Yakoyama et al. (2008) developed a three-dimensional numerical simulation in the nighttime midlatitude ionosphere and applied to the Perkins instability evolution in the F region. Nevertheless, they found that the growth rate of Perkins instability is too small to explain some observational results. So it is necessary to consider other mechanisms to intensify the instability in the F region. Furthermore, although the southeastward neutral wind in the post-sunset period is suitable for generating the irregularity structure through the Perkins instability, the dynamo electric field induced by neutral wind is southwestward. The electric field can also modulate the F region and seed the Perkins instability. Therefore, the actual physical process is more complicated. In spite of this, the Perkins instability can still be invoked to partly explain the statistical results in this manuscript. The AGW or Perkins instability may be one of the causes of the spread-F events. 
To determine whether the spread-F variation is due to the lower atmosphere and ground condition influences, further studies using weather observation data (including the weather satellite images or other meteorological data) should be conducted. Therefore, further investigation about the mechanisms for the spread-F occurrences is still needed. Summary and conclusions In this study, we investigated the variations in the FSF and RSF occurrences, and the possible mechanisms for the longitudinal differences in the spread-F occurrence percentages at midlatitudes in Eastern Asia include the years of the data used in 23rd and 24th solar cycles. The major conclusions are summarized as follows: The occurrence percentages of FSF in these ten stations are higher than RSF. The FSF occurrence percentages are higher during the low solar activity years at all sites. Moreover, the spread-F occurrences are anti-correlated with the geomagnetic activity. The spread-F events seldom occur at some sites when \({\text{F}}10.7 \ge 180\) or \({\text{Ap}} \ge 12\). The FSF occurred mainly during the summer except WAK, whereas the RSF occurred mostly in the winter at UR, CC and WAK. Post-midnight FSF was the most frequently observed type of spread-F events, whereas the peak value of the RSF occurrence appeared at approximately 00:00 LT. Spread-F occurred more often at coastal or marine areas than at inland area especially near 35°N–45°N latitudes due to different geographical locations. Nevertheless, the mean occurrence of FSF at SZ is higher than that at YAM. The mean occurrence of RSF at KM is higher than that at OKI. Another phenomenon that needs more attention is that FSF and RSF mean occurrences at 31°N latitude are the lowest for the four latitude chains. It is well known that the AGW and Perkins instability both likely play important roles in the spread-F occurrences. Due to the limitation of data, the generation mechanism of spread-F should be further studied. This study presented a preliminary variation in the spread-F occurrence percentages including some new results, which is helpful in understanding the ionospheric variation in Eastern Asia. The above data and statistical results presented in this paper can be used as a reference for future studies. FSF: frequency spread-F RSF: range spread-F EIA: equatorial ionization anomaly F10.7: the monthly average data of solar 10.7-cm radio flux R–T: Rayleigh–Taylor pre-reversal electric field traveling ionospheric disturbance AGW: acoustic gravity wave CRIRP: China Research Institute of Radio-wave Propagation Aarons J, Mullen JP, Whitney HE, Mackenzie EM (1980) The dynamics of equatorial irregularity patch formation, motion, and decay. J Geophys Res Space Phys 85(A1):139–149. https://doi.org/10.1029/JA085iA01p00139 Abdu MA, Bittencourt JA, Batista IS (1981) Magnetic declination control of the equatorial F region dynamo electric field development and spread F. J Geophys Res Space Phys 86:11443–11446 Abdu MA, Bittencourt JA, Batista Inez S (1983) Longitudinal differences in the spread F characteristics. Rev Bras Fis 13(4):647–663 Abdu MA, Sobral JHA, Batista IS, Rios VH, Medina C (1998) Equatorial spread-F occurrence statistics in the American longitudes: diurnal, seasonal and solar cycle variations. Adv Space Res 22:851–854. https://doi.org/10.1016/S0273-1177(98)00111-2 Abdu MA, Souza JR, Batista IS, Sobral JHA (2003) Equatorial spread F statistics and empirical representation for IRI: a regional model for the Brazilian longitude sector. 
Adv Space Res 31(3):703–716 Abdu MA, Alam Kherani E, Batista IS, de Paula ER, Fritts DC (2009) Gravity wave influences on plasma instability growth rates based on observations during the Spread F Experiment (SpreadFEx). Ann Geophys 27:2607–2622 Bhaneja P, Earle GD, Bishop RL, Bullett TW, Mabie J, Redmon R (2009) A statistical study of midlatitude spread F at Wallops Island, Visginia. J Geophys Res Space Phys 114:A04301. https://doi.org/10.1029/2008JA013212 Bhaneja P, Earle GD, Bullett TW (2018) Statistical analysis of midlatitude spread F using multi-station digisonde observations. J Atmos Sol Terr Phys 167:146–155. https://doi.org/10.1016/j.jastp.2017.11.016 Booker HG (1979) The role of acoustic gravity waves in the generation of spread-F and ionospheric scintillation. J Atmos Sol Terr Phys 41:501–515. https://doi.org/10.1016/S0273-1177(98)00111-2 Booker HG, Wells HG (1938) Scattering of radio waves by the F-region of ionosphere. J Geophys Res 43:249–256 Bowman GG (1984) A comparison of mid-latitude and equatorial-latitude spread-F characteristics. J Atmos Sol Terr Phys 46(1):65–71 Bowman GG (1990) A review of some recent work on mid-latitude spread-F occurrence as detected by ionosondes. Earth Planets Space 42(2):109–138. https://doi.org/10.5636/jgg.42.109 Bowman GG (1994) Mid-latitude spread F occurrence related to geomagnetic activity at preferred local times. Radio Sci 29(3):631–634. https://doi.org/10.1029/94RS00451 Bowman GG (1996) The influence of 10.7-cm solar-flux variations on midlatitude daytime ionospheric disturbance conditions. J Geophys Res 101(A5):10849–10854. https://doi.org/10.1029/95JA03768 Bowman GG (1998) Short-term delays (hours) of ionospheric spread F occurrence at a range of latitudes, following geomagnetic activity. J Geophys Res 103(A6):11627–11634. https://doi.org/10.1029/98JA00630 Bowman GG (2001) A comparison of nighttime TID characteristics between equatorial-ionospheric-anomaly crest and midlatitude regions, related to spread F occurrence. J Geophys Res Space Phys 106(A2):1761–1769. https://doi.org/10.1029/2000JA900123 Bowman GG, Mortimer IK (2000) Quantitative estimates of relationships between geomagnetic activity and equatorial spread-F as determined by TID occurrence levels. Earth Planets Space 52(6):451–458. https://doi.org/10.1186/BF03352257 Bowman GG, Mortimer IK (2003) Spread-F/sporadic E coupling at Chung-Li, especially for postsunset periods of sunspot maximum years. J Geophys Res Space Phys 108(A4):1148. https://doi.org/10.1029/2002JA009541 Chandra H, Som Sharma, Abdu MA, Batista IS (2003) Spread-F at anomaly crest regions in the Indian and American longitudes. Adv Space Res 31(3):717–727. https://doi.org/10.1016/S0273-1177(03)00034-6 Chen WS, Lee CC, Chu FD, Su SY (2011) Spread F, GPS phase fluctuations, and medium-scale traveling ionospheric disturbances over Wuhan during solar maximum. J Atmos Sol Terr Phys 73:528–533. https://doi.org/10.1016/j/jastp.2010.11.012 Deminov MG, Nepomnyashchaya EV, Sitnov YuS (2005) Regularities of the midlatitude spread F occurrence probability during sunrises and sunsets. Geomagn Aeron 45(4):458–465 Deminov MG, Deminov RG, Nepomnyashchaya EV (2009) Seasonal features in the spread-F probability near midnight over Moscow. Geomagn Aeron 49(5):630–636 Dyson PL, Johnston DL, Scali JL (1995) Observations of gravity waves associated with mid-latitude spread-F. 
Adv Space Res 16(5):113–116 Fukao S, Ozawa Y, Yokoyama T, Yamamoto M, Tsunoda RT (2004) First observations of the spatial structure of F region 3-m-scale field-aligned irregularities with the Equatorial Atmosphere Radar in Indonesia. J Geophys Res Space Phys 109:A02304. https://doi.org/10.1029/2003JA010096 Hajkowicz LA (2007) Morphology of quantified ionospheric range spread-F over a wide range of midlatitudes in the Australian longitudinal sector. Ann Geophys 25:1125–1130 Hanumath Sastri J (1977) A study of midlatitude spread-F. J Atmos Sol Terr Phys 39:1347–1352 Hoang TL, Abdu MA, MacDougall J, Batista Inez S (2010) Longitudinal differences in the equatorial spread F characteristics between Vietnam and Brazil. Adv Space Res 45:351–360. https://doi.org/10.1016/j.asr.2009.08.019 Huang CS, Kelly MC (1997) Numerical simulations on large-scale ionosphere perturbations in mid-latitude. Acta Geophys Sinica 40:301 Huang CS, Miller CA, Kelley MC (1994) Basic properties and gravity wave initiation of the midlatitude F region instability. Radio Sci 29:395–405 Huang WQ, Xiao Z, Xiao SG, Zhang DH, Hao YQ, Suo YC (2011) Case study of apparent longitudinal differences of spread F occurrence for two midlatitude stations. Radio Sci 46:RS1015. https://doi.org/10.1029/2009RS004327 Igarashi K, Kato H (1993) Solar cycle variations and latitudinal dependence on the mid-latitude spread-F occurrence around Japan. In: The XXIV general assembly. International Union of Radio Science, Kyoto Kelley MC, Fukao S (1991) Turbulent upwelling of the mid-latitude ionosphere, 2. Theoretical framework. J Geophys Res 96:3747–3754 Kelley MC, Makela JJ, Vlasov MN (2003) Further studies of the Perkins stability during Space Weather Month. J Atoms Sol Terr Phys 65(10):1071–1075 Kherani EA, Abdu MA, de Paula ER, Fritts DC, Sobral JHA, de Meneses FC Jr (2009) The impact of gravity waves rising from convection in the lower atmosphere on the generation and nonlinear evolution of equatorial bubble. Ann Geophys 27:1657–1668 Li GZ, Ning BQ, Abdu MA, Otsuka Yuchi, Yokoyama T, Yamamoto M, Liu LB (2013) Longitudinal characteristics of spread-F backscatter plumes observed with the EAR and Sanya VHF radar in Southeast Asia. J Geophys Res Space Phys 118:6544–6557. https://doi.org/10.1002/igra.50581 Lynn K, Otsuka Y, Shiokawa K (2011) Simultaneous observations at Darwin of equatorial bubbles by ionosonde-based range/time displays and airglow imaging. Geophys Res Lett 38:L23101. https://doi.org/10.1029/2011GL049856 Miller CA (1997) Electrodynamics of midlatitude spread F, 2. A new theory of gravity wave electric fields. J Geophys Res 102:11533–11538 Nicolls MJ, Kelley MC (2005) Strong evidence for gravity wave seeding of an ionospheric plasma instability. Geophys Res Lett 32:L05108. https://doi.org/10.1029/2004GL020737 Niranjan K, Brahmanandam PS, Ramakrishna Rao P, Uma G, Prasad DSVVD, Rama Rao PVS (2003) Post midnight spread-F occurrence over Waltair (17.7°N, 83.3°E) during low and ascending phases of solar activity. Ann Geophys 21:745–750 Ossakow SL (1981) Spread F theories—a review. J Atmos Sol Terr Phys 43:437–452. https://doi.org/10.1016/0021-9169(81)90107-0 Paul KS, Haralambous H, Oikonomou C, Paul A, Belehaki A, Ioanna T, Kouba D, Buresova D (2018) Multi-station investigation of spread F over Europe during low to high solar activity. J Space Weather Space Clim 8:A27. https://doi.org/10.1051/swsc/2018006 Perkins FW (1973) Spread F and ionospheric currents. 
J Geophys Res 78:218–226 Pietralla M, Pezzopane M, Fagundes PR, de Jesus R, Supnithi P, Klinnagm S, Ezquer RG, Cabrera MA (2017) Equinoctial spread-F occurrence at low latitudes in different longitude sectors under moderate and high solar activity. J Atmos Sol Terr Phys 164:149–162. https://doi.org/10.1016/j.jastp.2017.07.007 Rao Rama PVS, Prasad DSVVD, Niranjan K, Uma G, Krishna SG, Venkateswarlu K (2004) Muti-station studies on spread-F and VHF scintillations in the Indian sector. Terr Atmos Ocean Sci 15:667–681 Sinno K, Kan M (1980) Ionospheric scintillation and fluctuation of faraday-rotation caused by spread-F and sporadic-E over Kokubunji, Japan. J Radio Res Lab 27(122):53–77 Wang GJ, Shi JK, Wang X, Shang SP (2007) Seasonal variation of spread-F observed in Hainan. Adv Space Res 41:639–644. https://doi.org/10.1016/j.asr.2007.04.077 Wang N, Guo LX, Zhao ZW, Ding ZH, Lin LK (2018) Spread-F occurrences and relationships with foF2 and hʹF at low- and mid-latitudes in China. Earth Planets Space 70:59. https://doi.org/10.1186/s40623-018-0821-9 Xiao Z, Zhang TH (2001) A theoretical analysis of global characteristics of spread-F. Chin Sci Bull 46:1593–1594 Xiao SG, Xiao Z, Shi JK, Zhang DH, Feng XS (2009) Observational facts in revealing a close relation between acoustic-gravity waves and midlatitude spread-F. J Geophys Res Space Phys 114:A01303. https://doi.org/10.1029/2008JA013747 Xu T, Wu ZS, Hu YL, Wu J, Suo YC, Feng J (2010) Statistical analysis and model of spread F occurrence in China. Sci China Technol Sci 53:1725–1731. https://doi.org/10.1007/s11431-010-3169-3 Yakoyama T, Otsuka Y, Ogawa T, Yamamoto M, Hysell DL (2008) First three-dimensional simulation of the Perkins instability in the nighttime midlatitude ionosphere. Geophys Res Lett 35:L0301. https://doi.org/10.1029/2007GL032496 Zhou Q, Mathews JD, Du Q, Miller CA (2005) A preliminary investigation of the pseudo-spectral method numerical solution of the Perkins instability equations in the homogeneous TEC case. J Atmos Sol Terr Phys 67:325–335. https://doi.org/10.1016/j.jastp.2004.10.005 WN designed the study, analyzed the data and wrote the manuscript. GLX, XZW and ZZW contributed related analysis on data in China. XT and HYL contributed related analysis on data in Japan. DZH and ZZW helped with the text of the paper, particularly with the introduction and comparison with previous works. All coauthors contributed to the revision of the draft manuscript and improvement of the discussion. All authors read and approved the final manuscript. Ning Wang is currently a Ph.D. student at Xidian University. She also is an Associate Professor at the China Research Institute of Radio-wave Propagation. She has authored and coauthored eight patents and over 16 journal articles. Her research interests are in ionospheric irregularities and ionospheric radio-wave propagation. Dr. Linxin Guo is currently a Professor and Head of the School of Physics and Optoelectronic Engineering Science at Xidian University, China. He has been a Distinguished Professor of the Changjiang Scholars Program since 2014. He has authored and coauthored four books and over 300 journal articles. Dr. Zonghua Ding is currently an Associate Professor at the China Research Institute of Radio-wave Propagation. His research interests are ionospheric sounding and ionospheric radio-wave propagation. Dr. Zhenwei Zhao is currently a Professor and Chief engineer at the China Research Institute of Radio-wave Propagation. 
His current positions include: Chairman of the ITU-R SG3 in China; Head of the Chinese Delegation of ITU-R SG3 and Lead expert for the Asia–Pacific Space Cooperation Organization (APSCO). Dr. Zhengwen Xu is currently a Professor and also the Vice Director in the National Key Laboratory of Electromagnetic Environment, CRIRP. His research interests mainly include ionospheric physics and radio propagation, electromagnetic waves in random media, ionospheric remote sensing and ionospheric effects on radio systems. Dr. Tong Xu is currently an Associate Professor at the China Research Institute of Radio-wave Propagation. His research interests include ionospheric physics, ionospheric modeling, ionospheric forecast and radio propagation. Yanli Hu is currently an engineer at the China Research Institute of Radio-wave Propagation. Her research interests are mainly focused on ionosphere physics and ionospheric radio propagation. The authors would like to thank the NICT of Japan for the ionosonde data sharing (four ionosonde stations: Okinawa, Yamagawa, Kokubunji and Wakkanai). The authors would like to thank the NOAA for the F10.7 and Ap data service. The authors would like to thank Dr. Shuji Sun for proofreading this manuscript. The authors would like to thank the anonymous referee for the useful comments and suggestions for improving the paper. The ionosonde data in China are in the Web site: ftp://ftp-out.ips.gov.au/. The ionosonde data in Japan are in the Web site: http://wdc.nict.go.jp/IONO/. The authors would like to thank the NICT of Japan for the ionosonde data sharing. Regretfully, the data in China used in this manuscript cannot be shared because they belonged to the China Research Institute of Radio-wave Propagation (CRIRP). Written informed consent was obtained from study participants for participation in the study and for the publication of this report and any accompanying images. Consent and approval for publication was also obtained from Xidian University and China Research Institute of Radio-wave Propagation. This work was supported by the National Natural Science Foundation of China (Grant No. 41604129), the National Key Laboratory Foundation of Electromagnetic Environment (Grant No. A171601003) and the Specialized Research Fund for State Key Laboratories. The funds from Grant No. 41604129 and the Specialized Research Fund for State Key Laboratories were used for data collection and analysis. The fund from Grant No. A171601003 was used for manuscript preparation. School of Physics and Optoelectronic Engineering, Xidian University, Xi'an, 710071, Shaanxi, China Ning Wang & Lixin Guo State Key Laboratory of Space Weather, Chinese Academy of Sciences, Beijing, 100190, China National Key Laboratory of Electromagnetic Environment, China Research Institute of Radio-wave Propagation, Qingdao, 266107, Shandong, China , Zonghua Ding , Zhenwei Zhao , Zhengwen Xu , Tong Xu & Yanli Hu Search for Ning Wang in: Search for Lixin Guo in: Search for Zonghua Ding in: Search for Zhenwei Zhao in: Search for Zhengwen Xu in: Search for Tong Xu in: Search for Yanli Hu in: Correspondence to Ning Wang. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 
Spread-F Occurrence percentages Longitudinal difference Perkins instability 3. Space science
CommonCrawl
But, if we find in 10 or 20 years that the drugs don't do damage, what are the benefits? These are stimulants that help with concentration. College students take such drugs to pass tests; graduates take them to gain professional licenses. They are akin to using a calculator to solve an equation. Do you really want a doctor who passed his boards as a result of taking speed — and continues to depend on that for his practice? One study of helicopter pilots suggested that 600 mg of modafinil given in three doses can be used to keep pilots alert and maintain their accuracy at pre-deprivation levels for 40 hours without sleep.[60] However, significant levels of nausea and vertigo were observed. Another study of fighter pilots showed that modafinil given in three divided 100 mg doses sustained the flight control accuracy of sleep-deprived F-117 pilots to within about 27% of baseline levels for 37 hours, without any considerable side effects.[61] In an 88-hour sleep loss study of simulated military grounds operations, 400 mg/day doses were mildly helpful at maintaining alertness and performance of subjects compared to placebo, but the researchers concluded that this dose was not high enough to compensate for most of the effects of complete sleep loss. …The Fate of Nicotine in the Body also describes Battelle's animal work on nicotine absorption. Using C14-labeled nicotine in rabbits, the Battelle scientists compared gastric absorption with pulmonary absorption. Gastric absorption was slow, and first pass removal of nicotine by the liver (which transforms nicotine into inactive metabolites) was demonstrated following gastric administration, with consequently low systemic nicotine levels. In contrast, absorption from the lungs was rapid and led to widespread distribution. These results show that nicotine absorbed from the stomach is largely metabolized by the liver before it has a chance to get to the brain. That is why tobacco products have to be puffed, smoked or sucked on, or absorbed directly into the bloodstream (i.e., via a nicotine patch). A nicotine pill would not work because the nicotine would be inactivated before it reached the brain. However, history has shown that genies don't stay in bottles. All ethics aside, there is ample proof that use of smart drugs can profoundly improve human cognition, and where there is an advantage to be gained – even where risks are involved – some people will leap at the chance to capitalize. At Smart Drug Smarts, we anticipate the social tide will continue to turn in favor of elective neural enhancers, and that the beneficial effects to users who choose to make the most of their brains will inevitably outweigh the costs. How exactly – and if – nootropics work varies widely. Some may work, for example, by strengthening certain brain pathways for neurotransmitters like dopamine, which is involved in motivation, Barbour says. Others aim to boost blood flow – and therefore funnel nutrients – to the brain to support cell growth and regeneration. Others protect brain cells and connections from inflammation, which is believed to be a factor in conditions like Alzheimer's, Barbour explains. Still others boost metabolism or pack in vitamins that may help protect the brain and the rest of the nervous system, explains Dr. Anna Hohler, an associate professor of neurology at Boston University School of Medicine and a fellow of the American Academy of Neurology. Despite some positive findings, a lot of studies find no effects of enhancers in healthy subjects. 
For instance, although some studies suggest moderate enhancing effects in well-rested subjects, modafinil mostly shows enhancing effects in cases of sleep deprivation. A recent study by Martha Farah and colleagues found that Adderall (mixed amphetamine salts) had only small effects on cognition but users believed that their performance was enhanced when compared to placebo. Some smart drugs can be found in health food stores; others are imported or are drugs that are intended for other disorders such as Alzheimer's disease and Parkinson's disease. There are many Internet web sites, books, magazines and newspaper articles detailing the supposed effects of smart drugs. There are also plenty of advertisements and mail-order businesses that try to sell "smart drugs" to the public. However, rarely do these businesses or the popular press report results that show the failure of smart drugs to improve memory or learning. Rather, they try to show that their products have miraculous effects on the brain and can improve mental functioning. Wouldn't it be easy to learn something by "popping a pill" or drinking a soda laced with a smart drug? This would be much easier than taking the time to study. Feeling dull? Take your brain in for a mental tune up by popping a pill! Core body temperature, local pH and internal pressure are important indicators of patient well-being. While a thermometer can give an accurate reading during regular checkups, the monitoring of professionals in high-intensity situations requires a more accurate inner body temperature sensor. An ingestible chemical sensor can record acidity and pH levels along the gastrointestinal tract to screen for ulcers or tumors. Sensors also can be built into medications to track compliance. Modafinil, sold under the name Provigil, is a stimulant that some have dubbed the "genius pill." It is a wakefulness-promoting agent (modafinil) and glutamate activators (ampakine). Originally developed as a treatment for narcolepsy and other sleep disorders, physicians are now prescribing it "off-label" to cellists, judges, airline pilots, and scientists to enhance attention, memory and learning. According to Scientific American, "scientific efforts over the past century [to boost intelligence] have revealed a few promising chemicals, but only modafinil has passed rigorous tests of cognitive enhancement." A stimulant, it is a controlled substance with limited availability in the U.S. Because smart drugs like modafinil, nicotine, and Adderall come with drawbacks, I developed my own line of nootropics, including Forbose and SmartMode, that's safe, widely available, and doesn't require a prescription. Forskolin, found in Forbose, has been a part of Indian Ayurvedic medicine for thousands of years. In addition to being fun to say, forskolin increases cyclic adenosine monophosphate (cAMP), a molecule essential to learning and memory formation. [8] At this point, I began thinking about what I was doing. Black-market Adderall is fairly expensive; $4-10 a pill vs prescription prices which run more like $60 for 120 20mg pills. It would be a bad idea to become a fan without being quite sure that it is delivering bang for the buck. Now, why the piracetam mix as the placebo as opposed to my other available powder, creatine powder, which has much smaller mental effects? Because the question for me is not whether the Adderall works (I am quite sure that the amphetamines have effects!) but whether it works better for me than my cheap legal standbys (piracetam & caffeine)? 
(Does Adderall have marginal advantage for me?) Hence, I want to know whether Adderall is better than my piracetam mix. People frequently underestimate the power of placebo effects, so it's worth testing. (Unfortunately, it seems that there is experimental evidence that people on Adderall know they are on Adderall and also believe they have improved performance, when they do not5. So the blind testing does not buy me as much as it could.) From its online reputation and product presentation to our own product run, Synagen IQ smacks of mediocre performance. A complete list of ingredients could have been convincing and decent, but the lack of information paired with the potential for side effects are enough for beginners to old-timers in nootropic use to shy away and opt for more trusted and reputable brands. There is plenty that needs to be done to uplift the brand and improve its overall ranking in the widely competitive industry. Learn More... One item always of interest to me is sleep; a stimulant is no good if it damages my sleep (unless that's what it is supposed to do, like modafinil) - anecdotes and research suggest that it does. Over the past few days, my Zeo sleep scores continued to look normal. But that was while not taking nicotine much later than 5 PM. In lieu of a different ml measurer to test my theory that my syringe is misleading me, I decide to more directly test nicotine's effect on sleep by taking 2ml at 10:30 PM, and go to bed at 12:20; I get a decent ZQ of 94 and I fall asleep in 16 minutes, a bit below my weekly average of 19 minutes. The next day, I take 1ml directly before going to sleep at 12:20; the ZQ is 95 and time to sleep is 14 minutes. Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage. "You know how they say that we can only access 20% of our brain?" says the man who offers stressed-out writer Eddie Morra a fateful pill in the 2011 film Limitless. "Well, what this does, it lets you access all of it." Morra is instantly transformed into a superhuman by the fictitious drug NZT-48. Granted access to all cognitive areas, he learns to play the piano in three days, finishes writing his book in four, and swiftly makes himself a millionaire. Many people find it difficult to think clearly when they are stressed out. Ongoing stress leads to progressive mental fatigue and an eventual breakdown. Luckily, there are several ways that nootropics can help relieve stress. One is through the natural promotion of feelings of relaxation and the other is by replenishing the brain chemicals drained by stress. There are hundreds of cognitive enhancing pills (so called smart pills) on the market that simply do NOT work! 
With each of them claiming they are the best, how can you find the brain enhancing supplements that are both safe and effective? Our top brain enhancing pills have been picked by sorting and ranking the top brain enhancing products yourself. Our ratings are based on the following criteria. With this experiment, I broke from the previous methodology, taking the remaining and final half Nuvigil at midnight. I am behind on work and could use a full night to catch up. By 8 AM, I am as usual impressed by the Nuvigil - with Modalert or something, I generally start to feel down by mid-morning, but with Nuvigil, I feel pretty much as I did at 1 AM. Sleep: 9:51/9:15/8:27 "I bought this book because I didn't want a weightloss diet, but I wanted the most optimal gut/brain food I could find to help with an autoimmune. I subscribe to Cavin's podcast and another newsletter for gut health which also recommended this book. Also, he's a personal friend of mine who's recovery I have witnessed firsthand. Thank you so much for all of the research and your continued dedication to not only help yourself, but for also helping others!" Neuroprime – Mind Nutrition's offering to the nootropic industry. Mind Nutrition is one of the most interesting nootropics we've found on the industry. It brings a formula that is their solution for the market, as a fundamental combination of vitamins and nootropics, or at least they call it. Neuroprime brings that to the table, as well as the fact that Neuroprime is also one of the most transparent companies that we've seen. Their online site is detailed, yet clean, without making any outrageous claims or statements. However, we here at Top10BrainPills.com… Learn More... Natural-sourced ingredients can also help to enhance your brain. Superfood, herbal or Amino A ingredient cognitive enhancers are more natural and are largely directly derived from food or plants. Panax ginseng, matcha tea and choline (found in foods like broccoli) are included under this umbrella. There are dozens of different natural ingredients /herbs purported to help cognition, many of which have been used medicinally for hundreds of years. In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow and the effect size smaller than in the pilot experiment; but by buying my own supply & using powder I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg) and so the effect size larger than in the pilot experiment. The evidence? Although everyone can benefit from dietary sources of essential fatty acids, supplementation is especially recommended for people with heart disease. A small study published in 2013 found that DHA may enhance memory and reaction time in healthy young adults. However, a more recent review suggested that there is not enough evidence of any effect from omega 3 supplementation in the general population. The evidence? Ritalin is FDA-approved to treat ADHD. It has also been shown to help patients with traumatic brain injury concentrate for longer periods, but does not improve memory in those patients, according to a 2016 meta-analysis of several trials. 
A study published in 2012 found that low doses of methylphenidate improved cognitive performance, including working memory, in healthy adult volunteers, but high doses impaired cognitive performance and a person's ability to focus. (Since the brains of teens have been found to be more sensitive to the drug's effect, it's possible that methylphenidate in lower doses could have adverse effects on working memory and cognitive functions.) 20 March, 2x 13mg; first time, took around 11:30AM, half-life 3 hours, so halved by 2:30PM. Initial reaction: within 20 minutes, started to feel light-headed, experienced a bit of physical clumsiness while baking bread (dropped things or poured too much thrice); that began to pass in an hour, leaving what felt like a cheerier mood and less anxiety. Seems like it mostly wore off by 6PM. Redosed at 8PM TODO: maybe take a look at the HRV data? looks interestingly like HRV increased thanks to the tianeptine 21 March, 2x17mg; seemed to buffer effects of FBI visit 22 March, 2x 23 March, 2x 24 March, 2x 25 March, 2x 26 March, 2x 27 March, 2x 28 March, 2x 7 April, 2x 8 April, 2x 9 April, 2x 10 April, 2x 11 April, 2x 12 April, 2x 23 April, 2x 24 April, 2x 25 April, 2x 26 April, 2x 27 April, 2x 28 April, 2x 29 April, 2x 7 May, 2x 8 May, 2x 9 May, 2x 10 May, 2x 3 June, 2x 4 June, 2x 5 June, 2x 30 June, 2x 30 July, 1x 31 July, 1x 1 August, 2x 2 August, 2x 3 August, 2x 5 August, 2x 6 August, 2x 8 August, 2x 10 August, 2x 12 August: 2x 14 August: 2x 15 August: 2x 16 August: 1x 18 August: 2x 19 August: 2x 21 August: 2x 23 August: 1x 24 August: 1x 25 August: 1x 26 August: 2x 27 August: 1x 29 August: 2x 30 August: 1x 02 September: 1x 04 September: 1x 07 September: 2x 20 September: 1x 21 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 28 September: 2x 29 September: 2x 5 October: 2x 6 October: 1x 19 October: 1x 20 October: 1x 27 October: 1x 4 November: 1x 5 November: 1x 8 November: 1x 9 November: 2x 10 November: 1x 11 November: 1x 12 November: 1x 25 November: 1x 26 November: 1x 27 November: 1x 4 December: 2x 27 December: 1x 28 December: 1x 2017 7 January: 1x 8 January: 2x 10 January: 1x 16 January: 1x 17 January: 1x 20 January: 1x 24 January: 1x 25 January: 2x 27 January: 2x 28 January: 2x 1 February: 2x 3 February: 2x 8 February: 1x 16 February: 2x 17 February: 2x 18 February: 1x 22 February: 1x 27 February: 2x 14 March: 1x 15 March: 1x 16 March: 2x 17 March: 2x 18 March: 2x 19 March: 2x 20 March: 2x 21 March: 2x 22 March: 2x 23 March: 1x 24 March: 2x 25 March: 2x 26 March: 2x 27 March: 2x 28 March: 2x 29 March: 2x 30 March: 2x 31 March: 2x 01 April: 2x 02 April: 1x 03 April: 2x 04 April: 2x 05 April: 2x 06 April: 2x 07 April: 2x 08 April: 2x 09 April: 2x 10 April: 2x 11 April: 2x 20 April: 1x 21 April: 1x 22 April: 1x 23 April: 1x 24 April: 1x 25 April: 1x 26 April: 2x 27 April: 2x 28 April: 1x 30 April: 1x 01 May: 2x 02 May: 2x 03 May: 2x 04 May: 2x 05 May: 2x 06 May: 2x 07 May: 2x 08 May: 2x 09 May: 2x 10 May: 2x 11 May: 2x 12 May: 2x 13 May: 2x 14 May: 2x 15 May: 2x 16 May: 2x 17 May: 2x 18 May: 2x 19 May: 2x 20 May: 2x 21 May: 2x 22 May: 2x 23 May: 2x 24 May: 2x 25 May: 2x 26 May: 2x 27 May: 2x 28 May: 2x 29 May: 2x 30 May: 2x 1 June: 2x 2 June: 2x 3 June: 2x 4 June: 2x 5 June: 1x 6 June: 2x 7 June: 2x 8 June: 2x 9 June: 2x 10 June: 2x 11 June: 2x 12 June: 2x 13 June: 2x 14 June: 2x 15 June: 2x 16 June: 2x 17 June: 2x 18 June: 2x 19 June: 2x 20 June: 2x 22 June: 2x 21 June: 2x 02 July: 2x 03 July: 2x 04 July: 2x 05 July: 2x 06 July: 2x 07 July: 2x 08 
July: 2x 09 July: 2x 10 July: 2x 11 July: 2x 12 July: 2x 13 July: 2x 14 July: 2x 15 July: 2x 16 July: 2x 17 July: 2x 18 July: 2x 19 July: 2x 20 July: 2x 21 July: 2x 22 July: 2x 23 July: 2x 24 July: 2x 25 July: 2x 26 July: 2x 27 July: 2x 28 July: 2x 29 July: 2x 30 July: 2x 31 July: 2x 01 August: 2x 02 August: 2x 03 August: 2x 04 August: 2x 05 August: 2x 06 August: 2x 07 August: 2x 08 August: 2x 09 August: 2x 10 August: 2x 11 August: 2x 12 August: 2x 13 August: 2x 14 August: 2x 15 August: 2x 16 August: 2x 17 August: 2x 18 August: 2x 19 August: 2x 20 August: 2x 21 August: 2x 22 August: 2x 23 August: 2x 24 August: 2x 25 August: 2x 26 August: 1x 27 August: 2x 28 August: 2x 29 August: 2x 30 August: 2x 31 August: 2x 01 September: 2x 02 September: 2x 03 September: 2x 04 September: 2x 05 September: 2x 06 September: 2x 07 September: 2x 08 September: 2x 09 September: 2x 10 September: 2x 11 September: 2x 12 September: 2x 13 September: 2x 14 September: 2x 15 September: 2x 16 September: 2x 17 September: 2x 18 September: 2x 19 September: 2x 20 September: 2x 21 September: 2x 22 September: 2x 23 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 27 September: 2x 28 September: 2x 29 September: 2x 30 September: 2x October 01 October: 2x 02 October: 2x 03 October: 2x 04 October: 2x 05 October: 2x 06 October: 2x 07 October: 2x 08 October: 2x 09 October: 2x 10 October: 2x 11 October: 2x 12 October: 2x 13 October: 2x 14 October: 2x 15 October: 2x 16 October: 2x 17 October: 2x 18 October: 2x 20 October: 2x 21 October: 2x 22 October: 2x 23 October: 2x 24 October: 2x 25 October: 2x 26 October: 2x 27 October: 2x 28 October: 2x 29 October: 2x 30 October: 2x 31 October: 2x 01 November: 2x 02 November: 2x 03 November: 2x 04 November: 2x 05 November: 2x 06 November: 2x 07 November: 2x 08 November: 2x 09 November: 2x 10 November: 2x 11 November: 2x 12 November: 2x 13 November: 2x 14 November: 2x 15 November: 2x 16 November: 2x 17 November: 2x 18 November: 2x 19 November: 2x 20 November: 2x 21 November: 2x 22 November: 2x 23 November: 2x 24 November: 2x 25 November: 2x 26 November: 2x 27 November: 2x 28 November: 2x 29 November: 2x 30 November: 2x 01 December: 2x 02 December: 2x 03 December: 2x 04 December: 2x 05 December: 2x 06 December: 2x 07 December: 2x 08 December: 2x 09 December: 2x 10 December: 2x 11 December: 2x 12 December: 2x 13 December: 2x 14 December: 2x 15 December: 2x 16 December: 2x 17 December: 2x 18 December: 2x 19 December: 2x 20 December: 2x 21 December: 2x 22 December: 2x 23 December: 2x 24 December: 2x 25 December: 2x ran out, last day: 25 December 2017 –> Phenserine, as well as the drugs Aricept and Exelon, which are already on the market, work by increasing the level of acetylcholine, a neurotransmitter that is deficient in people with the disease. A neurotransmitter is a chemical that allows communication between nerve cells in the brain. In people with Alzheimer's disease, many brain cells have died, so the hope is to get the most out of those that remain by flooding the brain with acetylcholine. Medication can be ineffective if the drug payload is not delivered at its intended place and time. Since an oral medication travels through a broad pH spectrum, the pill encapsulation could dissolve at the wrong time. However, a smart pill with environmental sensors, a feedback algorithm and a drug release mechanism can give rise to smart drug delivery systems. This can ensure optimal drug delivery and prevent accidental overdose. 
A number of different laboratory studies have assessed the acute effect of prescription stimulants on the cognition of normal adults. In the next four sections, we review this literature, with the goal of answering the following questions: First, do MPH (e.g., Ritalin) and d-AMP (by itself or as the main ingredient in Adderall) improve cognitive performance relative to placebo in normal healthy adults? Second, which cognitive systems are affected by these drugs? Third, how do the effects of the drugs depend on the individual using them? The abuse liability of caffeine has been evaluated.147,148 Tolerance development to the subjective effects of caffeine was shown in a study in which caffeine was administered at 300 mg twice each day for 18 days.148 Tolerance to the daytime alerting effects of caffeine, as measured by the MSLT, was shown over 2 days on which 250 g of caffeine was given twice each day48 and to the sleep-disruptive effects (but not REM percentage) over 7 days of 400 mg of caffeine given 3 times each day.7 In humans, placebo-controlled caffeine-discontinuation studies have shown physical dependence on caffeine, as evidenced by a withdrawal syndrome.147 The most frequently observed withdrawal symptom is headache, but daytime sleepiness and fatigue are also often reported. The withdrawal-syndrome severity is a function of the dose and duration of prior caffeine use…At higher doses, negative effects such as dysphoria, anxiety, and nervousness are experienced. The subjective-effect profile of caffeine is similar to that of amphetamine,147 with the exception that dysphoria/anxiety is more likely to occur with higher caffeine doses than with higher amphetamine doses. Caffeine can be discriminated from placebo by the majority of participants, and correct caffeine identification increases with dose.147 Caffeine is self-administered by about 50% of normal subjects who report moderate to heavy caffeine use. In post-hoc analyses of the subjective effects reported by caffeine choosers versus nonchoosers, the choosers report positive effects and the nonchoosers report negative effects. Interestingly, choosers also report negative effects such as headache and fatigue with placebo, and this suggests that caffeine-withdrawal syndrome, secondary to placebo choice, contributes to the likelihood of caffeine self-administration. This implies that physical dependence potentiates behavioral dependence to caffeine. How should the mixed results just summarized be interpreted vis-á-vis the cognitive-enhancing potential of prescription stimulants? One possibility is that d-AMP and MPH enhance cognition, including the retention of just-acquired information and some or all forms of executive function, but that the enhancement effect is small. If this were the case, then many of the published studies were underpowered for detecting enhancement, with most samples sizes under 50. It follows that the observed effects would be inconsistent, a mix of positive and null findings. Expect to experience an increase in focus and a drastic reduction in reaction time [11][12][13][14][15][16]. You'll have an easier time quickly switching between different mental tasks, and will experience an increase in general cognitive ability [17][18]. Queal Flow also improves cognition and motivation, by means of reducing anxiety and stress [19][20][21][22][23]. 
If you're using Flow regularly for a longer period of time, it's also very likely to improve your mental health in the long term (reducing cognitive decline), and might even improve your memory [24][25]. If stimulants truly enhance cognition but do so to only a small degree, this raises the question of whether small effects are of practical use in the real world. Under some circumstances, the answer would undoubtedly be yes. Success in academic and occupational competitions often hinges on the difference between being at the top or merely near the top. A scholarship or a promotion that can go to only one person will not benefit the runner-up at all. Hence, even a small edge in the competition can be important. These are the most popular nootropics available at the moment. Most of them are the tried-and-tested and the benefits you derive from them are notable (e.g. Guarana). Others are still being researched and there haven't been many human studies on these components (e.g. Piracetam). As always, it's about what works for you and everyone has a unique way of responding to different nootropics. Chocolate or cocoa powder (Examine.com), contains the stimulants caffeine and the caffeine metabolite theobromine, so it's not necessarily surprising if cocoa powder was a weak stimulant. It's also a witch's brew of chemicals such as polyphenols and flavonoids some of which have been fingered as helpful10, which all adds up to an unclear impact on health (once you control for eating a lot of sugar). Fitzgerald 2012 and the general absence of successful experiments suggests not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of 20% \times \frac{1}{\text{dozens}} of being iodine! I may be unduly optimistic if I give this as much as 10%. Took pill around 6 PM; I had a very long drive to and from an airport ahead of me, ideal for Adderall. In case it was Adderall, I chewed up the pill - by making it absorb faster, more of the effect would be there when I needed it, during driving, and not lingering in my system past midnight. Was it? I didn't notice any change in my pulse, I yawned several times on the way back, my conversation was not more voluminous than usual. I did stay up later than usual, but that's fully explained by walking to get ice cream. All in all, my best guess was that the pill was placebo, and I feel fairly confident but not hugely confident that it was placebo. I'd give it ~70%. And checking the next morning… I was right! Finally. The evidence? A 2012 study in Greece found it can boost cognitive function in adults with mild cognitive impairment (MCI), a type of disorder marked by forgetfulness and problems with language, judgement, or planning that are more severe than average "senior moments," but are not serious enough to be diagnosed as dementia. In some people, MCI will progress into dementia. Smart drugs offer significant memory enhancing benefits. Clinical studies of the best memory pills have shown gains to focus and memory. 
Individuals seek the best quality supplements to perform better for higher grades in college courses or become more efficient, productive, and focused at work for career advancement. It is important to choose a high quality supplement to get the results you want. Taken together, the available results are mixed, with slightly more null results than overall positive findings of enhancement and evidence of impairment in one reversal learning task. As the effect sizes listed in Table 5 show, the effects when found are generally substantial. When drug effects were assessed as a function of placebo performance, genotype, or self-reported impulsivity, enhancement was found to be greatest for participants who performed most poorly on placebo, had a COMT genotype associated with poorer executive function, or reported being impulsive in their everyday lives. In sum, the effects of stimulants on cognitive control are not robust, but MPH and d-AMP appear to enhance cognitive control in some tasks for some people, especially those less likely to perform well on cognitive control tasks. Increasing incidences of chronic diseases such as diabetes and cancer are also impacting positive growth for the global smart pills market. The above-mentioned factors have increased the need for on-site diagnosis, which can be achieved by smart pills. Moreover, the expanding geriatric population and the resulting increasing in degenerative diseases has increased demand for smart pills I take my piracetam in the form of capped pills consisting (in descending order) of piracetam, choline bitartrate, anhydrous caffeine, and l-tyrosine. On 8 December 2012, I happened to run out of them and couldn't fetch more from my stock until 27 December. This forms a sort of (non-randomized, non-blind) short natural experiment: did my daily 1-5 mood/productivity ratings fall during 8-27 December compared to November 2012 & January 2013? The graphed data28 suggests to me a decline: Natural and herbal nootropics are by far the safest and best smart drugs to ingest. For this reason, they're worth covering first. Our recommendation is always to stick with natural brain fog cures. Herbal remedies for enhancing mental cognition are often side-effect free. These substances are superior for both long-term safety and effectiveness. They are also well-studied and have deep roots in traditional medicine. With subtle effects, we need a lot of data, so we want at least half a year (6 blocks) or better yet, a year (12 blocks); this requires 180 actives and 180 placebos. This is easily covered by $11 for Doctor's Best Best Lithium Orotate (5mg), 200-Count (more precisely, Lithium 5mg (from 125mg of lithium orotate)) and $14 for 1000x1g empty capsules (purchased February 2012). For convenience I settled on 168 lithium & 168 placebos (7 pill-machine batches, 14 batches total); I can use them in 24 paired blocks of 7-days/1-week each (48 total blocks/48 weeks). The lithium expiration date is October 2014, so that is not a problem In 3, you're considering adding a new supplement, not stopping a supplement you already use. The I don't try Adderall case has value $0, the Adderall fails case is worth -$40 (assuming you only bought 10 pills, and this number should be increased by your analysis time and a weighted cost for potential permanent side effects), and the Adderall succeeds case is worth $X-40-4099, where $X is the discounted lifetime value of the increased productivity due to Adderall, minus any discounted long-term side effect costs. 
If you estimate Adderall will work with p=.5, then you should try out Adderall if you estimate that 0.5 \times (X-4179) > 0 ~> $X>4179$. (Adderall working or not isn't binary, and so you might be more comfortable breaking down the various how effective Adderall is cases when eliciting X, by coming up with different levels it could work at, their values, and then using a weighted sum to get X. This can also give you a better target with your experiment- this needs to show a benefit of at least Y from Adderall for it to be worth the cost, and I've designed it so it has a reasonable chance of showing that.) Nicotine's stimulant effects are general and do not come with the same tweakiness and aggression associated with the amphetamines, and subjectively are much cleaner with less of a crash. I would say that its stimulant effects are fairly strong, around that of modafinil. Another advantage is that nicotine operates through nicotinic receptors and so doesn't cross-tolerate with dopaminergic stimulants (hence one could hypothetically cycle through nicotine, modafinil, amphetamines, and caffeine, hitting different receptors each time). Over the last few months, as part of a new research project, I have talked with five people who regularly use drugs at work. They are all successful in their jobs, financially secure, in stable relationships, and generally content with their lives. None of them have plans to stop using the drugs, and so far they have kept the secret from their employers. But as their colleagues become more likely to start using the same drugs (people talk, after all), will they continue to do so? Poulin (2007) 2002 Canadian secondary school 7th, 9th, 10th, and 12th graders (N = 12,990) 6.6% MPH (past year), 8.7% d-AMP (past year) MPH: 84%: 1–4 times per year; d-AMP: 74%: 1–4 times per year 26% of students with a prescription had given or sold some of their pills; students in class with a student who had given or sold their pills were 1.5 times more likely to use nonmedically Either prescription or illegal, daily use of testosterone would not be cheap. On the other hand, if I am one of the people for whom testosterone works very well, it would be even more valuable than modafinil, in which case it is well worth even arduous experimenting. Since I am on the fence on whether it would help, this suggests the value of information is high. Table 5 lists the results of 16 tasks from 13 articles on the effects of d-AMP or MPH on cognitive control. One of the simplest tasks used to study cognitive control is the go/no-go task. Subjects are instructed to press a button as quickly as possible for one stimulus or class of stimuli (go) and to refrain from pressing for another stimulus or class of stimuli (no go). De Wit et al. (2002) used a version of this task to measure the effects of d-AMP on subjects' ability to inhibit a response and found enhancement in the form of decreased false alarms (responses to no-go stimuli) and increased speed of correct go responses. They also found that subjects who made the most errors on placebo experienced the greatest enhancement from the drug. Took full pill at 10:21 PM when I started feeling a bit tired. Around 11:30, I noticed my head feeling fuzzy but my reading seemed to still be up to snuff. 
I would eventually finish the science book around 9 AM the next day, taking some very long breaks to walk the dog, write some poems, write a program, do Mnemosyne review (memory performance: subjectively below average, but not as bad as I would have expected from staying up all night), and some other things. Around 4 AM, I reflected that I felt much as I had during my nightwatch job at the same hour of the day, except that I had switched sleep schedules for the job. The tiredness continued to build and my willpower weakened, so the morning wasn't as productive as it could have been, but my actual performance when I could be bothered was still pretty normal. It struck me as interesting that I can feel very tired and yet not act tired, in line with the anecdotes.

Recent developments include biosensor-equipped smart pills that sense the appropriate environment and location to release pharmacological agents. Medimetrics (Eindhoven, Netherlands) has developed a pill called IntelliCap with a drug reservoir and pH and temperature sensors that release drugs to a defined region of the gastrointestinal tract. This device is CE marked and is in early stages of clinical trials for FDA approval. Recently, Google announced its intent to invest and innovate in this space.

Maj. Jamie Schwandt, USAR, is a logistics officer and has served as an operations officer, planner and commander. He is certified as a Department of the Army Lean Six Sigma Master Black Belt, certified Red Team Member, and holds a doctorate from Kansas State University. This article represents his own personal views, which are not necessarily those of the Department of the Army.

There is no shortage of nootropics available for purchase online that can be shipped to you nearly anywhere in the world. Yet many of these supplements and drugs have very few studies, particularly human studies, confirming their results. While this lack of research may not scare away more adventurous neurohackers, many people would prefer to […]

What worries me about amphetamine is its addictive potential, and the fact that it can cause stress and anxiety. Research says it's only slightly likely to cause addiction in people with ADHD [7], but we don't know much about its addictive potential in healthy adults. We all know the addictive potential of methamphetamine, and amphetamine is closely related enough to make me nervous about so many people giving it to their children. Amphetamines cause withdrawal symptoms, so the potential for addiction is there.
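The cost-benefit framing of the Adderall trial above boils down to a one-line expected-value calculation. Below is a minimal sketch of that arithmetic in Python; it only re-uses the numbers quoted above (p = 0.5, a $40 loss if the drug fails, $40 + $4099 of costs netted against the lifetime value X if it succeeds), and X itself is a hypothetical input rather than a figure from the text.

def ev_of_trying(X, p=0.5, fail_cost=40.0, success_costs=40.0 + 4099.0):
    """Expected value of running the trial: 'don't try' is worth $0, failure is worth
    -fail_cost, and success is worth X minus the experiment and drug costs."""
    return p * (X - success_costs) + (1 - p) * (-fail_cost)

for X in (3000, 4179, 10000):
    print(f"X = {X}: expected value of trying = {ev_of_trying(X):.1f}")

As expected, the result crosses zero at X = 4179, matching the break-even condition 0.5 × (X − 4179) > 0 quoted above.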
Explanatory Model Analysis: Explore, Explain, and Examine Predictive Models. Chapter 19: Residual-diagnostics Plots.
In this chapter, we present methods that are useful for a detailed examination of both overall and instance-specific model performance. In particular, we focus on graphical methods that use residuals. The methods may be used for several purposes:

- In Part II of the book, we discussed tools for single-instance exploration. Residuals can be used to identify potentially problematic instances. The single-instance explainers can then be used in the problematic cases to understand, for instance, which factors contribute most to the errors in prediction.
- For most models, residuals should express a random behavior with certain properties (like, e.g., being concentrated around 0). If we find any systematic deviations from the expected behavior, they may signal an issue with a model (for instance, an omitted explanatory variable or a wrong functional form of a variable included in the model).
- In Chapter 15, we discussed measures that can be used to evaluate the overall performance of a predictive model. Sometimes, however, we may be more interested in cases with the largest prediction errors, which can be identified with the help of residuals.

Residual diagnostics is a classical topic related to statistical modelling. It is most often discussed in the context of the evaluation of goodness-of-fit of a model. That is, residuals are computed using the training data and used to assess whether the model predictions "fit" the observed values of the dependent variable. The literature on the topic is vast, as essentially every book on statistical modeling includes some discussion of residuals. Thus, in this chapter, we are not aiming at being exhaustive. Rather, our goal is to present selected concepts that underlie the use of residuals for predictive models.

As it was mentioned in Section 2.3, we primarily focus on models describing the expected value of the dependent variable as a function of explanatory variables. In such a case, for a "perfect" predictive model, the predicted value of the dependent variable should be exactly equal to the actual value of the variable for every observation. Perfect prediction is rarely, if ever, expected. In practice, we want the predictions to be reasonably close to the actual values. This suggests that we can use the difference between the predicted and the actual value of the dependent variable to quantify the quality of predictions obtained from a model. The difference is called a residual. For a single observation, a residual will almost always be different from zero. While a large (absolute) value of a residual may indicate a problem with a prediction for a particular observation, it does not mean that the quality of predictions obtained from a model is unsatisfactory in general.
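As a quick numerical illustration of this idea, the sketch below fits a toy linear model by least squares and computes raw and crudely standardized residuals. This is not code from the book or from the DALEX package; the data are made up, and the single pooled variance estimate used for standardization is only an approximation of the kind discussed below.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                   # toy explanatory variables
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200)   # toy dependent variable

A = np.column_stack([np.ones(len(X)), X])                       # add an intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)                    # least-squares fit
y_hat = A @ beta

r = y - y_hat                                  # residuals: observed minus predicted values
r_std = r / r.std(ddof=A.shape[1])             # standardization with one pooled variance estimate
print("mean residual:", round(float(r.mean()), 4))
print("share with |standardized residual| > 2.57:", float(np.mean(np.abs(r_std) > 2.57)))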
To evaluate the quality, we should investigate the "behavior" of residuals for a group of observations. In other words, we should look at the distribution of the values of residuals. For a "good" model, residuals should deviate from zero randomly, i.e., not systematically. Thus, their distribution should be symmetric around zero, implying that their mean (or median) value should be zero. Also, residuals should be close to zero themselves, i.e., they should show low variability. Usually, to verify these properties, graphical methods are used. For instance, a histogram can be used to check the symmetry and location of the distribution of residuals. Note that a model may imply a concrete distribution for residuals. In such a case, the distributional assumption can be verified by using a suitable graphical method like, for instance, a quantile-quantile plot. If the assumption is found to be violated, one might want to be careful when using predictions obtained from the model. As it was already mentioned in Chapter 2, for a continuous dependent variable \(Y\), residual \(r_i\) for the \(i\)-th observation in a dataset is the difference between the observed value of \(Y\) and the corresponding model prediction: \[\begin{equation} r_i = y_i - f(\underline{x}_i) = y_i - \widehat{y}_i. \tag{19.1} \end{equation}\] Standardized residuals are defined as \[\begin{equation} \tilde{r}_i = \frac{r_i}{\sqrt{\mbox{Var}(r_i)}}, \tag{19.2} \end{equation}\] where \(\mbox{Var}(r_i)\) is the variance of the residual \(r_i\). Of course, in practice, the variance of \(r_i\) is usually unknown. Hence, the estimated value of \(\mbox{Var}(r_i)\) is used in (19.2). Residuals defined in this way are often called the Pearson residuals (Galecki and Burzykowski 2013). Their distribution should be approximately standard-normal. For the classical linear-regression model, \(\mbox{Var}(r_i)\) can be estimated by using the design matrix. On the other hand, for count data, the variance can be estimated by \(f(\underline{x}_i)\), i.e., the expected value of the count. In general, for complicated models, it may be hard to estimate \(\mbox{Var}(r_i)\), so it is often approximated by a constant for all residuals. Definition (19.2) can also be applied to a binary dependent variable if the model prediction \(f(\underline{x}_i)\) is the probability of observing \(y_i\) and upon coding the two possible values of the variable as 0 and 1. However, in this case, the range of possible values of \(r_i\) is restricted to \([-1,1]\), which limits the usefulness of the residuals. For this reason, more often the Pearson residuals are used. Note that, if the observed values of the explanatory-variable vectors \(\underline{x}_i\) lead to different predictions \(f(\underline{x}_i)\) for different observations in a dataset, the distribution of the Pearson residuals will not be approximated by the standard-normal one. This is the case when, for instance, one (or more) of the explanatory variables is continuous. Nevertheless, in that case, the index plot may still be useful to detect observations with large residuals. The standard-normal approximation is more likely to apply in the situation when the observed values of vectors \(\underline{x}_i\) split the data into a few, say \(K\), groups, with observations in group \(k\) (\(k=1,\ldots,K\)) sharing the same predicted value \(f_k\). This may be happen if all explanatory variables are categorical with a limited number of categories. 
In that case, one can consider averaging residuals \(r_i\) per group and standardizing them by \(\sqrt{f_k(1-f_k)/n_k}\), where \(n_k\) is the number of observations in group \(k\). For categorical data, residuals are usually defined in terms of differences in predictions for the dummy binary variable indicating the category observed for the \(i\)-th observation. Note that the plot of standardized residuals in function of leverage can also be used to detect observations with large differences between the predicted and observed value of the dependent variable. In particular, given that \({\tilde{r}_i}\) should have approximately standard-normal distribution, only about 0.5% of them should be larger, in absolute value, than 2.57. If there is an excess of such observations, this could be taken as a signal of issues with the fit of the model. At least two such observations (59 and 143) are indicated in the plot shown in the bottom-left panel of Figure 19.1. Finally, the bottom-right panel of Figure 19.1 presents an example of a normal quantile-quantile plot. In particular, the vertical axis represents the ordered values of the standardized residuals, whereas the horizontal axis represents the corresponding values expected from the standard normal distribution. If the normality assumption is fulfilled, the plot should show a scatter of points close to the \(45^{\circ}\) diagonal. Clearly, this is not the case of the plot in the bottom-right panel of Figure 19.1. Figure 19.1: Diagnostic plots for a linear-regression model. Clockwise from the top-left: residuals in function of fitted values, a scale-location plot, a normal quantile-quantile plot, and a leverage plot. In each panel, indexes of the three most extreme observations are indicated. In this section, we consider the linear-regression model apartments_lm (Section 4.5.1) and the random forest model apartments_rf (Section 4.5.2) for the apartment-prices dataset (Section 4.4). Recall that the dependent variable of interest, the price per square meter, is continuous. Thus, we can use residuals \(r_i\), as defined in (19.1). We compute the residuals for the apartments_test testing dataset (see Section 4.5.4). It is worth noting that, as it was mentioned in Section 15.4.1, RMSE for both models is very similar for that dataset. Thus, overall, the two models could be seen as performing similarly on average. Figures 19.2 and 19.3 summarize the distribution of residuals for both models. In particular, Figure 19.2 presents histograms of residuals, while Figure 19.3 shows box-and-whisker plots for the absolute value of the residuals. Figure 19.2: Histogram of residuals for the linear-regression model apartments_lm and the random forest model apartments_rf for the apartments_test dataset. Despite the similar value of RMSE, the distributions of residuals for both models are different. In particular, Figure 19.2 indicates that the distribution for the linear-regression model is, in fact, split into two separate, normal-like parts, which may suggest omission of a binary explanatory variable in the model. The two components are located around the values of about -200 and 400. As mentioned in the previous chapters, the reason for this behavior of the residuals is the fact that the model does not capture the non-linear relationship between the price and the year of construction. For instance, Figure 17.8 indicates that the relationship between the construction year and the price may be U-shaped. 
In particular, apartments built between 1940 and 1990 appear to be, on average, cheaper than those built earlier or later. As seen from Figure 19.2, the distribution of residuals for the random forest model is skewed to the right and multimodal. It seems to be centered at a value closer to zero than the distribution for the linear-regression model, but it shows a larger variation. These conclusions are confirmed by the box-and-whisker plots in Figure 19.3. Figure 19.3: Box-and-whisker plots of the absolute values of the residuals of the linear-regression model apartments_lm and the random forest model apartments_rf for the apartments_test dataset. The dots indicate the mean value that corresponds to root-mean-squared-error. The plots in Figures 19.2 and 19.3 suggest that the residuals for the random forest model are more frequently smaller than the residuals for the linear-regression model. However, a small fraction of the random forest-model residuals is very large, and it is due to them that the RMSE is comparable for the two models. In the remainder of the section, we focus on the random forest model. Figure 19.4 shows a scatter plot of residuals (vertical axis) in function of the observed (horizontal axis) values of the dependent variable. For a "perfect" predictive model, we would expect the horizontal line at zero. For a "good" model, we would like to see a symmetric scatter of points around the horizontal line at zero, indicating random deviations of predictions from the observed values. The plot in Figure 19.4 shows that, for the large observed values of the dependent variable, the residuals are positive, while for small values they are negative. This trend is clearly captured by the smoothed curve included in the graph. Thus, the plot suggests that the predictions are shifted (biased) towards the average. Figure 19.4: Residuals and observed values of the dependent variable for the random forest model apartments_rf for the apartments_test dataset. The shift towards the average can also be seen from Figure 19.5 that shows a scatter plot of the predicted (vertical axis) and observed (horizontal axis) values of the dependent variable. For a "perfectly" fitting model we would expect a diagonal line (indicated in red). The plot shows that, for large observed values of the dependent variable, the predictions are smaller than the observed values, with an opposite trend for the small observed values of the dependent variable. Figure 19.5: Predicted and observed values of the dependent variable for the random forest model apartments_rf for the apartments_test dataset. The red line indicates the diagonal. Figure 19.6 shows an index plot of residuals, i.e., their scatter plot in function of an (arbitrary) identifier of the observation (horizontal axis). The plot indicates an asymmetric distribution of residuals around zero, as there is an excess of large positive (larger than 500) residuals without a corresponding fraction of negative values. This can be linked to the right-skewed distribution seen in Figures 19.2 and 19.3 for the random forest model. Figure 19.6: Index plot of residuals for the random forest model apartments_rf for the apartments_test dataset. Figure 19.7 shows a scatter plot of residuals (vertical axis) in function of the predicted (horizontal axis) value of the dependent variable. For a "good" model, we would like to see a symmetric scatter of points around the horizontal line at zero. 
The plot in Figure 19.7, like the one in Figure 19.4, suggests that the predictions are shifted (biased) towards the average. Figure 19.7: Residuals and predicted values of the dependent variable for the random forest model apartments_rf for the apartments_test dataset.

The random forest model, like the linear-regression model, assumes that residuals should be homoscedastic, i.e., that they should have a constant variance. Figure 19.8 presents a variant of the scale-location plot of residuals, i.e., a scatter plot of the absolute value of residuals (vertical axis) as a function of the predicted values of the dependent variable (horizontal axis). The plot includes a smoothed line capturing the average trend. For homoscedastic residuals, we would expect a symmetric scatter around a horizontal line; the smoothed trend should also be horizontal. The plot in Figure 19.8 deviates from the expected pattern and indicates that the variability of the residuals depends on the (predicted) value of the dependent variable. For models like linear regression, such heteroscedasticity of the residuals would be worrying. In random forest models, however, it may be less of a concern. This is because the heteroscedasticity may occur due to the fact that such models reduce the variability of residuals by introducing a bias (towards the average). Thus, it is up to the developer of a model to decide whether such a bias (in our example, for the cheapest and most expensive apartments) is a desirable price to pay for the reduced residual variability. Figure 19.8: The scale-location plot of residuals for the random forest model apartments_rf for the apartments_test dataset.

Diagnostic methods based on residuals are a very useful tool in model exploration. They allow identifying different types of issues with model fit or prediction, such as problems with distributional assumptions or with the assumed structure of the model (in terms of the selection of the explanatory variables and their form). The methods can help in detecting groups of observations for which a model's predictions are biased and, hence, require inspection. A potential complication related to the use of residual diagnostics is that they rely on graphical displays. Hence, for a proper evaluation of a model, one may have to construct and review many graphs. Moreover, interpretation of the patterns seen in graphs may not be straightforward. Also, it may not be immediately obvious which element of the model may have to be changed to remove the potential issue with the model fit or predictions.

In this section, we present diagnostic plots as implemented in the DALEX package for R. The package covers all plots and methods presented in this chapter. Similar functions can be found in the packages auditor (Gosiewska and Biecek 2018), rms (Harrell Jr 2018), and stats (Faraway 2005). For illustration purposes, we will show how to create the plots shown in Section 19.4 for the linear-regression model apartments_lm (Section 4.5.1) and the random forest model apartments_rf (Section 4.5.2) for the apartments_test dataset (Section 4.4). We first load the two models via the archivist hooks, as listed in Section 4.5.6. Subsequently, we construct the corresponding explainers by using the function explain() from the DALEX package (see Section 4.2.6). Note that we use the apartments_test data frame without the first column, i.e., the m2.price variable, in the data argument. This will be the dataset to which the model will be applied.
The m2.price variable is explicitly specified as the dependent variable in the y argument. We also load the randomForest package, as it is important to have the corresponding predict() function available for the random forest model.

library("DALEX")
model_apart_lm <- archivist::aread("pbiecek/models/55f19")
explain_apart_lm <- DALEX::explain(model = model_apart_lm,
                                   data = apartments_test[,-1],
                                   y = apartments_test$m2.price,
                                   label = "Linear Regression")

library("randomForest")
model_apart_rf <- archivist::aread("pbiecek/models/fe7a5")
explain_apart_rf <- DALEX::explain(model = model_apart_rf,
                                   data = apartments_test[,-1],
                                   y = apartments_test$m2.price,
                                   label = "Random Forest")

For exploration of residuals, DALEX includes two useful functions. The model_performance() function can be used to evaluate the distribution of the residuals. On the other hand, the model_diagnostics() function is suitable for investigating the relationship between residuals and other variables.

The model_performance() function was already introduced in Section 15.6. Application of the function to an explainer-object returns an object of class "model_performance" which includes, in addition to selected model-performance measures, a data frame containing the observed and predicted values of the dependent variable together with the residuals.

mr_lm <- DALEX::model_performance(explain_apart_lm)
mr_rf <- DALEX::model_performance(explain_apart_rf)

By applying the plot() function to a "model_performance"-class object we can obtain various plots. The required type of plot is specified with the help of the geom argument (see Section 15.6). In particular, specifying geom = "histogram" results in a histogram of residuals. In the code below, we apply the plot() function to the "model_performance"-class objects for the linear-regression and random forest models. As a result, we automatically get a single graph with the histograms of residuals for the two models. The resulting graph is shown in Figure 19.2.

library("ggplot2")
plot(mr_lm, mr_rf, geom = "histogram")

The box-and-whisker plots of the residuals for the two models can be constructed by applying the geom = "boxplot" argument. The resulting graph is shown in Figure 19.3.

plot(mr_lm, mr_rf, geom = "boxplot")

Function model_diagnostics() can be applied to an explainer-object to directly compute residuals. The resulting object of class "model_diagnostics" is a data frame in which the residuals and their absolute values are combined with the observed and predicted values of the dependent variable and the observed values of the explanatory variables. The data frame can be used to create various plots illustrating the relationship between residuals and the other variables.

md_lm <- model_diagnostics(explain_apart_lm)
md_rf <- model_diagnostics(explain_apart_rf)

Application of the plot() function to a "model_diagnostics"-class object produces, by default, a scatter plot of residuals (on the vertical axis) as a function of the predicted values of the dependent variable (on the horizontal axis). By using the arguments variable and yvariable, it is possible to specify plots with other variables used for the horizontal and vertical axes, respectively. The two arguments accept, apart from the names of the explanatory variables, the following values: "y" for the dependent variable, "y_hat" for the predicted value of the dependent variable, "obs" for the identifiers of observations, "residuals" for residuals, "abs_residuals" for absolute values of residuals.
Thus, to obtain the plot of residuals as a function of the observed values of the dependent variable, as shown in Figure 19.4, the syntax presented below can be used.

plot(md_rf, variable = "y", yvariable = "residuals")

To produce Figure 19.5, we have to use the predicted values of the dependent variable on the vertical axis. This is achieved by specifying the yvariable = "y_hat" argument. We add the diagonal reference line to the plot by using the geom_abline() function.

plot(md_rf, variable = "y", yvariable = "y_hat") +
  geom_abline(colour = "red", intercept = 0, slope = 1)

Figure 19.6 presents an index plot of residuals, i.e., residuals (on the vertical axis) as a function of the identifiers of individual observations (on the horizontal axis). Toward this aim, we use the plot() function call as below.

plot(md_rf, variable = "ids", yvariable = "residuals")

Finally, Figure 19.8 presents a variant of the scale-location plot, with absolute values of the residuals shown on the vertical scale and the predicted values of the dependent variable on the horizontal scale. The plot is obtained with the syntax shown below.

plot(md_rf, variable = "y_hat", yvariable = "abs_residuals")

Note that, by default, all plots produced by applying the plot() function to a "model_diagnostics"-class object include a smoothed curve. To exclude the curve from a plot, one can use the argument smooth = FALSE.

In this section, we use the dalex library for Python. The package covers all methods presented in this chapter. But, as mentioned in Section 19.1, residuals are a classical model-diagnostics tool. Thus, essentially any model-related library includes functions that allow calculation and plotting of residuals. For illustration purposes, we use the apartments_rf random forest model for the apartment-prices data developed in Section 4.6.2. Recall that the model is developed to predict the price per square meter of an apartment in Warsaw. In the first step, we create an explainer-object that will provide a uniform interface for the predictive model. We use the Explainer() constructor for this purpose.

import dalex as dx
apartments_rf_exp = dx.Explainer(apartments_rf, X, y, label = "Apartments RF Pipeline")

The function that calculates residuals, absolute residuals and observation ids is model_diagnostics().

md_rf = apartments_rf_exp.model_diagnostics()
md_rf.result

The results can be visualised by applying the plot() method. Figure 19.9 presents the created plot.

md_rf.plot()

Figure 19.9: Residuals versus predicted values for the random forest model for the Apartments data.

In the plot() function, we can specify what shall be presented on the horizontal and vertical axes. Possible values are columns in the md_rf.result data frame, i.e., residuals, abs_residuals, y, y_hat, ids, and variable names.

md_rf.plot(variable = "ids", yvariable = "abs_residuals")

Figure 19.10: Absolute residuals versus indices of corresponding observations for the random forest model for the Apartments data.

Faraway, Julian. 2005. Linear Models with R (1st ed.). Boca Raton, Florida: Chapman & Hall/CRC. https://cran.r-project.org/doc/contrib/Faraway-PRA.pdf.
Galecki, A., and T. Burzykowski. 2013. Linear Mixed-Effects Models Using R: A Step-by-Step Approach. New York, NY: Springer-Verlag New York.
Gosiewska, Alicja, and Przemyslaw Biecek. 2018. auditor: Model Audit - Verification, Validation, and Error Analysis. https://CRAN.R-project.org/package=auditor.
Harrell Jr, Frank E. 2018. rms: Regression Modeling Strategies. https://CRAN.R-project.org/package=rms.
Kutner, M. H., C. J. Nachtsheim, J. Neter, and W. Li. 2005. Applied Linear Statistical Models. New York: McGraw-Hill/Irwin.
Kinetic models of conservative economies with need-based transfers as welfare

Kirk Kayser (1), Dieter Armbruster (2) and Michael Herty (3)
(1) Department of Mathematics and Actuarial Science, Otterbein University, Westerville, OH 43081, USA
(2) School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287-1804, USA
(3) Institut für Geometrie und Praktische Mathematik (IGPM), RWTH Aachen University, Templergraben 55, 52062 Aachen, Germany

Received: March 2019. Revised: August 2019. Published: December 2019.
Fund Project: D. A. and K. K. gratefully acknowledge support through NSF grant DMS-1515592 and travel support through the KI-Net grant, NSF RNMS grant No. 1107291.

Kinetic exchange models of markets utilize Boltzmann-like kinetic equations to describe the macroscopic evolution of a community wealth distribution corresponding to microscopic binary interaction rules. We develop such models to study a form of welfare called need-based transfer (NBT). In contrast to conventional centrally organized wealth redistribution, NBTs feature a welfare threshold and binary donations in which above-threshold individuals give from their surplus wealth to directly meet the needs of below-threshold individuals. This structure is motivated by examples such as the gifting of cattle practiced by East African Maasai herders or food sharing among vampire bats, and has been studied using agent-based simulation. From the regressive to progressive kinetic NBT models developed here, moment evolution equations and simulation are used to describe the evolution of the community wealth distribution in terms of efficiency, shape, and inequality.

Keywords: Kinetic exchange models, welfare, wealth redistribution, need-based transfers, optimal control, simulation.
Mathematics Subject Classification: 35Q93, 82C40, 91B15, 91B80.
Citation: Kirk Kayser, Dieter Armbruster, Michael Herty. Kinetic models of conservative economies with need-based transfers as welfare. Kinetic & Related Models, 2020, 13 (1): 169-185. doi: 10.3934/krm.2020006.
Figure 1. Simple examples of the different cases of steady states for Equation (2) where $\theta = 0$; the area under the probability density curve is shaded for visibility.
Figure 2. (a) A few different initial wealth distributions with $M_1 = 14$ that, when evolving according to Equation (2) with $\theta = 0$, approach the steady states shown in (b). The second and third moment evolutions are shown in (c) and (d), respectively.
Figure 3. (a) A few different initial wealth distributions with $M_1 = 14$ that, when evolving according to Equation (6) with $\theta = 0$, approach an attractor manifold (b). Note that three curves are present in (b), but they are overlapping. The second and third moment evolutions are shown in (c) and (d), respectively.
Figure 4. Numerical steady state solution to Equation (7) as well as the analytical steady state solution from (8), with initial condition $f_0(w)$ a Normal distribution $\mathcal{N}(\mu = 10, \sigma^2 = 20^2)$ and parameters $\theta = 0, \epsilon_0 = 10$.
Figure 5. Probability densities for the probability of choosing donor threshold $\theta + \epsilon$ for regressive, flat, and progressive policies with $\theta = 0$ and maximal wealth $L = 100$. The equation for these parameterized donor threshold probability distributions is given in Equation 10.
Figure 6. Flat policy comparison with agent-based simulation. A gamma initial condition is used for $f_0(w)$ and $10^4$ agents are sampled from this distribution as well. Equation (9) is used with $\alpha = 0$ to find the steady state solution of the Boltzmann-like equation; for the agents, interactions are randomly generated and transfers are conducted according to the microscopic description of equation (1) until all $10^4$ agents are at or above threshold.
Figure 7. Steady state distributions and data for parameterized kinetic NBT policies with initial condition $f_0(w) \sim$ Gamma.
Figure 8. Steady state distributions and data for parameterized kinetic NBT policies with initial condition $f_0(w) \sim$ Uniform.
Figure 9. Wealth distributions at $t = 200$ for various policies. The notation used in the legend is such that $fb_p$: 0.2918 means that for the progressive policy (p), the fraction of the population below threshold ($fb_p$) is equal to 0.2918. The initial condition is chosen to be a Gamma distribution. $fb_o$ identifies the optimal policy corresponding to (11) and (12).
Figure 10. A simple example density $f(w)$ to illustrate why the optimal control policy leads to a uniform distribution of surpluses. Choosing a donor threshold of 1 maximizes the product of matching donor-recipient densities.
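The microscopic transfer rule, equation (1), is not reproduced on this page, so the exact dynamics cannot be recovered from the text above. Still, the Figure 6 caption describes the agent-based side of the comparison (randomly generated interactions, transfers until all agents are at or above threshold), and the abstract describes donations from above-threshold surplus to below-threshold need. The following is only a schematic Python sketch of that kind of simulation under an assumed rule: a randomly chosen donor covers as much of a randomly chosen recipient's deficit as its surplus allows.

import random

def nbt_transfers(wealth, theta=0.0, tol=1e-9, seed=1):
    """Schematic need-based-transfer dynamics (assumed rule, for illustration only):
    repeatedly match a random above-threshold donor with a random below-threshold
    recipient; the donor gives min(recipient's deficit, donor's surplus). Each transfer
    moves at least one agent onto the threshold, so the loop terminates."""
    rng = random.Random(seed)
    w = list(wealth)
    while True:
        below = [i for i, x in enumerate(w) if x < theta - tol]
        above = [i for i, x in enumerate(w) if x > theta + tol]
        if not below or not above:
            return w
        r, d = rng.choice(below), rng.choice(above)
        give = min(theta - w[r], w[d] - theta)
        w[r] += give
        w[d] -= give

init_rng = random.Random(0)
w0 = [init_rng.gauss(10, 20) for _ in range(1000)]   # toy initial wealths, not the paper's data
w1 = nbt_transfers(w0, theta=0.0)
print("wealth conserved:", abs(sum(w1) - sum(w0)) < 1e-6)
print("fraction below threshold before/after:",
      sum(x < 0 for x in w0) / len(w0), sum(x < 0 for x in w1) / len(w1))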
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PERSONAL OFFICE Znamenskii, Sergei Vital'evich Total publications: 33 (32) in MathSciNet: 13 (13) in zbMATH: 13 (13) in Web of Science: 8 (8) in Scopus: 4 (4) Cited articles: 18 Citations in Math-Net.Ru: 30 Citations in MathSciNet: 20 Citations in Web of Science: 8 Citations in Scopus: 1 Number of views: This page: 1924 Abstract pages: 4318 Full texts: 1304 References: 413 Doctor of physico-mathematical sciences (1996) Speciality: 01.01.01 (Real analysis, complex analysis, and functional analysis) Keywords: linear convexity; infinite order diferential equations; infinite order diferential operators; convolution equations in complex domain; convexity in directions; analytic functionals; duality in functional spaces; conjugated sets; C-convexity; spaces of holomorphic funtions. Main publications: Znamenskii S. V., "Silnaya lineinaya vypuklost. I. Dvoistvennost prostranstv golomorfnykh funktsii", Sib. matem. zhurn., 26:3 (1985), 31–43 Znamenskii S. V., "Silnaya lineinaya vypuklost. II. Suschestvovanie golomorfnykh reshenii lineinykh sistem uravnenii", Sib. matem. zhurn., 29:6 (1988), 49–65 Znamenskii S. V., "O persechenii vypuklykh nositelei analiticheskikh funktsionalov", Matem. zametki, 53:6 (1993), 41–45 Znamenskii S. V., "Primer silno lineino vypukloi oblasti s nespryamlyaemoi granitsei", Matem. zametki, 57:6 (1995), 851–861 Znamenskii S. V., Kozlovskaya E. A., "Kriterii epimorfnosti operatora svertki s tochechnym nositelem v prostranstve funktsii, golomorfnykh na svyaznom mnozhestve v $\mathbf C$", Doklady Akademii nauk. Matematika, 368:6 (1999), 737–739 http://www.mathnet.ru/eng/person9110 List of publications on Google Scholar http://zbmath.org/authors/?q=ai:znamenskij.s-v https://mathscinet.ams.org/mathscinet/MRAuthorID/217720 http://elibrary.ru/author_items.asp?authorid=1507 http://www.scopus.com/authid/detail.url?authorId=55858895700 Full list of publications: | by years | by types | by times cited in WoS | by times cited in Scopus | scientific publications | common list | 1. Sergej V. Znamenskij, "From similarity to distance: axiom set,monotonic transformations and metric determinacy", Zhurn. SFU. Ser. Matem. i fiz., 11:3 (2018), 331–341 (cited: 2) 2. S. V. Znamenskij, "Numerical evaluation of the interpolation accuracy of simple elementary functions", Programmnye sistemy: teoriya i prilozheniya, 9:4 (2018), 93–116 3. S. V. Znamenskii, "Numerical evaluation of the interpolation accuracy of simple elementary functions", Program Systems: Theory and Applications, 9:4 (2018), 69–92 4. S. V. Znamenskij, "Stable assessment of the quality of similarity algorithms of character strings and their normalizations", Program Systems: Theory and Applications, 9:4 (2018), 579–596 5. S. V. Znamenskij, "Stable assessment of the quality of similarity algorithms of character strings and their normalizations", Programmnye sistemy: teoriya i prilozheniya, 9:4 (2018), 561–578 6. Sergej V. Znamenskij, "A formula for the mean length of the longest common subsequence", Zhurn. SFU. Ser. Matem. i fiz., 10:1 (2017), 71–74 (cited: 1) 7. S. V. Znamenskij, "Model and axioms for similarity metrics", Program Systems: Theory and Applications, 8:4 (2017), 347–357 8. S. V. Znamenskij, "Approximation of the longest common subsequence length for two long random strings", Program Systems: Theory and Applications, 7:4 (2016), 347–358 9. S. V. 
Znamenskij, "A picture of common subsequence length for two random strings over an alphabet of 4 symbols", Programmnye sistemy: teoriya i prilozheniya, 7:1 (2016), 201–208 (cited: 2) 10. S. M. Abramov, A. O. Blinov, S. N. Vassilyev, I. S. Guseva, E. V. Danilina, M. G. Dmitriev, S. V. Znamenskii, G. Konstantinov, N. E. Kul'baka, G. Osipov, Yu. S. Popkov, I. V. Rasina, E. V. Ryumina, G. V. Sidorenko, O. V. Fes'ko, M. M. Khrustalev, A. M. Tsirlin, "In memory of Professor Vladimir Iosifovich Gurman", Program Systems: Theory and Applications, 7:3 (2016), 109–132 11. Sergej V. Znamenskij, "Simple essential improvements to the ROUGE-W algorithm", Zhurn. SFU. Ser. Matem. i fiz., 8:4 (2015), 497–501 (cited: 1) 12. S. V. Znamenskij, "A model and algorithm for sequence alignment", Programmnye sistemy: teoriya i prilozheniya, 6:1 (2015), 189–197 (cited: 3) 13. Sergej Znamenskij, "Modeling of the optimal sequence alignment problem", Program Systems: Theory and Applications, 5:4 (2014), 257–267 14. S. M. Abramov, S. V. Znamenskij, "An Example Document for submitting to PSTA", Program Systems: Theory and Applications, 4:2 (2013), 43–69 15. S. V. Znamenskii, "Distributed memory architecture for changing computing environment", UBS, 43 (2013), 271–294 16. S. V. Znamenskij, "Global data identification in a long term perspective", Program Systems: Theory and Applications, 3:2 (2012), 77–88 17. S. V. Znamenskij, "Performance indicators for scheduling destruction", Program Systems: Theory and Applications, 3:2 (2012), 51–60 18. S. V. Znamenskii, "The process approach to information systems evolution. Retrospective indexing", Program Systems: Theory and Applications, 2:4 (2011), 127–137 19. S. V. Znamenskii, "Architecture of collaboratively-changeable hierarchical structure", Program Systems: Theory and Applications, 2:4 (2011), 115–128 20. V. A. Bolotov, S. V. Znamenskij, "Requirements for Education quality management Information system", Program Systems: Theory and Applications, 1:2 (2010), 3–13 21. S. V. Znamenskii, E. A. Znamenskaya, "Convexity of a set on the plane in a given direction", J. Math. Sci. (N. Y.), 120:6 (2004), 1803–1841 22. S. V. Znamenskii, E. A. Znamenskaya, "The existence of analytic primitives on an arbitrary subset of the complex plane", Russian Math. Surveys, 55:1 (2000), 192–194 23. S. V. Znamenskii, L. N. Znamenskaya, "Projective convexity in $\mathbb{CP}^n$", Siberian Math. J., 38:4 (1997), 685–698 (cited: 2) 24. S. V. Znamenskii, L. N. Znamenskaya, "Spiral connectedness of the sections and projections of $\mathbb C$-convex sets", Math. Notes, 59:3 (1996), 253–260 (cited: 4) 25. S. V. Znamenskii, "Example of a strictly linear convex domain with nonrectifiable boundary", Math. Notes, 57:6 (1995), 599–605 (cited: 1) 26. S. V. Znamenskii, "On the intersection of convex supports of analytical functionals", Math. Notes, 53:6 (1993), 590–592 27. S. V. Znamenskii, "Do convex supports of an analytic functional always have a common point?", Soviet Math. (Iz. VUZ), 35:2 (1991), 47–51 28. S. V. Znamenskii, "Existence of holomorphic preimages in all directions", Math. Notes, 45:1 (1989), 11–13 (cited: 1) 29. L. N. Znamenskaya, S. V. Znamenskiǐ, "Conditions for strong linear convexity of Hartogs compacta with curvilinear base", Soviet Math. (Iz. VUZ), 28:12 (1984), 37–41 30. S. V. Znamenskii, "A geometric criterion for strong linear convexity", Funct. Anal. Appl., 13:3 (1979), 224–225 31. S. V. 
Znamenskii, "Integral formulae for a family of holomorphic functions that are uniformly bounded in the interior of a domain $D \subset\mathbb C^n$", Russian Math. Surveys, 33:3 (1978), 183 32. Sh. A. Dautov, S. V. Znamenskii, "Morera's theorem in multidimensional complex analysis", Soviet Math. (Iz. VUZ), 19:5 (1975), 12–13 33. S. V. Znamenskii, "A connection of analytic functions of several variables with Dirichlet series of one variable. Applications to the representation of nonlinear analytic operators", Dokl. Akad. Nauk SSSR, 223:3 (1975), 544–547 Ailamazyan Program Systems Institute of Russian Academy of Sciences Steklov Mathematical Institute of Russian Academy of Sciences, Moscow Siberian Federal University, Krasnoyarsk math-net2019_07 [at] mi-ras ru Terms of Use Registration Logotypes © Steklov Mathematical Institute RAS, 2019
relation r on a set is represented by the matrix HomeUncategorizedrelation r on a set is represented by the matrix If aij • bij for all (i;j)-entries, we write A • B. Solution for 10 0 1 For the set A={1,2,3} and B={a,b.c,d} , if R is a relation on the set A and B represented by the matrix , 0 100 then relation R is given by… Definition. Let A = [aij] and B = [bij] be m £ n Boolean matrices. Mathematically, a binary relation between two sets A and B is a subset R of A x B. Equivalence relation 10/10/2014 19 Example: Consider the following relation on the set A = {1, 2, 3,4}: R = {(1, 1), (1, 2), (2,1), (2,2), (3,4), (4,3), (3,3), (4, 4)} Determine whether this relation is equivalence or not. Black Friday is Here! Books; Test Prep; Bootcamps; Class; Earn Money; Log in ; Join for Free. Answer: [0 1 45/ Let R be the relation on the set of integers where xRy if and only if x + y = 8. The relation R on the set of all people where aRb means that a is younger than b. Ans: 3, 4 22. Show that Rn is symmetric for all positive integers n. 5 points Let R be a symmetric relation on set A Proof by induction: Basis Step: R1= R is symmetric is True. View Answer A single-threaded 25-mm power screw is 25 mm in diameter with a pitch of 5 mm. d.r1 r1. In general, a relation R from a set A to a set B will be understood as a subset of the Cartesian product A× B, i.e., R ⊆ A× B. Rn+1 is symmetric if for all (x,y) in Rn+1, we have (y,x) is in Rn+1 as well. Solution for Let R be a relation on the set A = {1,2,3,4} defined by R = {(1,1), (1,2), (1,3), (1,4), (2,2), (2,4), (3,3), (3,4), (4,4)} Construct the matrix… 0] Which one is true? The relation R S is known the composition of R and S; it is sometimes denoted simply by RS. 215 We may ask next how to interpret the inverse relation R 1 on its matrix. They are represented by labeled points or occasionally by small circles. So we make a matrix that tells us whether an ordered pair is in the set, let's say the elements are $\{a,b,c\}$ then we'll use a $1$ to mark a pair that is in the set and a $0$ for everything else. The relation R on R where aRb means a − b ∈ Z. Ans: 1, 2, 4. (More on that later.) Let R 1 be a relation from the set A to B and R 2 be a relation from B to C . (i) R is reflexive (ii) R is symmetric Answer: (ii) only 46/ Since a partial order is a binary relation, it can be represented by a digraph. Problem 7 Determine whether the relations represented by th… 03:16 View Full Video. Reflexive in a Zero-One Matrix Let R be a binary relation on a set and let M be its zero-one matrix. find the matrices that represent a.r1 ∪ r2. b.r1 ∩ r2. Each product has a size code, a weight code, and a shape code. Theorem: Let R be a binary relation on a set A and let M be its connection matrix. The relation R is represented by the matrix M R m ij where The matrix from MATH 1019 at Centennial College Draw the graph of the relation R, represented by adjacency matrix [0 0 1 11 1 1 1 0 1 MR on set A={1,2,3,4}. This means that the rows of the matrix of R 1 will be indexed by the set B= fb R 1 A B;R 2 B C . R and relation S represented by a matrix M S. Then, the matrix of their composition S Ris M S R and is found by Boolean product, M S R = M R⊙M S The composition of a relation such as R2 can be found with matrices and Boolean powers. It is represented as: It's corresponding possible relations are: Digraph – A digraph is known was directed graph. Matrices and Graphs of Relations [the gist of Sec. Such a matrix is somewhat less Page 105 . 
If |A| = n and |B| = p, and the elements are ordered and labeled (A = {a1, a2, ..., an}, etc.), then any relation R from A to B (i.e., a subset of A x B) can be represented by a matrix with n rows and p columns: Mjk, the element in row j and column k, equals 1 if aj R bk and 0 otherwise [Sec. 7.2 of Grimaldi]. This zero-one matrix MR = [aij], with aij = 1 if xi R yj and aij = 0 otherwise, is called the Boolean (or logical) matrix of R. The notation xRy is shorthand for (x, y) ∈ R, and a relation does not have to be meaningful; any subset of A x B is a relation. For example, the relation "married to" between a set M of men and a set W of women is simply a subset of the Cartesian product M x W. The term binary refers to the fact that the relation is a subset of the Cartesian product of two sets; the two sets A and B may or may not be equal. If X = Y, then m = n and the matrix M is a square matrix; in that case R is a relation on the set A, that is, a relation from A to itself, and the matrix encodes the standard properties of R:

• R is reflexive if and only if Mii = 1 for all i; in other words, all elements on the main diagonal are equal to 1.
• R is irreflexive if and only if its matrix representation contains all 0's on the main diagonal.
• R is symmetric if and only if M is a symmetric matrix, M = M^T.
• R is antisymmetric if Mij = 0 or Mji = 0 for all i ≠ j.
• R is asymmetric if no two symmetrically placed entries Mij and Mji are both 1 and, in addition, the main diagonal is all 0 (this answers the question of how the matrix can be used to determine whether the relation is asymmetric).

If R goes from A = {a1, ..., am} to B = {b1, b2, ..., bn}, then the inverse relation R^{-1} goes from B to A, and its matrix is the transpose of MR. Consequently, if there are k nonzero entries in MR, the matrix representing R, there are also exactly k nonzero entries in the matrix representing R^{-1}; a typical exercise asks, say, for the third row of the matrix that represents R^{-1}. A relation R on a set A is called an equivalence relation if R is reflexive, symmetric, and transitive; when we deal with a partial order, the relation must be reflexive, antisymmetric, and transitive. Every n-ary relation on a set A corresponds to an n-ary predicate with A as the universe of discourse, and two relations R1 (n-ary, on A1 x ... x An) and R2 (m-ary, on B1 x ... x Bm) are equal iff n = m, Ai = Bi for all i with 1 ≤ i ≤ n, and R1 and R2 are equal as sets of ordered n-tuples. As a small worked case, one exercise gives the relation on the set {1, 2, 3} represented by the matrix with rows 0 1 1, 1 0 0, 1 0 1 and asks for the matrices of related relations such as R^{-1} and R^2.
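The following short sketch in R encodes that exercise matrix and tests the properties just listed. It uses only base R; the variable names are illustrative and the checks follow directly from the definitions above.

# zero-one matrix of the relation R on {1, 2, 3} quoted in the exercise above
MR <- matrix(c(0, 1, 1,
               1, 0, 0,
               1, 0, 1), nrow = 3, byrow = TRUE)
# reflexive: every diagonal entry is 1; irreflexive: every diagonal entry is 0
reflexive   <- all(diag(MR) == 1)
irreflexive <- all(diag(MR) == 0)
# symmetric: the matrix equals its transpose
symmetric <- all(MR == t(MR))
# antisymmetric: for i != j, MR[i, j] and MR[j, i] are never both 1
off <- MR * t(MR)
diag(off) <- 0
antisymmetric <- all(off == 0)
# the matrix of the inverse relation R^{-1} is the transpose of MR,
# so it has the same number k of nonzero entries as MR
c(k = sum(MR), k.inverse = sum(t(MR)))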
Besides zero-one matrices, relations are represented using ordered pairs, arrow diagrams, and digraphs. If P and Q are finite sets and R is a relation from P to Q, the relation can be drawn as an arrow diagram: draw two ellipses for the sets P and Q, write down the elements of P and the elements of Q column-wise, and connect x ∈ P to y ∈ Q by an arrow whenever x R y. A relation on a single set A can instead be represented by a directed graph (digraph): the number of vertices in the graph is equal to the number of elements in the set from which the relation has been defined, and we connect vertex a to vertex b with an arrow, called an edge of the graph, going from vertex a to vertex b if and only if a R b. For each ordered pair (x, y) in the relation R there is a directed edge from vertex x to vertex y; an ordered pair (x, x) gives a self-loop on vertex x. By inspecting such a graph one can, for example, check whether a relation is reflexive.

We assume that the reader is already familiar with the basic operations on binary relations such as the union or intersection of relations; in matrix terms, the join r1 ⊕ r2 of two relations corresponds to the entrywise "or" of their zero-one matrices. Now we consider one more important operation called the composition of relations. Let A, B and C be three sets, let R be a relation from A to B and S a relation from B to C. The composite of R and S, denoted by S ∘ R, is the relation consisting of ordered pairs (a, c), where a ∈ A, c ∈ C, and for which there exists an element b ∈ B with (a, b) ∈ R and (b, c) ∈ S. When R is a relation on a set A, the composition of R with itself is always defined; R ∘ R is sometimes denoted by R^2, and similarly R^3 = R^2 ∘ R = R ∘ R ∘ R, and so on. In terms of matrices, the zero-one matrix of a composite is the Boolean (logical) product of the matrices of the two relations. Transitivity can be stated directly on the set of ordered pairs: if (a, b) is in the set and (b, c) is in the set, then (a, c) has to be in the set as well. A standard induction (assume R^n is symmetric and prove R^{n+1} is symmetric) shows that every power of a symmetric relation is symmetric. The set of binary relations on a set X (i.e. relations from X to X) together with (left or right) relation composition forms a monoid with zero, where the identity map on X is the neutral element and the empty set is the zero element.

Typical exercises of this kind ask whether the relation represented by a given zero-one matrix is reflexive, symmetric, an equivalence relation, or a partial order; ask for the matrix of the reflexive or symmetric closure of R; ask for powers such as S^2, found by matrix multiplication and left in matrix form; or define a relation in words and ask which pairs it contains, e.g. the relation on the set of all people where aRb means that a is younger than b, the relation where aRb means a − b ∈ Z, or the relation on pairs where (a, b) R (c, d) means a = c or b = d. As a worked instance, let r1 and r2 be relations on a set represented by the matrices Mr1 = [0 1 0; 1 1 1; 1 0 0] and Mr2 = [0 1 0; 0 1 1; 1 1 1]; a sketch of how their join and composite can be computed with matrix operations follows below.
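Here is a short R sketch of those operations on the matrices Mr1 and Mr2 just quoted. The composite is taken in the order r2 ∘ r1 (r1 applied first), matching the convention for S ∘ R stated above, and the Boolean product is written as an ordinary matrix product followed by thresholding; the variable names are ours.

Mr1 <- matrix(c(0, 1, 0,
                1, 1, 1,
                1, 0, 0), nrow = 3, byrow = TRUE)
Mr2 <- matrix(c(0, 1, 0,
                0, 1, 1,
                1, 1, 1), nrow = 3, byrow = TRUE)
# join r1 (+) r2: entrywise "or" of the two zero-one matrices
M.join <- pmax(Mr1, Mr2)
# Boolean product for the composite r2 o r1:
# entry (i, k) is 1 iff there is some j with Mr1[i, j] = 1 and Mr2[j, k] = 1
M.comp <- 1 * (Mr1 %*% Mr2 > 0)
# powers of a relation on a set, e.g. the matrix of r1^2 = r1 o r1
M.r1.squared <- 1 * (Mr1 %*% Mr1 > 0)
M.join
M.comp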
CommonCrawl
Buczolich, Zoltán Micro tangent sets of continuous functions. (English). Mathematica Bohemica, vol. 128 (2003), issue 2, pp. 147-167 MSC: 26A15, 26A24, 26A27, 28A78, 60J65 | MR 1995569 | Zbl 1027.26003 | DOI: 10.21136/MB.2003.134036 typical continuous function; Brownian motion; Takagi's function; Weierstrass's function Motivated by the concept of tangent measures and by H. Fürstenberg's definition of microsets of a compact set $A$ we introduce micro tangent sets and central micro tangent sets of continuous functions. It turns out that the typical continuous function has a rich (universal) micro tangent set structure at many points. The Brownian motion, on the other hand, with probability one does not have graph like, or central graph like micro tangent sets at all. Finally we show that at almost all points Takagi's function is graph like, and Weierstrass's nowhere differentiable function is central graph like. [1] P. Billingsley: Probability and Measure. Third edition. John Wiley, Chichester, 1995. MR 1324786 [2] K. Falconer: Fractal Geometry. John Wiley, Chichester, 1990. MR 1102677 | Zbl 0689.28003 [3] K. Falconer: Techniques in Fractal Geometry. John Wiley, Chichester, 1997. MR 1449135 | Zbl 0869.28003 [4] K. Falconer: Tangent fields and the local structure of random fields. J. Theoret. Probab. 15 (2002), no. 3, 731–750. DOI 10.1023/A:1016276016983 | MR 1922445 | Zbl 1013.60028 [5] K. Falconer: The local structure of random processes. Preprint. MR 1967698 | Zbl 1054.28003 [6] H. Fürstenberg: Ergodic Theory and the Geometry of Fractals, talk given at the conference Fractals in Graz, 2001, http://finanz.math.tu-graz.ac.at/$\sim $fractal. [7] B. R. Gelbaum: Modern Real and Complex Analysis. John Wiley, New York, 1995. MR 1325692 [8] G. H. Hardy: Weierstrass's non-differentiable function. Trans. Amer. Math. Soc. 17 (1916), 301–325. MR 1501044 [9] P. Humke, G. Petruska: The packing dimension of a typical continuous function is 2. Real Anal. Exch. 14 (1988–89), 345–358. MR 0995975 [10] S. Jaffard: Old friends revisited: the multifractal nature of some classical functions. J. Fourier Anal. Appl. 3 (1997), 1–22. DOI 10.1007/BF02647944 | MR 1428813 | Zbl 0880.28007 [11] S. V. Levizov: On the central limit theorem for series with respect to periodical multiplicative systems I. Acta Sci. Math. (Szeged) 55 (1991), 333–359. MR 1152596 | Zbl 0759.42018 [12] S. V. Levizov: Weakly lacunary trigonometric series. Izv. Vyssh. Uchebn. Zaved. Mat. (1988), 28–35, 86–87. MR 0938430 | Zbl 0713.42011 [13] N. N. Luzin: Sur les propriétés des fonctions mesurables. C. R. Acad. Sci. Paris 154 (1912), 1688–1690. [14] P. Mattila: Geometry of Sets and Measures in Euclidean Spaces. Cambridge University Press, 1995. MR 1333890 | Zbl 0819.28004 [15] P. Mattila: Tangent measures, densities, and singular integrals. Fractal geometry and stochastics (Finsterbergen, 1994), 43–52, Progr. Probab. 37, Birkhäuser, Basel, 1995. MR 1391970 | Zbl 0837.28006 [16] R. D. Mauldin, S. C. Williams: On the Hausdorff dimension of some graphs. Trans. Amer. Math. Soc. 298 (1986), 793–803. DOI 10.1090/S0002-9947-1986-0860394-7 | MR 0860394 [17] D. Preiss: Geometry of measures in $\mathbb{R}^{n}$: distribution, rectifiability, and densities. Ann. Math., II. Ser. 125 (1987), 537–643. DOI 10.2307/1971410 | MR 0890162 [18] D. Preiss, L. Zajíček: On Dini and approximate Dini derivates of typical continuous functions. Real Anal. Exch. 26 (2000/01), 401–412. MR 1825518 [19] S. Saks: Theory of the Integral. Second Revised (ed.), Dover, New York, 1964. 
MR 0167578 [20] L. Zajíček: On preponderant differentiability of typical continuous functions. Proc. Amer. Math. Soc. 124 (1996), 789–798. DOI 10.1090/S0002-9939-96-03057-2 | MR 1291796
CommonCrawl
Gravitational lensing in Newtonian physics

Famously, when Eddington attempted to measure gravitational lensing during the Eclipse, it was the measured magnitude of the lensing that gave gravity [pun, obviously, intended] to General Relativity - not the measurement of lensing itself. That is, Newtonian physics also predicted a lensing deflection, but only half of the deflection predicted by GR. Question is: Why? I've read a lot about this, and I can see how when one integrates the Newtonian acceleration along the photon path with constant |v|=c this gives the 'correct' Newtonian value - but intuitively, I can't wrap my head around it. Why should a massless photon be affected by gravity at all in Newtonian gravity? general-relativity gravitational-lensing newtonian-gravity

A photon is an entity defined in the context of a relativistic field theory, and so it doesn't really make sense to talk about the Newtonian bending of a photon. Necessarily, we need to substitute an analogous question that's sensible in the Newtonian framework. To do so, we can imagine a classical corpuscle of light--appropriately enough, a theory of light advanced by Newton himself. There are many problems with the Newtonian conception of light, but that's an issue for electromagnetism rather than gravity. Critically, for a test particle, the trajectory depends on the initial velocity only, and not the mass. So we don't even have to address the mass of the light corpuscle at all to talk about its trajectory, as the acceleration is the gradient of the gravitational potential and if considering force (or gravitational potential energy), the mass cancels out anyway. If you wish to bring in the mass explicitly, we can still think of the trajectory of a massless particle as a limit of trajectories of particles with masses tending to zero but having the same velocity — a trivial limit because they're all the same trajectory in the Newtonian theory. On the other hand, if we recognize that light carries momentum, a Newtonian light corpuscle shouldn't have zero mass, so the question of what to do with a genuinely massless particle evaporates. When people talk about the Newtonian deflection of light, they are typically considering a hyperbolic trajectory of a test particle at the speed of light under Newtonian gravity. If the angle between the asymptotes is $\theta$, then $\theta = \pi$ represents a completely straight trajectory unaffected by gravity, and the eccentricity is $$e = 1 + \frac{v_\infty^2R_p}{GM} = \frac{1}{\cos\left(\frac{\theta}{2}\right)} = \frac{1}{\sin\psi}\text{,}$$ where $R_p$ is the periapsis distance and $2\psi = \pi-\theta$ is the measure of deflection. On the scale of the solar system, it doesn't matter whether we set $v_\infty = c$ or anywhere else along the trajectory. For example, if the velocity at periapsis is $c$ instead, then $e\mapsto e-2$, a change that is negligible for light deflection due to the Sun, where $e > 10^{5}$. The total deflection is approximately $$2\psi \approx 2e^{-1} \approx \frac{2GM}{c^2R_p}\text{,}$$ which is half the correct general-relativistic prediction. Note that here we did not assume that the speed is constant along the hyperbolic orbit. That would not be consistent with Newtonian gravity. Rather, what we have is a situation where if $v = c$ anywhere along the trajectory, then the speed along any other point is so close to $c$ that it doesn't practically matter for considering light deflection.
Nathan Tuggy Stan LiouStan Liou

Newtonian treatments of the bending of light go back to Laplace who, in 1798, wrote about light escaping from massive bodies, i.e. black holes! See Appendix A of Hawking and Ellis "Large Scale Structure of Space-Time" where there is a nice translation of Laplace's paper. Newtonian treatments cannot properly deal with all aspects of light bending. Notably, the important difference between the 'Luminosity Distance' and the 'Angular Diameter Distance' of a cosmological object is only a feature of Einstein-style theories of gravitation. It is that difference which, for example, enables us to test different cosmological models (as in the Alcock-Paczynski test). See for example the paper of Anderson et al. in arXiv:1303.4666 for technical details. (This should probably have been a comment - but I don't have the rep for that). JonesTheAstronomerJonesTheAstronomer
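As a quick numerical check of the first answer's formula, the short R sketch below evaluates the Newtonian deflection 2GM/(c^2 R_p) for a ray grazing the Sun and compares it with the general-relativistic value 4GM/(c^2 R_p). The solar mass and radius used are standard reference figures assumed here, not numbers taken from the answers.

G       <- 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c.light <- 2.998e8     # speed of light, m/s
M.sun   <- 1.989e30    # solar mass, kg (assumed reference value)
R.sun   <- 6.957e8     # solar radius, m, taken as the periapsis of a grazing ray
rad2arcsec <- function(x) x * (180 / pi) * 3600
newtonian <- 2 * G * M.sun / (c.light^2 * R.sun)   # Newtonian deflection, radians
einstein  <- 4 * G * M.sun / (c.light^2 * R.sun)   # general-relativistic deflection, radians
eccentricity <- 1 + c.light^2 * R.sun / (G * M.sun)  # about 5e5, consistent with e > 1e5 above
rad2arcsec(c(newtonian = newtonian, einstein = einstein))  # roughly 0.87 and 1.75 arcseconds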
CommonCrawl
Title: Geometry, Algebra, Number Theory, and Their Information Technology Applications : Toronto, Canada, June, 2016, and Kozhikode, India, August, 2016
Author: Amir Akbary; Sanoli Gun
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2018.
ISBN/ISSN: 9783319973791 3319973797
OCLC: 1132095725
Series: Springer Proceedings in Mathematics & Statistics, 251
Format: eBook : Document : English

This volume contains proceedings of two conferences held in Toronto (Canada) and Kozhikode (India) in 2016 in honor of the 60th birthday of Professor Kumar Murty. The meetings were focused on several aspects of number theory: the theory of automorphic forms and their associated L-functions; arithmetic geometry, with special emphasis on algebraic cycles, Shimura varieties, and explicit methods in the theory of abelian varieties; and the emerging applications of number theory in information technology. Kumar Murty has been a substantial influence in these topics, and the two conferences were aimed at honoring his many contributions to number theory, arithmetic geometry, and information technology.

Subjects: Number theory; Geometry.
Access via SpringerLink. Title from title screen (viewed in 2018). 1 online resource (1 electronic text): PDF files.

Contents: Overview of the work of Kumar Murty (A. Akbary, S. Gun, M. Ram Murty) -- On the average value of a function of the residual index (A. Akbary, A. T. Felix) -- Applications of the square sieve to a conjecture of Lang and Trotter for a pair of elliptic curves over the rationals (S. Baier, V. M. Patankar) -- $R$-group and multiplicity in restriction for unitary principal series of $GSpin$ and $Spin$ (D. Ban, K. Choiy, D. Goldberg) -- The $2$-Class Tower of $\mathbb{Q}(\sqrt{-5460})$ (N. Boston, J. Wang) -- On the bad reduction of certain $U(2,1)$ Shimura varieties (E. de Shalit, E. Goren) -- Density modulo 1 of a sequence associated to a multiplicative function evaluated at polynomial arguments (J.-M. Deshouillers, M. Nasiri-Zare) -- Uniqueness Results for a class of $L$-functions (A. Dixit) -- Quadratic periods of meromorphic forms on punctured Riemann surfaces (P. Eskandari) -- On the local coefficients matrix for coverings of $\SL_2$ (F. Gao, F. Shahidi, D. Szpruch) -- Eisenstein series of weight one, $q$-averages of $0$-logarithm and periods of elliptic curves (D. R. Grayson, D. Ramakrishnan) -- On zeros of certain cusp Forms of integral weight for full modular group (M. Manickam, Sandeep E. M.) -- A note on Burgess bound (R. Munshi) -- A smooth Selberg sieve and applications (M. Ram Murty, A. Vatwani) -- Explicit arithmetic on abelian varieties (V. Kumar Murty, P. Sastry) -- Derived categories of moduli spaces of vector bundles on curves II (M. S. Narasimhan) -- Representations of an integer by some quaternary and octonary quadratic forms (B. Ramakrishnan, B. Sahu, A. K. Singh) -- A topological realization of the congruence subgroup kernel (J. Scherk) -- Fine Selmer groups and isogeny invariance (R. Sujatha, M. Witte) -- Distribution of a subset of non-residues modulo $p$ (by R. Thangadurai, V.
Kumar) -- On solving a generalized Chinese remainder theorem in the presence of remainder errors (G. Xu) -- Endomorphism algebras of abelian varieties with special reference to superelliptic Jacobians (Y. G. Zarhin). Edited by Amir Akbary, Sanoli Gun. Available online at https://doi.org/10.1007/978-3-319-97379-1 (Springer Proceedings in Mathematics & Statistics, ISSN 2194-1009).
CommonCrawl
Inerton: fundamental physics explained

Preliminary knowledge

Gravitation is the theory dealing with attraction of massive objects. Gravity is a force; it makes things move toward each other. Physics describes gravitation by using Newton's law of universal gravitation (formulated in 1687), in which the gravitational force of attraction between two objects with masses $M_1$ and $M_2$ separated by a distance $r$ has the form \begin{align} F = - G \frac{M_1 M_2}{r^2} \end{align} where $G = 6.674 \times 10^{-11}$ N m$^2$ kg$^{-2}$ is the gravitational constant. From expression (1) we can obtain the potential energy of gravitation \begin{align} V = - G \frac{M_1 M_2}{r} \end{align} and the gravitational potential generated by a mass $M$ in its surroundings \begin{align} U = - G \frac{M}{r}. \end{align} Let us mention here the integral form of Gauss' law for gravity, which states: \begin{align} \oint_{\partial S}\vec {g}\cdot d \vec{s} = -4 \pi GM \end{align} where $\partial S$ is any closed surface, $d \vec{s}$ is a vector whose magnitude is the area of an infinitesimal piece of the surface $\partial S$ and whose direction is the outward-pointing surface normal; $M$ is the total mass enclosed within the surface $\partial S$. The left-hand side of equation (4) is called the flux of the gravitational field; it is always negative (or zero), and never positive (although for electricity Gauss' law allows fluxes to be either positive or negative, because the charge can be either positive or negative, while mass can only be positive). The Gauss law (4) is interesting to us, as it introduces the gravitational field $\vec{g}$, which is a vector field that originates from the central point, i.e. the point of location of mass $M$.

Since 1916 Newton's law has been superseded by Einstein's theory of general relativity. General relativity is only required when there is a need for extreme precision, or when dealing with gravitation for very massive objects. General relativity, or the general theory of relativity, is the geometric theory of gravitation published by David Hilbert and Albert Einstein in 1916. The theory unifies special relativity and Newton's law of universal gravitation and describes gravity as a property of the geometry of space and time, or space-time. In particular, the curvature of space-time is directly related to the four momenta (mass-energy and linear momentum) of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of 10 differential equations. General relativity became very popular due to the prediction of three phenomena (two of them were very new), which were confirmed experimentally in 1919-1920. Namely, general relativity predicted:

1. The motion of Mercury's perihelion by an amount $\Delta \phi = 6\pi GM_{\rm Sun} /(Lc^2)$ where $M_{\rm Sun}$ is the Sun's mass, $L$ is the focal parameter, and $c$ is the velocity of light;
2. The bending of a light ray by the Sun, i.e. the following angle deviation of the ray from the direct line was derived: $\Delta \phi = 4 GM_{\rm Sun} /(r c^2)$ where $M_{\rm Sun}$ is the Sun's mass, $c$ is the velocity of light and $r$ is the radial distance from the centre of the Sun to the point where the light ray is bending;
3. The gravitational red shift of spectral lines $\Delta \nu = - GM \nu_0 /(r c^2)$ on the surface of the massive body whose mass is $M$, where $r$ is the radius of the body and $\nu_0$ is the frequency of the generated light without the presence of gravity.

General relativity also predicted gravitational time dilation and gravitational time delay, which were observed as well. However, unanswered questions remain, the most fundamental one being how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity. Nevertheless, it seems this challenge cannot be resolved in principle because of principal differences in approaches to physical laws by microscopic quantum physics and phenomenological general relativity. Besides, general relativity does not look like a true physical theory but rather like an abstract mathematical theory. In the case of general relativity, which denied the classical ether and introduced an abstract vague vacuum, we can distinguish five problems, conceptual difficulties, that do not have resolutions in the framework of the relativity formalism:

1. general relativity is founded on the basic Newtonian term $- GM/r$, but cannot explain its origin;
2. a massive object can influence space-time but cannot be derived from it, because the unknown and undetermined parameter mass is entirely separated from the phenomenological notion of space-time;
3. the formalism of relativity fails on a microscopic scale, as it does not pay attention to the wave nature of matter; at distances comparable to or less than the object's de Broglie wavelength $\lambda$ the formalism of general relativity has to give way to an approach based on microscopic consideration;
4. general relativity does not offer any sort of particles/quasi-particles that would be able to realize short-range action in the gravitational attraction of objects, and hence it is a theory based on an action-at-a-distance phenomenological approach, the same as the Newtonian theory (and also quantum mechanics, whose long-range action also falls within the range of its conceptual difficulties); regarding the quasi-particles called gravitons we can say that, based on the studies of other researchers [1] as well as experimental results [2], these are abstract mathematical objects absent in real nature;
5. light, which plays an exceptionally important role in relativity, has to be massless in the theory; however, light carriers, photons, transfer momentum and energy, and therefore by the principle of equivalence photons must have non-zero mass.

Sub-microscopic consideration

1. Derivation of Newton's gravitational law for a canonical particle

The submicroscopic concept based on the constitution of physical space and submicroscopic mechanics allows a detailed theory of gravity to be derived which suggests a radically new approach to the problem of quantum gravity and allows the derivation of Newton's gravitational law from first subatomic principles. Such an approach completely removes all difficulties that concern the action-at-a-distance phenomenology by introducing inertons as carriers of the interaction between massive objects.
Since any motion in the tessel-lattice generates clouds of inertons, i.e. mass excitations of the real physical space, these inertons may be considered as the actual carriers of the gravitational field, much as the field $\vec{g}$ appears in Gauss's approach (4) to the problem of gravity. The cloud of inertons surrounding the particle spreads out to a range $\Lambda = \lambda c/\upsilon$ from the particle center, where $\lambda$ is the particle's de Broglie wavelength and $\upsilon$ and $c$ are the velocities of the particle and light, respectively. Since inertons transfer fragments of the particle's mass, they also play the role of carriers of the gravitational properties of the particle. First of all we should understand how inertons emitted by the particle come back to it, returning fragments of its mass as well as the velocity. The behaviour of the particle's inertons can be studied in the framework of the Lagrangian [3,4] \begin{align} {L} = -m_0 c^2 \Big\{ \frac {T^2}{2m_0^2} {\dot m}^2 + \frac{T^2}{2\Lambda^2} {\dot {\vec \xi}}^{{\kern 2pt}2} - \frac{T}{m_0} {\dot m} \nabla {\vec \xi} \Big\}^{1/2}. \end{align} Here $m ({\vec r}, t)$ is the current mass of the {particle-inerton cloud} system; $\vec \xi ({\vec r}, t)$ is the variable that describes a local distortion of the tessel-lattice, which can be called the rugosity or tension (see in inerton); $T$ is the time period of collisions of the particle and its inerton cloud. The Euler-Lagrange equation for the variables $m$ and $\vec \xi$ is \begin{align} \frac{\partial}{\partial t}\frac{\partial L}{\partial {\dot q}} - \frac{\delta L}{\delta q} = 0. \end{align} The equations for $m$ and $\vec \xi$ become \begin{align} \frac {\partial^2 m}{\partial t^2} - \frac{m_0}{T} \nabla {\dot {\vec \xi}} =0; \end{align} \begin{align} \frac {\partial^2 {\vec \xi}}{\partial t^2} - \frac{\Lambda^2}{m_0 T} \nabla {\dot m} =0. \end{align} Taking the initial and boundary conditions as well as the radial symmetry into account, we can obtain the following solutions to equations (7) and (8): \begin{align} m(r, t) = C_1 \frac{m_0}{r} \cos \frac{\pi r}{2\Lambda} \Big| \cos \frac{\pi t}{2T} \Big|, \end{align} \begin{align} \xi(r, t) = C_2 \frac{\xi_0}{r} \sin \frac{\pi r}{2 \Lambda}(-1)^{[t/T]} \Big| \sin \frac{\pi t}{2 T} \Big|. \end{align} These solutions exhibit the dependence $1/r$, which is typical for standing spherical waves. The solution (9) for the mass $m$ shows that at a distance $r \ll \Lambda$ the time-averaged distribution of the mass of inertons along a radial ray originating from the particle becomes \begin{align} m(r) \approx l_{\rm f} \frac{m_0}{r}. \end{align} In this region the rugosity (or tension) of space, as follows from expression (10), is $\xi \approx 0$. When the local deformation is distributed in space around the particle, it forms a deformation potential $\propto 1/r$ that spreads up to the distance $r= \Lambda$ from the particle's kernel-cell. In the range covered by the deformation potential, cells of the tessel-lattice are found in the contracted state, and it is this state of space which is responsible for the phenomenon of gravitational attraction. In terms of physics, the distribution (11) is replaced with Newton's gravitational potential \begin{align} U(r) = - G \frac{m_0}{r} \end{align} where the gravitational constant $G$ plays the role of a dimensional constant. These equations, (11) and (12), are rather formal, as the quantum mechanical behaviour of particles prevails at such a scale.
Nevertheless, this consideration allows us to understand the inner reasons responsible for the formation of Newton's law of gravitation for macroscopic objects.

2. Newton's law of gravitation for a macroscopic body

Fig. 1. Overlapping of inerton clouds of particles means the appearance of an excess mass in the system studied.

An object which consists of many particles (a solid, a planet, or a star) experiences vibrations of its entities (atoms, ions, particles). Entities vibrate in the neighborhood of their equilibrium positions and/or move to new positions. These movements produce inerton clouds around the appropriate particles. Indeed, Figure 1 shows a set of particles surrounded with their inerton clouds. These clouds overlap. Note that the particle together with its cloud of inertons (which exist in the real physical space) manifests itself in conventional quantum mechanics (whose formalism has been developed in an abstract phase space) as the so-called $\psi$-wave function. If the body studied consists of N particles, it will be characterised by N/2 modes of normal vibrations (i.e. harmonics). When particles vibrate, their clouds of inertons vibrate too, and hence the same occurs with the overlapping zones: the set of states of overlapped zones becomes equal to N/2 as well. What happens in the zones of overlapped inerton clouds? In these places the clouds become denser. In other words, these zones become more massive. Why? Because by definition the inerton is a carrier of mass. At the same time, by the definition of mass, a gain in mass means that the deformation of the cell at which the mass excitation (i.e. inerton) is located at the moment becomes larger; that is, the cell becomes more shrunken, its volume slightly reduced. Overlapping of inerton clouds results in the formation of a total inerton cloud of the body [5]. Such overlapping is known in nuclear physics as the mass defect, but as we can see the phenomenon is general in quantum physics: it takes place in any system where an overlapping of $\psi$-wave functions (i.e. inerton clouds) occurs, and the system's potential energy increases. The spectrum of inertons of such a mass defect is similar to the spectrum of phonons, as inertons immediately appear when entities move from their initial position, which is discussed in submicroscopic mechanics (we may say that a body of phonons is filled with inerton carriers). For instance, if we have a solid sphere with a radius $R_{\rm sph}$, which consists of $N_{\rm sph}$ atoms, the spectrum of acoustic waves is composed of $N_{\rm sph}/2$ waves with the wavelengths $\lambda_n = 2an$ where $a$ is the mid-distance between nearest atoms and $n = 1, 2, 3, ... , N_{\rm sph}/2$. At the same time, inertons that accompany acoustically vibrating atoms also produce their own spectrum, and the wavelengths of these collective inertonic vibrations can be estimated by the expression $\Lambda_n = 2an {\kern 2pt} c/\upsilon_{\rm sound}$. Also note that the behaviour of these inerton oscillations obeys the law of standing spherical waves, i.e. the front of the inerton wave must be proportional to the inverse distance from the source irradiating the wave, $1/r$. What do these waves transmit? Obviously, local deformations of space, which gradually decrease with $r$ by the law $1/r$; in other words, cells of space are smaller in size near the body, and their size approaches its equilibrium value at the distance of $\Lambda$ from the body, where the tessel-lattice is found in the degenerate state.
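Before the numerical estimate given in the next paragraph, here is a small R sketch of the same order-of-magnitude calculation for the longest collective inerton mode, $\Lambda_{N/2} = 2a(N/2)c/\upsilon_{\rm sound}$. The atom count, atom spacing and sound velocity are the round figures used in the text, and the result should be read only as an order of magnitude.

N.atoms <- 1e22      # atoms in a 1 cm^3 solid (round figure from the text)
a       <- 0.5e-9    # mid-distance between nearest atoms, m
v.sound <- 1e3       # order-of-magnitude velocity of sound, m/s
c.light <- 3e8       # speed of light, m/s
# longest acoustic wavelength lambda_n = 2*a*n with n = N/2, and the
# corresponding inerton wavelength Lambda_n = 2*a*n * c / v_sound
lambda.max <- 2 * a * (N.atoms / 2)
Lambda.max <- lambda.max * c.light / v.sound
Lambda.max   # of order 10^17 - 10^18 m for these round inputs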
These standing inerton waves create a deformation potential around the body. For instance, a solid sphere with volume 1 cm$^3$ includes around $10^{22}$ atoms; estimating the velocity of sound $\upsilon_{\rm sound} \approx 10^3$ m/s (order of magnitude) and the distance between atoms $a=0.5$ nm, we obtain for the wavelength of the longest inerton wave $\Lambda_{N_{\rm sph}/2} \sim 10^{17}$ m. Thus, up to this distance the inerton field of the solid sphere is able to propagate in the form of a standing spherical inerton wave. To the solid sphere studied we may now apply the same consideration which has been carried out above for the gravity of a particle. In particular, expression (12) is also applicable for the case of a massive object. Therefore, we were able to derive Newton's potential (12) in terms of short-range action provided by inertons, carriers of the mass properties of objects. Being averaged in time, the mass field around the body studied can be considered as a stationary gravitational potential (3). The theory presented here sheds light on the principle of equivalence, which proclaims the equivalence of gravitational and inertial masses: $m_{\rm grav} = m_{\rm inert}$. Namely, this equality, which holds in the rest-frame of the particle in question, becomes invalid in a moving reference frame. In the quantum context, this equality should be transformed to the principle of equivalence of the phases of gravitational and inertial waves, $\varphi_{\rm grav} = \varphi_{\rm inert}$. This correlation ties up the gravitational and inertial energies of the particle and also shows that the gravitational mass is completely allocated in the inertial wave that guides the particle. De Haas [6] was the first to come to this conclusion when comparing Mie's variational principle and de Broglie's harmony of phases of a moving particle. So the matter waves consist of the kernel (particle) and its inerton cloud, which exchange velocity, mass and hence energy and momentum; this exchange occurs owing to the strong interaction of the particle and its inertons with the tessel-lattice, and it is this interaction that causes the induction of the gravitational potential in the range of spreading of the particle's/object's inertons.

3. Correction to Newton's law of gravitation [7]

The sub-microscopic approach points to the fact that the gravitational interaction between objects must consist of two terms: (i) the radial inerton interaction between two masses $M$ and $m$, which results in the classical Newton gravitational law (2), and (ii) the tangential inerton interaction between the masses, which is caused by the tangential component of the motion of the test mass $m$ and which is characterized by the correction \begin{align} \delta V = -G \frac{Mm}{r} {\kern 2pt} \frac{r^2 {\dot \phi}^2}{c^2}. \end{align} Note that the existence of such a correction is in line with a remark by Poincaré [8], who stated that the expression for the attraction should include two components: one parallel to the vector that joins the positions of both interacting objects and the second parallel to the velocity of the attracted object. Thus the velocity of an object must influence the value of its gravitational potential.
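To get a feel for the size of this velocity-dependent term, note that the factor $r^2 {\dot \phi}^2/c^2$ in the correction (13) is just the squared ratio of the tangential orbital speed to the speed of light. The short R sketch below evaluates it for Mercury; the mean orbital speed used is a standard reference value assumed for this illustration, not a number taken from the text.

v.mercury <- 47.4e3   # Mercury's mean orbital speed, m/s (assumed reference value)
c.light   <- 2.998e8  # speed of light, m/s
correction.factor <- (v.mercury / c.light)^2
correction.factor   # about 2.5e-8, so the tangential term is a tiny correction to -GMm/r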
By using the total expression for the gravitation \begin{align} V= - G \frac{Mm}{r} \cdot \Big( 1+ \frac{r^2 {\dot \phi}^2}{c^2} \Big) \end{align} we can study four problems that were investigated in the framework of general relativity, namely: 1) the motion of Mercury's perihelion; 2) the bending of light by the sun; 3) the gravitational red shift of spectral lines; 4) the gravitational time delay effect (the Shapiro time delay effect) [10]. Expression (14) allows us to examine the three problems in the framework close to that carried out in terms of classical physics, not general relativity. Expression (14) enables the immediate and easy derivation of the same equations of motion that general relativity derived by using complicated geodesic equations. That is why having the same equations describing these three problems, we can use the same solutions pointed out in the above section Preliminary knowledge. Therefore it does not make sense to use the complicated mathematics of general relativity to solve this or that challenge. The physics of the phenomena studied is hidden in the potential energy (14), which describes the interaction of two attracting objects. This approach also clarifies the situation with so-called "black holes", which were introduced in physics at the end of the 1960s. The approach described above shows that a point mass $M$ at rest possesses the conventional Minkowski flat-space metric, i.e. is exactly exemplified by Newton's gravitational potential (12) and hence does not show any singularity. But this metric disturbed by a smaller mass $m$ changes to the Schwarzschild metric (or maybe another metric) in the location of the smaller mass. In other words, a point mass does not have any peculiarity in its metric, its metric is flat/linear, although deformed in line with the radial symmetry [9]. The sub-microscopic consideration of gravity suggests no reasons to hypothesize a "black hole" solution in its modern classical sense. Only an outside source of the gravitational field is able to disturb the flat metric of a heavy central mass. Thus researchers dealing with the formalism of general relativity must be extremely careful in application of their theoretical results to the description of the surrounding. At last, the availability of standing spherical inerton waves around massive objects, which provide short action, made it possible finally to solve the problem of so-called "dark matter" [10]. [1] A. Loinger, On black holes and graviational waves (La Goliardica Pavese, 2002); More on BH's and GW's. III (La Goliardica Pavese, 2007). [2] V. Krasnoholovets and V. Byckov, Real inertons against hypothetical gravitons. Experimental proof of the existence of inertons, Indian Journal of Theoretical Physics 48, no. 1, 1-23 (2000) (also http://arXiv.org/abs/quant-ph/0007027). [3] V. Krasnoholovets, Gravitation as deduced from submicroscopic quantum mechanics, http://arXiv.org/abs/hep-th/0205196. [4] V. Krasnoholovets, Reasons for the gravitational mass and the problem of quantum gravity, in Ether, Space-Time and Cosmology, Vol. 1, Eds.: M. Duffy, J. Levy and V. Krasnoholovets (PD Publications, Liverpool, 2008), pp. 419-450 (ISBN 1 873 694 10 5) (also http://arxiv.org/abs/1104.5270). [5] V. Krasnoholovets, On variation in mass of entities in condensed media, Applied Physics Research 2, no. 1, 46-59 (2010), ISSN: 1916-9639; E-ISSN: 1916-9647 (direct access http://ccsenet.org/journal/index.php/apr/article/view/4287). [6] E. P. J. 
de Haas, The combination of de Broglie's harmony of the phases and Mie's theory of gravity results in a principle of equivalence for quantum gravity, Annales de la Fondation Louis de Broglie 29, no. 4, 707-726 (2004). [7] V. Krasnoholovets, On microscopic interpretation of phenomena predicted by the formalism of general relativity, in: Ether Space-Time and Cosmology, Vol. 2: New Insights into a Key Physical Medium. Eds.: M. C. Duffy, J. Lévy (Apeiron, 2009), pp. 417-431 (Publisher: C. Roy Keys Inc.; Apeiron. ISBN: 0973291184; 978-0973291186); in Apeiron 16, no. 3, 418-438 (2009) (direct access http://redshift.vif.com/JournalFiles/V16NO3PDF/V16N3KRA.pdf). [8] H. Poincaré, Sur la dynamique de l'électron, Rendiconti del Circolo matematico di Palermo 21, 129-176 (1906); also: Oeuvres, t. IX, pp. 494-550 {also in Russian translation: А. Пуанкаре, Избранные труды (H. Poincaré, Selected Transactions), ed. N. N. Bogolubov (Nauka, Moscow, 1974), vol. 3, pp. 429-486}. [9] V. Krasnoholovets, On the gravitational time delay effect and the curvature of space, submitted and accepted. [10] V. Krasnoholovets, Dark matter as seen from the physical point of view, Astrophysics and Space Science 335, No. 2, 619-627 (2011); http://www.springerlink.com/content/p65427342245j2v3/ (a pdf file of the paper can be downloaded from the web site http://inerton.kiev.ua/35_Krasn_DM-Asrtophys_&_Space_Science.pdf).
CommonCrawl
R Tutorial
10. Calculating p Values
10.1. Calculating a Single p Value From a Normal Distribution
10.2. Calculating a Single p Value From a t Distribution
10.3. Calculating Many p Values From a t Distribution
10.4. The Easy Way

Here we look at some examples of calculating p values. The examples are for both normal and t distributions. We assume that you can enter data and know the commands associated with basic probability. We first show how to do the calculations the hard way, and then show an easier way to get the same results: the last method makes use of the t.test command and demonstrates a simpler way to calculate a p value.

10.1. Calculating a Single p Value From a Normal Distribution

We look at the steps necessary to calculate the p value for a particular test. In the interest of simplicity we only look at a two sided test, and we focus on one example. Here we want to show that the mean is not close to a fixed value, a.

\[ \begin{align}\begin{aligned}H_o: \mu_x & = & a,\\H_a: \mu_x & \neq & a,\end{aligned}\end{align} \]

The p value is calculated for a particular sample mean. Here we assume that we obtained a sample mean, x, and want to find its p value. It is the probability that we would obtain a sample mean whose Z-score is greater than the absolute value of the observed Z-score or less than the negative of that absolute value. For the special case of a normal distribution we also need the standard deviation. We will assume that we are given the standard deviation and call it s. The calculation for the p value can be done in several ways. We will look at two ways here. The first way is to convert the sample means to their associated Z-score. The other way is to simply specify the standard deviation and let the computer do the conversion. At first glance it may seem like a no brainer, and we should just use the second method. Unfortunately, when using the t-distribution we need to convert to the t-score, so it is a good idea to know both ways. We first look at how to calculate the p value using the Z-score. The Z-score is found by assuming that the null hypothesis is true, subtracting the assumed mean, and dividing by the theoretical standard deviation. Once the Z-score is found, the probability that the value could be less than the Z-score is found using the pnorm command. This is not enough to get the p value. If the Z-score that is found is positive then we need to take one minus the associated probability. Also, for a two sided test we need to multiply the result by two. Here we avoid these issues and ensure that the Z-score is negative by taking the negative of the absolute value. We now look at a specific example. In the example below we will use a value of a of 5, a standard deviation of 2, and a sample size of 20.
We then find the p value for a sample mean of 7:

> a <- 5
> s <- 2
> n <- 20
> xbar <- 7
> z <- (xbar-a)/(s/sqrt(n))
> z
[1] 4.472136
> 2*pnorm(-abs(z))
[1] 7.744216e-06

We now look at the same problem only specifying the mean and standard deviation within the pnorm command. Note that for this case we cannot so easily force the use of the left tail. Since the sample mean is more than the assumed mean we have to take two times one minus the probability:

> 2*(1-pnorm(xbar,mean=a,sd=s/sqrt(20)))
[1] 7.744216e-06

10.2. Calculating a Single p Value From a t Distribution

Finding the p value using a t distribution is very similar to using the Z-score as demonstrated above. The only difference is that you have to specify the number of degrees of freedom. Here we look at the same example as above but use the t distribution instead:

> t <- (xbar-a)/(s/sqrt(n))
> t
[1] 4.472136
> 2*pt(-abs(t),df=n-1)
[1] 0.0002611934

We now look at an example where we have a univariate data set and want to find the p value. In this example we use one of the data sets given in the data input chapter. We use the w1.dat data set:

> w1 <- read.csv(file="w1.dat",sep=",",head=TRUE)
> summary(w1)
 Min.   :0.130
 1st Qu.:0.480
 Median :0.720
 Mean   :0.765
 3rd Qu.:1.008
 Max.   :1.760
> length(w1$vals)
[1] 54

Here we use a two sided hypothesis test,

\[ \begin{align}\begin{aligned}H_o: \mu_x & = & 0.7,\\H_a: \mu_x & \neq & 0.7.\end{aligned}\end{align} \]

So we calculate the sample mean and sample standard deviation in order to calculate the p value:

> t <- (mean(w1$vals)-0.7)/(sd(w1$vals)/sqrt(length(w1$vals)))
> 2*pt(-abs(t),df=length(w1$vals)-1)
[1] 0.21204

10.3. Calculating Many p Values From a t Distribution

Suppose that you want to find the p values for many tests. This is a common task and most software packages will allow you to do this. Here we see how it can be done in R. Here we assume that we want to do a one-sided hypothesis test for a number of comparisons. In particular we will look at three hypothesis tests. All are of the following form:

\[ \begin{align}\begin{aligned}H_o: \mu_1 - \mu_2 & = & 0,\\H_a: \mu_1 - \mu_2 & \neq & 0.\end{aligned}\end{align} \]

We have three different sets of comparisons to make:

Comparison 1
             Mean    Std. Dev.   Number (pop.)
  Group I    10      3           300
  Group II   10.5    2.5         230
Comparison 2
  Group I    12      4           210
  Group II   13      5.3         340
Comparison 3
  Group I    30      4.5         420
  Group II   28.5    3           400

For each of these comparisons we want to calculate a p value. For each comparison there are two groups. We will refer to group one as the group whose results are in the first row of each comparison above. We will refer to group two as the group whose results are in the second row of each comparison above. Before we can do that we must first compute a standard error and a t-score. We will find general formulae, which is necessary in order to do all three calculations at once. We assume that the means for the first group are defined in a variable called m1. The means for the second group are defined in a variable called m2. The standard deviations for the first group are in a variable called sd1. The standard deviations for the second group are in a variable called sd2. The number of samples for the first group are in a variable called num1. Finally, the number of samples for the second group are in a variable called num2. With these definitions the standard error is the square root of (sd1^2)/num1+(sd2^2)/num2. The associated t-score is m1 minus m2 all divided by the standard error.
The R commands to do this can be found below:

> m1 <- c(10,12,30)
> m2 <- c(10.5,13,28.5)
> sd1 <- c(3,4,4.5)
> sd2 <- c(2.5,5.3,3)
> num1 <- c(300,210,420)
> num2 <- c(230,340,400)
> se <- sqrt(sd1*sd1/num1+sd2*sd2/num2)
> t <- (m1-m2)/se

To see the values just type in the variable name on a line alone:

> m1
[1] 10 12 30
> m2
[1] 10.5 13.0 28.5
> sd1
[1] 3.0 4.0 4.5
> num1
[1] 300 210 420
> se
[1] 0.2391107 0.3985074 0.2659216
> t
[1] -2.091082 -2.509364  5.640761

To use the pt command we need to specify the number of degrees of freedom. This can be done using the pmin command. Note that there is also a command called min, but it does not work the same way. You need to use pmin to get the correct results. The numbers of degrees of freedom are pmin(num1,num2)-1. So the p values can be found using the following R command:

> pt(t,df=pmin(num1,num2)-1)
[1] 0.01881168 0.00642689 0.99999998

If you enter all of these commands into R you should have noticed that the last p value is not correct. The pt command gives the probability that a score is less than the specified t. The t-score for the last entry is positive, and we want the probability that a t-score is bigger. One way around this is to make sure that all of the t-scores are negative. You can do this by taking the negative of the absolute value of the t-scores:

> pt(-abs(t),df=pmin(num1,num2)-1)
[1] 1.881168e-02 6.426890e-03 1.605968e-08

The results from the command above should give you the p values for a one-sided test. It is left as an exercise how to find the p values for a two-sided test.

10.4. The Easy Way

The methods above demonstrate how to calculate the p values directly, making use of the standard formulae. There is another, more direct way to do this using the t.test command. The t.test command takes a data set for an argument, and the default operation is to perform a two sided hypothesis test.

> x = c(9.0,9.5,9.6,10.2,11.6)
> t.test(x)

        One Sample t-test

data:  x
t = 22.2937, df = 4, p-value = 2.397e-05
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
  8.737095 11.222905
sample estimates:
mean of x
     9.98

> help(t.test)

That was an obvious result. If you want to test against a different assumed mean then you can use the mu argument:

> t.test(x,mu=10)
t = -0.0447, df = 4, p-value = 0.9665
alternative hypothesis: true mean is not equal to 10

If you are interested in a one sided test then you can specify which test to employ using the alternative option:

> t.test(x,mu=10,alternative="less")
alternative hypothesis: true mean is less than 10
-Inf 10.93434

The t.test() command also accepts a second data set to compare two sets of samples. The default is to treat them as independent sets, but there is an option to treat them as dependent data sets. (Enter help(t.test) for more information.) To test two different samples, the first two arguments should be the data sets to compare:

> y=c(9.9,8.7,9.8,10.5,8.9,8.3,9.8,9.0)
> t.test(x,y)

        Welch Two Sample t-test

data:  x and y
t = 1.1891, df = 6.78, p-value = 0.2744
alternative hypothesis: true difference in means is not equal to 0
-0.6185513  1.8535513
mean of x mean of y
   9.9800    9.3625

R Tutorial by Kelly Black is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (2015). Based on a work at http://www.cyclismo.org/tutorial/R/.
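Following up on the exercise at the end of section 10.3, one possible sketch (assuming the vectors t, num1 and num2 from that section are still in the workspace) simply doubles the one-sided tail areas:

> # two-sided p values: double the smaller tail area for each comparison
> 2*pt(-abs(t),df=pmin(num1,num2)-1)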
By work of Gee--Geraghty and myself, one can transfer Serre weights from the maximal compact subgroup of an inner form $D^*$ of $GL(n)$ to a maximal compact subgroup of $GL(n)$. Because of the congruence properties of the Jacquet--Langlands correspondence this transfer is compatible with the Breuil--Mezard formalism, which allows one to extend the Serre weight conjectures to $D^*$ (at least for a tame and generic residual representation). This talk aims to explain all of the above and to discuss a possible generalization to inner forms of unramified groups.

Rahul Krishna
A relative trace formula comparison for the global Gross-Prasad conjecture for orthogonal groups

The global Gross-Prasad conjecture (really its refinement by Ichino and Ikeda) is a remarkable conjectural formula generalizing Waldspurger's formula for the central value of a Rankin-Selberg $L$ function. I will explain a relative trace formula approach to this conjecture, akin in spirit to the successful comparison for unitary groups. The approach relies on a somewhat strange matching of orbits, and on two local conjectures of smooth transfer and fundamental lemma type, which I will formulate. If time permits, I will discuss some recent evidence for these local identities in some low rank cases.

Eric Stubley
Class Groups, Congruences, and Cup Products

The structure of class groups of number fields can be computed in some cases with explicit congruence conditions, for example as in Kummer's criterion which relates the $p$-part of the class group of the $p$-th cyclotomic field to congruences of Bernoulli numbers mod $p$. For $p$ and $N$ prime with $N = 1$ mod $p$, a similar result of Calegari and Emerton relates the rank of the $p$-part of the class group of $\mathbb{Q}(N^{1/p})$ to whether or not a certain quantity (Merel's number) is a $p$-th power mod $N$. We study this rank by building off of an idea of Wake and Wang-Erickson, namely to relate elements of the class group to the vanishing of certain cup products in Galois cohomology. Using this idea, we prove new bounds on the rank in terms of similar $p$-th power conditions, and we give exact characterizations of the rank for small $p$. This talk will aim to explain the interplay between ranks of class groups, explicit congruences, and cup products in Galois cohomology. This is joint work with Karl Schaefer.

Brian Smithling
On Shimura varieties for unitary groups

Shimura varieties attached to unitary similitude groups are a well-studied class of Shimura varieties of PEL type (i.e., admitting moduli interpretations in terms of abelian varieties with additional structure).
There are also natural Shimura varieties attached to (honest) unitary groups; these lack a moduli interpretation, but they have other advantages (e.g., they give rise to interesting cycles of the sort that appear in the arithmetic Gan-Gross-Prasad conjecture). I will describe some variant Shimura varieties which enjoy good properties from both of these classes. This is joint work with M. Rapoport and W. Zhang.

Shai Evra
Ramanujan Conjectures and Density Theorems

The Generalized Ramanujan Conjecture (GRC) for the group $GL(n)$ is a central open problem in modern number theory. The (GRC) is known to imply the solution of many Diophantine problems associated to arithmetic congruence subgroups of $GL(n)$. One can also state analogous (Naive) Ramanujan Conjectures (NRC) for other reductive groups $G$, whose validity would imply various applications for the congruence subgroups of $G$. However, already in the 70s Howe and Piatetski-Shapiro proved that the (NRC) fails even for the class of classical split and quasi-split groups. In the 90s Sarnak and Xue put forth the conjecture that a Density Hypothesis (DH) version of the (NRC) should hold, and that these Density Hypotheses can serve as a replacement of the (NRC) in many applications. In this talk I will describe the (GRC), (NRC) and (DH), and explain how to prove the (DH) for certain classical groups, by invoking deep and recent results coming from the Langlands program.

Mathilde Gerbelli-Gauthier
Cohomology of Arithmetic Groups and Endoscopy

How fast do Betti numbers grow in a congruence tower of compact arithmetic manifolds? The dimension of the middle degree of cohomology is proportional to the volume of the manifold, but away from the middle the growth is known to be sub-linear in the volume. I will explain how automorphic representations and the phenomenon of endoscopy provide a framework to understand and quantify this slow growth. Specifically, I will discuss how to obtain explicit bounds in the case of unitary groups using the character identities appearing in Arthur's stable trace formula. This is joint work in progress with Simon Marshall.

Jessica Fintzen
From representations of p-adic groups to congruences of automorphic forms

The theory of automorphic forms and the global Langlands program have been very active research areas for the past 30 years. Significant progress has been achieved by developing intricate geometric methods, but most results to date are restricted to general linear groups (and general unitary groups). In this talk I will present new results about the representation theory of p-adic groups and demonstrate how these can be used to obtain congruences between arbitrary automorphic forms and automorphic forms which are supercuspidal at p. This simplifies earlier constructions of attaching Galois representations to automorphic representations, i.e. the global Langlands correspondence, for general linear groups. Moreover, our results apply to general p-adic groups and have therefore the potential to become widely applicable beyond the case of the general linear group. This is joint work with Sug Woo Shin.

Andrea Dotto
Functoriality of Serre weights
Communications on Pure & Applied Analysis, July 2020, 19(7): 3697-3722. doi: 10.3934/cpaa.2020163

Bound state positive solutions for a class of elliptic system with Hartree nonlinearity

Guofeng Che (1), Haibo Chen (2) and Tsung-fang Wu (3)

1. School of Applied Mathematics, Guangdong University of Technology, Guangzhou 510006, Guangdong, China
2. School of Mathematics and Statistics, Central South University, Changsha 410083, Hunan, China
3. Department of Applied Mathematics, National University of Kaohsiung, Kaohsiung 811, Taiwan

Received August 2019. Revised January 2020. Published April 2020.

Fund Project: H. Chen was supported by the National Natural Science Foundation of China (Grant No. 11671403). T. F. Wu was supported in part by the Ministry of Science and Technology, Taiwan (Grant No. 108-2115-M-390-007-MY2).

In this paper, we are concerned with the following two-component system of Schrödinger equations with Hartree nonlinearity:
$ \begin{equation*} \begin{cases} -\varepsilon ^{2}\Delta u+V_{1}\left( x\right) u+\lambda _{1}\left(I_{\varepsilon }\ast |u|^{p+1}\right)|u|^{p-1}u\\ \qquad\quad\; = \left(\mu _{1}|u|^{2p}+\beta(x) |u|^{q-1}|v|^{q+1}\right)u, & \text{in }\mathbb{R}^{N}, \\ -\varepsilon ^{2}\Delta v+V_{2}\left( x\right) v+\lambda _{2}\left(I_{\varepsilon }\ast |v|^{p+1}\right)|v|^{p-1}v\\ \qquad\quad\; = \left(\mu _{2}|v|^{2p}+\beta(x) |v|^{q-1}|u|^{q+1}\right)v, & \text{in }\mathbb{R}^{N}\,, \\ u,v\in H^{1}(\mathbb{R}^{N}),\quad u,v>0, \end{cases} \end{equation*} $
where $ 0<\varepsilon \ll 1 $ is a small parameter, $ 0<q\leq p $, $ I_{\varepsilon}(x) = \frac{\Gamma((N-\alpha)/2)}{\Gamma(\alpha/2)\pi^{\frac{N}{2}}2^{\alpha}\varepsilon^{\alpha}}\frac{1}{|x|^{N-\alpha}} $ for $ x\in\mathbb{R}^{N}\setminus\{0\} $ with $ \alpha\in(0,N) $, $ N = 3,4,5 $, and $ \lambda _{l}\geq0,\; \mu _{l}>0,\; l = 1,2, $ are constants. Under some suitable assumptions on the potentials $ V_{l}(x),\; l = 1,2, $ and the coupled function $ \beta(x) $, we prove the existence and multiplicity of positive solutions for the above system by using energy estimates, the Nehari manifold technique and the Lusternik-Schnirelmann theory. Furthermore, the existence and nonexistence of the least energy positive solutions are also explored.

Keywords: Elliptic system, Hartree nonlinearity, Nehari manifold, multiple positive solutions, Lusternik-Schnirelmann theory.

Mathematics Subject Classification: Primary: 35J50, 35A01; Secondary: 35J47.

Citation: Guofeng Che, Haibo Chen, Tsung-fang Wu. Bound state positive solutions for a class of elliptic system with Hartree nonlinearity. Communications on Pure & Applied Analysis, 2020, 19 (7) : 3697-3722. doi: 10.3934/cpaa.2020163
Discrete & Continuous Dynamical Systems - A, July 2019, 39(7): 4207-4224. doi: 10.3934/dcds.2019170

Equality of Kolmogorov-Sinai and permutation entropy for one-dimensional maps consisting of countably many monotone parts

Tim Gutjahr and Karsten Keller

Institut für Mathematik, Ratzeburger Allee 160, D-23562 Lübeck, Germany

* Corresponding author: Tim Gutjahr

Received October 2018. Revised February 2019. Published April 2019.

In this paper, we show that, under some technical assumptions, the Kolmogorov-Sinai entropy and the permutation entropy are equal for one-dimensional maps if there exists a countable partition of the domain of definition into intervals such that the considered map is monotone on each of those intervals. This is a generalization of a result by Bandt, Pompe and G. Keller, who showed that the above holds true under the additional assumptions that the number of intervals on which the map is monotone is finite and that the map is continuous on each of those intervals.

Keywords: One-dimensional dynamics, Kolmogorov-Sinai entropy, permutation entropy, ergodic theory, piecewise monotone functions.

Mathematics Subject Classification: Primary: 28D20, 37A35.

Citation: Tim Gutjahr, Karsten Keller. Equality of Kolmogorov-Sinai and permutation entropy for one-dimensional maps consisting of countably many monotone parts. Discrete & Continuous Dynamical Systems - A, 2019, 39 (7) : 4207-4224. doi: 10.3934/dcds.2019170

J. M. Amigó, M. B. Kennel and L. Kocarev, The permutation entropy rate equals the metric entropy rate for ergodic information sources and ergodic dynamical systems, Physica D: Nonlinear Phenomena, 210 (2005), 77-95. doi: 10.1016/j.physd.2005.07.006.

C. Bandt, G. Keller and B. Pompe, Entropy of interval maps via permutations, Nonlinearity, 15 (2002), 1595-1602. doi: 10.1088/0951-7715/15/5/312.

C. Bandt and B. Pompe, Permutation entropy: A natural complexity measure for time series, Phys. Rev. Lett., 88 (2002), 174102. doi: 10.1103/PhysRevLett.88.174102.

K. Dajani and C. Kraaikamp, Ergodic Theory of Numbers, Carus Mathematical Monographs 29, Mathematical Association of America, 2002. doi: 10.5948/UPO9781614440277.

M. Einsiedler and T. Ward, Ergodic Theory: With a View Towards Number Theory, Graduate Texts in Mathematics, 259, Springer-Verlag London, Ltd., London, 2011. doi: 10.1007/978-0-85729-021-2.

M. Einsiedler, E. Lindenstrauss and T. Ward, Entropy in ergodic theory and homogeneous dynamics, 2017. Available from: https://tbward0.wixsite.com/books/entropy.

S. Heinemann and O. Schmitt, Rokhlin's Lemma for Non-invertible Maps, Mathematica Gottingensis, Math. Inst., 2000.

G. Keller, Equilibrium States in Ergodic Theory, London Mathematical Society Student Texts, Cambridge University Press, 1998. doi: 10.1017/CBO9781107359987.

K. Keller, A. M. Unakafov and V. A. Unakafova, On the relation of KS entropy and permutation entropy, Physica D: Nonlinear Phenomena, 241 (2012), 1477-1481. doi: 10.1016/j.physd.2012.05.010.

A. Klenke, Probability Theory: A Comprehensive Course, Springer, 2008. doi: 10.1007/978-1-4471-5361-0.
X. Li, G. Ouyang and D. A. Richards, Predictability analysis of absence seizures with permutation entropy, Epilepsy Research, 77 (2007), 70-74. doi: 10.1016/j.eplepsyres.2007.08.002.

M. Misiurewicz, Permutations and topological entropy for interval maps, Nonlinearity, 16 (2003), 971-976. doi: 10.1088/0951-7715/16/3/310.

N. Nicolaou and J. Georgiou, The use of permutation entropy to characterize sleep electroencephalograms, Clinical EEG and Neuroscience, 42 (2011), 24-28. doi: 10.1177/155005941104200107.

K. R. Parthasarathy, Probability Measures on Metric Spaces, Probability and Mathematical Statistics: A Series of Monographs and Textbooks, Academic Press, 1967. doi: 10.1016/C2013-0-08107-8.

A. Silva, H. Cardoso-Cruz, F. Silva, V. Galhardo and L. Antunes, Comparison of anesthetic depth indexes based on thalamocortical local field potentials in rats, Anesthesiology, 112 (2010), 355-363. doi: 10.1097/ALN.0b013e3181ca3196.

P. Walters, An Introduction to Ergodic Theory, Graduate Texts in Mathematics, Springer New York, 2000.

Figure 1. Graph of the Gauss function T

Figure 2. The striped area corresponds to the set $R = \{(\omega_1, \omega_2)\in\Omega^2|~\omega_1\leq \omega_2\}$ and the gray area to $(T\times T)^{-1}(R)$ for the Gauss function $T$
Interval-valued fuzzy \(\phi\)-tolerance competition graphs

Tarasankar Pramanik (1), Sovan Samanta (2), Madhumangal Pal (3), Sukumar Mondal (4) & Biswajit Sarkar (2)

SpringerPlus volume 5, Article number: 1981 (2016)

This paper develops interval-valued fuzzy \(\phi\)-tolerance competition graphs, an extension of basic fuzzy graphs, where \(\phi\) is any real valued function. An interval-valued fuzzy \(\phi\)-tolerance competition graph is constructed by taking all the fuzzy sets of a fuzzy \(\phi\)-tolerance competition graph as interval-valued fuzzy sets. Products of two IVFPTCGs and relations between them are defined, and some hereditary properties of products of interval-valued fuzzy \(\phi\)-tolerance competition graphs are presented. An application of an interval-valued fuzzy competition graph to image matching is given to illustrate the model.

Graphs can be considered as bondings between objects. To model a real problem, those objects are bonded by some relation; for instance, friendship is the bonding between people. If vagueness arises in the bonding, then the corresponding relationship can be modelled as a fuzzy graph. There is a good deal of research in this direction, for example Bhutani and Battou (2003) and Bhutani and Rosenfeld (2003). The competition graph was defined in Cohen (1968). In ecology, a food web is modelled by a digraph \(\overrightarrow{D}=(V,\overrightarrow{E})\). In a food web there is competition between species (members of the food web). A vertex \(x\in V(\overrightarrow{D})\) represents a species in the food web and an arc \(\overrightarrow{(x,s)}\in \overrightarrow{E}(\overrightarrow{D})\) means that x kills the species s. If two species x and y have a common prey s, they will compete for s. Based on this analogy, Cohen (1968) defined a graph model (the competition graph of a digraph), which represents the relationship of competition among the species in the food web. The corresponding undirected graph \(G=(V,E)\) of a certain digraph \(\overrightarrow{D}=(V, \overrightarrow{E})\) is said to be a competition graph \(C(\overrightarrow{D})\) with the vertex set V and the edge set E, where \((x,y)\in E\) if and only if there exists a vertex \(s\in V\) such that \(\overrightarrow{(x,s)},\overrightarrow{(y,s)}\in \overrightarrow{E}(\overrightarrow{D})\) for any \(x,y\in V,\, (x\ne y)\). There are several variations of competition graphs in Cohen's contribution (Cohen 1968). After Cohen, some derivations of competition graphs have appeared, for instance in Cho et al. (2000), where the m-step competition graph of a digraph was defined. The p-competition graph of a digraph is defined in Kim et al. (1995); p-competition means that if two species have at least p common preys, then they compete with each other. In graph theory, an intersection graph is a graph which represents the intersections of sets. An interval graph is the intersection graph of a multiset of intervals on the real line. Interval graphs are useful in resource allocation problems in operations research. Besides, interval graphs are used extensively in mathematical modeling, archaeology, developmental psychology, ecological modeling, mathematical sociology and organization theory. Tolerance graphs originated in Golumbic and Monma (1982) to extend some of the applications associated with interval graphs. Their original purpose was to solve scheduling problems for arrangements of rooms, vehicles, etc.
Tolerance graphs are a generalization of interval graphs in which each vertex can be represented by an interval and a tolerance such that an edge occurs if and only if the overlap of the corresponding intervals is at least as large as the tolerance associated with one of the vertices. Hence a graph \(G = (V,E)\) is a tolerance graph if there is a set \(I = \{I_v{:}\,v \in V\}\) of closed real intervals and a set \(\{T_v{:}\,v \in V\}\) of positive real numbers such that \((x,y) \in E\) if \(|I_x\cap I_y| \ge {{\rm min}} \{ T_x,T_y\}\). The collection <\(I,T\)> of intervals and tolerances is called a tolerance representation of the graph G. Tolerance graphs were used in order to generalize some well known applications of interval graphs. In Brigham et al. (1995), tolerance competition graphs were introduced; some uncertainty is included in that paper by assuming tolerances of competitions. A recent work on fuzzy k-competition graphs is available in Samanta and Pal (2013), where fuzziness is applied in the representation of competitions. Recently, Pramanik et al. defined and studied the fuzzy \(\phi\)-tolerance competition graph in Pramanik et al. (2016). However, fuzzy \(\phi\)-tolerance works only with single membership values between 0 and 1, whereas interval-valued numbers are more appropriate for modelling uncertainty. Many other related works are found in Pramanik et al. (2014) and Samanta and Pal (2015). After Rosenfeld (1975), fuzzy graph theory grew in several directions. Using the concept of fuzzy graphs, Koczy (1992) discussed fuzzy graphs to evaluate and to optimize networks. Samanta and Pal (2013) showed that fuzzy graphs can be used to model competition in ecosystems. After that, they introduced some different types of fuzzy graphs (Samanta and Pal 2015; Samanta et al. 2014). Bhutani and Battou (2003) and Bhutani and Rosenfeld (2003) discussed different arcs in fuzzy graphs. For further details of fuzzy graphs, readers may look in Mathew (2009), Mordeson and Nair (2000) and Pramanik et al. (2014). Applications of fuzzy graphs include data mining, image segmentation, clustering, image capturing, networking, communication, planning and scheduling.

In this paper, the interval-valued fuzzy \(\phi\)-tolerance competition graph is introduced and some relations on products of interval-valued fuzzy \(\phi\)-tolerance competition graphs are established. The authors' contributions to the development of competition graphs and tolerance graphs are listed in Table 1, and the flow chart of the research contribution towards this research is given in Fig. 1.

Table 1 Contributions of the authors towards interval valued \(\phi\)-tolerance competition graphs

Fig. 1 Flow-chart of the research

Preliminaries

A function \(\alpha {:}\,X\rightarrow [0,1]\), called the membership function, defined on the crisp set X is said to be a fuzzy set \(\alpha\) on X. The support of \(\alpha\) is \({{\mathrm{supp}}}(\alpha ) =\{x\in X| \alpha (x)\ne 0\}\) and the core of \(\alpha\) is \({\mathrm{core}}(\alpha ) = \{x\in X| \alpha (x)=1\}\). The support length is \(s(\alpha )=|{{\mathrm{supp}}}(\alpha )|\) and the core length is \(c(\alpha )=|{{\mathrm{core}}}(\alpha )|\). The height of \(\alpha\) is \(h(\alpha ) =\max \{\alpha (x)| x\in X\}\). The fuzzy set \(\alpha\) is said to be normal if \(h(\alpha )=1\).
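For illustration, consider the following small example, where the set and the membership values are chosen only for the purpose of illustration:

$$\begin{aligned} &X=\{x_1,x_2,x_3\},\qquad \alpha (x_1)=1,\ \alpha (x_2)=0.6,\ \alpha (x_3)=0,\\ &{{\mathrm{supp}}}(\alpha )=\{x_1,x_2\},\quad s(\alpha )=2,\qquad {\mathrm{core}}(\alpha )=\{x_1\},\quad c(\alpha )=1,\\ &h(\alpha )=\max \{1,0.6,0\}=1,\ \text{ so } \alpha \text{ is normal.} \end{aligned}$$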
A fuzzy graph with a non-void finite set V is a pair \(G = (V, \sigma ,\mu )\), where \(\sigma {:}\,V \rightarrow [0,1]\) is a fuzzy subset of V and \(\mu {:}\,V\times V\rightarrow [0,1]\) is a fuzzy relation (symmetric) on the fuzzy subset \(\sigma\), such that \(\mu (x,y) \le \sigma (x) \wedge \sigma (y)\), for all \(x,y\in V\), where \(\wedge\) stands for minimum. The degree of a vertex v of a fuzzy graph \(G = (V, \sigma ,\mu )\) is \(\displaystyle d(v)=\sum \nolimits _{u\in V-\{v\}}\mu (v,u)\). The order of a fuzzy graph G is \(\displaystyle O(G)=\sum \nolimits _{u\in V}\sigma (u)\). The size of a fuzzy graph G is \(\displaystyle S(G)=\sum \mu (u,v)\). Let \({\mathcal {F}}=\{\alpha _1,\alpha _2,\ldots , \alpha _n\}\) be a finite family of fuzzy subsets on a set X. The fuzzy intersection of two fuzzy subsets \(\alpha _1\) and \(\alpha _2\) is a fuzzy set and defined by \(\alpha _1\wedge \alpha _2=\left\{ \min \{\alpha _1(x),\alpha _2(x)\}|x\in X\right\}\). The union of two fuzzy subsets \(\alpha _1\) and \(\alpha _2\) is a fuzzy set and is defined by \(\alpha _1\vee \alpha _2=\left\{ \max \{\alpha _1(x),\alpha _2(x)\}|x\in X\right\}\). \(\alpha _1\le \alpha _2\) for two fuzzy subsets \(\alpha _1\) and \(\alpha _2\), if \(\alpha _1(x)\le \alpha _2(x)\) for each \(x\in X\). The fuzzy intersection graph of \({\mathcal {F}}\) is the fuzzy graph \(Int({\mathcal {F}})=(V, \sigma ,\mu )\), where \(\sigma {:}\,{\mathcal {F}}\rightarrow [0,1]\) is defined by \(\sigma (\alpha _i)=h(\alpha _i)\) and \(\mu {:}\,{\mathcal {F}}\times {\mathcal {F}} \rightarrow [0,1]\) is defined by $$\begin{aligned} \mu (\alpha _i,\alpha _j)=\left\{ \begin{array}{ll} h(\alpha _i\wedge \alpha _j), &{}\quad {\text{if}}\, i\ne j\\ 0, &{}\quad {\text{if}}\, i=j. \end{array}\right. \end{aligned}$$ Here, \(\mu (\alpha _i,\alpha _i)=0\) for all \(\alpha _i\) implies that the said fuzzy graph is a loop less fuzzy intersection graph and the fuzzy graph has no parallel edges as \(\mu\) is uniquely defined. Let us consider a family of fuzzy intervals \({\mathcal {F}}_{\mathcal {I}}=\{{\mathcal {I}}_1, {\mathcal {I}}_2, \ldots , {\mathcal {I}}_n\}\) on X. Then the fuzzy interval graph is the fuzzy intersection graph of these fuzzy intervals \({\mathcal {I}}_1, {\mathcal {I}}_2, \ldots , {\mathcal {I}}_n\). Fuzzy tolerance of a fuzzy interval is denoted by \({\mathcal {T}}\) and is defined by an arbitrary fuzzy interval, whose core length is a positive real number. If the real number is taken as L and \(|i_k-i_{k-1}|=L\), where \(i_k,i_{k-1}\in R\), a set of real numbers, then the fuzzy tolerance is a fuzzy set of the interval \([i_{k-1},i_k]\). 
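A quick numerical illustration of the intersection-graph membership function defined above, with membership values invented only for illustration:

$$\begin{aligned} &X=\{x_1,x_2\},\qquad \alpha _1(x_1)=0.7,\ \alpha _1(x_2)=0.4,\qquad \alpha _2(x_1)=0.5,\ \alpha _2(x_2)=0.9,\\ &(\alpha _1\wedge \alpha _2)(x_1)=0.5,\quad (\alpha _1\wedge \alpha _2)(x_2)=0.4,\quad h(\alpha _1\wedge \alpha _2)=0.5,\\ &\text{hence } \sigma (\alpha _1)=h(\alpha _1)=0.7,\quad \sigma (\alpha _2)=h(\alpha _2)=0.9,\quad \mu (\alpha _1,\alpha _2)=0.5. \end{aligned}$$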
The fuzzy tolerance graph \({\mathcal {G}}=(V,\sigma ,\mu )\) as the fuzzy intersection graph of finite family of fuzzy intervals \({\mathcal {I}}=\{{\mathcal {I}}_1,{\mathcal {I}}_2,\ldots , {\mathcal {I}}_n\}\) on the real line along with tolerances \({\mathcal {T}}=\{{\mathcal {T}}_1,{\mathcal {T}}_2,\ldots ,{\mathcal {T}}_n\}\) associated to each vertex of \(v_i\in V\), where, \(\sigma {:}\, V\rightarrow [0,1]\) is defined by \(\sigma (v_i)=h({\mathcal {I}}_i)=1\) for all \(v_i\in V\) and \(\mu {:}\, V\times V\rightarrow [0,1]\) is defined by $$\begin{aligned} \mu (v_i,v_j)=\left\{ \begin{array}{ll} 1, &{}\quad {\text { if }}\, c({\mathcal {I}}_i\cap {\mathcal {I}}_j)\ge \min \{c({\mathcal {T}}_i),c({\mathcal {T}}_j)\}\\ \frac{s({\mathcal {I}}_i\cap {\mathcal {I}}_j)-\min \{s({\mathcal {T}}_i), s({\mathcal {T}}_j)\}}{s({\mathcal {I}}_i\cap {\mathcal {I}}_j)}h({\mathcal {I}}_i\cap {\mathcal {I}}_j), &{}\quad {\text { else if }}\,s({\mathcal {I}}_i\cap {\mathcal {I}}_j)\ge \\ &{}\quad \min \{s({\mathcal {T}}_i),s({\mathcal {T}}_j)\}\\ 0, &{}\quad {\text { otherwise}}. \end{array}\right. \end{aligned}$$ Fuzzy interval digraph is a directed fuzzy interval graph, whose edge membership function need not to be symmetric. An interval number (Akram and Dudek 2011) D is an interval \([a^-, a^+]\) with \(0\le a^-\le a^+\le 1\). For two interval numbers \(D_1=[a_1^-,a_1^+]\) and \(D_2=[a_2^-,a_2^+]\), the following properties are defined: \(D_1+D_2=[a_1^-,a_1^+]+[a_2^-,a_2^+]=[a_1^-+a_2^- -a_1^-\cdot a_2^-, a_1^+ +a_2^+ - a_1^+\cdot a_2^+],\) \(\min \{D_1,D_2\}=[\min \{a_1^-,a_2^-\}, \min \{a_1^+,a_2^+\}],\) \(\max \{D_1,D_2\}=[\max \{a_1^-,a_2^-\}, \max \{a_1^+,a_2^+\}],\) \(D_1\le D_2 \Leftrightarrow a_1^-\le a_2^-\) and \(a_1^+\le a_2^+\), \(D_1=D_2 \Leftrightarrow a_1^-= a_2^-\) and \(a_1^+= a_2^+\), \(D_1<D_2 \Leftrightarrow D_1\le D_2\) and \(D_1\ne D_2\), \(kD_1=[ka_1^-, ka_2^+]\), where \(0\le k\le 1\). An interval-valued fuzzy set A on a set X is a function \(\mu _A{:}\, X\rightarrow [0,1]\times [0,1]\), called the membership function, i.e. \(\displaystyle \mu _A(x)=[\mu _A^-(x), \mu _A^+(x)]\). The support of A is \({{\mathrm{supp}}}(A)=\{x\in X|\mu _A^-(x)\ne 0\}\) and the core of A is \({{\mathrm{core}}}(A)=\{x\in X | \mu _A^-(x)=1\}\). The support length is \(s(A)=|{{\mathrm{supp}}}(A)|\) and the core length is \(c(A)=|{{\mathrm{core}}}(A)|\). The height of A is \(\displaystyle h(A)=\max \{\mu _A (x)|x\in X\}=[\max \{\mu _A^-(x)\}, \max \{\mu _A^+(x)\}], \forall x\in X\). Let \(F=\{A_1, A_2, \ldots , A_n\}\) be a finite family of interval-valued fuzzy subsets on a set X. The fuzzy intersection of two interval-valued fuzzy sets (IVFSs) \(A_1\) and \(A_2\) is an interval-valued fuzzy set defined by $$\begin{aligned} A_1\cap A_2= \left\{ \left( x, \left[ \min \{\mu _{A_1}^-(x), \mu _{A_2}^-(x)\},\min \{\mu _{A_1}^+(x),\mu _{A_2}^+(x)\}\right] \right) {:}\,x\in X\right\} . 
\end{aligned}$$ The fuzzy union of two IVFSs \(A_1\) and \(A_2\) is a IVFS defined by $$\begin{aligned} A_1\cup A_2= \left\{ \left( x, \left[ \max \{\mu _{A_1}^-(x), \mu _{A_2}^-(x)\},\max \{\mu _{A_1}^+(x),\mu _{A_2}^+(x)\}\right] \right) {:}\,x\in X\right\} \end{aligned}$$ Fuzzy out-neighbourhood of a vertex \(v\in V\) of an interval-valued fuzzy directed graph (IVFDG) \(\overrightarrow{D}=(V,A,\overrightarrow{B})\) is the IVFS \({\mathcal {N}}^+(v)=(X_v^+, m_v^+)\), where \(X_v^+=\{u{:}\, \mu _B(\overrightarrow{v,u})>0\}\) and \(m_v^+{:}\,X_v^+\rightarrow [0,1]\times [0,1]\) defined by \(m_v^+=\mu _B(\overrightarrow{v,u})=[\mu _B^-(\overrightarrow{v,u}), \mu _B^+(\overrightarrow{v,u})]\) Here, B is an interval-valued fuzzy relation on a set X, is denoted by \(\mu _B{:}\,X\times X \rightarrow [0,1] \times [0,1]\) such that $$\begin{aligned}&\mu _B^-(x,y)\le \min \left\{ \mu _A^-(x), \mu _A^-(y)\right\} \\&\mu _B^+(x,y)\le \min \left\{ \mu _A^+(x), \mu _A^+(y)\right\} \end{aligned}$$ An interval-valued fuzzy graph of a graph \(G^*=(V,E)\) is a fuzzy graph \(G=(V, A, B)\), where \(A=[\mu _A^-, \mu _A^+]\) is an interval-valued fuzzy set on V and \(B=[\mu _B^-, \mu _B^+]\) is a symmetric interval-valued fuzzy relation on E. An interval-valued fuzzy digraph \(\overrightarrow{G}=(V, A, \overrightarrow{B})\) is an interval-valued fuzzy graph, where the fuzzy relation \(\overrightarrow{B}\) is antisymmetric. An interval-valued fuzzy graph \(\xi = (A,B)\) is said to be complete interval-valued fuzzy graph if \(\mu ^-(x,y)= \min \{\sigma ^-(x),\sigma ^-(y)\}\) and \(\mu ^+(x,y)=\) \(\min\) \(\{\sigma ^+(x),\) \(\sigma ^+(y)\}\), \(\forall x,y\in V\). An interval-valued fuzzy graph is defined to be bipartite, if there exists two sets \(V_1\) and \(V_2\) such that the sets \(V_1\) and \(V_2\) are partitions of the vertex set V, where \(\mu ^+(u,v)=0\) if \(u,v\in V_1\) or \(u, v \in V_2\) and \(\mu ^+(v_1, v_2) > 0\) if \(v_1\in V_1\) (or \(V_2\)) and \(v_2 \in V_2\) (or \(V_1\)). 
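To illustrate the interval-number operations listed earlier, take two sample interval numbers (the values are chosen only for illustration):

$$\begin{aligned} &D_1=[0.2,0.5],\qquad D_2=[0.4,0.6],\\ &D_1+D_2=[0.2+0.4-0.2\cdot 0.4,\ 0.5+0.6-0.5\cdot 0.6]=[0.52,\,0.80],\\ &\min \{D_1,D_2\}=[0.2,0.5],\qquad \max \{D_1,D_2\}=[0.4,0.6],\\ &D_1\le D_2\ \text{ since } 0.2\le 0.4 \text{ and } 0.5\le 0.6;\ \text{ as } D_1\ne D_2,\ \text{ in fact } D_1<D_2. \end{aligned}$$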
The Cartesian product (Akram and Dudek 2011) \(G_1\times G_2\) of two interval-valued fuzzy graphs \(G_1 =(V_1, A_1,B_1)\) and \(G_2 = (V_2,A_2,B_2)\) is defined as a pair \((V_1\times V_2, A_1\times A_2,B_1\times B_2)\) such that \(\left\{ \begin{array}{l} \mu _{A_1\times A_2}^-(x_1, x_2) = \min \{\mu _{A_1}^-(x_1), \mu _{A_2}^-(x_2)\}\\ \mu ^+_{A_1\times A_2}(x_1, x_2) = \min \{\mu ^+_{A_1}(x_1), \mu ^+_{A_2}(x_2)\} \end{array}\right\}\) for all \(x_1\in V_1, x_2\in V_2\), \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-((x,x_2),(x,y_2)) = \min \{\mu _{A_1}^-(x), \mu _{B_2}^-(x_2,y_2)\}\\ \mu _{B_1\times B_2}^+((x,x_2),(x,y_2)) = \min \{\mu _{A_1}^+(x), \mu _{B_2}^+(x_2,y_2)\} \end{array}\right\}\) for all \(x\in V_1\) and \((x_2, y_2)\in E_2\), \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-((x_1,y),(y_1,y)) = \min \{\mu _{B_1}^-(x_1,y_1), \mu _{A_2}^-(y)\}\\ \mu _{B_1\times B_2}^+((x_1,y),(y_1,y)) = \min \{\mu _{B_1}^+(x_1,y_1), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((x_1,y_1)\in E_1\) and \(y \in V_2.\) The composition \(G_1[G_2]=(V_1\circ V_2, A_1\circ A_2, B_1\circ B_2)\) of two interval-valued fuzzy graphs \(G_1\) and \(G_2\) of the graphs \(G_1^*\) and \(G_2^*\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{A_1\circ A_2}^-(x_1, x_2) = \min \{\mu _{A_1}^-(x_1), \mu _{A_2}^-(x_2)\}\\ \mu ^+_{A_1\circ A_2}(x_1, x_2) = \min \{\mu ^+_{A_1}(x_1), \mu ^+_{A_2}(x_2)\} \end{array}\right\}\) for all \(x_1\in V_1, x_2\in V_2\), \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-((x,x_2),(x,y_2)) = \min \{\mu _{A_1}^-(x), \mu _{B_2}^-(x_2,y_2)\}\\ \mu _{B_1\circ B_2}^+((x,x_2),(x,y_2)) = \min \{\mu _{A_1}^+(x), \mu _{B_2}^+(x_2,y_2)\} \end{array}\right\}\) for all \(x\in V_1\) and \((x_2, y_2)\in E_2\), \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-((x_1,y),(y_1,y)) = \min \{\mu _{B_1}^-(x_1,y_1), \mu _{A_2}^-(y)\}\\ \mu _{B_1\circ B_2}^+((x_1,y),(y_1,y)) = \min \{\mu _{B_1}^+(x_1,y_1), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((x_1,y_1)\in E_1\) and \(y \in V_2,\) \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-((x_1,x_2),(y_1,y_2)) = \min \{\mu _{A_2}^-(x_2), \mu _{A_2}^-(y_2),\mu _{B_1}^-(x_1,y_1)\}\\ \mu _{B_1\circ B_2}^+((x_1,x_2),(y_1,y_2)) = \min \{\mu _{A_2}^+(x_2), \mu _{A_2}^+(y_2),\mu _{B_1}(x_1,y_1)\} \end{array}\right\}\) otherwise. The union \(G_1\cup G_2=(V_1\cup V_2, A_1\cup A_2, B_1\cup B_2)\) of two interval-valued fuzzy graphs \(G_1\) and \(G_2\) of the graphs \(G_1^*\) and \(G_2^*\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{A_1\cup A_2}^-(x) =\mu _{A_1}^-(x) {\text { if }}\,x\in V_1 {\text { and }}\, x\notin V_2\\ \mu _{A_1\cup A_2}^-(x) =\mu _{A_2}^-(x) {\text { if }}\,x\in V_2 {\text { and }}\,x\notin V_1\\ \mu _{A_1\cup A_2}^-(x) =\max \{\mu _{A_1}^-(x), \mu _{A_2}^-(x)\}\,{\text { if }}\,x\in V_1\cap V_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{A_1\cup A_2}^+(x) =\mu _{A_1}^+(x) {\text { if }}\, x\in V_1 {\text { and }}\,x\notin V_2\\ \mu _{A_1\cup A_2}^+(x) =\mu _{A_2}^+(x) {\text { if }}\,x\in V_2 {\text { and }}\,x\notin V_1\\ \mu _{A_1\cup A_2}^+(x) =\max \{\mu _{A_1}^+(x), \mu _{A_2}^+(x)\} {\text { if }}\,x\in V_1\cap V_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-(x,y) = \mu _{B_1}^-(x,y) {\text { if }}\,(x,y)\in E_1 {\text{and}}\,(x,y)\notin E_2\\ \mu _{B_1\times B_2}^-(x,y) = \mu _{B_2}^-(x,y) {\text{if}}\,(x,y)\in E_2 {\text{and}}\,(x,y)\notin E_1\\ \mu _{B_1\times B_2}^-(x,y) = \max \{\mu _{B_1}^-(x,y), \mu _{B_2}^-(x,y)\} {\text{if}}\,(x,y)\in E_1\cap E_2. 
\end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^+(x,y) = \mu _{B_1}^+(x,y) {\text{if}}\,(x,y)\in E_1 {\text{and}}\,(x,y)\notin E_2\\ \mu _{B_1\times B_2}^+(x,y) = \mu _{B_2}^+(x,y) {\text{if}}\,(x,y)\in E_2 {\text{and}}\,(x,y)\notin E_1\\ \mu _{B_1\times B_2}^+(x,y) = \max \{\mu _{B_1}^+(x,y), \mu _{B_2}^+(x,y)\} {\text{if}}\,(x,y)\in E_1\cap E_2. \end{array}\right.\) The join \(G_1+G_2=(V_1+V_2, A_1+A_2, B_1+B_2)\) of two interval-valued fuzzy graphs \(G_1\) and \(G_2\) of the graphs \(G_1^*\) and \(G_2^*\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{A_1+ A_2}^-(x) = (\mu _{A_1}^-\cup \mu _{A_2}^-)(x)\\ \mu _{A_1+ A_2}^+(x) = (\mu _{A_1}^+\cup \mu _{A_2}^+)(x) \end{array}\right\}\) if \(x\in V_1\cup V_2\), \(\left\{ \begin{array}{l} \mu _{B_1+ B_2}^-(x,y) = (\mu _{B_1}^-\cup \mu _{B_2}^-)(x,y)\\ \mu _{B_1+ B_2}^+(x,y) = (\mu _{B_1}^+\cup \mu _{B_2}^+)(x,y) \end{array}\right\}\) if \((x,y)\in E_1\cap E_2\), \(\left\{ \begin{array}{l} \mu _{B_1+ B_2}^-(x,y) = \min \{\mu _{A_1}^-(x), \mu _{A_2}^-(y)\}\\ \mu _{B_1+ B_2}^+(x,y) = \min \{\mu _{A_1}^+(x), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((x,y)\in E'\), where \(E'\) is the set of edges connecting the vertices of \(V_1\) and \(V_2\). Interval-valued fuzzy \(\phi\)-tolerance competition graph In this section, the definition of interval-valued fuzzy \(\phi\)-tolerance competition graph is given and studied several properties. Definition 1 (Interval-valued fuzzy \(\phi\)-tolerance competition graph (IVFPTCG)) Let \(\phi {:}\,N\times N\rightarrow N\) be a mapping, where N is a set of natural numbers. Interval-valued fuzzy \(\phi\)-tolerance competition graph of an interval-valued fuzzy directed graph (IVFDG) \(\overrightarrow{D}=(V,A,\overrightarrow{B})\) is an undirected graph \(ITC_{\phi }(\overrightarrow{D}) = (V,A, B')\) such that $$\begin{aligned} \mu _{B'} (u,v) &= {} [\mu _{B'}^-(u,v), \mu _{B'}^+(u,v)]\\& = {} \left\{ \begin{array}{l} h({{\mathcal {N}}}^+(u)\cap {\mathcal {N}}^+(v)),\\ \,\quad \qquad \text{ if } c({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v))\ge \phi \{c({\mathcal {T}}_u), c({\mathcal {T}}_v)\}\\ \frac{s({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v))-\phi \{s({\mathcal {T}}_u), s({\mathcal {T}}_v))\}+1}{s({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v))}\cdot h({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v)),\\ \,\quad \qquad \text{ if } s({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v))\ge \phi \{s({\mathcal {T}}_u), s({\mathcal {T}}_v)\}\\ 0, \,\quad \text{ otherwise. } \end{array} \right. \end{aligned}$$ where, \({\mathcal {T}}_u, {\mathcal {T}}_v\) are the fuzzy tolerances corresponding to u and v, respectively. Taking \(\phi\) as \(\min\). An example of this graph is given below. Consider an interval-valued fuzzy digraph \(\overrightarrow{G}=(V,A,\overrightarrow{B})\) shown in Fig. 2 with each vertex have membership values [1, 1]. The edge membership values are taken as $$\begin{aligned} &\mu _B(\overrightarrow{v_1,v_2})=[0.8,0.9], \quad \mu _B(\overrightarrow{v_1,v_5})=[0.7,0.8],\\ &\mu _B(\overrightarrow{v_2,v_5})=[0.6,0.8], \quad \mu _B(\overrightarrow{v_3,v_2})=[0.5,0.7],\\ &\mu _B(\overrightarrow{v_3,v_4})=[0.3,0.5], \quad \mu _B(\overrightarrow{v_4,v_1})=[0.7,0.9],\\ &\mu _B(\overrightarrow{v_5,v_3})=[0.6,0.8],\quad \mu _B(\overrightarrow{v_5,v_4})=[0.5,0.6]. 
\end{aligned}$$ Let core and support lengths of fuzzy tolerances \({\mathcal {T}}_1,{\mathcal {T}}_2, {\mathcal {T}}_3,{\mathcal {T}}_4,{\mathcal {T}}_5\) corresponding to the vertices \(v_1, v_2,v_3,v_4,v_5\) be 1, 1, 3, 2, 0 and 1, 2, 4, 3, 1, respectively. Here, it is true that \(\phi \{c({\mathcal {T}}_u), c({\mathcal {T}}_v)\}=\min \{c({\mathcal {T}}_u), c({\mathcal {T}}_v)\}\). An interval-valued fuzzy digraph and its corresponding interval-valued fuzzy \(\phi\)-tolerance competition graph. a An interval-valued fuzzy digraph, b interval-valued fuzzy ϕ-tolerance competition graph Based on this consideration, the following computations have been made. $$\begin{aligned} {\mathcal {N}}^+(v_1)& = {} \{(v_2,[0.8,0.9]),(v_5,[0.7,0.8])\}\\ {\mathcal {N}}^+(v_2)& = {} \{(v_5,[0.6,0.8])\}\\ {\mathcal {N}}^+(v_3)& = {} \{(v_2,[0.5,0.7]),(v_4,[0.3,0.5])\}\\ {\mathcal {N}}^+(v_4)& = {} \{(v_1,[0.7,0.9])\}\\ {\mathcal {N}}^+(v_5)& = {} \{(v_3,[0.6,0.8]),(v_4,[0.5,0.6])\} \end{aligned}$$ $$\begin{aligned}&{\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2)=\{(v_5,[0.6,0.8])\}\\&{\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_3)=\{(v_2,[0.5,0.7])\}\\&{\mathcal {N}}^+(v_3)\cap {\mathcal {N}}^+(v_5)=\{(v_4,[0.3,0.5])\} \end{aligned}$$ $$\begin{aligned}&h({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2))=[0.6,0.8]\\&h({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_3))=[0.5,0.7]\\&h({\mathcal {N}}^+(v_3)\cap {\mathcal {N}}^+(v_5))=[0.3,0.5] \end{aligned}$$ $$\begin{aligned}&c({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2))=0; s({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2))=1\\&c({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_3))=0; s({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_3))=1\\&c({\mathcal {N}}^+(v_3)\cap {\mathcal {N}}^+(v_5))=0; s({\mathcal {N}}^+(v_3)\cap {\mathcal {N}}^+(v_5))=1. \end{aligned}$$ Then by the definition of interval-valued fuzzy \(\phi\)-tolerance competition graph, the vertex membership function of the interval-valued fuzzy min-tolerance competition graph is that of interval-valued fuzzy digraph shown in Fig. 2 and the edge membership values are as follows: $$\begin{aligned} \begin{array}{ll} \mu _B({v_1,v_3})=[0.5,0.7], &{}\quad \mu _B({v_1,v_2})=[0.6,0.8],\\ \mu _B({v_3,v_5})=[0.3,0.5]. \end{array} \end{aligned}$$ A \(\phi\)-T-edge clique cover (\(\phi\)-T-ECC) of an interval-valued fuzzy graph \({\mathcal {G}}=(V,A,B)\) with vertices \(v_1,v_2,\ldots , v_n\) is a collection \(S_1,S_2,\ldots , S_k\) of subsets of V such that \(\mu _B^-(v_r,v_s)>0\) if and only if at least \(\phi (c(T_r), c(T_s))\) of the sets \(S_i\), contain both \(v_r\) and \(v_s\). The size k of a smallest \(\phi\)-T-ECC of \({\mathcal {G}}\) taken over all tolerances T is the \(\phi\)-T-edge clique cover number and is denoted by \(\theta _{\phi }({\mathcal {G}})\). Let \(\phi {:}\,N\times N\rightarrow N\) be a mapping. If \(\theta _{\phi }({\mathcal {G}})\le |V|\), then there exists an interval-valued fuzzy \(\phi\) -tolerance competition graph. Let us assume that \(\theta _{\phi }({\mathcal {G}})\le |V|\) and \(S_1,S_2,\ldots , S_k (k\le n)\) be a \(\phi\)-T-ECC of an interval-valued fuzzy graph \({\mathcal {G}}\). Each \(S_i\) is defined by \(S_i=\{v_j{:}\,\mu _B^-(v_i, v_j)>0\}\). Each \(S_i\) is chosen in such a way that in the interval-valued fuzzy digraph \(\overrightarrow{{\mathcal {G}}}=(V,A,\overrightarrow{B})\), \(\mu _B^-(\overrightarrow{v_i,v_j})=\mu _{B'}^-(v_i,v_j)\) and \(\mu _B^+(\overrightarrow{v_i,v_j})=\mu _{B'}^+(v_i,v_j)\), if \(v_j\in S_i\). 
Now, in IVFG \({\mathcal {G}}\), either \(c({\mathcal {N}}^+(v_i)\cap {\mathcal {N}}^+(v_j))\ge \phi \{c({\mathcal {T}}_{v_i}), c({\mathcal {T}}_{v_j})\}\) or, \(s({\mathcal {N}}^+(v_i)\cap {\mathcal {N}}^+(v_j))\ge \phi \{s({\mathcal {T}}_{v_i}), s({\mathcal {T}}_{v_j})\}\) must satisfy. Hence, \({\mathcal {G}}\) is an interval-valued fuzzy \(\phi\)-tolerance competition graph. \(\square\) For an interval-valued fuzzy digraph \({\mathcal {G}}=(V,A,\overrightarrow{B})\), if there exists an interval-valued fuzzy \(\phi\)-tolerance competition graph, then \(\theta _{\phi }(\overrightarrow{{\mathcal {G}}})\le |V|=n.\) Let \({\mathcal {G}}=(V,A,B')\) be an interval-valued fuzzy \(\phi\)-tolerance competition graph of \(\overrightarrow{G}\) and \(V=\{v_1,v_2,\ldots , v_n\}\) and \(S_i=\{v_j{:}\,\mu _{B'}^-(v_i,v_j)>0\}\). It is clear that there can be at most n numbers of \(S_i\)'s. Let \({\mathcal {T}}_1,{\mathcal {T}}_2,\ldots , {\mathcal {T}}_n\) be the fuzzy tolerances associated to each vertex of V. Now, \(\mu (v_r,v_s)>0\) if and only if either \(c({\mathcal {N}}^+(v_r)\cap {\mathcal {N}}^+(v_s))\ge \phi \{c({\mathcal {T}}_{r}), c({\mathcal {T}}_{s})\}\) or, \(s({\mathcal {N}}^+(v_r)\cap {\mathcal {N}}^+(v_s))\ge \phi \{s({\mathcal {T}}_{r}), s({\mathcal {T}}_{s})\}\). Thus, at most n sets \(S_1,S_2,\ldots , S_n\) make a family of \(\phi\)-T-ECC of size at most \(n=|V|\), i.e. \(\theta _{\phi }(\overrightarrow{{\mathcal {G}}})\le |V|=n.\) \(\square\) Interval-valued fuzzy \(\phi\)-tolerance competition graph \(G=(V,A,B)\) cannot be complete. Suppose, G be an interval-valued fuzzy \(\phi\)-tolerance competition graph with 2 vertices, x and y (say). For this graph there is no interval digraph with 2 vertices with some common preys. Hence, it cannot be complete. If possible let, an IVFPTCG with 3 vertices be complete. Without any loss of generality, consider the graph of Fig. 3. This graph is nothing but a clique of order 3. As \(\mu _B(x,y)\ne [0,0]\), x, y has a common prey and it must be z. Thus, x, y is directed to z. Again \(\mu _B(y,z)\ne [0,0]\) implies that, y, z is directed to x. But in IVFDG, it is not possible to have two directed edges (x, z) and (z, x) simultaneously. This concludes that there is no valid IVFDG for this IVFPTCG. As, every complete IVFPTCG contains a clique of order 3, there does not exist any valid IVFDG. Hence, any interval-valued fuzzy \(\phi\)-tolerance competition graph \(G=(V,A,B)\) cannot be complete. \(\square\) A complete IVFPTCG The interval-valued fuzzy \(\min\)-tolerance competition graph of an irregular interval-valued fuzzy digraph need not be irregular. This can be shown by giving a counter-example. Suppose an interval-valued fuzzy digraph with 3 vertices shown in Fig. 4. Irregular interval-valued fuzzy digraph and its corresponding interval-valued fuzzy min-tolerance competition graph Consider the core and support lengths of fuzzy tolerances associated to each of the vertices of the irregular interval-valued fuzzy digraph shown in Fig. 4 are 1, 1, 1 and 1, 1, 1 respectively. The interval-valued fuzzy \(\min\)-tolerance competition graph of a regular interval-valued fuzzy digraph need not be regular. To prove this, a counter-example is given in the Fig. 5. A regular interval-valued fuzzy digraph and its corresponding interval-valued fuzzy min-tolerance competition graph In Fig. 
5, the regular interval-valued fuzzy digraph has the degrees \(\deg (v_1)=\deg (v_2)=\cdots = \deg (v_5)=[0.7,0.9]\), but the degree of the vertices of interval-valued fuzzy min-tolerance competition graph of the digraph shown in Fig. 5 are \(\deg (v_1)=[0.4,0.5]\), \(\deg (v_2)=[0.6,0.8]\), \(\deg (v_3)=[0.2,0.3]\). Hence, it is not regular. The size of an interval-valued fuzzy graph \({\mathcal {G}}=(V,A, B)\) is denoted by \(S({\mathcal {G}})\) and is defined by $$\begin{aligned} S({\mathcal {G}})= \sum \mu _B(u,v)=\left[ \sum \mu _B^-(u,v), \sum \mu _B^+(u,v)\right] . \end{aligned}$$ Let \(\overrightarrow{{\mathcal {G}}}\) be an interval-valued fuzzy digraph and \(ITC_{\phi }(\overrightarrow{{\mathcal {G}}})\) be its interval-valued fuzzy \(\phi\) -tolerance competition graph. Then $$\begin{aligned} S(ITC_{\phi }(\overrightarrow{{\mathcal {G}}}))\le S(\overrightarrow{{\mathcal {G}}}). \end{aligned}$$ Let \(ITC_{\phi }(\overrightarrow{{\mathcal {G}}})=(V,A,B')\) be the interval-valued fuzzy \(\phi\)-tolerance competition graph of an interval-valued fuzzy digraph \(\overrightarrow{{\mathcal {G}}}=(V,A,\overrightarrow{B})\). As for every triangular orientation of three vertices in \(\overrightarrow{{\mathcal {G}}}\), as shown in Fig. 4, there is atmost one edge in \(ITC_{\phi }(\overrightarrow{{\mathcal {G}}})\), it is obvious that, an interval-valued fuzzy \(\phi\)-tolerance competition graph has less number of edges than that of the interval-valued fuzzy digraph. Now, consider \(\mu _{B'}(v_1,v_2)>0\) in \(ITC_{\phi }(\overrightarrow{{\mathcal {G}}})\) and \({\mathcal {N}}^+(v_1)\) and \({\mathcal {N}}^+(v_2)\) has at least one vertex in common and also \(h({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2))=[1,1]\) (as much as possible). Then there exist at least one vertex, say \(v_i\) so that the edge membership value between \(v_1\), \(v_i\) or \(v_2\), \(v_i\) is [1, 1]. Then \(S(\overrightarrow{{\mathcal {G}}})>[1,1]\) whereas, \(S(ITC_{\phi }(\overrightarrow{{\mathcal {G}}}))\le [1,1]\). Hence, \(S(ITC_{\phi }(\overrightarrow{{\mathcal {G}}}))\le S(\overrightarrow{{\mathcal {G}}}).\) \(\square\) If \(C_1,C_2,\ldots , C_p\) be the cliques of order 3 of underlying undirected crisp graph of a IVFDG \(\overrightarrow{G}=(V,A,\overrightarrow{B})\) such that \(C_1\cup C_2\cup \ldots C_p=V\) and \(|C_i\cap C_j|\le 1\) \(\forall i,j=1,2,\ldots , p\). Then the corresponding IVFPTCG of \(\overrightarrow{G}\) cannot have cliques of order 3 or more. From the given conditions of clique sets, i.e. \(C_1\cup C_2\cup \ldots C_p=V\) and \(|C_i\cap C_j|\le 1 \forall i,j=1,2,\ldots , p\), it is clear that the interval-valued fuzzy digraph has only triangular orientation and no two triangular orientation has a common edge. That is, the IVFDG has no orientation shown in Fig. 6b. The IVFDG only have the orientations of type shown in Fig. 6a. As for every triangular orientation, there have only one edge in interval-valued fuzzy \(\phi\)-tolerance competition graph, the said graph does not have a clique of order 3 or more. Hence, interval-valued fuzzy \(\phi\)-tolerance competition graph cannot have cliques of order 3 or more. \(\square\) Types of triangular orientation. 
a Two triangular orientations have a common edge, b two triangular orientations have no common edge If the clique number of the underlying undirected crisp graph of an interval-valued fuzzy digraph \(\overrightarrow{{\mathcal {G}}}=(V,A,\overrightarrow{B})\) is p, then the underlying crisp graph of the interval-valued fuzzy \(\phi\)-tolerance competition graph has clique number less than or equal to p. Let us assume that the maximum clique of \(\overrightarrow{{\mathcal {G}}}=(V,A,\overrightarrow{B})\) induces a subgraph \(\overrightarrow{\mathcal {G'}}\), which is also an interval-valued fuzzy directed graph. Since, by Theorem 4, the size of an interval-valued fuzzy \(\phi\)-tolerance competition graph is always less than or equal to the size of the corresponding interval-valued fuzzy directed graph, the clique number of the interval-valued fuzzy \(\phi\)-tolerance competition graph cannot be greater than p. Hence the theorem follows. The interval-valued fuzzy \(\phi\)-tolerance competition graph of a complete interval-valued fuzzy digraph has at most \(^nC_3\) fuzzy edges. For every triangular orientation there exists an edge in the IVFPTCG. Now, in a complete interval-valued fuzzy digraph, \(\mu _B^-(x,y)=\min \{\mu _A^-(x),\,\mu _A^-(y)\}\) and \(\mu _B^+(x,y)=\min \{\mu _A^+(x),\mu _A^+(y)\}\), \(\forall x, y \in V\). Hence, every vertex is joined to some vertex in V, so there are at most \(^nC_3\) triangular orientations and therefore at most \(^nC_3\) fuzzy edges in the IVFPTCG. \(\square\) Application of interval-valued fuzzy max-tolerance competition graph in image matching Computing technology is advancing rapidly, yet image matching remains difficult for machines. The main difference between image matching by a human and by a computer is that a computer cannot declare two or more images to be "probably the same", whereas a human can. Here, we present an illustrative example in which the images are assumed to be distorted in some way, each carrying a distortion value (for example, an image of an object that is 20% distorted; the values are arbitrary here and could in practice be computed by a suitable pixel-matching algorithm, which is yet to be developed). For convenience, let us consider five different fonts \(A_1,A_2,A_3,A_4,A_5\) of the letter A, as shown in Fig. 7. Each font \(A_1,A_2,A_3,A_4,A_5\) is taken as a vertex \(v_1,v_2,v_3,v_4,v_5\), respectively, and there exists an edge between two vertices if the corresponding fonts have different distortion values (d.v.). The corresponding graph model is shown in Fig. 8. Let the distortion values of the fonts \(A_1,A_2,A_3,A_4,A_5\) be 70, 20, 50, 80 and 0%, respectively. This can be modeled as the interval-valued fuzzy digraph of Fig. 8, with each edge directed towards the vertex having the smaller distortion value. The edge membership value of an edge between two vertices \(v_1\), \(v_2\) of this graph is calculated as \(\mu _B(v_1,v_2)=[\min \{\frac{\text {d.v. of }v_1}{100},\) \(\frac{\text {d.v. of }v_2}{100}\},\) \(\max \{\frac{\text {d.v. of }v_1}{100},\,\frac{\text {d.v. of }v_2}{100}\} ]\). Each font has some tolerance, i.e., it can be distorted up to a certain percentage. Arbitrarily, let the tolerance core and tolerance support lengths of the vertices \(v_1,v_2,v_3,v_4,v_5\) be 0, 1, 0, 1, 2 and 1, 1, 1, 2, 3, respectively. Carrying out the usual computations, the max-tolerance competition graph is obtained as shown in Fig.
9, which shows that the fonts \(A_1,A_4\) are closely related and the closeness is approximately \((0.35-0.25)\cdot 100\%=10\%\). Different fonts of A and their distortion values Interval-valued fuzzy digraph model of image matching Interval-valued fuzzy max-tolerance competition graph of Fig. 8 Product of two IVFPTCGs and relations between them Throughout this paper, \(\theta\) is taken as the null set in crisp sense and \(\overrightarrow{G_1^*}\), \(\overrightarrow{G_2^*}\) are the crisp digraphs. The Cartesian product \(G_1\times G_2\) of two interval-valued fuzzy digraphs \(\overrightarrow{G_1} =(A_1,\overrightarrow{B_1})\) and \(\overrightarrow{G_2} = (A_2,\overrightarrow{B_2})\) of the graphs \(\overrightarrow{G^*_1} = (V_1,\overrightarrow{E_1})\) and \(\overrightarrow{G^*_2} = (V_2,\overrightarrow{E_2})\) is defined as a pair \((A_1\times A_2,\overrightarrow{B_1\times B_2})\) such that \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-(\overrightarrow{(x,x_2),(x,y_2)}) = \min \{\mu _{A_1}^-(x), \mu _{B_2}^-(\overrightarrow{x_2,y_2})\}\\ \mu _{B_1\times B_2}^+(\overrightarrow{(x,x_2),(x,y_2)}) = \min \{\mu _{A_1}^+(x), \mu _{B_2}^+(\overrightarrow{x_2,y_2})\} \end{array}\right\}\) for all \(x\in V_1\) and \((\overrightarrow{x_2, y_2})\in E_2\), \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-(\overrightarrow{(x_1,y),(y_1,y)}) = \min \{\mu _{B_1}^-(\overrightarrow{x_1,y_1}), \mu _{A_2}^-(y)\}\\ \mu _{B_1\times B_2}^+(\overrightarrow{(x_1,y),(y_1,y)}) = \min \{\mu _{B_1}^+(\overrightarrow{x_1,y_1}), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((\overrightarrow{x_1,y_1})\in E_1\) and \(y \in V_2\). For any two interval-valued fuzzy directed graphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\), $$\begin{aligned} ITC_{\phi }(\overrightarrow{G_1}\times \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\times ITC_{\phi }(\overrightarrow{G_2}), \end{aligned}$$ considering tolerances \({\mathcal {T}}_{(x,y)}\) corresponding to each vertex (x, y) of \(\overrightarrow{G_1}\times \overrightarrow{G_2}\) as \(c({\mathcal {T}}_{(x,y)})=\min \{c({\mathcal {T}}_x),c({\mathcal {T}}_y)\}\) and \(s({\mathcal {T}}_{(x,y)})=\min \{s({\mathcal {T}}_x),s({\mathcal {T}}_y)\}\). It is easy to understand from the definition of IVFPTCG that all vertices and their membership values remain unchanged, but fuzzy edges and their membership values have been changed. Thus, there is no need to clarify about vertices. Now, according to the definition of Cartesian product of two interval-valued fuzzy directed graphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\), there are two types of edges in \(\overrightarrow{G_1}\times \overrightarrow{G_2}\). The two cases are as follows. Suppose, all edges are of type \(((x,x_2),(x,y_2))\), \(\forall x\in V_1\) and \((x_2,y_2)\in E_2\). Obviously, from the definition of the Cartesian products of two directed graphs that, if \(x_2, y_2\) have a common prey \(z_2\) in \(\overrightarrow{G_2}\), then \((x,x_2),(x,y_2)\) have a common prey \((x,z_2)\) in \(\overrightarrow{G_1}\times \overrightarrow{G_2}\), \(\forall x\in V_1\). Now, it has to show if \(\mu _{B_2}^-(x_2,y_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_2})\), then \(\mu _{B_1\times B_2}^-((x,x_2),(x,y_2))>0\) in \(ITC_{\phi }(\overrightarrow{G_1}\) \(\times \overrightarrow{G_2})\) is true. 
If \(\mu _{B_2}^-(x_2,y_2)\) \(>0\), then either \(c({\mathcal {N}}^+(x_2)\cap\) \({\mathcal {N}}^+(y_2))\ge\) \(\phi \{c({\mathcal {T}}_{x_2}),\) \(c({\mathcal {T}}_{y_2})\}\) or \(s({\mathcal {N}}^+(x_2)\cap\) \({\mathcal {N}}^+(y_2))\) \(\ge\) \(\phi \{s({\mathcal {T}}_{x_2}),\) \(s({\mathcal {T}}_{y_2})\}\) is true. From the previous claim, if \(z_2\) is the common prey of \(x_2, y_2\) in \(\overrightarrow{G_2}\), \((x,z_2)\) is also a common prey of \((x,x_2)\) and \((x,y_2)\) in \(\overrightarrow{G_1}\times \overrightarrow{G_2}\). Thus, $$\begin{aligned} s({\mathcal {N}}^+(x,x_2)\cap {\mathcal {N}}^+(x,y_2))& = {} s\left( {\mathcal {N}}^+(x_2)\cap {\mathcal {N}}^+(y_2)\right) \\ &\ge \phi \left( s({\mathcal {T}}_{x_2}), s({\mathcal {T}}_{y_2})\right) \\ &\ge \phi \left( \min \left\{ s({\mathcal {T}}_x),s( {\mathcal {T}}_{x_2})\right\} ,\min \left\{ s({\mathcal {T}}_x),s({\mathcal {T}}_{y_2})\right\} \right) \\& = {} \phi \left( s({\mathcal {T}}_{(x,x_2)}), s({\mathcal {T}}_{(x,y_2)})\right) . \end{aligned}$$ As, the either case is satisfied, therefore \(\mu _{B_1\times B_2}^-((x,x_2),(x,y_2))>0\). If all edges of type \(((x_1,y),(y_1,y))\), \(\forall y\in V_2\) and \((x_1,y_1)\in E_1\), then the proof is similar to above case. Hence, \(ITC_{\phi }(\overrightarrow{G_1}\times \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\times ITC_{\phi }(\overrightarrow{G_2})\) is proved. \(\square\) The composition \(\overrightarrow{G_1}[\overrightarrow{G_2}]=(A_1\circ A_2, \overrightarrow{B_1\circ B_2})\) of two interval-valued fuzzy digraphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\) of the graphs \(\overrightarrow{G_1^*}\) and \(\overrightarrow{G_2^*}\) is given as follows: \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-(\overrightarrow{(x,x_2),(x,y_2)}) = \min \{\mu _{A_1}^-(x), \mu _{B_2}^-(\overrightarrow{x_2,y_2})\}\\ \mu _{B_1\circ B_2}^+(\overrightarrow{(x,x_2),(x,y_2)}) = \min \{\mu _{A_1}^+(x), \mu _{B_2}^+(\overrightarrow{x_2,y_2})\} \end{array}\right\}\) for all \(x\in V_1\) and \((\overrightarrow{x_2, y_2})\in E_2\), \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-(\overrightarrow{(x_1,y),(y_1,y)}) = \min \{\mu _{B_1}^-(\overrightarrow{x_1,y_1}), \mu _{A_2}^-(y)\}\\ \mu _{B_1\circ B_2}^+(\overrightarrow{(x_1,y),(y_1,y)}) = \min \{\mu _{B_1}^+(\overrightarrow{x_1,y_1}), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((\overrightarrow{x_1,y_1})\in E_1\) and \(y \in V_2\) \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-(\overrightarrow{(x_1,x_2),(y_1,y_2)}) = \min \{\mu _{A_2}^-(x_2), \mu _{A_2}^-(y_2),\mu _{B_1}^-(\overrightarrow{x_1,y_1})\}\\ \mu _{B_1\circ B_2}^+(\overrightarrow{(x_1,x_2),(y_1,y_2)}) = \min \{\mu _{A_2}^+(x_2), \mu _{A_2}^+(y_2),\mu _{B_1}(\overrightarrow{x_1,y_1})\} \end{array}\right\}\) otherwise. $$\begin{aligned} ITC_{\phi }(\overrightarrow{G_1}\circ \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\circ ITC_{\phi }(\overrightarrow{G_2}), \end{aligned}$$ considering tolerances \({\mathcal {T}}_{(x,y)}\) corresponding to each vertices (x, y) of \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\) as \(c({\mathcal {T}}_{(x,y)})=\min \{c({\mathcal {T}}_x),c({\mathcal {T}}_y)\}\) and \(s({\mathcal {T}}_{(x,y)})=\min \{s({\mathcal {T}}_x),s({\mathcal {T}}_y)\}\). According to the same interpretation drawn in Theorem 8, the membership values of the vertices of \(\overrightarrow{G_1}[\overrightarrow{G_2}]\) remains unchanged under the composition \(\circ\). 
Now, according to the definition of composition \(\overrightarrow{G_1}[\overrightarrow{G_2}]=(A_1\circ A_2, B_1\circ B_2)\) of two interval-valued fuzzy directed graphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\), there are three types of edges in \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\). The three cases are as follows: Case I : For all edges of type \(((x,x_2),(x,y_2))\), \(\forall x\in V_1\) and \((x_2,y_2)\in E_2\). Obviously, from the definition of the Cartesian products of two directed graphs that, if \(x_2, y_2\) have a common prey \(z_2\) in \(\overrightarrow{G_2}\) then, \((x,x_2),(x,y_2)\) have also a common prey \((x,z_2)\) in \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\), \(\forall x\in V_1\). Now, if \(\mu _{B_2}^-(x_2,y_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_2})\), then \(\mu _{B_1\circ B_2}^-((x,x_2),(x,y_2))>0\) in \(ITC_{\phi }(\overrightarrow{G_1}\circ \overrightarrow{G_2})\). If \(\mu _{B_2}^-(x_2,y_2)>0\), then either \(c({\mathcal {N}}^+(x_2)\cap {\mathcal {N}}^+(y_2))\ge \phi \{c({\mathcal {T}}_{x_2}), c({\mathcal {T}}_{y_2})\}\) or \(s({\mathcal {N}}^+(x_2)\cap {\mathcal {N}}^+(y_2))\ge \phi \{s({\mathcal {T}}_{x_2}), s({\mathcal {T}}_{y_2})\}\) is true. From the previous claim that if \(z_2\) is the common prey of \(x_2, y_2\) in \(\overrightarrow{G_2}\), \((x,z_2)\) is also a common prey of \((x,x_2)\) and \((x,y_2)\) in \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\), then $$\begin{aligned} s({\mathcal {N}}^+(x,x_2)\cap {\mathcal {N}}^+(x,y_2))& = {} s({\mathcal {N}}^+(x_2)\cap {\mathcal {N}}^+(y_2))\\ &\ge \phi (s({\mathcal {T}}_{x_2}), s({\mathcal {T}}_{y_2}))\\ &\ge \phi (\min \{s({\mathcal {T}}_x),s( {\mathcal {T}}_{x_2})\},\min \{s({\mathcal {T}}_x),s({\mathcal {T}}_{y_2})\})\\& = {} \phi (s({\mathcal {T}}_{(x,x_2)}), s({\mathcal {T}}_{(x,y_2)})). \end{aligned}$$ As, the either case is satisfied, \(\mu _{B_1\circ B_2}((x,x_2),(x,y_2))>0\) is true. Case II : For all edges of type \(((x_1,y),(y_1,y))\), \(\forall y\in V_2\) and \((x_1,y_1)\in E_1\). This is similar as the Case I. Case III : For all edges of type \(((x_1,x_2),(y_1,y_2))\), where \(x_1\ne y_1\) and \(x_2\ne y_2\). In this case, \((x_1,x_2)\) and \((y_1,y_2)\) have a common prey \((z_1,z_2)\) in \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\) if \(x_1, y_1\) has a common prey \(z_1\) in \(\overrightarrow{G_1}\). In the similar way as in Case I, we can obtain $$\begin{aligned} s\left( {\mathcal {N}}^+(x_1,x_2)\cap {\mathcal {N}}^+(y_1,y_2)\right)& = {} s\left( {\mathcal {N}}^+(x_1)\cap {\mathcal {N}}^+(y_1)\right) \\ &\ge \phi \left( s({\mathcal {T}}_{x_1}), s({\mathcal {T}}_{y_1})\right) \\ &\ge \phi \left( \min \{s({\mathcal {T}}_{x_1}),s( {\mathcal {T}}_{x_2})\},\min \left\{ s({\mathcal {T}}_{y_1}),s({\mathcal {T}}_{y_2})\right\} \right) \\& = {} \phi \left( s({\mathcal {T}}_{(x_1,x_2)}), s({\mathcal {T}}_{(y_1,y_2)})\right) . \end{aligned}$$ If, either case is satisfied, then \(\mu _{B_1\circ B_2}^-((x_1,x_2),(y_1,y_2))>0\) is valid. Hence, \(ITC_{\phi }(\overrightarrow{G_1}\circ \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\circ ITC_{\phi }(\overrightarrow{G_2})\) is proved. 
\(\square\) The union \(\overrightarrow{G_1}\cup \overrightarrow{G_2}=(A_1\cup A_2, \overrightarrow{B_1\cup B_2})\) of two interval-valued fuzzy digraphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\) of the graphs \(\overrightarrow{G_1^*}\) and \(\overrightarrow{G_2^*}\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{A_1\cup A_2}^-(x) =\mu _{A_1}^-(x) {\text{if}}\,x\in V_1 {\hbox{and}} x\notin V_2\\ \mu _{A_1\cup A_2}^-(x) =\mu _{A_2}^-(x) {\text{if}}\,x\in V_2 {\hbox{and}} x\notin V_1\\ \mu _{A_1\cup A_2}^-(x) =\max \{\mu _{A_1}^-(x), \mu _{A_2}^-(x)\} {\text{if}}\,x\in V_1\cap V_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{A_1\cup A_2}^+(x) =\mu _{A_1}^+(x) {\text{if}}\,x\in V_1 {\hbox{and}} x\notin V_2\\ \mu _{A_1\cup A_2}^+(x) =\mu _{A_2}^+(x) {\text{if}}\,x\in V_2 {\hbox{and}} x\notin V_1\\ \mu _{A_1\cup A_2}^+(x) =\max \{\mu _{A_1}^+(x), \mu _{A_2}^+(x)\} {\text{if}}\,x\in V_1\cap V_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-(\overrightarrow{x,y}) = \mu _{B_1}^-(\overrightarrow{x,y}) {\text{if}}\,(\overrightarrow{x,y})\in E_1 {\text{and}}\,(\overrightarrow{x,y})\notin E_2\\ \mu _{B_1\times B_2}^-(\overrightarrow{x,y}) = \mu _{B_2}^-(\overrightarrow{x,y}) {\text{if}}\,(\overrightarrow{x,y})\in E_2 {\text{and}}\,(\overrightarrow{x,y})\notin E_1\\ \mu _{B_1\times B_2}^-(\overrightarrow{x,y}) = \max \{\mu _{B_1}^-(\overrightarrow{x,y}), \mu _{B_2}^-(\overrightarrow{x,y})\} {\text{if}}\,(\overrightarrow{x,y})\in E_1\cap E_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^+(\overrightarrow{x,y}) = \mu _{B_1}^+(\overrightarrow{x,y}) {\text{if}}\,(\overrightarrow{x,y})\in E_1 {\text{and}}\,(\overrightarrow{x,y})\notin E_2\\ \mu _{B_1\times B_2}^+(\overrightarrow{x,y}) = \mu _{B_2}^+(\overrightarrow{x,y}) {\text{if}}\,(\overrightarrow{x,y})\in E_2 {\text{and}}\,(\overrightarrow{x,y})\notin E_1\\ \mu _{B_1\times B_2}^+(\overrightarrow{x,y}) = \max \{\mu _{B_1}^+(\overrightarrow{x,y}), \mu _{B_2}^+(\overrightarrow{x,y})\} {\text{if}}\,(\overrightarrow{x,y})\in E_1\cap E_2. \end{array}\right.\) Theorem 10 $$\begin{aligned} ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\cup ITC_{\phi }(\overrightarrow{G_2}). \end{aligned}$$ There are four cases as follows: \(V_1\cap V_2=\theta\) In this case, \(\overrightarrow{G_1}\cup \overrightarrow{G_2}\) is a disconnected interval-valued fuzzy directed graphs with two components \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\). Thus, there is nothing to prove that \(ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\cup ITC_{\phi }(\overrightarrow{G_2}).\) \(V_1\cap V_2=\theta\), \((x_1,x_2)\in E_1\) and \((x_1,x_2)\notin E_2\) \(\mu _{B_1\cup B_2}^-(x_1,x_2)=\mu _{B_1}^-(x_1,x_2)\) and it is obvious that if \(\mu _{B_1}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_1})\), then \(\mu _{B_1\cup B_2}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})\). \(V_1\cap V_2=\theta\), \((x_1,x_2)\notin E_1\) and \((x_1,x_2)\in E_2\) This is similar as in Case II. Case IV : \(V_1\cap V_2=\theta\), \((x_1,x_2)\in E_1\cap E_2\) In this case, consider \(x_1\) and \(x_2\) have a common prey \(y_1\) in \(\overrightarrow{G_1}\) and \(y_2\) in \(\overrightarrow{G_2}\). 
This shows that \(s({\mathcal {N}}^+(x_1)\cap {\mathcal {N}}^+(x_2))\) in \(\overrightarrow{G_1}\cup \overrightarrow{G_2}\) is greater than or equal to \(s({\mathcal {N}}^+(x_1)\cap {\mathcal {N}}^+(x_2))\) in \(\overrightarrow{G_1}\) or \(\overrightarrow{G_2}\). Hence, it can be found that if \(\mu _{B_1}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_1})\) and \(\mu _{B_2}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_2})\), then \(\mu _{B_1\cup B_2}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})\). Hence, \(ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\cup ITC_{\phi }(\overrightarrow{G_2})\) is proved. \(\square\) The join \(\overrightarrow{G_1}+\overrightarrow{G_2}=(A_1+A_2, \overrightarrow{B_1+B_2})\) of two interval-valued fuzzy digraphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\) of the graphs \(\overrightarrow{G_1^*}\) and \(\overrightarrow{G_2^*}\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{B_1+ B_2}^-(\overrightarrow{x,y}) = (\mu _{B_1}^-\cup \mu _{B_2}^-)(\overrightarrow{x,y})\\ \mu _{B_1+ B_2}^+(\overrightarrow{x,y}) = (\mu _{B_1}^+\cup \mu _{B_2}^+)(\overrightarrow{x,y}) \end{array}\right\}\) if \((\overrightarrow{x,y})\in E_1\cap E_2\), \(\left\{ \begin{array}{l} \mu _{B_1+ B_2}^-(\overrightarrow{x,y}) = \min \{\mu _{A_1}^-(x), \mu _{A_2}^-(y)\}\\ \mu _{B_1+ B_2}^+(\overrightarrow{x,y}) = \min \{\mu _{A_1}^+(x), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((\overrightarrow{x,y})\in E'\), where \(E'\) is the set of edges connecting the vertices (nodes) of \(V_1\) and \(V_2\). For any two interval-valued fuzzy directed graphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\), \(ITC_{\phi }(\overrightarrow{G_1}+ \overrightarrow{G_2})\) has less number of edges than that in \(ITC_{\phi }(\overrightarrow{G_1})+ ITC_{\phi }(\overrightarrow{G_2}).\) In \(ITC_{\phi }(\overrightarrow{G_1})+ITC_{\phi }(\overrightarrow{G_2})\), \((\mu _{B_1}^-+\mu _{B_2}^-)(x_1,x_2)>0\) is true for all \(x_1\in V_1\) and \(x_2\in V_2\). But, in \(\overrightarrow{G_1}+\overrightarrow{G_2}\), \(x_1\) and \(x_2\) have no common prey, then \(\mu _{B_1+B_2}^-(x_1,x_2)=0\) is valid for all \(x_1\in V_1\) and \(x_2\in V_2\). Thus, for all \(x_1, x_2\in V_1 \cup V_2\), \(\mu _{B_1+B_2}^-(x_1,x_2)=0<(\mu _{B_1}^-+\mu _{B_2}^-)(x_1,x_2)\) is true always. Hence, the result follows. \(\square\) Insights of this study Interval-valued fuzzy \(\phi\)-tolerance competition graphs are introduced. The real life competitions in food web are perfectly represented by interval-valued fuzzy \(\phi\)-tolerance competition graphs. An application of fuzzy \(\phi\)-tolerance competition graph on image matching is provided. Particularly, interval-valued fuzzy max-tolerance competition graph is used for this. Here, distorted images are matched for computer usages. Product of two IVFPTCGs and relations between them are defined. These results will develop the theory of interval-valued fuzzy graph literature. Some important results (Theorem 2, 3, 5, 9, 10) are proved. Adding more uncertainty to fuzzy \(\phi\)-tolerance competition graph, the interval-valued fuzzy \(\phi\)-tolerance competition graph was introduced here. Some interesting properties was investigated. Interesting properties of the IVFPTCG were proved such that the IVFPTCG of a IVFDG behaved like a homomorphic function under some operations. Generally, competition graphs represent some competitions in food webs. 
However, they can also be used to model any competitive system. Such competitive systems can likewise be represented by bipolar fuzzy graphs, intuitionistic fuzzy graphs, etc., but interval-valued fuzzy sets are well suited to representing the associated uncertainties. An application of IVFPTCG to image matching was illustrated. The model can also be applied in various other fields such as database management systems, network design, neural networks and image searching in computer applications.
Pramanik, T., Samanta, S., Pal, M. et al. Interval-valued fuzzy \(\phi\)-tolerance competition graphs. SpringerPlus 5, 1981 (2016). https://doi.org/10.1186/s40064-016-3463-z
Person authentication based on eye-closed and visual stimulation using EEG signals Hui Yen Yap ORCID: orcid.org/0000-0002-1367-32261, Yun-Huoy Choo2, Zeratul Izzah Mohd Yusoh2 & Wee How Khoh1 The study of Electroencephalogram (EEG)-based biometric has gained the attention of researchers due to the neurons' unique electrical activity representation of an individual. However, the practical application of EEG-based biometrics is not currently widespread and there are some challenges to its implementation. Nowadays, the evaluation of a biometric system is user driven. Usability is one of the concerning issues that determine the success of the system. The basic elements of the usability of a biometric system are effectiveness, efficiency and user satisfaction. Apart from the mandatory consideration of the biometric system's performance, users also need an easy-to-use and easy-to-learn authentication system. Thus, to satisfy these user requirements, this paper proposes a reasonable acquisition period and employs a consumer-grade EEG device to authenticate an individual to identify the performances of two acquisition protocols: eyes-closed (EC) and visual stimulation. A self-collected database of eight subjects was utilized in the analysis. The recording process was divided into two sessions, which were the morning and afternoon sessions. In each session, the subject was requested to perform two different tasks: EC and visual stimulation. The pairwise correlation of the preprocessed EEG signals of each electrode channel was determined and a feature vector was formed. Support vector machine (SVM) was then used for classification purposes. In the performance analysis, promising results were obtained, where EC protocol achieved an accuracy performance of 83.70–96.42% while visual stimulation protocol attained an accuracy performance of 87.64–99.06%. These results have demonstrated the feasibility and reliability of our acquisition protocols with consumer-grade EEG devices. The growing interest in brain-computer interface (BCI) has led to an increase in the importance of understanding brain functions. BCI refers to a communication pathway between an external device and the human brain without involving any physical movements, and covers both medical and nonmedical uses [1]. Authentication study is one of the examples of BCI which uses brain signals as a biometric identifier. Authentication is essential in our daily lives, which is performed in almost all human-to-computer interactions to verify a user's identity through passwords, pin codes, fingerprints, card readers, retina scanners, etc.With the growth of technology, advanced biometric authentication has been developed. Physiological biometrics use a person's physical characteristics to identify an individual, such as face, fingerprint, palm print, retina, iris, etc. This type of biometrics is hardly to be replaced once it has been compromised. On the other hand, behavioral biometrics analyze the digital patterns in performing a specific task in the authentication. It is hard to mimic compared with the former biometrics, and it is revocable and replaceable when compromised [2]. While these traditional types of biometrics, human cognitive characteristics can be used to develop an alternative way of conventional physiological and behavioral biometrics [3]. It analyzes an individual's cognitive behavior (biosignals), such as a person's emotional and cognitive state for the purpose of identification and verification. 
The motivation of choosing brain signals for authentication lies in the desire for a more privacy-compliant solution compared to other biometric traits. Brain signals possess specific characteristics which are not present in most of the widely used biometrics. They are unique and difficult to be captured by an imposter from a distance, thus increasing their resistance against spoofing attacks. One of the commonly used methods in recording brain signals is Electroencephalography or also known as EEG. It records the brain's electrical activities by calculating voltage variations within the brain [1]. It is also a straightforward and non-invasive method to record brain electrical activity as it only requires placing electrodes on the scalp's surface. Brain activity can be obtained through EEG recordings using specifically designed protocols, including the resting state, motor imaginary, non-motor imaginary and stimulation protocol [4]. The resting-state protocol is easy to operate as it only requires users to rest for a few minutes in either eyes-closed (EC) or eyes-open (EO) state, while the EEG data are recorded. On the other hand, motor imaginary requires the users to mentally simulate a physical action, such as movements of the right hand, left hand, foot and others. Other than that, EEG data can also be acquired by asking the user to perform non-motor imaginary tasks, for instance, mental calculation, internal speech or singing. Finally, the stimulation protocol presents the users with a series of stimuli and the electrical response of the users is recorded. Various stimuli have been proposed and applied in the literature for this protocol, such as pictures, wording, audio, etc. Despite promising results being reported in the literature, the utilization of EEG-based biometrics system is not currently widespread in practical applications. One of the reasons lies in the implementation and operation of this biometric approach. The performance relies on the design of the acquisition protocol [5]. This approach requires a long period of time for the users to undergo EEG brain's data recording. This approach is impractical to be used in real life as users would not be willing to spend that much time on the authentication process. Moreover, Ruiz Blondet et al. and Wu et al. [6, 7] argued that most studies used high-density EEG devices, which were very costly and the setup process was time-consuming. Typically, a biometrics system is expected to be accessed by users frequently. Its fundamental usability elements are effectiveness, efficiency, and user satisfaction [8]. Effectiveness refers to how well a user can perform a task. Efficiency measures how quickly a user can perform the task with a reasonably low error rate. Finally, satisfaction measures the users' perceptions and feelings towards the application. With these requirements, users may not only need a reliable system, but also a user-friendly and affordable EEG device during the acquisition process. A consumer-grade wireless EEG device with lesser channels can be a potential alternative to replace the clinical-grade device. It should also strike a balance between security and user-friendliness in real-life applications [9]. Thus, the paper aims to propose an acquisition protocol that employs a consumer-grade EEG device with a reasonable enrolment period. 
In addition, the reliability of the EEG signals recorded via a consumer-grade device is also examined through different sessions with regard to two acquisition protocols, namely eye-closed (EC) and visual stimulation protocol. The rest of the paper is organized as follows. Section 2 discusses the literature review. Section 3 presents the proposed approach. Section 4 shows the experiment results and performance evaluation, and Sect. 5 discuss es the findings of the proposed system. Finally, Sect. 6 provides a conclusive remark to this paper and some future works are suggested. From the beginning of the twentieth century, EEG analysis has been mainly employed in the medication field to study brain diseases such as stroke, brain tumor, epilepsy, Alzheimer, Parkinson, etc. [10]. In particular, it has been heavily employed in BCI in the last decade, where the main objective is to help patients with severe neuromuscular disorders. Applications of BCI functions by either observing the users' state or allowing the users express their intentions; meanwhile, the users' brain signals are recorded and sent to a computer system for further analysis. The result is then translated into a command and the system is instructed to complete the intended task [1]. Recently, the research of BCI has been extended further to cover several applications, including authentication and security [4]. Cognitive biometric is a new technology that utilizes brain activity to authenticate an individual. The brain's activity can be recorded by measuring the blood flow in the brain or by measuring the electrical activity of the brain's neurons. EEG is widely considered for usage in security areas as the signals are unique and possess distinctive characteristics, which are not present in other commonly used biometrics such as face, iris, palm prints and fingerprints. Due to its high privacy compliance nature, EEG-based biometric is robust against spoofing attacks as it is impossible for an imposter to capture the brain signal from a distance [10]. EEG signals are also sensitive to stress. Thus, it is hard to force a person to reproduce brain activity when they are panicked. In general, biometrics must fulfill four requirements: universality, permanence, uniqueness and collectability [11]. Universality refers to the requirement that each person should naturally possess the characteristic being measured. Permanence requires that the characteristics of a person should stay the same over time for the purpose of criteria matching. Uniqueness is the requirement that the characteristics of a person should be unique and distinguishable from one another. Finally, collectability requires that the characteristics of a person should be measurable with any capturing device. Previous studies had made a significant effort to prove the viability of EEG as a biometric identifier [10, 12,13,14]. Ruiz Blondet et al. [6] further emphasized that, in terms of collectability, the design of EEG acquisition protocol should be user-friendly to the users. It can be done by reducing the number of electrodes to make the design more feasible and closer to real-world applications. Several EEG acquisition protocols were designed and proposed in the literature to obtain specific brain responses of interest. The main objective was to study the neural mechanisms of information processing in environmental perception and during complex cognitive operations [15]. 
These acquisition protocols generally be divided into two categories: resting state and stimulation [16]. For the resting state protocol, the user is required to sit on a chair and rest for a few minutes in either eyes-closed (EC) or eyes-open (EO) state as instructed. Meanwhile, the brain signals of the users are recorded. To the best of our knowledge, [17] was the first research that proposed an EEG-based biometric using a resting state protocol. The authors recorded EEG signals from four subjects when they were performing EC activity that lasted for 3 continuous minutes. The spectral values of the signals were calculated using Fast Fourier Transform (FFT). The Alpha frequency band (7–12 Hz) was obtained and this value was further sub-divided into three overlapping sub-bands. The obtained classification scores ranging from 80 to 95% were correct, which proved that the EEG signals can be used as one of the biometric traits. Both sub-bands were informative and no frequency band was reported to have an extra benefit over the others. In La Rocca et al. [10], the repeatability of the EEG signal was addressed. A 'resting state' protocol with both EC and EO was designed to acquire raw EEG signals from nine healthy subjects in two different sessions, in which both sessions were 1 to 3 weeks apart. The signals from the 54 electrodes that were attached to the scalp of the subject were continuously recorded. The raw EEG signals were filtered by an anti-aliasing FIR filter before they were presented in four sub-bands from 0.5 to 30 Hz. A common average referencing (CAR) filter was then employed to minimize the artifacts. Each preprocessing signal was modelled according to an autoregressive model while using reflection coefficients to generate the feature vector, then a linear classifier was employed for classification. In the evaluation, a different set of electrodes combination was tested and the results showed a high degree of repeatability over the time interval. In Ma et al. [18], the EEG data were adopted from a public data set. A total of 10 subjects were enrolled and they were asked to perform 55 s of EC and EO tasks, respectively, using a device with 64 electrodes. The recorded EEG signals were segmented into 55 trials separately with a 1-s frame length. 50 trials were used for training and the rest were used for testing purposes. Convolutional neural networks (CNN) was applied for feature extraction and classification. The findings showed that the suggested approach yielded a high degree of accuracy with accuracy of 88% for a 10-class classification. Besides, an inter-personal difference can be discovered using a very low-frequency band of 0 to 2 Hz. The second EEG acquisition protocol is based on the stimulus of an external event on the subjects. After stimulation, the electrical response of the subjects is recorded through the nervous system. A typically employed stimulation protocol in EEG-based biometric is the Event-Related Potential (ERP). It is a time-locked deflection on the ongoing brain activity after being exposed to an external event. The event can be sensory, visual or audio stimuli [1]. In Palaniappan and Ravi [19], the study was conducted to assess the feasibility of ERP using visual stimuli. 20 subjects participated in the study. Their signals were obtained from 61 electrodes placed on the scalp when they looked at typical black images with white lines of drawn objects such as an aeroplane, a banana, a ball, etc. 
The recorded signals with an eye blink artifact with magnitude above 100 µV were removed. Besides, those signals were also de-noised through Principal Component Analysis (PCA). The spectra features consisting of power in the gamma band (30 to 50 Hz) were extracted and classified through a Simplified Fuzzy ARTMAP (SFA) neural network (NN). The results showed an average classification of 94.18%, which proved the proposed method's potential in recognizing individuals. The stability of the EEG signals was evaluated in [14] using visual stimulation protocol to record raw EEG signals from 45 subjects. Those subjects were presented with several acronyms (example: DVD, TV and TN) which were intermixed with other lexical types. The experiments consisted of three different sessions, which were carried out in 6 months. For the third session, only nine subjects returned for data acquisition. A hardware filter was applied to reduce the influence of DC shifts and bootstrapping was used to generate extra features. Different classifiers such as cross-correlation, support vector machine (SVM) and divergent autoencoder (DIVA) were adopted. The findings verified the permanence of the EEG characteristics and it was found that the brain signals of the subjects could remain stable over a relatively long period of time. Besides, Ruiz-Blondet et al. [20] had suggested that using ERP may provide more accurate results in EEG-based biometric as its elicitation process allows for some control over the user's cognitive state during EEG data recording sessions. The EEG data from 50 subjects were acquired using 30 sensors. The cognitive Event-Related Potential Biometric Recognition (CEREBRE) protocol was designed to obtain the unique response of the subjects from the brain systems. This protocol includes different categories of stimuli such as sine gratings, low-frequency words, food images, words, celebrities and oddballs. Besides, subjects were also asked to remain in resting state and undergo pass-thought sessions. The duration of the entire experiment was roughly one and a half hours. The study did not apply any artifact rejection or feature extraction method, where only simple cross-correlation was used for classification. The results showed that all stimulus types achieved greater accuracy. In a recent study, the authors in Sabeti et al. [21] investigated the subjects' features using resting (EO) and ERP acquisition protocol. Each subject was required to perform a task in EO state for 2 min, where no stimulus was imposed for the first task. However, for the second task, audio stimuli were randomly applied and the subjects were requested to discriminate the different pitch levels. The EEG recording for the second task took around 20 min. The EEG signals were filtered using a bandpass filter ranged from 0.5 to 45 Hz. Several features such as spectral coherence, wavelet coefficients and correlation were extracted and evaluated using SVM, K-Nearest Neighbors (KNN) and Random Forest classifiers. Results showed that correlation was the most discriminative feature among other methods in user authentication. The implementation of the resting protocol from previous studies has shown that the procedure is convenient, but an individual's mental state is uncontrollable when EEG data are acquired in different sessions. Thus, visual stimulation is proposed to provide more reliable biometric authentication as this approach allows the experimenter to control the individual's cognitive state during the time of acquisition. 
However, due to the small size of an ERP, a large number of trials is needed to gain the desired accuracy performance of the authentication, which leads to the users undergoing a lengthy EEG acquisition period [20]. EEG-based systems are still far from being commercialized as they still face several challenges [5]. Usability is one of the challenges which should gain more attention as it is an important principle to determine the success of the system. Users tend to use the system if it is convenient and easy to use. However, most of the current data acquisition process requires a lengthy time to set up, especially for a wired EEG recording device. Besides that, the user has to place a large number of electrodes on their scalp using conductive gel to reduce skin impedance. As an alternative, [7] suggested replacing the cumbersome wired devices with consumer-grade wireless EEG devices which could be more practicable in real life. However, these devices possess a limitation that needs to be considered, where the signal quality could be relatively inferior compared to the research-grade type of devices. Moreover, the lengthy acquisition period is another line of research that needs to be addressed as the participants could lose patience during the acquisition process, which leads to the distortion of the signal or reluctance to take part in the data enrolment process. Therefore, acquisition protocols that utilize a consumer-grade device to acquire EEG signals within a reasonably short period of time are proposed in this work. The performance of an EEG-based biometric depends on a proper design of the acquisition protocol. The portability of the EEG device and acquisition period will be considered to improve the usability and practicability of the system. The proposed system comprises 5 components: data acquisition, preprocessing, signal segmentation, feature extraction, and classification. Figure 1 illustrates the flowchart of the proposed method. Overflow of proposed method Acquisition protocol Conventionally, EEG signals are recorded using clinical-grade EEG equipment. This device is expensive and inconvenient as the setting up could take a tremendous among of time. Hence, in this work, a consumer-grade type of EEG device is used as an alternative to improve user experience. The EEG signals are collected from 8 healthy volunteers (2 female, 6 male, all ages from 18 to 33) using Emotiv EPOC+ wireless headset, as illustrated in Fig. 2. It comprises 14 integrated electrodes with two reference sensors where each sensor is located at the standard positions of the International 10–20 systems as shown in Fig. 3. EEG Emotiv EPOC+ wireless headset Framework of brainwave user recognition Before the acquisition process, a brief introduction about the purpose of the study was given to the subject. In addition, the subject was also allowed to see the changes in their EEG signal when they blinked their eyes or moved their bodies. The purpose of this demonstration is to tell the subject that any eye movements and muscle tension can impact their brain waves. Thus, they were requested to avoid big movements and remain as still as possible. The entire data acquisition process was conducted in a standard enclosed room. The recording process was divided into morning and afternoon sessions to assess the stability of the consumer-grade EEG equipment when recording EEG signals over different sessions. 
In each session, the subject was required to perform two different tasks (eyes-closed and visual stimulation), while data were recorded at a 256 Hz sampling rate. Task 1: Eyes-closed (EC)—subject was seated on a chair with both arms resting. Before the enrollment, the subject was instructed to keep the mind as calm as possible and remain in a resting state with eyes closed. The recording started 10 s after the subject closed the eyes and remained resting. EEG signals were recorded for 30 s continuously and then the recording process was stopped. Task 2: Visual stimulation—the subject was requested to be seated on the same chair without any major movements after completing Task 1. A LED screen of size 17″ was placed in front of the subject. The subject was guided to sit comfortably at a certain distance from the screen. During the recording process, a series of stimuli with 120 single words were displayed to the subject. The subject was requested to focus and interpret each stimulus silently at all times, where no big body movements were allowed. However, they were allowed to blink their eyes to reduce the tiredness during the enrolment process. The stimulation design was mainly focused on wording presentation as the subject's semantic memory might provide distinctive biometric properties. Each stimulus was a wording that consisted of four to seven letters that the subject could easily understand. A stimulus was displayed on the computer screen for 1 s followed by a 1-s black screen, as illustrated in Fig. 4. It took approximately 4 min to show all the 120 wordings to the user (including the black screen), then the recording process was stopped. Along the process, an Inter-Stimulus Interval (ISI) could be segmented into parts (coined as a trial in this work) that consisted of 0.5 s of black screen, followed by 1 s of stimulus displayed and another 0.5 s of black screen, as illustrated in Fig. 4. Visual stimulation with using wording presentation A total of eight subjects contributed to the EEG data acquisition process and a total of four well-collected data sets were obtained from the two sessions as follows: Session 1: Eyes-closed data set, S1ec Session 1: Visual stimulation data set, S1s Preprocessing and segmentation EEGLAB is an interactive MATLAB toolbox and was implemented in this study for preprocessing and segmentation purposes. Before performing feature extraction, unwanted artifacts and unnecessary information will be removed from the collected EEG signals, therefore improving the signal-to-noise ratio. Filtering is a process to filter continuous EEG data before epoching or artifact removal. Finite Impulse Response (FIR), a linear filter, was adopted to remove the direct current shifts of the recorded EEG signals where the range was set from 1 to 55 Hz. An Automatic Artifact Removal (AAR) was then applied to data set S1s and S2s to remove the ocular artifacts in the recorded EEG signals. The AAR is one of the toolboxes available in the EEGLAB plug-in [22] and is used to correct the ocular effects within EEG signals. No artifact rejection was applied to S1ec and S2ec data sets as the EEG signals collected for these data sets were for resting state without eyes and muscle movements. After the removal of the artifacts, the EEG signals were segmented into small parts, which were named trials. For eye-closed data sets (S1ec and S2ec), the first 5 s of the signal, which contained inconsistency, were discarded. 
The remaining EEG signals were then segmented into 25 trials, with each trial containing a 1-s frame length (256 sample points). The frame length had been experimentally selected based on the existing study [10]. On the other hand, for visual stimulation data sets (S1s and S2s), the signals were epoched and ERPs were formed for each stimulus starting from − 1000 ms to stimulus onset and lasting for 1000 ms after probe onset (refer to Fig. 4), resulting in 512 sample points for each trial. In other words, each trial contained a 1-s stimulus and it was embedded with 0.5 s of black screen at both the beginning and the end of the trial. After this, epoch rejection was applied to remove some trials that appeared to contain significant artifacts, resulting in a range of 100–120 trials for each subject after the segmentation process. Feature extraction Cross-correlation was considered to process the EEG signals. Cross-correlation is a measure of the degree to which two series are correlated. It measures how closely two different observables are related to each other at the same or different times by considering time lag [23]. If \(x\left[ N \right]\) and \(y\left[ N \right]\) are two discrete signals where N is the length of the signal, then the correlation of \(x\left[ n \right]\) with respect to \(y\left[ n \right]\) is given as: $$r_{xy} \left[ l \right] = \mathop \sum \limits_{t = - \infty }^{\infty } x\left[ t \right]y\left[ {t - l} \right],$$ where \(l\) is the lag or delay which indicates the time-shift and t indicates the period of the signal. If both signals are discrete functions of period N, then the − ∞ to ∞ can be replaced by an internal of length N from t0 = 0 to t0 + N. The correlation values between the 14 channels of each trial were computed in a pairwise manner. The maximum of the cross-correlation over all trials for each pair was extracted from the correlation values, which was denoted as \(\max .\) The variation of the range value corresponding to the features can deteriorate the performance of the overall system. Thus, all features were normalized to the range between 0 and 1, so that each feature contributed proportionately to the final distance. Assuming that a feature is denoted as \(x\), the equation for the normalization can be defined as: $$x_{{{\text{norm}}}} = \frac{{x - x_{\min } }}{{x_{\max } - x_{\min } }},$$ where \(x_{{{\text{norm}}}}\) is the normalized features. Therefore, a feature vector \(v\) is constructed by concatenating all normalized features as: $$v = \left( {\max \left( {x_{{{\text{norm}}}} } \right),\mu \left( {x_{{{\text{norm}}}} } \right),\sigma^{2} \left( {x_{{{\text{norm}}}} } \right)} \right).$$ SVM classification A good classification method is essential to accept or reject a claimed person from accessing the system based on an input. An efficient and effective model is necessary for predicting the classes from the data. In general, the learning process is done using the training data chosen from the sample data and their class label. Researchers have widely used SVM to classify EEG signals. SVM is a classification method that involves separating test data with different class labels by learning the structure from the training data and constructing the hyperplanes in a multidimensional space based on that data [24]. SVM adopts a set of mathematical functions that are known as kernels. The function of a kernel is to receive the data as an input and transform it into the desired form. 
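Before turning to the classifier below, the following minimal sketch (in Python/NumPy) illustrates one possible reading of the feature construction in Eqs. (1)–(3): pairwise cross-correlation between the 14 channels of each trial, min–max normalization across trials, and concatenation of the maximum, mean and variance. The array shapes, function names and the way per-trial values are aggregated are illustrative assumptions, not the authors' implementation.

```python
import itertools
import numpy as np

def trial_features(trial):
    """Pairwise max cross-correlation for one trial.

    trial: array of shape (n_channels, n_samples), e.g. (14, 256) for a
    1-s eyes-closed trial sampled at 256 Hz (assumed layout).
    Returns one value per channel pair (14 channels -> 91 pairs).
    """
    n_channels = trial.shape[0]
    feats = []
    for i, j in itertools.combinations(range(n_channels), 2):
        # Full cross-correlation over all lags, as in Eq. (1); keep its maximum.
        r = np.correlate(trial[i], trial[j], mode="full")
        feats.append(r.max())
    return np.asarray(feats)

def min_max_normalize(x):
    """Scale each feature (column) to [0, 1], as in Eq. (2)."""
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0) + 1e-12)

def feature_vector(trials):
    """Build the vector v of Eq. (3) from all trials of one recording.

    trials: array of shape (n_trials, n_channels, n_samples).
    The per-pair correlations are normalized across trials, then their
    maximum, mean and variance are concatenated (one interpretation of v).
    """
    per_trial = np.vstack([trial_features(t) for t in trials])
    norm = min_max_normalize(per_trial)
    return np.concatenate([norm.max(axis=0), norm.mean(axis=0), norm.var(axis=0)])
```

In this reading, each recording yields a single fixed-length vector; a per-trial variant (one vector per 1-s trial) is equally consistent with the description above and is what the classification sketch later assumes.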
Polynomial SVM is one of the kernels commonly used for classifying non-linearly separable data. It offers good generalization ability even when the data are not linearly separable. Since EEG signals are non-stationary and the polynomial kernel has shown good classification performance in previous EEG studies [25, 26], polynomial SVM was used in this study as the classifier for the EEG patterns. Because this work aims to recognize an individual among several enrolled users, the task is a multi-class SVM prediction. Multi-class prediction is more complex than binary prediction, because the classification algorithm has to consider more separation boundaries or relations [27]. The present study considered two decomposition strategies: one-vs-one (OVO) and one-vs-all (OVA). OVO is a pairwise strategy that splits a multi-class classification data set into one binary classification problem for every pair of classes. The number of generated models therefore depends on the number of classes and is given by n(n − 1)/2, where n is the number of classes; for example, if n equals 5, a total of 10 models is generated. OVA, in contrast, splits a multi-class classification data set into one binary classification problem per class, so it produces as many learned models as there are classes; if the number of classes is 5, the number of generated models is also 5 [28]. Experiment results In the experimental analysis, the k-fold cross-validation technique was adopted to generate fair, averaged performance results, with k set to 5 in this study. In this cross-validation, the data were divided into 5 distinct subsets and the procedure was repeated for 5 iterations. In each iteration, one subset was selected for testing, while the remaining (k − 1) subsets were used for training. The assignment of trials to subsets was randomized, and the subsets selected for training and testing in each iteration were mutually exclusive. The average accuracy was determined for each fold. The average accuracy and its standard deviation, which describes the amount of variability or dispersion around the average, are reported in this section. The experiments were conducted on both the morning and afternoon sessions' data sets. In addition, to assess the stability of the signals across the different sessions, the trials from both sessions were also merged to produce another data set, referred to as the combined sessions, during the evaluation process. The performance metrics including accuracy, precision, sensitivity, specificity and F1-score are reported. These metrics were computed from four parameters: true positive (TP), false positive (FP), true negative (TN) and false negative (FN), and are defined as follows: $${\text{Accuracy}} = \frac{{{\text{TP}} + {\text{TN}}}}{{{\text{TP}} + {\text{TN}} + {\text{FP}} + {\text{FN}}}},$$ $${\text{Precision}} = \frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FP}}}},$$ $${\text{Sensitivity}} = \frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FN}}}},$$ $${\text{Specificity}} = \frac{{{\text{TN}}}}{{{\text{TN}} + {\text{FP}}}},$$ $$F1\;{\text{score}} = \frac{{2\times {\text{precision}}\times {\text{sensitivity}}}}{{{\text{precision}} + {\text{sensitivity}}}}.$$ The averaged classification results of the 4 data sets with different sessions are summarized in Tables 1, 2 and 3.
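As a companion to the evaluation protocol above, the sketch below shows how a polynomial-kernel SVM with OVO or OVA decomposition could be assessed under 5-fold cross-validation using scikit-learn. The feature matrix X (one row per trial) and subject labels y are assumed to be available, for instance from a per-trial variant of the feature construction sketched earlier; the polynomial degree, C and the random seed are illustrative assumptions rather than values reported in this study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

def evaluate(X, y, strategy="ovo", n_splits=5):
    """5-fold CV of a polynomial-kernel SVM with OVO or OVA decomposition.

    Returns mean accuracy and mean macro-averaged specificity over folds.
    """
    base = SVC(kernel="poly", degree=3, C=1.0)  # illustrative settings
    clf = OneVsOneClassifier(base) if strategy == "ovo" else OneVsRestClassifier(base)

    accs, specs = [], []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        y_pred = clf.predict(X[test_idx])

        cm = confusion_matrix(y[test_idx], y_pred)
        accs.append(np.trace(cm) / cm.sum())

        # Per-class specificity = TN / (TN + FP), averaged over classes.
        per_class = []
        for k in range(cm.shape[0]):
            tp = cm[k, k]
            fp = cm[:, k].sum() - tp
            fn = cm[k, :].sum() - tp
            tn = cm.sum() - tp - fp - fn
            per_class.append(tn / (tn + fp))
        specs.append(np.mean(per_class))

    return np.mean(accs), np.mean(specs)

# Example usage with placeholder data (8 subjects, 100 trials each):
# X = np.random.rand(800, 273); y = np.repeat(np.arange(8), 100)
# print(evaluate(X, y, "ovo"), evaluate(X, y, "ova"))
```

Wrapping SVC in OneVsOneClassifier or OneVsRestClassifier makes the decomposition strategy explicit; SVC on its own already handles multi-class problems with an internal one-vs-one scheme, so the wrapper mainly matters for the OVA case.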
As observed from Tables 1, 2 and 3, the visual stimulation task outperformed the EC task in all three experiments: the morning, afternoon and combined sessions. It achieved a very promising accuracy, especially for the morning and afternoon sessions (S1S,OVO = 96.91%, S1S,OVA = 99.06%, S2S,OVO = 97.71%, S2S,OVA = 99.05%), compared to the EC task (S1EC,OVO = 83.70%, S1EC,OVA = 82.73%, S2EC,OVO = 86.69%, S2EC,OVA = 96.42%). The accuracy of visual stimulation for the combined sessions also outperformed the EC task, with accuracies of 87.64% (S1 + S2S, OVO) and 96.56% (S1 + S2S, OVA), while the EC task reached 86.61% (S1 + S2EC, OVO) and 96.41% (S1 + S2EC, OVA) for OVO and OVA, respectively.
Table 1 Experimental results for Task 1 and Task 2 in the morning session, S1
Table 2 Experimental results for Task 1 and Task 2 in the afternoon session, S2
Table 3 Experimental results for Task 1 and Task 2 in combined sessions, S1 + S2
Based on this comparison, the visual stimulation task performed better than the EC task. Statistical tests were carried out to measure the significance of the difference between the tasks. First, the Shapiro–Wilk test was used to evaluate the results of the acquisition methods and examine whether they followed a normal distribution. If the data are normally distributed, the paired t test is applied to assess the consistency of classification performance; otherwise, the Wilcoxon Rank Sum test is considered. These calculations were performed using the SPSS software. The Shapiro–Wilk test showed that only the average classification accuracy for OVO–SVM followed a normal distribution in the morning, afternoon and combined sessions. The probabilities are summarized in Table 4. As these data were normally distributed, a paired t test was conducted to compare the differences in the OVO classification measurements between the EC task and the visual stimulation task. The paired t test showed that visual stimulation performed better than EC in the morning and afternoon sessions, where both classification measurements were significant (p < 0.05).
Table 4 Normal distribution results
On the other hand, based on Table 4, the distribution of the OVA classification accuracy did not resemble a normal distribution. Thus, the Wilcoxon Rank Sum test, the non-parametric alternative to the paired t test, was applied. The Wilcoxon Rank Sum test results indicate that visual stimulation performed better than EC in both the morning and afternoon sessions, with a significant difference at the 0.05 level. Visual stimulation also outperformed EC when the morning session was combined with the afternoon session; however, that difference was not significant. The results are reported in Table 5.
Table 5 Paired t test and Wilcoxon Rank Sum test for classification measurements (p values)
The results in Table 5 show p values ranging from 0.000 to 0.017 in the morning and afternoon sessions. Therefore, there is sufficient evidence to conclude that, on average, better classification accuracy can be achieved if the EEG data are acquired through the visual stimulation task in separate time sessions. The visual stimulation task appears to be effective in terms of its capability to recognize the claimed users. However, it was also observed that the combined sessions did not inherit these characteristics from the two individual sessions, as the p values were not significant (p > 0.05).
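The normality-test-then-comparison procedure described above can be sketched with scipy as follows. The per-fold accuracies are invented placeholder numbers, not values from the tables, and scipy's `wilcoxon` is the signed-rank test, used here only as the paired non-parametric alternative mentioned in the text.

```python
import numpy as np
from scipy import stats

# Illustrative per-fold accuracies for the two acquisition protocols (synthetic numbers)
acc_ec  = np.array([0.83, 0.85, 0.82, 0.86, 0.84])   # eyes-closed task
acc_vis = np.array([0.96, 0.97, 0.95, 0.98, 0.97])   # visual stimulation task

# 1) Shapiro-Wilk normality test on the paired differences
diff = acc_vis - acc_ec
_, p_normal = stats.shapiro(diff)

if p_normal > 0.05:
    # 2a) normally distributed -> paired t test
    _, p_value = stats.ttest_rel(acc_vis, acc_ec)
    test = "paired t test"
else:
    # 2b) otherwise -> non-parametric alternative
    _, p_value = stats.wilcoxon(acc_vis, acc_ec)
    test = "Wilcoxon test"

print(f"Shapiro-Wilk p = {p_normal:.3f}; {test}: p = {p_value:.4f}")
```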
The results above imply that the intra-class data variability does not significantly impact the stability of the signals, leading to similar results for visual stimulation and EC. It was also observed that OVA outperformed OVO in most performance metrics for both the EC and visual stimulation tasks in all experiments. The best accuracy for OVA, considering both the visual stimulation and EC tasks, was 99.06%. On average, OVA's accuracy was higher than OVO's by 12.5% (EC) and 4.59% (visual stimulation), respectively. The comparison between the two decomposition strategies in SVM is reported in Table 6. However, a degradation in specificity was observed for OVA in all experiment tasks. This may be due to the OVA strategy, which builds one binary classifier per class, where the samples from a particular class are assigned as positive while the samples from all remaining classes are assigned as negative in each iteration. Assuming n is the number of classes, OVA repeats this n times, and each time one class is defined as the positive class while the remaining (n − 1) classes are treated as negative. In this way, the classifiers are imbalanced, as the number of negative samples is significantly larger than the number of positive samples. This increases the chance of the system mistakenly ruling out negative samples, thus decreasing the true-negative count.
Table 6 Classification accuracy of OVO and OVA
The overall results obtained in this study reveal that EEG signals are an effective biometric identifier for user authentication. As shown in Figs. 5, 6 and 7, the visual stimulation task had better accuracy than the EC task. In addition, the results for the EC task had a higher standard deviation than those for the visual stimulation task. It is believed that, without a stimulus, the subjects' mental state in the EC protocol is uncontrolled, leading to less stable signals. The findings also reveal that the visual stimulation (ERP) protocol is better than the EC protocol, as ERP allows the experimenter to tightly control the user's cognitive state. Although performance degradation was observed when combining the morning and afternoon sessions, the specificity was still sustained within 81.68–98.23% and 80.53–98.08% for the visual stimulation and EC tasks, respectively, indicating that the proposed system still correctly identifies a large proportion of the true negatives in the data set. A comparison of the proposed method with existing works is listed in Table 7.
Comparison results of EEG acquisition protocols for the morning session, S1
Comparison results of EEG acquisition protocols for the afternoon session, S2
Comparison of EEG acquisition protocols for the morning and afternoon sessions, S1 + S2
Table 7 Performance comparison of the existing works
As seen from Table 7, the proposed method obtained better results than most existing works. Although the accuracy reported in [20] was perfect, that approach is the least practical, as the acquisition process took one and a half hours to retrieve EEG responses from individuals based on six types of stimulus. In terms of EEG recording devices, most reported studies preferred research-grade devices because of their reliability. The proposed method uses a consumer-grade device, which is shown to be capable of recognizing individuals even across separate sessions.
In addition, it is cost-effective for practical applications. Furthermore, the acquisition duration is one of the key reasons that makes the proposed protocol more applicable in a real-world environment. Past works required a minimum of 55 s for the EO or EC task and 20 min for the ERP task. The proposed study reduced the duration to 30 s for the EC task and 4 min for the ERP task, while both cases still achieve very promising results. Moreover, tests on different sessions were conducted to assess the stability of the EEG signals, and the results demonstrate the suitability of the proposed acquisition protocol for the authentication field. This paper discussed the acquisition protocols of an EEG-based recognition system and compared the performance of the EC and visual stimulation protocols. We proposed using a consumer-grade EEG device for individual authentication in our study. A reasonable acquisition period was proposed to ensure the feasibility of EEG-based biometrics in the future. In this study, cross-correlation was used to measure the correlation between two different EEG channel signals. We obtained good results when the classification was carried out using cross-correlation together with SVM. The results show that the visual stimulation protocol achieved better performance in terms of classification and consistency than the EC protocol. However, there is potential to apply incremental learning to model intra-class variability over time. In addition, OVA performed better than OVO. It can be noted that the distribution of OVA's classification accuracy did not resemble a normal distribution, due to the small sample size. Therefore, a non-parametric test was needed to compare the differences in the classification measurements between the proposed methods. The results indicate that visual stimulation performed better than EC in both the morning and afternoon sessions, with a significant difference at the 0.05 level. Larger sample classes are recommended for a further comparison between OVO and OVA. Future work includes investigating the extraction and selection of more reliable features from EEG signals with a larger sample size and applying other classification methods to improve the intra- and inter-individual EEG stability.
AAR: Automatic Artifact Removal
BCI: Brain-Computer Interface
DIVA: Divergent Auto Encoder
EC: Eyes-Closed
EEG: Electroencephalogram
EO: Eyes-Open
ERP: Event-Related Potential
FFT: Fast Fourier Transform
FN: False negative
FP: False positive
KNN: K-Nearest Neighbors
NN: Neural Network
OVA: One-vs-all
OVO: One-vs-one
PCA: Principal Component Analysis
SVM: Support vector machine
TN: True negative
TP: True positive
Abdulkader SN, Atia A, Mostafa MSM (2015) Brain computer interfacing: applications and challenges. Egypt Inform J 16(2):213–230. https://doi.org/10.1016/j.eij.2015.06.002
Khoh WH, Pang YH, Teoh ABJ (2019) In-air hand gesture signature recognition system based on 3-dimensional imagery. Multimed Tools Appl 78(6):6913–6937. https://doi.org/10.1007/s11042-018-6458-7
Traore I, Alshahrani M, Obaidat MS (2018) State of the art and perspectives on traditional and emerging biometrics: a survey. Secur Priv. https://doi.org/10.1002/spy2.44
Yap HY, Choo YH, Khoh WH (2017) Overview of acquisition protocol in EEG based recognition system. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol 10654 LNAI, pp 129–138.
https://doi.org/10.1007/978-3-319-70772-3_12 Chan HL, Kuo PC, Cheng CY, Chen YS (2018) Challenges and future perspectives on electroencephalogram-based biometrics in person recognition. Front Neuroinform. https://doi.org/10.3389/fninf.2018.00066 Ruiz Blondet MV, Laszlo S, Jin Z (2015) Assessment of permanence of non-volitional EEG brainwaves as a biometric. In: 2015 IEEE international conference on identity, security and behavior analysis, ISBA 2015. https://doi.org/10.1109/ISBA.2015.7126359 Wu Q, Zeng Y, Zhang C, Tong L, Yan B (2018) An EEG-based person authentication system with open-set capability combining eye blinking signals. Sensors 18(2):335. https://doi.org/10.3390/s18020335 Theofanos M, Stanton B, Wolfson C (2008) Usability and biometrics: ensuring successful biometric systems. In: International workshop on usability and biometrics Zeynali M, Seyedarabi H (2019) EEG-based single-channel authentication systems with optimum electrode placement for different mental activities. Biomed J 42(4):261–267. https://doi.org/10.1016/j.bj.2019.03.005 La Rocca D, Campisi P, Scarano G (2013) On the repeatability of EEG features in a biometric recognition framework using a resting state protocol. In: Biosignals. (January), pp 419–428 Jain AK, Ross A, Prabhakar S (2004) An introduction to biometric recognition. IEEE Trans Circuits Syst Video Technol 14(1):4–20. https://doi.org/10.1109/TCSVT.2003.818349 Campisi P, Scarano G, Babiloni F, DeVico Fallani F, Colonnese S, Maiorana E, Forastiere L (2011) Brain waves based user recognition using the "eyes closed resting conditions" protocol. In: 2011 IEEE international workshop on information forensics and security, WIFS 2011. https://doi.org/10.1109/WIFS.2011.6123138 Brigham K, Kumar BVKV (2010) Subject identification from Electroencephalogram (EEG) signals during imagined speech. In: IEEE 4th international conference on biometrics: theory, applications and systems, BTAS 2010. https://doi.org/10.1109/BTAS.2010.5634515 Armstrong BC, Ruiz-Blondet MV, Khalifian N, Kurtz KJ, Jin Z, Laszlo S (2015) Brainprint: assessing the uniqueness, collectability, and permanence of a novel method for ERP biometrics. Neurocomputing 166:59–67. https://doi.org/10.1016/j.neucom.2015.04.025 Campisi P, Rocca DL (2014) Brain waves for automatic biometric-based user recognition. IEEE Trans Inf Forensics Secur 9(5):782–800. https://doi.org/10.1109/TIFS.2014.2308640 Huang H, Hu L, Xiao F, Du A, Ye N, He F (2019) An EEG-based identity authentication system with audiovisual paradigm in IoT. Sensors. https://doi.org/10.3390/s19071664 Poulos M, Rangoussi M, Alexandris N (1999) Neural network based person identification using EEG features. In: 1999 IEEE international conference on acoustics, speech, and signal processing. Proceedings. ICASSP99 (Cat. No. 99CH36258), vol 2, pp 1117–1120. https://doi.org/10.1109/ICASSP.1999.759940 Ma L, Minett JW, Blu T, Wang WSY (2015) Resting state EEG-based biometrics for individual identification using convolutional neural networks. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, EMBS, vol 2015-Novem, pp 2848–2851. https://doi.org/10.1109/EMBC.2015.7318985 Palaniappan R, Ravi KVR (2003) A new method to identify individuals using signals from the brain. In ICICS-PCM 2003—proceedings of the 2003 joint conference of the 4th international conference on information, communications and signal processing and 4th Pacific-Rim conference on multimedia, vol 3, pp 1442–1445. 
https://doi.org/10.1109/ICICS.2003.1292704 Ruiz-Blondet MV, Jin Z, Laszlo S (2016) CEREBRE: a novel method for very high accuracy event-related potential biometric identification. IEEE Trans Inf Forensics Secur 11(7):1618–1629. https://doi.org/10.1109/TIFS.2016.2543524 Sabeti M, Boostani R, Moradi E (2020) Event related potential (ERP) as a reliable biometric indicator: a comparative approach. Array 6:1–7. https://doi.org/10.1016/j.array.2020.100026 Gomez-Herrero G (2007) Automatic artifact removal (AAR) toolbox for MATLAB Mayor D, Davey N, Mayor D (2018) The correlation between EEG signals as measured in different positions on scalp varying with distance. Procedia Comput Sci 123:92–97. https://doi.org/10.1016/j.procs.2018.01.015 Burges CJC (1998) A tutorial on support vector machines for pattern recognition. Data Min Knowl Discov 2(2):121–167. https://doi.org/10.1023/A:1009715923555 Zhang Z, Parhi KK (2015) Seizure prediction using polynomial SVM classification. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, EMBS, pp 5748–5751. https://doi.org/10.1109/EMBC.2015.7319698 Ghumman MK, Singh S, Singh N, Jindal B (2021) Optimization of parameters for improving the performance of EEG-based BCI system. J Reliab Intell Environ 10(1):523–531. https://doi.org/10.1007/s40860-020-00117-y Joseph SJ, Robbins KR, Zhang W, Rekaya R (2010) Comparison of two output-coding strategies for multi-class tumor classification using gene expression data and latent variable model as binary classifier. Cancer Inform 9:39–48. https://doi.org/10.4137/cin.s3827 Abdul Raziff AR, Sulaiman MN, Mustapha N, Perumal T, Mohd Pozi MS (2017) Multiclass classification method in handheld based smartphone gait identification. J Telecommun Electron Comput Eng 9(2–12):59–65 The authors gratefully acknowledge the use of facilities and lab at Multimedia University for conducting the experiment purpose. Faculty of Information, Science & Technology, Multimedia University (MMU), Melaka, Malaysia Hui Yen Yap & Wee How Khoh Faculty of Information & Communication Technology, Universiti Teknikal Malaysia Melaka (UTeM), Melaka, Malaysia Yun-Huoy Choo & Zeratul Izzah Mohd Yusoh Hui Yen Yap Yun-Huoy Choo Zeratul Izzah Mohd Yusoh Wee How Khoh HYY: writing/editing original draft, methodology, analysis and investigation. YHC: validation, review and supervision. ZIMY: validation, review and supervision. WHK: software: programming and analysis. All authors read and approved the final manuscript. Correspondence to Hui Yen Yap. Written informed consent was obtained from each participant before participation in this study. Written informed consent for publication was obtained from each participant before participation in this study. Yap, H.Y., Choo, YH., Mohd Yusoh, Z.I. et al. Person authentication based on eye-closed and visual stimulation using EEG signals. Brain Inf. 8, 21 (2021). https://doi.org/10.1186/s40708-021-00142-4 Brainwaves Acquisition protocols Electroencephalography
Interferometer Techniques for Gravitational-Wave Detection
Andreas Freise, Kenneth Strain
First Online: 25 February 2010
Several km-scale gravitational-wave detectors have been constructed worldwide. These instruments combine a number of advanced technologies to push the limits of precision length measurement. The core devices are laser interferometers of a new kind; developed from the classical Michelson topology, these interferometers integrate additional optical elements, which significantly change the properties of the optical system. Much of the design and analysis of these laser interferometers can be performed using well-known classical optical techniques; however, the complex optical layouts provide a new challenge. In this review we give a textbook-style introduction to the optical science required for the understanding of modern gravitational-wave detectors, as well as other high-precision laser interferometers. In addition, we provide a number of examples for a freely available interferometer simulation software and encourage the reader to use these examples to gain hands-on experience with the discussed optical methods.
Keywords: Beam Splitter, Gaussian Beam, Light Field, Beam Parameter, Michelson Interferometer
A revised version of this article is available at 10.1007/s41114-016-0002-8.
1.1 The scope and style of the review
The historical development of laser interferometers for application as gravitational-wave detectors [47] has involved the combination of relatively simple optical subsystems into more and more complex assemblies. The individual elements that compose the interferometers, including mirrors, beam splitters, lasers, modulators, various polarising optics, photo detectors and so forth, are individually well described by relatively simple, mostly-classical physics. Complexity arises from the combination of multiple mirrors, beam splitters etc. into optical cavity systems that have narrow resonant features, and from the consequent requirement to stabilise the relative separations of the various components to sub-wavelength accuracy, and indeed in many cases to very small fractions of a wavelength. Thus, classical physics describes the interferometer techniques and the operation of current gravitational-wave detectors. However, we note that at signal frequencies above a few hundred hertz, the sensitivity of current detectors is limited by the photon counting noise at the interferometer readout, also called shot noise. The next-generation systems such as Advanced LIGO [23, 5], Advanced Virgo [4] and LCGT [36] are expected to operate in a regime where the quantum physics of both light and mirror motion couple to each other. Then, a rigorous quantum-mechanical description is certainly required. Sensitivity improvements beyond these 'Advanced' detectors necessitate the development of non-classical techniques. The present review, in its first version, does not consider quantum effects but reserves them for future updates. The components employed tend to behave in a linear fashion with respect to the optical field, i.e., nonlinear optical effects need hardly be considered. Indeed, almost all aspects of the design of laser interferometers are dealt with in the linear regime. Therefore the underlying mathematics is relatively simple and many standard techniques are available, including those that naturally allow numerical solution by computer models.
Such computer models are in fact necessary as the exact solutions can become quite complicated even for systems of a few components. In practice, workers in the field rarely calculate the behaviour of the optical systems from first principles, but instead rely on various well-established numerical modelling techniques. An example of software that enables modelling of either time-dependent or frequency-domain behaviour of interferometers and their component systems is Finesse [22, 19]. This was developed by one of us (AF), has been validated in a wide range of situations, and was used to prepare the examples included in the present review. The target readership we have in mind is the student or researcher who desires to get to grips with practical issues in the design of interferometers or component parts thereof. For that reason, this review consists of sections covering the basic physics and approaches to simulation, intermixed with some practical examples. To make this as useful as possible, the examples are intended to be realistic with sensible parameters reflecting typical application in gravitational wave detectors. The examples, prepared using Finesse, are designed to illustrate the methods typically applied in designing gravitational wave detectors. We encourage the reader to obtain Finesse and to follow the examples (see Appendix A). 1.2 Overview of the goals of interferometer design As set out in very many works, gravitational-wave detectors strive to pick out signals carried by passing gravitational waves from a background of self-generated noise. The principles of operation are set out at various points in the review, but in essence, the goal has been to prepare many photons, stored for as long as practical in the 'arms' of a laser interferometer (traditionally the two arms are at right angles), so that tiny phase shifts induced by the gravitational waves form as large as possible a signal, when the light leaving the appropriate 'port' of the interferometer is detected and the resulting signal analysed. The evolution of gravitational-wave detectors can be seen by following their development from prototypes and early observing systems towards the Advanced detectors, which are currently in the final stages of planning or early stages of construction. Starting from the simplest Michelson interferometer [18], then by the application of techniques to increase the number of photons stored in the arms: delay lines [31], Fabry-Pérot arm cavities [16, 17] and power recycling [15]. The final step in the development of classical interferometry was the inclusion of signal recycling [41, 30], which, among other effects, allows the signal from a gravitational-wave signal of approximately-known spectrum to be enhanced above the noise. Reading out a signal from even the most basic interferometer requires minimising the coupling of local environmental effects to the detected output. Thus, the relative positions of all the components must be stabilised. This is commonly achieved by suspending the mirrors etc. as pendulums, often multi-stage pendulums in series, and then applying closed-loop control to maintain the desired operating condition. The careful engineering required to provide low-noise suspensions with the correct vibration isolation, and also low-noise actuation, is described in many works. 
As the interferometer optics become more complicated, the resonance conditions, i.e., the allowed combinations of inter-component path lengths required to allow the photon number in the interferometer arms to reach maximum, become more narrowly defined. It is likewise necessary to maintain angular alignment of all components, such that beams required to interfere are correctly co-aligned. Typically the beams need to be aligned within a small fraction (and sometimes a very small fraction) of the far-field diffraction angle, and the requirement can be in the low nanoradian range for km-scale detectors [44, 21]. Therefore, for each optical component there is typically one longitudinal (i.e., along the direction of light propagation), plus two angular degrees of freedom (pitch and yaw about the longitudinal axis). A complex interferometer can consist of up to around seven highly sensitive components and so there can be of order 20 degrees of freedom to be measured and controlled [3, 57]. Although the light fields are linear, the coupling between the position of a mirror and the complex amplitude of the detected light field typically shows strongly nonlinear dependence on mirror positions due to the sharp resonance features exhibited by cavity systems. However, the fields do vary linearly or at least smoothly close to the desired operating point. So, while well-understood linear control theory suffices to design the control system needed to maintain the optical configuration at its operating point, bringing the system to that operating condition is often a separate and more challenging nonlinear problem. In the current version of this work we consider only the linear aspects of sensing and control. Control systems require actuators, and those employed are typically electrical-force transducers that act on the suspended optical components, either directly or — to provide enhanced noise rejection — at upper stages of multi-stage suspensions. The transducers are normally coil-magnet actuators, with the magnets on the moving part, or, less frequently, electrostatic actuators of varying design. The actuators are frequently regarded as part of the mirror suspension subsystem and are not discussed in the current work. 1.3 Overview of the physics of the primary interferometer components To give order to our review we consider the main physics describing the operation of the basic optical components (mirrors, beam splitters, modulators, etc.) required to construct interferometers. Although all of the relevant physics is generally well known and not new, we take it as a starting point that permits the introduction of notation and conventions. It is also true that the interferometry employed for gravitational-wave detection has a different emphasis than other interferometer applications. As a consequence, descriptions or examples of a number of crucial optical properties for gravitational wave detectors cannot be found in the literature. The purpose of this first version of the review is especially to provide a coherent theoretical framework for describing such effects. With the basics established, it can be seen that the interferometer configurations that have been employed in gravitational-wave detection may be built up and simulated in a relatively straightforward manner. As mentioned above, we do not address the newer physics associated with operation at or beyond the standard quantum limit. The interested reader can begin to explore this topic from the following references. 
The standard quantum limit [10, 32]
Squeezing [38, 53]
Quantum nondemolition interferometry [9, 24]
These matters are to be included in a future revision of this review.
1.4 Plane-wave analysis
The main optical systems of interferometric gravitational-wave detectors are designed such that all system parameters are well known and stable over time. The stability is achieved through a mixture of passive isolation systems and active feedback control. In particular, the light sources are some of the most stable, low-noise continuous-wave laser systems, so that the electromagnetic fields can be assumed to be essentially monochromatic. Additional frequency components can be modelled as small modulations (in amplitude or phase). The laser beams are well collimated, propagate along a well-defined optical axis and remain always very much smaller than the optical elements they interact with. Therefore, these beams can be described as paraxial and the well-known paraxial approximations can be applied. It is useful to first derive a mathematical model based on monochromatic, scalar, plane waves. As it turns out, a more detailed model including the polarisation and the shape of the laser beam as well as multiple frequency components can be derived as an extension to the plane-wave model. A plane electromagnetic wave is typically described by its electric field component: $$\vec E(x,y,z,t) = {E_0}\,{\vec e_p}\cos \left(\omega t - \vec k\vec r + \varphi \right),$$ with E0 as the (constant) field amplitude in V/m, \({\vec e_p}\) the unit vector in the direction of polarisation, such as, for example, \({\vec e_y}\) for \({\mathscr S}\)-polarised light, ω the angular oscillation frequency of the wave, and \(\vec k = {\vec e_k}\omega/c\) the wave vector pointing in the direction of propagation. The absolute phase φ only becomes meaningful when the field is superposed with other light fields. In this document we will consider waves propagating along the optical axis given by the z-axis, so that \(\vec k\vec r = kz\). For the moment we will ignore the polarisation and use scalar waves, which can be written as $$E(z,t) = {E_0}\cos (\omega t - kz + \varphi){.}$$ Further, in this document we use complex notation, i.e., $$E = \Re\{E^{\prime}\} \quad {\rm{with}}\quad E^{\prime} = E_0^{\prime}\exp ({\rm{i}}(\omega t - kz)).$$ This has the advantage that the scalar amplitude and the phase φ can be given by one, now complex, amplitude E′0 = E0 exp(iφ). We will use this notation with complex numbers throughout. For clarity we will simply use the unprimed letters for the auxiliary field. In particular, we will use the letter E and also a and b to denote complex electric-field amplitudes. But remember that, for example, in E = E0 exp(−i kz) neither E nor E0 are physical quantities. Only the real part of E exists and deserves the name field amplitude.
1.5 Frequency domain analysis
In most cases we are either interested in the fields at one particular location, for example, on the surface of an optical element, or we want to know the fields at all places in the interferometer but at one particular point in time. The latter is usually true for the steady state approach: assuming that the interferometer is in a steady state, all solutions must be independent of time so that we can perform all computations at t = 0 without loss of generality. In that case, the scalar plane wave can be written as $$E = {E_0}\exp (-{\rm{i}}\;kz).$$ The frequency domain is of special interest as numerical models of gravitational-wave detectors tend to be much faster to compute in the frequency domain than in the time domain.
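A minimal numerical illustration of the complex notation and the steady-state (t = 0) description above; the wavelength, amplitude and phase values are arbitrary example choices, not parameters from the text.

```python
import numpy as np

lam = 1.064e-6                  # wavelength [m], an assumed example value (Nd:YAG)
c = 299792458.0                 # speed of light [m/s]
k = 2 * np.pi / lam             # wave number
w0 = k * c                      # angular frequency
E0, phi = 1.0, 0.3              # real amplitude and absolute phase (arbitrary)

E0_complex = E0 * np.exp(1j * phi)     # complex amplitude E0' = E0 exp(i phi)

def field(z, t):
    """Complex field E' = E0' exp(i (w0 t - k z)); the physical field is its real part."""
    return E0_complex * np.exp(1j * (w0 * t - k * z))

z, t = 0.25e-6, 1e-15
print("physical field Re{E'(z,t)} =", np.real(field(z, t)))

# Steady-state (frequency-domain) description: evaluate at t = 0,
# so the plane wave reduces to E = E0' exp(-i k z)
print("steady-state amplitude at z:", field(z, 0.0))
```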
2 Optical Components: Coupling of Field Amplitudes
When an electromagnetic wave interacts with an optical system, all of its parameters can be changed as a result. Typically, optical components are designed such that, ideally, they only affect one of the parameters, i.e., either the amplitude or the polarisation or the shape. Therefore, it is convenient to derive separate descriptions concerning each parameter. This section introduces the coupling of the complex field amplitude at optical components. Typically, the optical components are described in the simplest possible way, as illustrated by the use of abstract schematics such as those shown in Figure 2. This set of figures introduces an abstract form of illustration, which will be used in this document. The top figure shows a typical example taken from the analysis of an optical system: an incident field Ein is reflected and transmitted by a semi-transparent mirror; there might be a second incident field Ein2. The lower left figure shows the abstract form we choose to represent the same system. The lower right figure depicts how this can be extended to include a beam splitter object, which connects two optical axes.
2.1 Mirrors and spaces: reflection, transmission and propagation
The core optical systems of current interferometric gravitational-wave detectors are composed of two building blocks: a) resonant optical cavities, such as Fabry-Pérot resonators, and b) beam splitters, as in a Michelson interferometer. In other words, the laser beam is either propagated through a vacuum system or interacts with a partially-reflecting optical surface. The term optical surface generally refers to a boundary between two media with possibly different indices of refraction n, for example, the boundary between air and glass or between two types of glass. A real fused-silica mirror in an interferometer features two surfaces, which interact with a reflected or transmitted laser beam. However, in some cases, one of these surfaces has been treated with an anti-reflection (AR) coating to minimise the effect on the transmitted beam. The terms mirror and beam splitter are sometimes used to describe a (theoretical) optical surface in a model. We define real amplitude coefficients for reflection and transmission r and t, with 0 ≤ r, t ≤ 1, so that the field amplitudes can be written as $$E_{{\rm{refl}}} = r\;E_{{\rm{in}}}\quad {\rm{and}}\quad E_{{\rm{trans}}} = {\rm{i}}\;t\;E_{{\rm{in}}}.$$ The π/2 phase shift upon transmission (here given by the factor i) refers to a phase convention explained in Section 2.4. The free propagation over a distance D through a medium with index of refraction n can be described by the following relation: $$E_{{\rm{out}}} = E_{{\rm{in}}}\exp (- {\rm{i}}\;kD).$$ In the following we use n = 1 for simplicity. Note that we use the above relations to demonstrate various mathematical methods for the analysis of optical systems. However, refined versions of the coupling equations for optical components, including those for spaces and mirrors, are also required, see, for example, Section 2.6.
2.2 The two-mirror resonator
The linear optical resonator, also called a cavity, is formed by two partially-transparent mirrors arranged in parallel as shown in Figure 5. This simple setup makes a very good example with which to illustrate how a mathematical model of an interferometer can be derived, using the equations introduced in Section 2.1. The cavity is defined by a propagation length D (in vacuum), the amplitude reflectivities r1, r2 and the amplitude transmittances t1, t2. The amplitude at each point in the cavity can be computed simply as the superposition of fields.
The entire set of equations can be written as $$\begin{array}{*{20}c} {{a_1} = {\rm{i}}\;{t_1}{a_0} + {r_1}a_3^\prime \quad}\\ {a_1^\prime = \exp (- {\rm{i}}\;kD){a_1}}\\ {{a_2} = {\rm{i}}\;{t_2}a_1^\prime \quad \quad \quad}\\ {{a_3} = {r_2}a_1^\prime \quad \quad \quad \quad}\\ {a_3^\prime = \exp (- {\rm{i}}\;kD){a_3}}\\ {{a_4} = {r_1}{a_0} + {\rm{i}}\;{t_1}a_3^\prime \quad}\\\end{array}$$ The circulating field impinging on the first mirror (surface) a′3 can now be computed as $$\begin{array}{*{20}c} {a_3^\prime = \exp (- {\rm{i}}\,kD){a_3} = \exp (- {\rm{i}}\,kD){r_2}a_1^\prime = \exp (- {\rm{i}}\,2kD){r_2}{a_1}} \\ {= \exp (- {\rm{i}}\,{\rm{2}}kD)\,{r_2}({\rm{i}}\,{t_1}{a_0} + {r_1}a_3^\prime).\quad \quad \quad \quad \quad \quad \quad \,\,\,} \\ \end{array}$$ This then yields $$a_3^{\prime} = {a_0}{{{\rm{i}}\;{r_2}{t_1}\exp (- {\rm{i\;2}}kD)} \over {1 - {r_1}{r_2}\exp (- {\rm{i\;2}}kD)}}.$$ We can directly compute the reflected field to be $${a_4} = {a_0}\left({{r_1} - {{{r_2}t_1^2\exp (- {\rm{i\;2}}kD)} \over {1 - {r_1}{r_2}\exp (- {\rm{i\;2}}kD)}}} \right) = {a_0}\left({{{{r_1} - {r_2}(r_1^2 + t_1^2)\exp (- {\rm{i\;2}}kD)} \over {1 - {r_1}{r_2}\exp (- {\rm{i\;2}}kD)}}} \right),$$ while the transmitted field becomes $${a_2} = {a_0}{{- {t_1}{t_2}\exp (- {\rm{i}}\;kD)} \over {1 - {r_1}{r_2}\exp (- {\rm{i\;2}}kD)}}.$$ The properties of two-mirror cavities will be discussed in more detail in Section 5.1.
2.3 Coupling matrices
Computations that involve sets of linear equations as shown in Section 2.2 can often be done or written efficiently with matrices. Two methods of applying matrices to coupling field amplitudes are demonstrated below, using again the example of a two-mirror cavity. First of all, we can rewrite the coupling equations in matrix form. The mirror coupling as given in Figure 3 becomes $$\left({\begin{array}{*{20}c} {{a_2}}\\ {{a_4}}\\ \end{array}} \right) = \left({\begin{array}{*{20}c} {{\rm{i}}\;t} & r\\ r & {{\rm{i}}\;t}\\ \end{array}} \right)\left({\begin{array}{*{20}c} {{a_1}}\\ {{a_3}}\\ \end{array}} \right),$$ and the amplitude coupling at a 'space', as given in Figure 4, can be written as $$\left({\begin{array}{*{20}c} {{a_2}}\\ {{a_4}}\\ \end{array}} \right) = \left({\begin{array}{*{20}c} {\exp (- {\rm{i}}\;kD)} & 0\\ 0 & {\exp (- {\rm{i}}\;kD)}\\ \end{array}} \right)\left({\begin{array}{*{20}c} {{a_1}}\\ {{a_3}}\\ \end{array}} \right).$$ In these examples the matrix simply transforms the 'known' impinging amplitudes into the 'unknown' outgoing amplitudes.
2.3.1 Coupling matrices for numerical computations
An obvious application of the matrices introduced above would be to construct a large matrix for an extended optical system appropriate for computerisation. A very flexible method is to set up one equation for each field amplitude. The set of linear equations for a mirror would expand to $$\left({\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ {- {\rm{i}}\;t} & 1 & {- r} & 0 \\ 0 & 0 & 1 & 0 \\ {- r} & 0 & {- {\rm{i}}\;t} & 1 \\ \end{array}} \right)\left({\begin{array}{*{20}c} {{a_1}} \\ {{a_2}} \\ {{a_3}} \\ {{a_4}} \\ \end{array}} \right) = \left({\begin{array}{*{20}c} {{a_1}} \\ 0 \\ {{a_3}} \\ 0 \\ \end{array}} \right) = {M_{{\rm{system}}}}{\vec a_{{\rm{sol}}}} = {\vec a_{{\rm{input}}}},$$ where the input vector \({{\vec a}_{{\rm{input}}}}\) has non-zero values for the impinging fields and \({{\vec a}_{{\rm{sol}}}}\) is the 'solution' vector, i.e., after solving the system of equations the amplitudes of the impinging as well as those of the outgoing fields are stored in that vector. As an example we apply this method to the two-mirror cavity.
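Before extending this to the full cavity matrix below, the single-mirror system of equations can be set up and solved numerically. The following Python sketch, assuming numpy and the port convention of the 4 × 4 matrix above, uses r = t = 1/√2 and arbitrary impinging amplitudes purely as an illustration; for larger systems the same approach applies, typically with sparse-matrix solvers.

```python
import numpy as np

# Lossless mirror with amplitude coefficients r = t = 1/sqrt(2) (R = T = 0.5)
r = t = 1 / np.sqrt(2)

# System matrix M_system for the field vector (a1, a2, a3, a4):
# a1 and a3 are the impinging fields, a2 and a4 the outgoing ones.
M = np.array([[1,        0,  0,       0],
              [-1j * t,  1, -r,       0],
              [0,        0,  1,       0],
              [-r,       0, -1j * t,  1]], dtype=complex)

# Impinging amplitudes: a1 = 1 from one side, a3 = 0.5 from the other
a_input = np.array([1.0, 0.0, 0.5, 0.0], dtype=complex)

a_sol = np.linalg.solve(M, a_input)
a1, a2, a3, a4 = a_sol
print("a2 =", a2, " a4 =", a4)

# Energy conservation for the lossless mirror: |a1|^2 + |a3|^2 = |a2|^2 + |a4|^2
print(abs(a1)**2 + abs(a3)**2, "=", abs(a2)**2 + abs(a4)**2)
```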
The system matrix for the optical setup shown in Figure 5 becomes $$\left({\begin{array}{*{20}c} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ {- {\rm{i}}\;{t_1}} & 1 & 0 & {- {r_1}} & 0 & 0 & 0 \\ {- {r_1}} & 0 & 1 & {- {\rm{i}}\;{t_1}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & {- {e^{- {\rm{i}}\;kD}}} \\ 0 & {- {e^{- {\rm{i}}\;kD}}} & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & {- {\rm{i}}\;{t_2}} & 1 & 0 \\ 0 & 0 & 0 & 0 & {- {r_2}} & 0 & 1 \\ \end{array}} \right)\left({\begin{array}{*{20}c} {{a_0}} \\ {{a_1}} \\ {{a_4}} \\ {a_3^\prime} \\ {a_1^\prime} \\ {{a_2}} \\ {{a_3}} \\ \end{array}} \right) = \left({\begin{array}{*{20}c} {{a_0}} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ \end{array}} \right)$$ This is a sparse matrix. Sparse matrices are an important subclass of linear algebra problems and many efficient numerical algorithms for solving sparse matrices are freely available (see, for example, [13]). The advantage of this method of constructing a single matrix for an entire optical system is the direct access to all field amplitudes. It also stores each coupling coefficient in one or more dedicated matrix elements, so that numerical values for each parameter can be read out or changed after the matrix has been constructed and, for example, stored in computer memory. The obvious disadvantage is that the size of the matrix quickly grows with the number of optical elements (and with the degrees of freedom of the system, see, for example, Section 7).
Simplified schematic of a two mirror cavity. The two mirrors are defined by the amplitude coefficients for reflection and transmission. Further, the resulting cavity is characterised by its length D. Light field amplitudes are shown and identified by a variable name, where necessary to permit their mutual coupling to be computed.
2.3.2 Coupling matrices for compact system descriptions
The following method is probably most useful for analytic computations, or for optimisation aspects of a numerical computation. The idea behind the scheme, which is used for computing the characteristics of dielectric coatings [28, 40] and has been demonstrated for analysing gravitational-wave detectors [43], is to rearrange the equations as in Figure 6 and Figure 7 such that the overall matrix describing a series of components can be obtained by multiplication of the component matrices. In order to achieve this, the coupling equations have to be re-ordered so that the input vector consists of the two field amplitudes at one side of the component. For the mirror, this gives a coupling matrix of $$\left({\begin{array}{*{20}c} {{a_1}}\\ {{a_4}}\\ \end{array}} \right) = {{\rm{i}} \over t}\left({\begin{array}{*{20}c} {- 1} & \quad r\\ {- r} & {{r^2} + {t^2}}\\ \end{array}} \right)\left({\begin{array}{*{20}c} {{a_2}}\\ {{a_3}}\\ \end{array}} \right).$$ In the special case of the lossless mirror this matrix simplifies as we have \(r^2 + t^2 = R + T = 1\).
The space component would be described by the following matrix: $$\left({\begin{array}{*{20}c} {{a_1}} \\ {{a_4}} \\ \end{array}} \right) = \left({\begin{array}{*{20}c} {\exp ({\rm{i}}\;kD)} & {\quad 0} \\ {\quad 0} & {\exp (- {\rm{i}}\;kD)} \\ \end{array}} \right)\left({\begin{array}{*{20}c} {{a_2}} \\ {{a_3}} \\\end{array}} \right).$$ With these matrices we can very easily compute a matrix for the cavity with two lossless mirrors as $${M_{{\rm{cav}}}} = {M_{{\rm{mirror}}1}} \times {M_{{\rm{space}}}} \times {M_{{\rm{mirror}}2}}$$ $$= {{- 1} \over {{t_1}{t_2}}}\left({\begin{array}{*{20}c} {{e^ +} - {r_1}{r_2}{e^ -}} & {\quad - {r_2}{e^ +} + {r_1}{e^ -}}\\ {- {r_2}{e^ -} + {r_1}{e^ +}} & {{e^ -} - {r_1}{r_2}{e^ +}}\\ \end{array}} \right),$$ with e+ = exp(i kD) and e− = exp(−ikD). The system of equation describing a cavity shown in Equation (4) can now be written more compactly as $$\left({\begin{array}{*{20}c} {{a_0}}\\ {{a_4}}\\ \end{array}} \right) = {{- 1} \over {{t_1}{t_2}}}\left({\begin{array}{*{20}c} {{e^ +} - {r_1}{r_2}{e^ -}} & {\quad - {r_2}{e^ +} + {r_1}{e^ -}}\\ {- {r_2}{e^ -} + {r_1}{e^ +}} & {{e^ -} - {r_1}{r_2}{e^ +}}\\ \end{array}} \right)\left({\begin{array}{*{20}c} {{a_2}}\\ 0\\ \end{array}} \right).$$ This allows direct computation of the amplitude of the transmitted field resulting in $${a_2} = {a_0}{{- {t_1}{t_2}\exp (- {\rm{i}}\;kD)} \over {1 - {r_1}{r_2}\exp (- {\rm{i\;2}}kD)}},$$ which is the same as Equation (8). The advantage of this matrix method is that it allows compact storage of any series of mirrors and propagations, and potentially other optical elements, in a single 2 × 2 matrix. The disadvantage inherent in this scheme is the lack of information about the field amplitudes inside the group of optical elements. 2.4 Phase relation at a mirror or beam splitter The magnitude and phase of reflection at a single optical surface can be derived from Maxwell's equations and the electromagnetic boundary conditions at the surface, and in particular the condition that the field amplitudes tangential to the optical surface must be continuous. The results are called Fresnel's equations [33]. Thus, for a field impinging on an optical surface under normal incidence we can give the reflection coefficient as $$r = {{{n_1} - {n_2}} \over {{n_1} + {n_2}}},$$ with n1 and n2 the indices of refraction of the first and second medium, respectively. The transmission coefficient for a lossless surface can be computed as t2 = 1 − r2. We note that the phase change upon reflection is either 0 or 180°, depending on whether the second medium is optically thinner or thicker than the first. It is not shown here but Fresnel's equations can also be used to show that the phase change for the transmitted light at a lossless surface is zero. This contrasts with the definitions given in Section 2.1 (see Figure (3)ff.), where the phase shift upon any reflection is defined as zero and the transmitted light experiences a phase shift of π/2. The following section explains the motivation for the latter definition having been adopted as the common notation for the analysis of modern optical systems. 2.4.1 Composite optical surfaces Modern mirrors and beam splitters that make use of dielectric coatings are complex optical systems, see Figure 8 whose reflectivity and transmission depend on the multiple interference inside the coating layers and thus on microscopic parameters. The phase change upon transmission or reflection depends on the details of the applied coating and is typically not known. 
In any case, the knowledge of an absolute value of a phase change is typically not of interest in laser interferometers because the absolute positions of the optical components are not known to sub-wavelength precision. Instead the relative phase between the incoming and outgoing beams is of importance. In the following we demonstrate how constraints on these relative phases, i.e., the phase relation between the beams, can be derived from the fundamental principle of power conservation. To do this we consider a Michelson interferometer, as shown in Figure 9, with perfectly-reflecting mirrors. The beam splitter of the Michelson interferometer is the object under test. We assume that the magnitudes of the reflection r and transmission t are known. The phase changes upon transmission and reflection are unknown. Due to symmetry we can say that the phase change upon transmission should be the same in both directions. However, the phase change on reflection might be different for either direction; thus, we write φr1 for the reflection at the front and φr2 for the reflection at the back of the beam splitter.
This sketch shows a mirror or beam splitter component with dielectric coatings and the photograph shows some typical commercially available examples [45]. Most mirrors and beam splitters used in optical experiments are of this type: a substrate made from glass, quartz or fused silica is coated on both sides. The reflective coating defines the overall reflectivity of the component (anything between R ≈ 1 and R ≈ 0), while the anti-reflective coating is used to reduce the reflection at the second optical surface as much as possible so that this surface does not influence the light. Please note that the drawing is not to scale; the coatings are typically only a few microns thick on a several millimetre to centimetre thick substrate.
The relation between the phases of the light field amplitudes at a beam splitter can be computed assuming a Michelson interferometer, with arbitrary arm lengths but perfectly-reflecting mirrors. The incoming field E0 is split into two fields E1 and E2 which are reflected at the end mirrors and return to the beam splitter, as E3 and E4, to be recombined into two outgoing fields. These outgoing fields E5 and E6 are depicted by two arrows to highlight that these are the sum of the transmitted and reflected components of the returning fields. We can derive constraints for the phases of E1 and E2 with respect to the input field E0 from the conservation of energy: \(|E_0|^2 = |E_5|^2 + |E_6|^2\). Then the electric fields can be computed as $${E_1} = r\;{E_0}\;{e^{{\rm{i}}{\varphi _{r1}}}};\quad {E_2} = t\;{E_0}\;{e^{{\rm{i}}{\varphi _t}}}.$$ We do not know the length of the interferometer arms. Thus, we introduce two further unknown phases: Φ1 for the total phase accumulated by the field in the vertical arm and Φ2 for the total phase accumulated in the horizontal arm. The fields impinging on the beam splitter can be computed as $${E_3} = r\;{E_0}\;{e^{{\rm{i}}({\varphi _{r1}} + {\Phi _1})}};\quad {E_4} = t\;{E_0}\;{e^{{\rm{i}}({\varphi _t} + {\Phi _2})}}.$$ The outgoing fields are computed as the sums of the reflected and transmitted components: $$\begin{array}{*{20}c} {{E_5} = \ {E_0}(R\ {e^{{\rm{i}}(2{\varphi _{r1}} + {\Phi _1})}} + T\ {e^{{\rm{i}}(2{\varphi _t} + {\Phi _2})}}),}\quad\\ {{E_6} = \ {E_0}\;rt(\ {e^{{\rm{i}}({\varphi _t} + {\varphi _{r1}} + {\Phi _1})}} + {e^{{\rm{i}}({\varphi _t} + {\varphi _{r2}} + {\Phi _2})}}),}\\ \end{array}$$ with R = r2 and T = t2.
It will be convenient to separate the phase factors into common and differential ones. We can write $${E_5} = {E_0}\;{e^{{\rm{i}}{\alpha _ +}}}(R{e^{{\rm{i}}{\alpha _ -}}} + T{e^{{\rm{- i}}{\alpha _ -}}}),$$ $${\alpha _ +} = {\varphi _{r1}} + {\varphi _t} + {1 \over 2}({\Phi _1} + {\Phi _2});\quad {\alpha _ -} = {\varphi _{r1}} - {\varphi _t} + {1 \over 2}({\Phi _1} - {\Phi _2}),$$ and similarly $${E_6} = {E_0}\;rt\;{e^{{\rm{i}}{\beta _{+}}}}2\cos ({\beta _ -}),$$ $${\beta _ +} = {\varphi _t} + {1 \over 2}({\varphi _{r1}} + {\varphi _{r2}} + {\Phi _1} + {\Phi _2});\quad {\beta _ -} = {1 \over 2}({\varphi _{r1}} - {\varphi _{r2}} + {\Phi _1} - {\Phi _2}){.}$$ for simplicity we now limit the discussion to a 50:50 beam splitter with \(r = t = 1/\sqrt 2\), for which we can simplify the field expressions even further: $${E_5} = {E_0}\; {e^{{\rm{i}}{\alpha _ +}}}\cos ({\alpha _ -});\quad {E_6} = {E_0} \;{e^{{\rm{i}}{\beta _ +}}}\cos ({\beta _ -}){.}$$ Conservation of energy requires that |E0|2 = |E5|2 + |E6|2, which in turn requires $${\cos ^2}({\alpha _ -}) + {\cos ^2}({\beta _ -}) = 1,$$ which is only true if $${\alpha _ -} - {\beta _ -} = (2N + 1){\pi \over 2},$$ with N as in integer (positive, negative or zero). This gives the following constraint on the phase factors $${1 \over 2}({\varphi _{r1}} + {\varphi _{r2}}) - {\varphi _t} = (2N + 1){\pi \over 2}.$$ One can show that exactly the same condition results in the case of arbitrary (lossless) reflectivity of the beam splitter [48]. We can test whether two known examples fulfill this condition. If the beam-splitting surface is the front of a glass plate we know that φt = 0, φr1 = π φr2 = 0, which conforms with Equation (28). A second example is the two-mirror resonator, see Section 2.2. If we consider the cavity as an optical 'black box', it also splits any incoming beam into a reflected and transmitted component, like a mirror or beam splitter. Further we know that a symmetric resonator must give the same results for fields injected from the left or from the right. Thus, the phase factors upon reflection must be equal φr = φr1 = φr2. The reflection and transmission coefficients are given by Equations (7) and (8) as $${r_{{\rm{cav}}}} = \left({{r_1} - {{{r_2}t_1^2\exp (- {\rm{i}}\;{\rm{2}}kD)} \over {1 - {r_1}{r_2}\exp (- {\rm{i}}\;{\rm{2}}kD)}}} \right),$$ $${t_{{\rm{cav}}}} = {{- {t_1}{t_2}\exp (- {\rm{i}}\;kD)} \over {1 - {r_1}{r_2}\exp (- {\rm{i\;2}}kD)}}.$$ We demonstrate a simple case by putting the cavity on resonance (kD = Nπ). This yields $${r_{{\rm{cav}}}} = \left({{r_1} - {{{r_2}t_1^2} \over {1 - {r_1}{r_2}}}} \right);\quad {t_{{\rm{cav}}}} = {{{\rm{i}}\;{t_1}{t_2}} \over {1 - {r_1}{r_2}}},$$ with rcav being purely real and tcav imaginary and thus φt = π/2 and φr = 0 which also agrees with Equation (28). In most cases we neither know nor care about the exact phase factors. Instead we can pick any set which fulfills Equation (28). For this document we have chosen to use phase factors equal to those of the cavity, i.e., φt = π/2 and φr = 0, which is why we write the reflection and transmission at a mirror or beam splitter as $${E_{{\rm{refl}}}} = r\;{E_{0\quad}}{\rm{and}}\quad {E_{{\rm{trans}}}} = {\rm{i}}\;t\;{E_0}.$$ In this definition r and t are positive real numbers satisfying r2 +t2 = 1 for the lossless case. Please note that we only have the freedom to chose convenient phase factors when we do not know or do not care about the details of the optical system, which performs the beam splitting. 
If instead the details are important, for example when computing the properties of a thin coating layer, such as anti-reflex coatings, the proper phase factors for the respective interfaces must be computed and used. 2.5 Lengths and tunings: numerical accuracy of distances The resonance condition inside an optical cavity and the operating point of an interferometer depends on the optical path lengths modulo the laser wavelength, i.e., for light from an Nd:YAG laser length differences of less than 1 µm are of interest, not the full magnitude of the distances between optics. On the other hand, several parameters describing the general properties of an optical system, like the finesse or free spectral range of a cavity (see Section 5.1) depend on the macroscopic distance and do not change significantly when the distance is changed on the order of a wavelength. This illustrates that the distance between optical components might not be the best parameter to use for the analysis of optical systems. Furthermore, it turns out that in numerical algorithms the distance may suffer from rounding errors. Let us use the Virgo [56] arm cavities as an example to illustrate this. The cavity length is approximately 3 km, the wavelength is on the order of 1 µm, the mirror positions are actively controlled with a precision of 1 µm and the detector sensitivity can be as good as 10−18 m, measured on ∼ 10 ms timescales (i.e., many samples of the data acquisition rate). The floating point accuracy of common, fast numerical algorithms is typically not better than 10−15. If we were to store the distance between the cavity mirrors as such a floating point number, the accuracy would be limited to 3 pm, which does not even cover the accuracy of the control systems, let alone the sensitivity. Illustration of an arm cavity of the Virgo gravitational-wave detector [56]: the macroscopic length L of the cavity is approximately 3 km, while the wavelength of the Nd:YAG laser is λ ∼ 1 µm. The resonance condition is only affected by the microscopic position of the wave nodes with respect to the mirror surfaces and not by the macroscopic length, i.e., displacement of one mirror by Δx = λ/2 re-creates exactly the same condition. However, other parameters of the cavity, such as the finesse, only depend on the macroscopic length L and not on the microscopic tuning. A simple and elegant solution to this problem is to split a distance D between two optical components into two parameters [29]: one is the macroscopic 'length' L, defined as the multiple of a constant wavelength λ0 yielding the smallest difference to D. The second parameter is the microscopic tuning T that is defined as the remaining difference between L and D, i.e., D = L + T. Typically, λ0 can be understood as the wavelength of the laser in vacuum, however, if the laser frequency changes during the experiment or multiple light fields with different frequencies are used simultaneously, a default constant wavelength must be chosen arbitrarily. Please note that usually the term λ in any equation refers to the actual wavelength at the respective location as λ = λ0/n with n the index of refraction at the local medium. We have seen in Section 2.1 that distances appear in the expressions for electromagnetic waves in connection with the wave number, for example, $${E_{\rm{2}}} = {E_{1\quad}}\exp (- {\rm{i}}\;kz)\;.$$ Thus, the difference in phase between the field at z = z1 and z = z1 + D is given as $$\varphi = - kD.$$ We recall that k = 2π/λ = ω/c. 
We can define ω0 = 2π c/λ0 and k0 = ω0/c. For any given wavelength λ we can write the corresponding frequency as a sum of the default frequency and a difference frequency ω = ω0 + Δω. Using these definitions, we can rewrite Equation (34) with length and tuning as $$- \varphi = kD = {{{\omega _0}L} \over c} + {{\Delta \omega L} \over c} + {{{\omega _0}T} \over c} + {{\Delta \omega T} \over c}.$$ The first term of the sum is always a multiple of 2π, which is equivalent to zero. The last term of the sum is the smallest, approximately of the order Δω · 10−14. For typical values of L ≈ 1 m, T < 1 µm and Δω < 2π · 100 MHz we find that $${{{\omega _0}L} \over c} = 0,\quad {{\Delta \omega L} \over c}\underset{\approx}{<} 2,\quad {{{\omega _0}T} \over c} \underset{\approx}{<} 6,\quad {{\Delta \omega T} \over c} \underset{\approx}{<} 2\;\;\;{10^{- 6}},$$ which shows that the last term can often be ignored. We can also write the tuning directly as a phase. We define as the dimensionless tuning $$\phi = {\omega _0}T/c.$$ This yields $$\exp \left({{\rm{i}}{\omega \over c}T} \right) = \exp \left({{\rm{i}}{{{\omega _0}} \over c}T{\omega \over {{\omega _0}}}} \right) = \exp \left({{\rm{i}}{\omega \over {{\omega _0}}}\phi} \right).$$ The tuning ϕ is given in radian with 2π referring to a microscopic distance of one wavelength2 λ0. Finally, we can write the following expression for the phase difference between the light field taken at the end points of a distance D: $$\varphi = - kD = - \left({{{\Delta \omega L} \over c} + \phi {\omega \over {{\omega _0}}}} \right),$$ or if we neglect the last term from Equation (36) we can approximate (ω/ω0 ≈ 1) to obtain $$\varphi \approx - \left({{{\Delta \omega L} \over c} + \phi} \right).$$ This convention provides two parameters L and ϕ that can describe distances with a markedly improved numerical accuracy. In addition, this definition often allows simplification of the algebraic notation of interferometer signals. By convention we associate a length L with the propagation through free space, whereas the tuning will be treated as a parameter of the optical components. Effectively the tuning then represents a microscopic displacement of the respective component. If, for example, a cavity is to be resonant to the laser light, the tunings of the mirrors have to be the same whereas the length of the space in between can be arbitrary. 2.6 Revised coupling matrices for space and mirrors Using the definitions for length and tunings we can rewrite the coupling equations for mirrors and spaces introduced in Section 2.1 as follows. The mirror coupling becomes (compare this to Figure 6), and the amplitude coupling for a 'space', formally written as in Figure 7, is now written as 2.7 Finesse examples 2.7.1 Mirror reflectivity and transmittance We use Finesse to plot the amplitudes of the light fields transmitted and reflected by a mirror (given by a single surface). Initially, the mirror has a power reflectance and transmittance of R = T = 0.5 and is, thus, lossless. For the plot in Figure 13 we tune the transmittance from 0.5 to 0. Since we do not explicitly change the reflectivity, R remains at 0.5 and the mirror loss increases instead, which is shown by the trace labelled 'total' corresponding to the sum of the reflected and transmitted light power. The plot also shows the phase convention of a 90° phase shift for the transmitted light. Finesse example: Mirror reflectivity and transmittance. 
Finesse input file for 'Mirror reflectivity and transmittance' 2.7.2 Length and tunings This Finesse file demonstrates the conventions for lengths and microscopic positions introduced in Section 2.5. The top trace in Figure 14 depicts the phase change of a beam reflected by a beam splitter as a function of the beam splitter tuning. By changing the tuning from 0 to 180° the beam splitter is moved forward and shortens the path length by one wavelength, which by convention increases the light phase by 360°. On the other hand, if a length of a space is changed, the phase of the transmitted light is unchanged (for the default wavelength Δk = 0), as shown in the lower trace. Finesse example: Length and tunings. Finesse input file for 'Length and tunings' 3 Light with Multiple Frequency Components So far we have considered the electromagnetic field to be monochromatic. This has allowed us to compute light-field amplitudes in a quasi-static optical setup. In this section, we introduce the frequency of the light as a new degree of freedom. In fact, we consider a field consisting of a finite and discrete number of frequency components. We write this as $$E(t,z) = \sum\limits_j {{a_j}\exp ({\rm{i}}({\omega _j}t - {k_j}z)),}$$ with complex amplitude factors aj, ωj as the angular frequency of the light field and kj = ωj/c. In many cases the analysis compares different fields at one specific location only, in which case we can set z = 0 and write $$E(t) = \sum\limits_j {{a_j}\exp ({\rm{i}}{\omega _j}t){.}}$$ In the following sections the concept of light modulation is introduced. As this inherently involves light fields with multiple frequency components, it makes use of this type of field description. Again we start with the two-mirror cavity to illustrate how the concept of modulation can be used to model the effect of mirror motion. 3.1 Modulation of light fields Laser interferometers typically use three different types of light fields: the laser with a frequency of, for example, f ≈ 2.8 · 1014 Hz; radio frequency (RF) sidebands used for interferometer control with frequencies (offset to the laser frequency) of f ≈ 1 MHz to 150 MHz; and the signal sidebands at frequencies of 1 to 10,000 Hz. As these modulations usually have as their origin a change in optical path length, they are often phase modulations of the laser frequency. The RF sidebands are utilised for optical readout purposes, while the signal sidebands carry the signal to be measured (the gravitational-wave signal plus noise created in the interferometer). Figure 15 shows a time domain representation of an electromagnetic wave of frequency ω0, whose amplitude or phase is modulated at a frequency Ω ≪ ω0. One can easily see some characteristics of these two types of modulation, for example, that amplitude modulation leaves the zero crossing of the wave unchanged whereas with phase modulation the maximum and minimum amplitude of the wave remains the same. In the frequency domain, in which a modulated field is expanded into several unmodulated field components, the interpretation of modulation becomes even easier: any sinusoidal modulation of amplitude or phase generates new field components, which are shifted in frequency with respect to the initial field. Basically, light power is shifted from one frequency component, the carrier, to several others, the sidebands. The relative amplitudes and phases of these sidebands differ for different types of modulation and different modulation strengths. 
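A few lines of Python (the parameter values are arbitrary and chosen only so that the waveforms are easy to inspect) reproduce the time-domain behaviour sketched in Figure 15:

```python
import numpy as np

# Illustrative parameters only: a 'slow' carrier so the waveforms are easy to see
w0 = 2 * np.pi * 10.0    # carrier angular frequency
Om = 2 * np.pi * 1.0     # modulation angular frequency (Om << w0)
m = 0.8                  # modulation index

t = np.linspace(0.0, 2.0, 2000)

# Phase modulation: the phase of the carrier is modulated
E_pm = np.cos(w0 * t + m * np.cos(Om * t))

# Amplitude modulation: the envelope of the carrier is modulated
E_am = (1.0 + m * np.cos(Om * t)) * np.cos(w0 * t)

# Phase modulation shifts the zero crossings but keeps the peak amplitude at 1;
# amplitude modulation changes the envelope (up to 1 + m) but not the zero crossings.
print("peak |E_pm| =", np.max(np.abs(E_pm)))
print("peak |E_am| =", np.max(np.abs(E_am)))
```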
This section demonstrates how to compute the sideband components for amplitude, phase and frequency modulation. Example traces for phase and amplitude modulation: the upper plot a) shows a phase-modulated sine wave and the lower plot b) depicts an amplitude-modulated sine wave. Phase modulation is characterised by the fact that it mostly affects the zero crossings of the sine wave. Amplitude modulation affects mostly the maximum amplitude of the wave. The equations show the modulation terms in red with m the modulation index and Ω the modulation frequency. 3.2 Phase modulation Phase modulation can create a large number of sidebands. The number of sidebands with noticeable power depends on the modulation strength (or depth) given by the modulation index m. Assuming an input field $${E_{{\rm{in}}}} = {E_0}\exp ({\rm{i}}{\omega _0}t),$$ a sinusoidal phase modulation of the field can be described as $$E = {E_0}\exp \left({{\rm{i}}({\omega _0}t + m\cos (\Omega t))} \right).$$ This equation can be expanded using the identity [27] $$\exp ({\rm{i}}z\cos \varphi) = \sum\limits_{k = - \infty}^\infty {{{\rm{i}}^k}{J_k}} (z)\exp ({\rm{i}}\;k\varphi),$$ with Bessel functions of the first kind Jk(m). We can write $$E = {E_0}\exp ({\rm{i}}{\omega _0}t) \sum\limits_{k = - \infty}^\infty {{{\rm{i}}^k}{J_k}} (m)\exp ({\rm{i}}\;k\Omega t){.}$$ The field for k = 0, oscillating with the frequency of the input field ω0, represents the carrier. The sidebands can be divided into upper (k > 0) and lower (k < 0) sidebands. These sidebands are light fields that have been shifted in frequency by k Ω. The upper and lower sidebands with the same absolute value of k are called a pair of sidebands of order k. Equation (46) shows that the carrier is surrounded by an infinite number of sidebands. However, for small modulation indices (m < 1) the Bessel functions rapidly decrease with increasing k (the lowest orders of the Bessel functions are shown in Figure 16). For small modulation indices we can use the approximation [2] $${J_k}(m) = {\left({{m \over 2}} \right)^k}\sum\limits_{n =0}^\infty {{{{{\left({- {{{m^2}} \over 4}} \right)}^n}} \over {n!(k + n)!}} = {1 \over {k!}}} {\left({{m \over 2}} \right)^k} + O({m^{k + 2}}).$$ In this case, only a few sidebands have to be taken into account. For m ≪ 1 we can write $$\begin{array}{*{20}c} {E = {E_0}\exp ({\rm{i}}{\omega _0}t)}\qquad \qquad \qquad \qquad \quad\quad\quad\quad\quad\quad\quad \\ {\times \left({{J_0}(m) - {\rm{i}}\;{J_{- 1}}(m)\;\exp (- {\rm{i}}\Omega t) + {\rm{i}}\;{J_1}(m)\exp ({\rm{i}}\Omega t)} \right),}\\ \end{array}$$ and with $${J_{- k}}(m) = {(- 1)^k}{J_k}(m),$$ we obtain $$E = {E_0}\exp ({\rm{i}}{\omega _0}t)\left({1 + {\rm{i}}{m \over 2}\left({\exp (- {\rm{i}}\Omega t) + \exp ({\rm{i}}\Omega t)} \right)} \right),$$ as the first-order approximation in m. In the above equation the carrier field remains unchanged by the modulation, therefore this approximation is not the most intuitive. It is clearer if the approximation up to the second order in m is given: $$E = {E_0}\exp ({\rm{i}}{\omega _0}t)\left({1 - {{{m^2}} \over 4} + {\rm{i}}{m \over 2}\left({\exp (- {\rm{i}}\Omega t) + \exp ({\rm{i}}\Omega t)} \right)} \right),$$ which shows that power is transferred from the carrier to the sideband fields. Some of the lowest-order Bessel functions Jk(x) of the first kind. For small x the expansion shows a simple dependency and higher-order functions can often be neglected. 
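The expansion in Equation (46) and the small-m approximations above are easy to verify numerically. The sketch below (Python, assuming NumPy and SciPy are available; the modulation index and frequency are illustrative) truncates the Bessel sum at a given order and compares it, together with the first-order expression, against the exact phase-modulation factor:

```python
import numpy as np
from scipy.special import jv   # Bessel functions of the first kind J_k

m = 0.1                        # small modulation index (illustrative)
Om = 2 * np.pi * 1.0           # modulation frequency (illustrative)
t = np.linspace(0.0, 3.0, 1000)

# Exact phase-modulation factor exp(i*m*cos(Om*t)); the carrier exp(i*w0*t) is factored out
exact = np.exp(1j * m * np.cos(Om * t))

def sideband_sum(order):
    """Truncated expansion: sum over k of i^k * J_k(m) * exp(i*k*Om*t)."""
    return sum((1j ** k) * jv(k, m) * np.exp(1j * k * Om * t)
               for k in range(-order, order + 1))

# First-order approximation: 1 + i*(m/2)*(exp(-i*Om*t) + exp(i*Om*t))
first_order = 1 + 1j * (m / 2) * (np.exp(-1j * Om * t) + np.exp(1j * Om * t))

for order in (1, 2, 3):
    print(f"max error, expansion to order {order}:",
          np.max(np.abs(sideband_sum(order) - exact)))
print("max error, first-order formula:   ",
      np.max(np.abs(first_order - exact)))
```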
Higher-order expansions in m can be performed simply by specifying the highest order of Bessel function that is to be used in the sum in Equation (46), i.e., $$E = {E_0}\exp ({\rm{i}}{\omega _0}t)\sum\limits_{k = - order}^{order} {{i^k}\;{J_k}(m)\exp ({\rm{i}}\;k\Omega t){.}}$$ 3.3 Frequency modulation For small modulation indices, phase modulation and frequency modulation can be understood as different descriptions of the same effect [29]. Following the same spirit as above we would assume a modulated frequency to be given by $$\omega = {\omega _0} + {m^{\prime}}\cos (\Omega t),$$ and then we might be tempted to write $$E = {E_0}\;\exp \left({{\rm{i(}}{\omega _0} + {m^{\prime}}\cos (\Omega t))t} \right),$$ which would be wrong. The frequency of a wave is actually defined via the time derivative of its phase, ω = dφ/dt, i.e., f = ω/(2π). Thus, to obtain the frequency given in Equation (53), we need to have a phase of $${\omega _0}t + {{m^{\prime}} \over \Omega}\sin (\Omega t){.}$$ For consistency with the notation for phase modulation, we define the modulation index to be $$m = {{m^{\prime}} \over \Omega} = {{\Delta \omega} \over \Omega},$$ with Δω as the frequency swing (how far the frequency is shifted by the modulation) and Ω the modulation frequency (how fast the frequency is shifted). Thus, a sinusoidal frequency modulation can be written as $$E = {E_0}\;\exp ({\rm{i}}\varphi) = {E_0}\;\exp \left({{\rm{i}}\left({{\omega _0}t + {{\Delta \omega} \over \Omega}\cos (\Omega t)} \right)} \right),$$ which is exactly the same expression as Equation (44) for phase modulation. The practical difference is the typical size of the modulation index, with phase modulation having a modulation index of m < 10, while for frequency modulation, typical numbers might be m > 10,000. Thus, in the case of frequency modulation, the approximations for small m are not valid. The series expansion using Bessel functions, as in Equation (46), can still be performed, however, very many terms of the resulting sum need to be taken into account. 3.4 Amplitude modulation In contrast to phase modulation, (sinusoidal) amplitude modulation always generates exactly two sidebands. Furthermore, a natural maximum modulation index exists: the modulation index is defined to be one (m = 1) when the amplitude is modulated between zero and the amplitude of the unmodulated field. If the amplitude modulation is performed by an active element, for example by modulating the current of a laser diode, the following equation can be used to describe the output field: $$\begin{array}{*{20}c} {E = {E_0}\exp ({\rm{i}}{\omega _0}t)\left({1 + m\cos (\Omega t)} \right)}\qquad\qquad\qquad\qquad\\ {= {E_0}\exp ({\rm{i}}{\omega _0}t)\left({1 + {m \over 2}\exp ({\rm{i}}\Omega t) + {m \over 2}\exp (- {\rm{i}}\Omega t)} \right)}\\ \end{array}.$$ However, passive amplitude modulators (like acousto-optic modulators or electro-optic modulators with polarisers) can only reduce the amplitude. In these cases, the following equation is more useful: $$\begin{array}{*{20}c} {E = {E_0}\exp ({\rm{i}}{\omega _0}t)\left({1 - {m \over 2}\left({1 - \cos (\Omega t)} \right)} \right)}\qquad\qquad\qquad\\ {\quad = {E_0}\exp ({\rm{i}}{\omega _0}t)\left({1 - {m \over 2} + {m \over 4}\exp ({\rm{i}}\Omega t) + {m \over 4}\exp (- {\rm{i}}\Omega t)} \right).}\\ \end{array}$$ 3.5 Sidebands as phasors in a rotating frame A common method of visualising the behaviour of sideband fields in interferometers is to use phase diagrams in which each field amplitude is represented by an arrow in the complex plane. 
We can think of the electric field amplitude E0 exp(i ω0t) as a vector in the complex plane, rotating around the origin with angular velocity ω0. To illustrate or to help visualise the addition of several light fields it can be useful to look at this problem using a rotating reference frame, defined as follows. A complex number shall be defined as z = x + iy so that the real part is plotted along the x-axis, while the y-axis is used for the imaginary part. We want to construct a new coordinate system (x′, y′) in which the field vector is at a constant position. This can be achieved by defining $$\begin{array}{*{20}c} {x = x^{\prime}\;\cos {\omega _0}t - y^{\prime}\;\sin {\omega _0}t}\\ {y = x^{\prime}\;\sin {\omega _0}t + y^{\prime}\;\cos {\omega _0}t,}\\ \end{array}$$ or, equivalently, $$\begin{array}{*{20}c} {x^{\prime} = x \cos (- {\omega _0}t) - y \sin (- {\omega _0}t)}\\ {y^{\prime} = x \sin (- {\omega _0}t) + y \cos (- {\omega _0}t){.}}\\ \end{array}$$ Figure 17 illustrates how the transition into the rotating frame makes the field vector appear stationary. The angle of the field vector in a rotating frame depicts the phase offset of the field. Therefore these vectors are also called phasors and the illustrations using phasors are called phasor diagrams. Two more complex examples of how phasor diagrams can be employed are shown in Figure 18 [11]. Electric field vector E0 exp(iω0t) depicted in the complex plane and in a rotating frame (x′, y′) rotating at ω0 so that the field vector appears stationary. Amplitude and phase modulation in the 'phasor' picture. The upper plots a) illustrate how a phasor diagram can be used to describe phase modulation, while the lower plots b) do the same for amplitude modulation. In both cases the left hand plot shows the carrier in blue and the modulation sidebands in green as snapshots at certain time intervals. One can see clearly that the upper sideband (ω0 + Ω) rotates faster than the carrier, while the lower sideband rotates slower. The right plot in both cases shows how the total field vector at any given time can be constructed by adding the three field vectors of the carrier and sidebands. [Drawing courtesy of Simon Chelkowski] Phasor diagrams can be especially useful to see how frequency coupling of light field amplitudes can change the type of modulation, for example, to turn phase modulation into amplitude modulation. An extensive introduction to this type of phasor diagram can be found in [39]. 3.6 Phase modulation through a moving mirror Several optical components can modulate transmitted or reflected light fields. In this section we discuss in detail the example of phase modulation by a moving mirror. Mirror motion does not change the transmitted light; however, the phase of the reflected light will be changed as shown in Equation (11). We assume a sinusoidal change of the mirror's tuning as shown in Figure 19. The position modulation is given as xm = as cos(ωst + φs), and thus the reflected field at the mirror becomes (assuming a4 = 0) $${a_3} = r{a_1}\exp (- {\rm{i\;2}}{\phi _0})\exp ({\rm{i}}2k{x_{\rm{m}}}) \approx r{a_1}\exp (- {\rm{i\;2}}{\phi _0})\exp \left({{\rm{i\;2}}{k_0}{a_{\rm{s}}}\cos ({\omega _{\rm{s}}}t + {\varphi _{\rm{s}}})} \right),$$ setting m = 2k0as. 
This can be expressed as $$\begin{array}{*{20}c} {{a_3} = r{a_1}\exp (- {\rm{i\;2}}{\phi _0})\left({1 + {\rm{i}}{m \over 2}\exp \left({- {\rm{i}}({\omega _{\rm{s}}}t + {\varphi _{\rm{s}}})} \right) + {\rm{i}}{m \over 2}\exp \left({{\rm{i}}({\omega _{\rm{s}}}t + {\varphi _{\rm{s}}})} \right)} \right)}\\ {= r{a_1}\exp (- {\rm{i\;2}}{\phi _0})\left({1 + {m \over 2}\exp \left({- {\rm{i}}({\omega _{\rm{s}}}t + {\varphi _{\rm{s}}} - \pi/2)} \right)} \right.}\qquad\qquad\qquad\\ {\left. {+ {m \over 2}\exp \left({{\rm{i}}({\omega _{\rm{s}}}t + {\varphi _{\rm{s}}} + \pi/2)} \right)} \right).}\qquad\qquad\qquad\qquad\qquad\qquad\\ \end{array}$$ A sinusoidal signal with amplitude as, frequency ωs and phase offset φs is applied to a mirror position, or to be precise, to the mirror tuning. The equation given for the tuning ϕ assumes that ωs/ω0 ≪ 1, see Section 2.5. 3.7 Coupling matrices for beams with multiple frequency components The coupling between electromagnetic fields at optical components introduced in Section 2 referred only to the amplitude and phase of a simplified monochromatic field, ignoring all the other parameters of the electric field of the beam given in Equation (1). However, this mathematical concept can be extended to include other parameters provided that we can find a way to describe the total electric field as a sum of components, each of which is characterised by a discrete value of the related parameters. In the case of the frequency of the light field, this means we have to describe the field as a sum of monochromatic components. In the previous sections we have shown how this could be done in the special case of an initial monochromatic field that is subject to modulation: if the modulation index is small enough we can limit the number of frequency components that we need to consider. In many cases it is actually sufficient to describe a modulation only by the interaction of the carrier at ω0 (the unmodulated field) and two sidebands with a frequency offset of ±ωm to the carrier. A beam given by the sum of three such components can be described by a complex vector: $$\vec a = \left({\begin{array}{*{20}c} {a({\omega _0})}\\ {a({\omega _0} - {\omega _m})}\\ {a({\omega _0} + {\omega _m})}\\ \end{array}} \right) = \left({\begin{array}{*{20}c} {{a_{\omega 0}}}\\ {{a_{\omega 1}}}\\ {{a_{\omega 2}}}\\ \end{array}} \right)$$ with the shorthand notation ω1 = ω0 − ωm and ω2 = ω0 + ωm. 
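For instance, the reflected field of the moving mirror in Section 3.6 can be written as such a three-component vector. A small sketch (Python; the mirror reflectivity, oscillation amplitude and signal phase are illustrative assumptions, not values from the text) fills in the carrier and the two signal sidebands using the first-order expansion above:

```python
import numpy as np

lam0 = 1.064e-6
k0 = 2 * np.pi / lam0

# Illustrative parameters
r = 0.99            # amplitude reflectivity of the mirror
a1 = 1.0            # incoming carrier amplitude
phi0 = 0.0          # mirror tuning
a_s = 1e-12         # mirror oscillation amplitude [m]
phs = 0.0           # signal phase offset

m = 2 * k0 * a_s    # effective modulation index for the reflected light

# Three-component field vector: carrier, lower sideband (w0 - ws), upper sideband (w0 + ws)
carrier = r * a1 * np.exp(-2j * phi0)
lower = carrier * (m / 2) * np.exp(-1j * (phs - np.pi / 2))
upper = carrier * (m / 2) * np.exp(1j * (phs + np.pi / 2))
a_vec = np.array([carrier, lower, upper])

print("modulation index m =", m)
print("field vector [a_w0, a_w1, a_w2] =", a_vec)
```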
In the case of a phase modulator that applies a modulation of small modulation index m to an incoming light field \({{\vec a}_1}\), we can describe the coupling of the frequency components as follows: $$\begin{array}{*{20}c} {{a_{2,\omega 0}} = {J_0}(m){a_{1,\omega 0}} + {J_1}(m){a_{1,\omega 1}} + {J_{- 1}}(m){a_{1,\omega 2}}}\;\;\\ {{a_{2,\omega 1}} = {J_0}(m){a_{1,\omega 1}} + {J_{- 1}}(m){a_{1,\omega 0}}}\qquad\qquad\qquad\\ {{a_{2,\omega 2}} = {J_0}(m){a_{1,\omega 2}} + {J_1}(m){a_{1,\omega 0}},}\qquad\qquad\qquad\\ \end{array}$$ which can be written in matrix form: $${\vec a_2} = \left({\begin{array}{*{20}c} {{J_0}(m)} & {{J_1}(m)} & {{J_{- 1}}(m)}\\ {{J_{- 1}}(m)} & {{J_0}(m)} & 0\\ {{J_1}(m)} & 0 & {{J_0}(m)}\\ \end{array}} \right){\vec a_1}.$$ And similarly, we can write the complete coupling matrix for the modulator component, for example, as $$\left({\begin{array}{*{20}c} {{a_{2,\omega 0}}} \\ {{a_{2,\omega 1}}} \\ {{a_{2,\omega 2}}} \\ {{a_{4,\omega 0}}} \\ {{a_{4,\omega 1}}} \\ {{a_{4,\omega 2}}} \\ \end{array}} \right) = \left({\begin{array}{*{20}c} {{J_0}(m)} & {{J_1}(m)} & {{J_{- 1}}(m)} & 0 & 0 & 0 \\ {{J_{- 1}}(m)} & {{J_0}(m)} & 0 & 0 & 0 & 0 \\ {{J_1}(m)} & 0 & {{J_0}(m)} & 0 & 0 & 0 \\ 0 & 0 & 0 & {{J_0}(m)} & {{J_1}(m)} & {{J_{- 1}}(m)} \\ 0 & 0 & 0 & {{J_{- 1}}(m)} & {{J_0}(m)} & 0 \\ 0 & 0 & 0 & {{J_1}(m)} & 0 & {{J_0}(m)} \\ \end{array}} \right)\left({\begin{array}{*{20}c} {{a_{1,\omega 0}}} \\ {{a_{1,\omega 1}}} \\ {{a_{1,\omega 2}}} \\ {{a_{3,\omega 0}}} \\ {{a_{3,\omega 1}}} \\ {{a_{3,\omega 2}}} \\ \end{array}} \right)$$ 3.8.1 Modulation index This file demonstrates the use of a modulator. Phase modulation (with up to five higher harmonics) is applied to a laser beam and amplitude detectors are used to measure the field at the first three harmonics. Compare this to Figure 16 as well. Finesse example: Modulation index. Finesse input file for 'Modulation index' 3.8.2 Mirror modulation Finesse offers two different types of modulators: the 'modulator' component shown in the example above, and the 'fsig' command, which can be used to apply a signal modulation to existing optical components. The main difference is that 'fsig' is meant to be used for transfer function computations. Consequently Finesse discards all nonlinear terms, which means that the sideband amplitude is proportional to the signal amplitude and harmonics are not created. Finesse example: Mirror modulation. Finesse input file for 'Mirror modulation' 4 Optical Readout In previous sections we have dealt with the amplitude of light fields directly and also used the amplitude detector in the Finesse examples. This is the advantage of a mathematical analysis versus experimental tests, in which only light intensity or light power can be measured directly. This section gives the mathematical details for modelling photo detectors. The intensity of a field impinging on a photo detector is given as the magnitude of the Poynting vector, with the Poynting vector given as [58] $$\vec S = \vec E \times \vec H = {1 \over {{\mu _0}}}\vec E \times \vec B.$$ Inserting the electric and magnetic components of a plane wave, we obtain $$\vert \vec S\vert = {1 \over {{\mu _0}c}}{E^2} = c{\epsilon _0}E_0^2{\cos ^2}(\omega t) = {{c{\epsilon _0}} \over 2}E_0^2(1 + \cos (2\omega t)),$$ with ϵ0 the electric permittivity of vacuum and c the speed of light. The response of a photo detector is given by the total flux of effective radiation during the response time of the detector. For example, in a photo diode a photon will release a charge in the n-p junction. 
The response time is given by the time it takes for the charge to travel through the detector (and further time may be taken up in the electronic processing of the signal). The size of the photodiode and the applied bias voltage determine the travel time of the charges with typical values of approximately 10 ns. Thus, frequency components faster than perhaps 100 MHz are not resolved by a standard photodiode. For example, a laser beam with a wavelength of λ = 1064 nm has a frequency of f = c/λ ≈ 282 · 1012 Hz = 282 THz. Thus, the 2ω component is much too fast for the photo detector; instead, it returns the average power $$\vert \overline {\vec S} \vert = {{c{\epsilon _0}} \over 2}E_0^2.$$ In complex notation we can write $$\vert \overline {\vec S} \vert = {{c{\epsilon _0}} \over 2}EE^\ast.$$ However, for more intuitive results the light fields can be given in converted units, so that the light power can be computed as the square of the light field amplitudes. Unless otherwise noted, throughout this work the unit of light field amplitudes is \(\sqrt {{\rm{watt}}}\). Thus, the notation used in this document to describe the computation of the light power of a laser beam is $$P = EE^\ast.$$ 4.1 Detection of optical beats What is usually called an optical beat or simply a beat is the sinusoidal behaviour of the intensity of two overlapping and coherent fields. For example, if we superpose two fields of slightly different frequency, we obtain $$\begin{array}{*{20}c} {E = {E_0}\cos ({\omega _1}t) + {E_0}\cos ({\omega _2}t)}\qquad\qquad\qquad\qquad\qquad\qquad\\ {P = {E^2} = E_0^2\left({{{\cos}^2}({\omega _1}t) + {{\cos}^2}({\omega _2}t) + 2\cos ({\omega _1}t)\cos ({\omega _2}t)} \right)}\;\;\;\;\\ {= E_0^2\left({{{\cos}^2}({\omega _1}t) + {{\cos}^2}({\omega _2}t) + \cos ({\omega _ +}t) + \cos ({\omega _ -}t)} \right),}\\ \end{array}$$ with ω+ = ω1 + ω2 and ω− = ω1 − ω2. In this equation the frequency ω− can be very small and can then be detected with the photodiode as illustrated in Figure 22. $${P_{{\rm{diode}}}} = E_0^2(1 + \cos ({\omega _ -}t))$$ Using the same example photodiode as before: in order to be able to detect an optical beat ω− would need to be smaller than 100 MHz. If we take two slightly detuned Nd:YAG lasers with f = 282 THz, this means that the relative detuning of these lasers must be smaller than 10−7. A beam with two frequency components hits the photo diode. Shown in this plot are the field amplitude, the corresponding intensity and the electrical output of the photodiode. In general, for a field with several frequency components, the photodiode signal can be written as $$\vert E{\vert ^2} = E \cdot {E^{\ast}} = \sum\limits_{i = 0}^N {\sum\limits_{j = 0}^N {{a_i}a_j^{\ast}{e^{{\rm{i}}({\omega _i}{\rm{-}}{\omega _j})t}}.}}$$ For example, if the photodiode signal is filtered with a low-pass filter, such that only the DC part remains, we can compute the resulting signal by looking for all components without frequency dependence. The frequency dependence vanishes when the frequency becomes zero, i.e., in all parts of Equation (75) with ωi = ωj. The output is a real number, calculated like this: $$x = \sum\limits_i {\sum\limits_j {{a_i}a_j^{\ast}\quad {\rm{with}}\quad {\rm{\{}}i,j\vert i,j} \in \{0, \ldots, N\} \wedge {\omega _i} = {\omega _j}\} {.}}$$ 4.2 Signal demodulation A typical application of light modulation is its use in a modulation-demodulation scheme, which applies an electronic demodulation to a photodiode signal. 
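A minimal time-domain sketch of this modulation-demodulation idea (Python; the beat frequency, amplitude and phase are illustrative): a photodiode signal containing a beat note is multiplied with a local oscillator and 'low-pass filtered' by simple averaging, and a second mixer in quadrature recovers the phase as well:

```python
import numpy as np

fbeat = 10e3                     # beat frequency [Hz] (illustrative)
amp, phase = 0.5, 0.3            # amplitude and phase of the beat (illustrative)
fs = 1e6                         # sampling rate [Hz]
t = np.arange(0.0, 0.01, 1/fs)   # 10 ms of data, an integer number of beat periods

# Photodiode signal: DC level plus a beat note
P = 1.0 + amp * np.cos(2 * np.pi * fbeat * t + phase)

# Two mixers (in-phase and quadrature local oscillators), then averaging as a crude low-pass
lo_i = np.cos(2 * np.pi * fbeat * t)
lo_q = np.cos(2 * np.pi * fbeat * t + np.pi / 2)
I = np.mean(P * lo_i)
Q = np.mean(P * lo_q)

print("recovered amplitude:", 2 * np.hypot(I, Q), "(expected", amp, ")")
print("recovered phase:    ", np.arctan2(Q, I), "(expected", phase, ")")
```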
A 'demodulation' of a photodiode signal at a user-defined frequency ωx, performed by an electronic mixer and a low-pass filter, produces a signal, which is proportional to the amplitude of the photo current at DC and at the frequency ωx. Interestingly, by using two mixers with different phase offsets one can also reconstruct the phase of the signal, or to be precise the phase difference of the light at ω0 ± ωx with respect to the carrier light. This feature can be very powerful for generating interferometer control signals. Mathematically, the demodulation process can be described by a multiplication of the output with a cosine: cos(ωxt + φx), where φx is the demodulation phase; this cosine is also called the 'local oscillator'. After the multiplication is performed, only the DC part of the result is taken into account. The signal is $${S_0} = \vert E{\vert ^2} = E \cdot E^\ast = \sum\limits_{i = 0}^N {\sum\limits_{j = 0}^N {{a_i}a_j^{\ast}{e^{{\rm{i}}({\omega _i} - {\omega _j})t}}.}}$$ Multiplied with the local oscillator it becomes $$\begin{array}{*{20}c} {{S_1} = {S_0} \cdot \cos ({\omega _x}t + {\varphi _x}) = {S_0}{1 \over 2}({e^{{\rm{i}}({\omega _x}t + {\varphi _x})}} + {e^{{\rm{- i}}({\omega _x}t + {\varphi _x})}})} \\ {= {1 \over 2}\sum\limits_{i = 0}^N {\sum\limits_{j = 0}^N {{a_i}a_j^{\ast}{e^{{\rm{i}}({\omega _i} - {\omega _j})t}} \cdot ({e^{{\rm{i}}({\omega _x}t + {\varphi _x})}} + {e^{{\rm{- i}}({\omega _x}t + {\varphi _x})}})}}.} \\ \end{array}$$ With \({A_{ij}} = {a_i}a_j^{\ast}\) and \({e^{{\rm{i}}{\omega _{ij}}\,t}} = {e^{{\rm{i}}\,({\omega _i} - {\omega _j})\,t}}\) we can write $${S_1} = {1 \over 2}\left({\sum\limits_{i = 0}^N {{A_{ii}} + \sum\limits_{i = 0}^N {\sum\limits_{j = i + 1}^N {({A_{ij}}{e^{{\rm{i}}{\omega _{ij}}t}} + A_{ij}^{\ast}{e^{{\rm{- i}}{\omega _{ij}}t}})}}}} \right)\cdot({e^{{\rm{i}}({\omega _x}t + {\varphi _x})}} + {e^{{\rm{- i}}({\omega _x}t + {\varphi _x})}}){.}$$ When looking for the DC components of S1 we get the following [20]: $$\begin{array}{*{20}c} {{S_{1,{\rm{DC}}}} = \sum\limits_{ij} {{1 \over 2}({A_{ij}}\,{e^{- {\rm{i}}\,{\varphi _x}}} + A_{ij}^{\ast} \,{e^{{\rm{i}}\,{\varphi _x}}})\quad {\rm{with}}\quad \{i,j\vert i,j \in \{0, \ldots ,N\} \wedge {\omega _{ij}} = {\omega _x}\}}} \\ {= \sum\limits_{ij} {\Re \{{A_{ij}}\,{e^{- {\rm{i}}\,{\varphi _x}}}\} {.}} \qquad \quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \,\,} \\ \end{array}$$ This would be the output of a mixer and a subsequent low-pass filter. The results for φx = 0 and φx = π/2 are called in-phase and in-quadrature, respectively (or also first and second quadrature). They are given by $$\begin{array}{*{20}c} {{S_{1,{\rm{DC,phase}}}} = \sum\limits_{ij} {\Re \{{A_{ij}}\},}}\\ {{S_{1,{\rm{DC,quad}}}} = \sum\limits_{ij} {\Im \{{A_{ij}}\}.}}\\\end{array}$$ If only one mixer is used, the output is always real and is determined by the demodulation phase. However, with two mixers generating the in-phase and in-quadrature signals, it is possible to construct a complex number representing the signal amplitude and phase: $$z = \sum\limits_{ij} {{a_i}a_j^{\ast}\quad {\rm{with}}\quad \{i,j\vert i,j \in \{0, \ldots, N\} \wedge {\omega _{ij}} = {\omega _x}\}{.}}$$ Often several sequential demodulations are applied in order to measure very specific phase information. For example, a double demodulation can be described as two sequential multiplications of the signal with two local oscillators and taking the DC component of the result. 
First looking at the whole signal, we can write: $${S_2} = {S_0} \cdot \cos ({\omega _x}t + {\varphi _x})\cos ({\omega _y}t + {\varphi _y}){.}$$ This can be written as $$\begin{array}{*{20}c} {{S_2} = {S_0}{1 \over 2}(\cos ({\omega _y}t + {\omega _x}t + {\varphi _y} + {\varphi _x}) + \cos ({\omega _y}t - {\omega _x}t + {\varphi _y} - {\varphi _x}))}\\ {= {S_0}{1 \over 2}(\cos ({\omega _ +}t + {\varphi _ +}) + \cos ({\omega _ -}t + {\varphi _ -})),}\qquad \qquad \qquad\quad \\ \end{array}$$ and thus reduced to two single demodulations. Since we now only care for the DC component we can use the expression from above (Equation (82)). These two demodulations give two complex numbers: $$\begin{array}{*{20}c} {{z_1} = \sum\limits_{ij} {{A_{ij}}\quad {\rm{with}}\quad \{i,j\vert i,j \in \{0, \ldots, N\} \wedge {\omega _i} - {\omega _j} = {\omega _ +}\},}}\\ {{z_2} = \sum\limits_{kl} {{A_{kl}}\quad {\rm{with}}\quad \{k,l\vert k,l \in \{0, \ldots, N\} \wedge {\omega _k} - {\omega _l} = {\omega _ -}\}{.}}}\\\end{array}$$ The demodulation phases are applied as follows to get a real output (two sequential mixers) $$x = \Re \{({z_1}\;{e^{- {\rm{i}}{\varphi _x}}} + {z_2}\;{e^{{\rm{i}}{\varphi _x}}}){e^{{\rm{- i}}{\varphi _y}}}\}{.}$$ In a typical setup, a user-defined demodulation phase for the first frequency (here φx) is given. If two mixers are used for the second demodulation, we can reconstruct the complex number $$z = {z_1}\;{e^{- {\rm{i}}{\varphi _x}}} + {z_2}\;{e^{{\rm{i}}{\varphi _x}}}.$$ More demodulations can also be reduced to single demodulations as above. 4.3.1 Optical beat In this example two laser beams are superimposed at a 50:50 beam splitter. The beams have a slightly different frequency: the second beam has a 10 kHz offset with respect to the first (and to the default laser frequency). The plot illustrates the output of four different detectors in one of the beam splitter output ports, while the phase of the second beam is tuned from 0° to 180°. The photodiode 'pd1' shows the total power remaining constant at 1. The amplitude detectors 'ad1' and 'ad10k' detect the laser light at 0 Hz (default frequency) and 10 kHz, respectively. Both show a constant absolute value of \(\sqrt {1/2}\) and the detector 'ad10k' tracks the tuning of the phase of the second laser beam. Finally, the detector 'pd10k' resembles a photodiode with demodulation at 10 kHz. In fact, this represents a photodiode and two mixers used to reconstruct a complex number as shown in Equation (82). One can see that the phase of the resulting electronic signal also directly follows the phase difference between the two laser beams. Finesse example: Optical beat. Finesse input file for 'Optical beat' 5 Basic Interferometers The large interferometric gravitational-wave detectors currently in operation are based on two fundamental interferometer topologies: the Fabry-Pérot and the Michelson interferometer. The main instrument is very similar to the original interferometer concept used in the famous experiment by Michelson and Morley, published in 1887 [42]. The main difference is that modern instruments use laser light to illuminate the interferometer to achieve much higher accuracy. Already the first prototype by Forward and Weiss thus achieved a sensitivity a million times better than Michelson's original instrument [18]. 
In addition, in current gravitational-wave detectors, the Michelson interferometer has been enhanced by resonant cavities, which in turn have been derived from the original idea for a spectroscopy standard published by Fabry and Pérot in 1899 [16]. The following section will describe the fundamental properties of the Fabry-Pérot interferometer and the Michelson interferometer. A thorough understanding of these basic instruments is essential for the study of the high-precision interferometers used for gravitational-wave detection. 5.1 The two-mirror cavity: a Fabry-Pérot interferometer We have computed the field amplitudes in a linear two-mirror cavity, also called Fabry-Pérot interferometer, in Section 2.2. In order to understand the features of this optical instrument it is of interest to have a closer look at the power circulation in the cavity. A typical optical layout is shown in Figure 24: two parallel mirrors form the Fabry-Pérot cavity. A laser beam is injected through the first mirror (at normal incidence). Typical optical layout of a two-mirror cavity, also called a Fabry-Pérot interferometer. Two mirrors form the Fabry-Pérot interferometer, a laser beam is injected through one of the mirrors and the reflected and transmitted light can be detected by photo detectors. The behaviour of the (ideal) cavity is determined by the length of the cavity L, the wavelength of the laser λ and the reflectivity and transmittance of the mirrors. Assuming an input power of |a0|2 = 1, we obtain $${P_1} = \vert {a_1}{\vert ^2} = {{{T_1}} \over {1 + {R_1}{R_2} - 2{r_1}{r_2}\cos (2kL)}},$$ with k = 2π/λ, P = |a|2, T = t2 and R = r2, as defined in Section 1.4. Similarly we could compute the transmission of the optical system as the input-output ratio of the field amplitudes. For example, $${{{a_2}} \over {{a_0}}} = {{- {t_1}{t_2}\exp (- {\rm{i}}\;kL)} \over {1 - {r_1}{r_2}\exp (- {\rm{i}}\;2kL)}}$$ is the frequency-dependent transfer function of the cavity in transmission (the frequency dependency is hidden inside the k = 2πf/c). Figure 25 shows a plot of the circulating light power P1 over the laser frequency. The maximum power is reached when the cosine function in the denominator becomes equal to one, i.e., at kL = Nπ with N an integer. This is called the cavity resonance. The lowest power values are reached at anti-resonance when kL = (N + 1/2)π. We can also rewrite $$2kL = \omega {{2L} \over c} = 2\pi f{{2L} \over c} = {{2\pi f} \over {{\rm{FSR}}}},$$ with FSR being the free-spectral range of the cavity as shown in Figure 25. Thus, it becomes clear that resonance is reached for laser frequencies $${f_r} = N \cdot {\rm{FSR}},$$ where N is an integer. Power enhancement in a two-mirror cavity as a function of the laser-light frequency. The peaks mark the resonances of the cavity, i.e., modes of operation in which the injected light is resonantly enhanced. The frequency distance between two peaks is called free-spectral range (FSR). Another characteristic parameter of a cavity is its linewidth, usually given as full width at half maximum (FWHM) or its pole frequency, fp. 
In order to compute the linewidth we have to ask at which frequency the circulating power becomes half the maximum: $$\vert {a_1}({f_p}){\vert ^2}\overset{!}{=}{1 \over 2}\vert {a_{1,\max}}\vert ^{2}.$$ This results in the following expression for the full linewidth: $${\rm{FWHM = 2}}{f_p} = {{2{\rm{FSR}}} \over \pi}\arcsin \left({{{1 - {r_1}{r_2}} \over {2\sqrt {{r_1}{r_2}}}}} \right).$$ The ratio of the linewidth and the free spectral range is called the finesse of a cavity: $$F = {{{\rm{FSR}}} \over {{\rm{FWHM}}}}={\pi \over {2\arcsin \left({{{1 - {r_1}{r_2}} \over {2\sqrt {{r_1}{r_2}}}}} \right)}}.$$ In the case of high finesse, i.e., when r1 and r2 are close to 1, we can use the fact that the argument of the arcsin function is small and make the approximation $$F \approx {{\pi \sqrt {{r_1}{r_2}}} \over {1 - {r_1}{r_2}}} \approx {\pi \over {1 - {r_1}{r_2}}}.$$ The behaviour of a two mirror cavity depends on the length of the cavity (with respect to the frequency of the laser) and on the reflectivities of the mirrors. Regarding the mirror parameters one distinguishes three cases: when T1 < T2 the cavity is called undercoupled; when T1 = T2 the cavity is called impedance matched; when T1 > T2 the cavity is called overcoupled. The differences between these three cases can seem subtle mathematically but have a strong impact on the application of cavities in laser systems. One of the main differences is the phase evolution of the light fields, which is shown in Figure 26. The circulating power shows that the resonance effect is better used in over-coupled cavities; this is illustrated in Figure 27, which shows the transmitted and circulating power for the three different cases. Only in the impedance-matched case can the cavity transmit (on resonance) all the incident power. Given the same total transmission T1 + T2, the overcoupled case allows for the largest circulating power and thus a stronger 'resonance effect' of the cavity, for example, when the cavity is used as a mode filter. Hence, most commonly used cavities are impedance matched or overcoupled. This figure compares the fields reflected by, transmitted by and circulating in a Fabry-Pérot cavity for the three different cases: over-coupled, under-coupled and impedance matched cavity (in all cases T1 + T2 = 0.2 and the round-trip loss is 1%). The traces show the phase and amplitude of the electric field as a function of laser frequency detuning. Power transmitted and circulating in a two mirror cavity with input power 1 W. The mirror transmissions are set such that T1 + T2 = 0.8 and the reflectivities of both mirrors are set as R = 1 − T. The cavity is undercoupled for T1 < 0.4, impedance matched at T1 = T2 = 0.4 and overcoupled for T1 > 0.4. The transmission is maximised in the impedance-matched case and falls similarly for over or undercoupled settings. However, the circulating power (and any resonance performance of the cavity) is much larger in the overcoupled case. 5.2 Michelson interferometer We came across the Michelson interferometer in Section 2.4 when we discussed the phase relation at a beam splitter. The typical optical layout of the Michelson interferometer is shown again in Figure 28: a laser beam is split by a beam splitter and sent along two perpendicular interferometer arms. The four directions seen from the beam splitter are called North, East, West and South. 
The ends of these arms (North and East) are marked by highly reflective end mirrors, which reflect the beams back into themselves so that they can be recombined by the beam splitter. Generally, the Michelson interferometer has two outputs, namely the so far unused beam splitter port (South) and the input port (West). Both output ports can be used to obtain interferometer signals; however, most setups are designed such that the signals with high signal-to-noise ratios are detected in the South port. Typical optical layout of a Michelson interferometer: a laser beam is split into two and sent along two perpendicular interferometer arms. We will label the directions in a Michelson interferometer as North, East, West and South in the following. The end mirrors reflect the beams such that they are recombined at the beam splitter. The South and West ports of the beam splitter are possible output ports, however, in many cases, only the South port is used. The Michelson interferometer output is determined by the laser wavelength λ, the reflectivity and transmittance of the beam splitter and the end mirrors, and the length of the interferometer arms. In many cases the end mirrors are highly reflective and the beam splitter ideally a 50:50 beam splitter. In that case, we can compute the output for a monochromatic field as shown in Section 2.4. Using Equation (20) we can write the field in the South port as $${E_S} = {E_0}{{\rm{i}} \over 2}({e^{{\rm{i}}2k{L_N}}} + {e^{{\rm{i}}2k{L_E}}}).$$ We define the common arm length and the arm-length difference as $$\begin{array}{*{20}c}{\bar L = {{{L_N} + {L_E}} \over 2}} \\ {\Delta L = {L_N} - {L_E},}\\\end{array}$$ which yield \(2{L_N} = 2\bar L + \Delta L\) and \(2{L_E} = 2\bar L - \Delta L\). Thus, we can further simplify to get $${E_S} = {E_0}{{\rm{i}} \over 2}{e^{{\rm{i}}2k\bar L}}({e^{{\rm{i}}\;k\Delta L}} + {e^{{\rm{- i}}k\Delta L}}) = {E_0}\;{\rm{i}}{e^{{\rm{i}}2k\bar L}}\cos (k\Delta L){.}$$ The photo detector then produces a signal proportional to $$S = {E_S}E_S^{\ast} = {P_0}{\cos ^2}(k\Delta L) = {P_0}{\cos ^2}(2\pi \Delta L/\lambda){.}$$ This signal is depicted in Figure 29; it shows that the power in the South port changes between zero and the input power with a period of ΔL/λ = 0.5. The tuning at which the output power drops to zero is called the dark fringe. Current interferometric gravitational-wave detectors operate their Michelson interferometer at or near the dark fringe. Power in the South port of a symmetric Michelson interferometer as a function of the arm length difference ΔL. The above seems to indicate that the macroscopic arm-length difference plays no role in the Michelson output signal. However, this is only correct for a monochromatic laser beam with infinite coherence length. In real interferometers care must be taken that the arm-length difference is well below the coherence length of the light source. In gravitational-wave detectors the macroscopic arm-length difference is an important design feature; it is kept very small in order to reduce coupling of laser noise into the output but needs to retain a finite size to allow the transfer of phase modulation sidebands from the input to the output port; this is illustrated in the Finesse example below and will be covered in detail in Section 6.4. 5.3.1 Michelson power The power in the South port of a Michelson detector varies as the cosine squared of the microscopic arm length difference. 
The maximum output can be equal to the input power, but only if the Michelson interferometer is symmetric and lossless. The tuning for which the South port power is zero is referred to as the dark fringe. Finesse example: Michelson power. Finesse input file for 'Michelson power' 5.3.2 Michelson modulation This example demonstrates how a macroscopic arm length difference can cause different 'dark fringe' tuning for injected fields with different frequencies. In this case, some of the 10 MHz modulation sidebands are transmitted when the interferometer is tuned to a dark fringe for the carrier light. This effect can be used to separate light fields of different frequencies. It is also the cause for transmission of laser noise (especially frequency noise) into the Michelson output port when the interferometer is not perfectly symmetric. Finesse input file for 'Michelson modulation' Finesse example: Michelson modulation. 6 Interferometric Length Sensing and Control In this section we introduce interferometers as length sensing devices. In particular, we explain how the Fabry-Pérot interferometer and the Michelson interferometer can be used for high-precision measurements and that both require a careful control of the base length (which is to be measured) in order to yield their large sensitivity. In addition, we briefly introduce the general concepts of error signals and transfer functions, which are used to describe most essential features of length sensing and control. 6.1 Error signals and transfer functions In general, we will call an error signal any measured signal suitable for stabilising a certain experimental parameter p with a servo loop. The aim is to maintain the variable p at a user-defined value, the operating point, p0. Therefore, the error signal must be a function of the parameter p. In most cases it is preferable to have a bipolar signal with a zero crossing at the operating point. The slope of the error signal at the operating point is a measure of the 'gain' of the sensor (which in the case of interferometers is a combination of optics and electronics). Transfer functions describe the propagation of a periodic signal through a plant and are usually given as plots of amplitude and phase over frequency. By definition a transfer function describes only the linear coupling of signals inside a system. This means a transfer function is independent of the actual signal size. For small signals or small deviations, most systems can be linearised and correctly described by transfer functions. Experimentally, network analysers are commonly used to measure a transfer function: one connects a periodic signal (the source) to an actuator of the plant (which is to be analysed) and to an input of the analyser. A signal from a sensor that monitors a certain parameter of the plant is connected to the second analyser input. By mixing the source with the sensor signal the analyser can determine the amplitude and phase of the input signal with respect to the source (amplitude equals one and the phase equals zero when both signals are identical). Mathematically, transfer functions can be modeled similarly: applying a sinusoidal signal sin(ωst) to the interferometer, e.g., as a position modulation of a cavity mirror, will create phase modulation sidebands with a frequency offset of ±ωs to the carrier light. 
If such light is detected in the right way by a photodiode, it will include a signal at the frequency component ωs, which can be extracted, for example, by means of demodulation (see Section 4.2). Transfer functions are of particular interest in relation to error signals. Typically a transfer function of the error signal is required for the design of the respective electronic servo. A 'transfer function of the error signal' usually refers to a very specific setup: the system is held at its operating point, such that, on average, \(\bar p = {p_0}\). A signal is applied to the system in the form of a very small sinusoidal disturbance of p. The transfer function is then constructed by computing for each signal frequency the ratio of the error signal and the injected signal. Figure 32 shows an example of an error signal and its corresponding transfer function. The operating point shall be at $${x_{\rm{d}}} = 0\quad {\rm{and}}\quad {x_{{\rm{EP}}}}({x_{\rm{d}}} = 0) = 0$$ The optical transfer function \({T_{{\rm{opt,}}{{\rm{x}}_{\rm{d}}}}}\) with respect to this error signal is defined by $${\tilde x_{{\rm{EP}}}}(f) = {T_{{\rm{opt}},{{\rm{x}}_{\rm{d}}}}}{T_{\det}}{\tilde x_{\rm{d}}}(f),$$ with Tdet as the transfer function of the sensor. In the following, Tdet is assumed to be unity. At the zero crossing the slope of the error signal represents the magnitude of the transfer function for low frequencies: $${\left\vert {{{d{x_{{\rm{EP}}}}} \over {d{x_{\rm{d}}}}}} \right\vert _{\vert {x_{\rm{d}}} = 0}} = \vert {T_{{\rm{opt}},{{\rm{x}}_{\rm{d}}}}}{\vert _{\vert f \rightarrow 0}}$$ The quantity above will be called the error-signal slope in the following text. It is proportional to the optical gain |Topt,xd|, which describes the amplification of the gravitational-wave signal by the optical instrument. Example of an error signal: the top graph shows the electronic interferometer output signal as a function of mirror displacement. The operating point is given as the zero crossing, and the error-signal slope is defined as the slope at the operating point. The bottom graph shows the magnitude of the transfer function from mirror displacement to error signal. The slope of the error signal (top graph) is equal to the low frequency limit of the transfer function magnitude (see Equation (102)). 6.2 Fabry-Pérot length sensing In Figure 25 we have plotted the circulating power in a Fabry-Pérot cavity as a function of the laser frequency. The steep features in this plot indicate that such a cavity can be used to measure changes in the laser frequency. From the equation for the circulating power (see Equation (88)), $${P_1}/{P_0} = {{{T_1}} \over {1 + {R_1}{R_2} - 2{r_1}{r_2}\cos (2kL)}} = {{{T_1}} \over d},$$ we can see that the actual frequency dependence is given by the cos(2kL) term. Writing this term as $$\cos (2kL) = \cos \left({2\pi {{2Lf} \over c}} \right),$$ we can highlight the fact that the cavity is in fact a reference for the laser frequency in relation to the cavity length. If we know the cavity length very well, a cavity should be a good instrument to measure the frequency of a laser beam. However, if we know the laser frequency very accurately, we can use an optical cavity to measure a length. In the following we will detail the optical setup and behaviour of a cavity used for a length measurement. The same reasoning applies for frequency measurements. 
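A quick numerical companion to Equation (103) (Python; the cavity length and mirror transmittances are illustrative values, not those of any real detector) evaluates the circulating power as a function of laser frequency together with the free spectral range, linewidth and finesse from the expressions in Section 5.1:

```python
import numpy as np

c = 299792458.0
L = 3000.0                   # cavity length [m] (illustrative)
T1, T2 = 0.01, 0.001         # power transmittances (illustrative, lossless mirrors)
R1, R2 = 1 - T1, 1 - T2
r1, r2 = np.sqrt(R1), np.sqrt(R2)

FSR = c / (2 * L)            # free spectral range

# Circulating power P1/P0 as a function of laser frequency offset from a resonance
f = np.linspace(-0.5 * FSR, 1.5 * FSR, 100001)
d = 1 + R1 * R2 - 2 * r1 * r2 * np.cos(4 * np.pi * f * L / c)
P1 = T1 / d

# Linewidth and finesse from the FWHM and finesse expressions of Section 5.1
FWHM = 2 * FSR / np.pi * np.arcsin((1 - r1 * r2) / (2 * np.sqrt(r1 * r2)))
finesse = FSR / FWHM

print(f"FSR       = {FSR/1e3:.3f} kHz")
print(f"FWHM      = {FWHM:.3f} Hz")
print(f"finesse   = {finesse:.1f}")
print(f"max P1/P0 = {P1.max():.1f}  (resonant power enhancement)")
```

Varying T1 and T2 in this sketch also reproduces the under-, over- and impedance-matched behaviour discussed in Section 5.1.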
If we make use of the resonant power enhancement of the cavity to measure the cavity length, we can derive the sensitivity of the cavity from the differentiation of Equation (88), which gives the slope of the trace shown in Figure 25, $${{d\;{P_1}/{P_0}} \over {d\;L}} = {{- 4{T_1}{r_1}{r_2}k\sin (2kL)} \over {{d^2}}},$$ with d as defined in Equation (103). This is plotted in Figure 33 together with the cavity power as a function of the cavity tuning. From Figure 33 we can deduce a few key features of the cavity: The cavity must be held as near as possible to the resonance for maximum sensitivity. This is the reason that active servo control systems play an important role in modern laser interferometers. If we want to use the power directly as an error signal for the length, we cannot use the cavity directly on resonance because there the optical gain is zero. A suitable error signal (i.e., a bipolar signal) can be constructed by adding an offset to the light power signal. A control system utilising this method is often called DC-lock or offset-lock. However, we show below that more elegant alternative methods for generating error signals exist. The differentiation of the cavity power looks like a perfect error signal for holding the cavity on resonance. A signal proportional to such differentiation can be achieved with a modulation-demodulation technique. The top plot shows the cavity power as a function of the cavity tuning. A tuning of 360° refers to a change in the cavity length by one laser wavelength. The bottom plot shows the differentiation of the upper trace. This illustrates that near resonance the cavity power changes very rapidly when the cavity length changes. However, for most tunings the cavity seems not sensitive at all. 6.3 The Pound-Drever-Hall length sensing scheme This scheme for stabilising the frequency of a light field to the length of a cavity, or vice versa, is based on much older techniques for performing very similar actions with microwaves and microwave resonators. Drever and Hall have adapted such techniques for use in the optical regime [14] and today what is now called the Pound-Drever-Hall technique can be found in a great number of different types of optical setups. An example layout of this scheme is shown in Figure 34, in this case for generating a length (or frequency) signal of a two-mirror cavity. The laser is passed through an electro-optical modulator, which applies a periodic phase modulation at a fixed frequency. In many cases the modulation frequency is chosen such that it resides in the radio frequency band for which low-cost, low-noise electronic components are available. The phase modulated light is then injected into the cavity. However, from the frequency domain analysis introduced in Section 5, we know that in most cases not all the light can be injected into the cavity. Let's consider the example of an over-coupled cavity with the reflectivity of the end mirror R2 < 1. Such a cavity would have a frequency response as shown in the top traces of Figure 26 (recall that the origin of the frequency axis refers to an arbitrarily chosen default frequency, which for this figure has been selected to be a resonance frequency of the cavity). If the cavity is held on resonance for the unmodulated carrier field, this field enters the cavity, gets resonantly enhanced and a substantial fraction is transmitted. 
If the frequency offset of the modulation sidebands is chosen such that it does not coincide with (or lie near) an integer multiple of the cavity's free spectral range, the modulation sidebands are mostly reflected by the cavity and will not be influenced as much by the resonance condition of the cavity as the carrier. The photodiode measuring the reflected light will see the optical beat between the carrier field and the modulation sidebands. This includes a component at the modulation frequency which is a measure of the phase difference between the carrier field and the sidebands (given the setup as described above). Any slight change of the cavity length would introduce a proportional change in the phase of the carrier field and no change in the sideband fields. Thus the photodiode signal can be used to measure the length changes of the cavity. One of the advantages of this method is the fact that the signal generated in this way is bipolar with a zero crossing and steep slope exactly at the cavity's resonance, see Figure 35. Typical setup for using the Pound-Drever-Hall scheme for length sensing with a two-mirror cavity: the laser beam is phase modulated with an electro-optical modulator (EOM). The modulation frequency is often in the radio frequency range. The photodiode signal in reflection is then electrically demodulated at the same frequency. This figure shows an example of a Pound-Drever-Hall (PDH) signal of a two-mirror cavity. The plots refer to a setup in which the cavity mirrors are stationary and the frequency of the input laser is tuned linearly. The upper trace shows the light power circulating in the cavity. The three peaks correspond to the frequency tunings for which the carrier (main central peak) or the modulation sidebands (smaller side peaks) are resonant in the cavity. The lower trace shows the PDH signal for the same frequency tuning. Coincident with the peaks in the upper trace are bipolar structures in the lower trace. Each of the bipolar structures would be suitable as a length-sensing signal. In most cases the central structure is used, as experimentally it can be easily identified because its slope has a different sign compared to the sideband structures. 6.4 Michelson length sensing Similarly to the two-mirror cavity, we can start to understand the length-sensing capabilities of the Michelson interferometer by looking at the output light power as a function of a mirror movement, as shown in Figure 29. The power changes as sine squared with the maximum slope at the point when the output power (in what we call the South port) is half the input power. The slope of the output power, which is the optical gain of the instrument for detecting a differential arm-length change ΔL with a photo detector in the South port, can be written as $${{dS} \over {d\Delta L}} = {{2\pi {P_0}} \over \lambda}\sin \left({{{4\pi} \over \lambda}\Delta L} \right)$$ and is shown in Figure 36. The most notable difference of the optical gain of the Michelson interferometer with respect to the Fabry-Pérot interferometer (see Figure 33) is the wider, smoother distribution of the gain. This is due to the fact that the cavity example is based on a high-finesse cavity in which the optical resonance effect is dominant. In a basic Michelson interferometer such resonance enhancement is not present. Power and slope of a Michelson interferometer. The upper plot shows the output power of a Michelson interferometer as detected in the South port (as already shown in Figure 29). 
The lower plot shows the optical gain of the instrument as given by the slope of the upper plot. However, the main difference is that the measurement is made differentially by comparing two lengths. This allows one to separate a larger number of possible noise contributions, for example noise in the laser light source, such as amplitude or frequency noise. This is why the main instrument for gravitational-wave measurements is a Michelson interferometer. However, the resonant enhancement of light power can be added to the Michelson, for example, by using Fabry-Pérot cavities within the Michelson. This construction of new topologies by combining Michelson and Fabry-Pérot interferometers will be described in detail in a future version of this review. The Michelson interferometer has two longitudinal degrees of freedom. These can be represented by the positions (along the optical axes) of the end mirrors. However, it is more efficient to use proper linear combinations of these and describe the Michelson interferometer length or position information by the common and differential arm length, as introduced in Equation (97): $$\begin{array}{*{20}c}{\bar L = {{{L_N} + {L_E}} \over 2}}\quad\\{\Delta L = {L_N} - {L_E}.}\\\end{array}$$ The Michelson interferometer is intrinsically insensitive to the common arm length \({\bar L}\). 6.5 The Schnupp modulation scheme Similar to the Fabry-Pérot cavity, the Michelson interferometer is also often used to set an operating point where the optical gain of a direct light power detection is zero. This operating point, given by ΔL/λ = (2N + 1) • 0.25 with N a non-negative integer, is called dark fringe. This operating point has several advantages, the most important being the low (ideally zero) light power on the diode. Highly efficient and low-noise photodiodes usually use a small detector area and thus are typically not able to detect large power levels. By using the dark fringe operating point, the Michelson interferometer can be used as a null instrument or null measurement, which generally is a good method to reduce systematic errors [49]. One approach to make use of the advantages of the dark fringe operating point is to use an operating point very close to the dark fringe at which the optical gain is not yet zero. In such a scenario a careful trade-off calculation can be done by computing the signal-to-noise with noises that must be suppressed, such as the laser amplitude noise. This type of operation is usually referred to as DC control or offset control and is very similar to the similarly-named mechanism used with Fabry-Pérot cavities. Another option is to employ phase modulated light, similar to the Pound-Drever-Hall scheme described in Section 6.3. The optical layout of such a scheme is depicted in Figure 37: an electro-optical modulator is used to apply a phase modulation at a fixed (usually RF type) frequency to the (monochromatic) laser light before it enters the interferometer. The photodiode signal from the interferometer output is then demodulated at the same frequency. This scheme allows one to operate the interferometer precisely on the dark fringe. The method originally proposed by Lise Schnupp is also sometimes referred to as frontal modulation. This length sensing scheme is often referred to as frontal or Schnupp modulation: an EOM is used to phase modulate the laser beam before entering the Michelson interferometer. The signal of the photodiode in the South port is then demodulated at the same frequency used for the modulation. 
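A compact numerical illustration (Python; the mean arm length, Schnupp asymmetry and modulation frequency are illustrative assumptions) of why the arm-length asymmetry matters: the South-port field derived in Section 5.2 is evaluated at the carrier frequency and at the two RF sideband frequencies, with the arm-length difference tuned to a carrier dark fringe.

```python
import numpy as np

c = 299792458.0
lam0 = 1.064e-6
w0 = 2 * np.pi * c / lam0
k0 = w0 / c

Lbar = 1.0        # mean arm length [m] (illustrative)
dL = 0.10         # macroscopic Schnupp asymmetry L_N - L_E [m] (illustrative)
f_mod = 10e6      # RF modulation frequency [Hz] (illustrative)

def south_amplitude(w, dL, Lbar):
    """South-port field per unit input: i * exp(2i*k*Lbar) * cos(k*dL), cf. Section 5.2."""
    k = w / c
    return 1j * np.exp(2j * k * Lbar) * np.cos(k * dL)

# Adjust dL by a microscopic amount so that the carrier sits exactly on a dark fringe
n = np.round(k0 * dL / np.pi - 0.5)
dL_dark = (n + 0.5) * np.pi / k0

for name, w in [("carrier", w0),
                ("upper sideband", w0 + 2 * np.pi * f_mod),
                ("lower sideband", w0 - 2 * np.pi * f_mod)]:
    a = south_amplitude(w, dL_dark, Lbar)
    print(f"{name:15s} |E_S/E_0| = {abs(a):.3e}")
```

The carrier output is essentially zero (limited only by floating-point rounding), while the sidebands retain an amplitude of roughly sin(Ω ΔL/c) and can therefore be detected and demodulated in the South port; with ΔL = 0 all three components would be suppressed alike.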
The optical gain of a Michelson interferometer with Schnupp modulation is shown in Figure 39 in Section 6.6. Finesse example: Cavity power and slope. Finesse example: Michelson with Schnupp modulation. 6.6.1 Cavity power and slope Figure 33 shows a plot of the analytical functions describing the power inside a cavity and its differentiation by the cavity tuning. This example recreates the plot using a numerical model in Finesse. Finesse input file for 'Cavity power and slope' 6.6.2 Michelson with Schnupp modulation Figure 39 shows the demodulated photodiode signal of a Michelson interferometer with Schnupp modulation, as well as its differentiation, the latter being the optical gain of the system. Comparing this figure to Figure 36, it can be seen that with Schnupp modulation, the optical gain at the dark fringe operating points is maximised and a suitable error signal for these points is obtained. Finesse input file for 'Michelson with Schnupp modulation' 7 Beam Shapes: Beyond the Plane Wave Approximation In previous sections we have introduced a notation for describing the on-axis properties of electric fields. Specifically, we have described the electric fields along an optical axis as functions of frequency (or time) and the location z. Models of optical systems may often use this approach for a basic analysis even though the respective experiments will always include fields with distinct off-axis beam shapes. A more detailed description of such optical systems needs to take the geometrical shape of the light field into account. One method of treating the transverse beam geometry is to describe the spatial properties as a sum of 'spatial components' or 'spatial modes' so that the electric field can be written as a sum of the different frequency components and of the different spatial modes. Of course, the concept of modes is directly related to the use of a sort of oscillator, in this case the optical cavity. Most of the work presented here is based on the research on laser resonators reviewed originally by Kogelnik and Li [35]. Siegman has written a very interesting historic review of the development of Gaussian optics [52, 51] and we use whenever possible the same notation as used in his textbook 'Lasers' [50]. This section introduces the use of Gaussian modes for describing the spatial properties along the transverse orthogonal x and y directions of an optical beam. We can write $$E(t,x,y,z) = \sum\limits_j {\sum\limits_{n,m} {{a_{jnm}}\;{u_{nm}}(x,y,z)} \;\exp ({\rm{i}}({\omega _j}t - {k_j}z)),}$$ with unm as special functions describing the spatial properties of the beam and ajnm as complex amplitude factors (ωj is again the angular frequency and kj = ωj/c). For simplicity we restrict the following description to a single frequency component at one moment in time (t = 0), so $$E(x,y,z) = \exp (- {\rm{i}}\;kz)\;\sum\limits_{n,m} {{a_{nm}} {u_{nm}}(x,y,z){.}}$$ In general, different types of spatial modes unm can be used in this context. Of particular interest are the Gaussian modes, which will be used throughout this document. Many lasers emit light that closely resembles a Gaussian beam: the light mainly propagates along one axis, is well collimated around that axis and the cross section of the intensity perpendicular to the optical axis shows a Gaussian distribution. The following sections provide the basic mathematical framework for using Gaussian modes for analysing optical systems. 
7.1 The paraxial wave equation Mathematically, Gaussian modes are solutions to the paraxial wave equation — a specific wave equation for electromagnetic fields. All electromagnetic waves are solutions to the general wave equation, which in vacuum can be given as: $$\Delta \vec E - {1 \over {{c^2}}}\ddot \vec E = 0.$$ But laser light fields are special types of electromagnetic waves. For example, they are characterised by low diffraction. Hence, a laser beam will have a characteristic length ω describing the 'width' (the dimension of the field transverse to the main propagation axis), and a characteristic length l defining some local length along the propagation over which the beam characteristics do not vary much. By definition, for what we call a beam ω is typically small and l large in comparison, so that ω/l can be considered small. In fact, the paraxial wave equation (and its solutions) can be derived as the first-order terms of a series expansion of Equation (109) into orders of ω/l [37]. A simpler approach to the paraxial-wave equation goes as follows: A particular beam shape shall be described by a function u(x, y, z) so that we can write the electric field as $$E(x,y,z) = u(x,y,z)\;\exp (- {\rm{i}}\;kz){.}$$ Substituting this into the standard wave equation yields a differential equation for u: $$(\partial _x^2 + \partial _y^2 + \partial _z^2)u(x,y,z) - 2{\rm{i}}\;k{\partial _z}u(x,y,z) = 0{.}$$ Now we put the fact that u(x, y, z) should be slowly varying with z in mathematical terms. The variation of u(x, y, z) with z should be small compared to its variation with x or y. Also the second partial derivative in z should be small. This can be expressed as $$\vert \partial _z^2u(x,y,z)\vert \ll \vert 2k{\partial _z}u(x,y,z)\vert, \vert \partial _x^2u(x,y,z)\vert, \vert \partial _y^2u(x,y,z)\vert.$$ With this approximation, Equation (111) can be simplified to the paraxial wave equation, $$(\partial _x^2 + \partial _y^2)u(x,y,z) - 2{\rm{i}}\;k{\partial _z}u(x,y,z) = 0{.}$$ Any field u that solves this equation represents a paraxial beam shape when used in the form given in Equation (110). 7.2 Transverse electromagnetic modes In general, any solution u(x, y, z) of the paraxial wave equation, Equation (113), can be employed to represent the transverse properties of a scalar electric field representing a beam-like electromagnetic wave. Especially useful in this respect are special families or sets of functions that are solutions of the paraxial wave equation. When such a set of functions is complete and countable, it's called a set of transverse electromagnetic modes (TEM). For instance, the set of Hermite-Gauss modes are exact solutions of the paraxial wave equation. These modes are represented by an infinite, countable and complete set of functions. The term complete means they can be understood as a base system of the function space defined by all solutions of the paraxial wave equation. 
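As a quick numerical sanity check of the paraxial wave equation, Equation (113), the following sketch evaluates a fundamental Gaussian beam written in the compact q-parameter form (see Sections 7.3 and 7.6 below) and verifies by finite differences that (∂x² + ∂y²)u − 2ik ∂z u is small compared to the individual terms. The wavelength, waist size, evaluation point and step sizes are arbitrary assumptions made only for this check.

```python
# Finite-difference check that a fundamental Gaussian beam satisfies the paraxial
# wave equation (d^2/dx^2 + d^2/dy^2) u - 2ik du/dz = 0  (Equation (113)).
import numpy as np

lam = 1.0e-6                 # wavelength [m] (assumed)
w0 = 1.0e-3                  # waist radius [m] (assumed)
k = 2 * np.pi / lam
zR = np.pi * w0**2 / lam     # Rayleigh range

def u(x, y, z):
    """Fundamental Gaussian in the compact q-parameter form, waist at z = 0."""
    q = 1j * zR + z
    return 1.0 / q * np.exp(-1j * k * (x**2 + y**2) / (2 * q))

xe, ye, ze = 0.3e-3, 0.2e-3, 0.7     # evaluation point [m] (assumed)
hx, hz = 1.0e-5, 1.0e-3              # finite-difference step sizes (assumed)

d2x = (u(xe + hx, ye, ze) - 2 * u(xe, ye, ze) + u(xe - hx, ye, ze)) / hx**2
d2y = (u(xe, ye + hx, ze) - 2 * u(xe, ye, ze) + u(xe, ye - hx, ze)) / hx**2
dz = (u(xe, ye, ze + hz) - u(xe, ye, ze - hz)) / (2 * hz)

residual = d2x + d2y - 2j * k * dz
print("|residual| / |2k du/dz| =", abs(residual) / abs(2 * k * dz))   # should be << 1
```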
In other words, we can describe any solution of the paraxial wave equation u′ by a linear superposition of Hermite-Gauss modes: $$u^{\prime}(x,y,z) = \sum\limits_{n,m} {{a_{jnm}}\;{u_{nm}}(x,y,z),}$$ which in turn allows us to describe any laser beam using a sum of these modes: $$E(t,x,y,z) = \sum\limits_j {\sum\limits_{n,m} {{a_{jnm}}\;{u_{nm}}(x,y,z)\;\exp ({\rm{i}}({\omega _j}t - {k_j}z)).}}$$ The Hermite-Gauss modes as given in this document (see Section 7.5) are orthonormal so that $$\int \int dxdy\;{u_{nm}}u_{{n^\prime}{m^\prime}}^\ast = {\delta _{n{n^\prime}}}{\delta _{m{m^\prime}}} = \left\{{\begin{array}{*{20}c} {1\quad {\rm{if}}\quad n = n^\prime\quad {\rm{and}}\quad m = m^\prime}\\ {0\quad {\rm{otherwise}}\quad \quad \quad \quad \quad \quad}\\ \end{array}} \right\}.$$ This means that, in the function space defined by the paraxial wave equation, the Hermite-Gauss functions can be understood as a complete set of unit-length basis vectors. This fact can be utilised for the computation of coupling factors. Furthermore, the power of a beam, as given by Equation (108), being detected on a single-element photodetector (provided that the area of the detector is large with respect to the beam) can be computed as $$EE^\ast = \sum\limits_{n,m} {{a_{nm}}a_{nm}^{\ast},}$$ or for a beam with several frequency components (compare with Equation (76)) as $$EE^\ast = \sum\limits_{n,m} {\sum\limits_i {\sum\limits_j {{a_{inm}}a_{jnm\quad}^{\ast}{\rm{with}}\quad \{i,j\vert i,j \in \{0, \ldots, N\}} \wedge {\omega _i} = {\omega _j}\}}.}$$ 7.3 Properties of Gaussian beams The basic or 'lowest-order' Hermite-Gauss mode is equivalent to what is usually called a Gaussian beam and is given by $$u(x,y,z) = \sqrt {{2 \over \pi}} {1 \over {w(z)}}\exp ({\rm{i}}\Psi (z))\;\exp \left({- {\rm{i}}\;k{{{x^2} + {y^2}} \over {2{R_C}(z)}} - {{{x^2} + {y^2}} \over {{w^2}(z)}}} \right).$$ The parameters of this equation are explained in detail below. The shape of a Gaussian beam is quite simple: the beam has a circular cross section, and the radial intensity profile of a beam with total power P is given by $$I(r) = {{2P} \over {\pi {w^2}(z)}}\exp (- 2{r^2}/{w^2}),$$ with ω the spot size, defined as the radius at which the intensity is 1/e2 times the maximum intensity I(0). This is a Gaussian distribution, see Figure 40, hence the name Gaussian beam. One dimensional cross-section of a Gaussian beam. The width of the beam is given by the radius ω at which the intensity is 1/e2 of the maximum intensity.) Figure 41 shows a different cross section through a Gaussian beam: it plots the beam size as a function of the position on the optical axis. Gaussian beam profile along z: this cross section along the x-z-plane illustrates how the beam size ω(z) of the Gaussian beam changes along the optical axis. The position of minimum beam size ω0 is called beam waist. See text for a description of the parameters Θ, zr and Rc. Such a beam profile (for a beam with a given wavelength λ) can be completely determined by two parameters: the size of the minimum spot size ω0 (called beam waist) and the position z0 of the beam waist along the z-axis. To characterise a Gaussian beam, some useful parameters can be derived from ω0 and z0. A Gaussian beam can be divided into two different sections along the z-axis: a near field — a region around the beam waist, and a far field — far away from the waist. The length of the near-field region is approximately given by the Rayleigh range zR. 
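As a quick check of the radial intensity profile given above, the following sketch evaluates I(r) at the waist and integrates it over the transverse plane, recovering the total power P; the power and waist size are assumed example values.

```python
# Radial intensity profile of a Gaussian beam at its waist and a numerical check
# that it integrates to the total power P.
import numpy as np

P = 1.0          # total beam power [W] (assumed)
w0 = 1.0e-3      # waist radius [m] (assumed)

def intensity(r, w):
    return 2 * P / (np.pi * w**2) * np.exp(-2 * r**2 / w**2)

r = np.linspace(0.0, 10 * w0, 20001)
recovered = np.trapz(intensity(r, w0) * 2 * np.pi * r, r)   # integral over the plane

print("peak intensity I(0) [W/m^2]:", intensity(0.0, w0))
print("I(w0) / I(0)               :", intensity(w0, w0) / intensity(0.0, w0))  # = 1/e^2
print("integrated power [W]       :", recovered)                              # = P
```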
The Rayleigh range and the spot size are related by $${z_{\rm{R}}} = {{\pi w_0^2} \over \lambda}.$$ With the Rayleigh range and the location of the beam waist, we can usefully write $$w(z) = {w_0}\sqrt {1 + {{\left({{{z - {z_0}} \over {{z_{\rm{R}}}}}} \right)}^2}}.$$ This equation gives the size of the beam along the z-axis. In the far-field regime (z ≫ zR, z0), it can be approximated by a linear equation, when $$w(z) \approx {w_0}{z \over {{z_{\rm{R}}}}} = {{z\lambda} \over {\pi {w_0}}}.$$ The angle Θ between the z-axis and ω(z) in the far field is called the diffraction angle6 and is defined by $$\Theta = \arctan \left({{{{w_0}} \over {{z_{\rm{R}}}}}} \right) = \arctan \left({{\lambda \over {\pi {w_0}}}} \right) \approx {{{w_0}} \over {{z_{\rm{R}}}}}.$$ Another useful parameter is the radius of curvature of the wavefront at a given point z. The radius of curvature describes the curvature of the 'phase front' of the electromagnetic wave — a surface across the beam with equal phase — intersecting the optical axis at the position z. We obtain the radius of curvature as a function of z: $${R_C}(z) = z - {z_0} + {{z_{\rm{R}}^2} \over {z - {z_0}}}.$$ We also find: $$\begin{array}{*{20}c} {{R_C} \approx \infty, \quad z - {z_0} \ll {z_{\rm{R}}}} & {({\rm{beam}}\;{\rm{waist}})\quad \quad \quad \quad} \\ {{R_C} \approx z,\;\;\;\;\;\;\;z \gg {z_{\rm{R}}},{z_{\rm{0}}}} & {({\rm{far}}\;{\rm{field}})\quad \quad \quad \quad \quad} \\ {{R_C} = 2{z_{\rm{R}}},\quad z - {z_0} = {z_{\rm{R}}}} & {({\rm{maximum}}\;{\rm{curvature}})\;.} \\ \end{array}$$ 7.4 Astigmatic beams: the tangential and sagittal plane If the interferometer is confined to a plane (here the x–z plane), it is convenient to use projections of the three-dimensional description into two planes [46]: the tangential plane, defined as the x–z plane and the sagittal plane as given by y and z. The beam parameters can then be split into two respective parameters: z0,s, ω0,s for the sagittal plane and z0,t and ω0,t for the tangential plane so that the Hermite-Gauss modes can be written as $${u_{nm}}(x,y) = {u_n}(x,{z_{0,t}},{w_{0,t}})\;{u_m}(y,{z_{0,s}},{w_{0,s}}){.}$$ Beams with different beam waist parameters for the sagittal and tangential plane are astigmatic. Remember that these Hermite-Gauss modes form a base system. This means one can use the separation into sagittal and tangential planes even if the actual optical system does not show this special type of symmetry. This separation is very useful in simplifying the mathematics. In the following, the term beam parameter generally refers to a simple case where ω0,x = ω0,y and z0,x = z0,y but all the results can also be applied directly to a pair of parameters. 7.5 Higher-order Hermite-Gauss modes The complete set of Hermite-Gauss modes is given by an infinite discrete set of modes unm(x, y, z) with the indices n and m as mode numbers. The sum n+m is called the order of the mode. The term higher-order modes usually refers to modes with an order n + m > 0. The general expression for Hermite-Gauss modes can be given as [35] $${u_{{\rm{nm}}}}(x,y,z) = {u_{\rm{n}}}(x,z)\;{u_{\rm{m}}}(y,z),$$ $$\begin{array}{*{20}c} {{u_{\rm{n}}}(x,z) = {{\left({{2 \over \pi}} \right)}^{1/4}}{{\left({{{\exp ({\rm{i}}(2n + 1)\Psi (z))} \over {{2^n}n!w(z)}}} \right)}^{1/2}} \times}\qquad\qquad\qquad\\ {{H_n}\left({{{\sqrt 2 x} \over {w(z)}}} \right)\exp \left({- {\rm{i}}{{k{x^2}} \over {2{R_C}(z)}} - {{{x^2}} \over {{w^2}(z)}}} \right),}\\ \end{array}$$ and Hn(x) the Hermite polynomials of order n. 
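These propagation relations are compact enough to collect into a few helper functions. The sketch below does so and checks the far-field approximation of w(z) and the limiting values of RC quoted above; the wavelength and waist size are assumed example values.

```python
# Gaussian beam propagation parameters: Rayleigh range, beam size w(z),
# divergence angle and radius of curvature, following the formulas above.
import numpy as np

lam = 1.064e-6   # wavelength [m] (assumed)
w0 = 1.0e-3      # waist radius [m] (assumed)
z0 = 0.0         # waist position [m] (assumed)

zR = np.pi * w0**2 / lam                        # Rayleigh range

def w(z):
    return w0 * np.sqrt(1 + ((z - z0) / zR)**2)

def R_C(z):
    return (z - z0) + zR**2 / (z - z0)          # radius of curvature of the phase front

theta = np.arctan(w0 / zR)                      # divergence (far-field) angle

print("Rayleigh range zR [m]       :", zR)
print("divergence angle [rad]      :", theta)
print("w at z = 100 m [m]          :", w(100.0))
print("far-field approximation [m] :", 100.0 * lam / (np.pi * w0))   # ~ w(100 m)
print("R_C at z - z0 = zR [m]      :", R_C(z0 + zR), "(= 2 zR, maximum curvature)")
print("R_C at z = 1 km [m]         :", R_C(1e3), "(~ z in the far field)")
```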
The first Hermite polynomials, without normalisation, can be written $$\begin{array}{*{20}c} {{H_0}(x) = 1\quad \quad \;\;\;{H_1}(x) = 2x}\qquad\;\\ {{H_2}(x) = 4{x^2} - 2\;{H_3}(x) = 8{x^3} - 12x.}\\ \end{array}$$ Further orders can be computed recursively since $${H_{n + 1}}(x) = 2x{H_n}(x) - 2n{H_{n - 1}}(x).$$ For both transverse directions we can also rewrite the above to $$\begin{array}{*{20}c} {{u_{{\rm{nm}}}}(x,y,z) = {{({2^{n + m - 1}}n!m!\pi)}^{- 1/2}}{1 \over {w(z)}}\;\exp ({\rm{i}}(n + m + 1)\Psi (z)) \times}\qquad \qquad \qquad \qquad\qquad \\ {{H_n}\left({{{\sqrt 2 x} \over {w(z)}}} \right){H_m}\left({{{\sqrt 2 y} \over {w(z)}}} \right)\exp \left({- {\rm{i}}{{k({x^2} + {y^2})} \over {2{R_C}(z)}} - {{{x^2} + {y^2}} \over {{w^2}(z)}}} \right).}\\ \end{array}$$ The latter form has the advantage of clearly showing the extra phase shift along the z-axis of (n + m + 1)Ψ(z), called the Gouy phase; see Section 7.8. 7.6 The Gaussian beam parameter For a more compact description of the interaction of Gaussian modes with optical components we will make use of the Gaussian beam parameter q [34]. The beam parameter is a complex quantity defined as $$q(z) = {\rm{i}}{z_{\rm{R}}} + z - {z_0} = {q_0} + z - {z_0}\quad {\rm{and}}\quad {q_0} = {\rm{i}}{z_{\rm{R}}}.$$ It can also be written as $${1 \over {q(z)}} = {1 \over {{R_C}(z)}} - {\rm{i}}{\lambda \over {\pi {w^2}(z)}}.$$ Using this parameter, Equation (119) can be rewritten as $$u(x,y,z) = {1 \over {q(z)}}\exp \left({- {\rm{i}}\;k{{{x^2} + {y^2}} \over {2q(z)}}} \right).$$ Other parameters, like the beam size and radius of curvature, can also be written in terms of the beam parameter q: $${w^2}(z) = {\lambda \over \pi}{{\vert q{\vert ^2}} \over {\Im \{q\}}},$$ $$w_0^2 = {{\Im \{q\} \lambda} \over \pi},$$ $${z_{\rm{R}}} = \Im \{q\},$$ $${R_C}(z) = {{\vert q\vert ^{2}} \over {\Re \{q\}}}.$$ The Hermite-Gauss modes can also be written using the Gaussian beam parameter as7 $$\begin{array}{*{20}c} {{u_{{\rm{nm}}}}(x,y,z) = {u_{\rm{n}}}(x,z){u_{\rm{m}}}(y,z)\quad {\rm{with}}}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \;\;\\ {u_{\rm n}(x,z) = {{\left({{2 \over \pi}} \right)}^{1/4}}{{\left({{1 \over {{2^n}n!{w_0}}}} \right)}^{1/2}}{{\left({{{{q_0}} \over {q(z)}}} \right)}^{1/2}}{{\left({{{{q_0}\;q^\ast(z)} \over {q_0^{\ast}\;q(z)}}} \right)}^{n/2}}{H_n}\left({{{\sqrt 2 x} \over {w(z)}}} \right)\exp \left({- {\rm{i}}{{k{x^2}} \over {2q(z)}}} \right).}\\ \end{array}$$ 7.7 Properties of higher-order Hermite-Gauss modes Some of the properties of Hermite-Gauss modes can easily be described using cross sections of the field intensity or field amplitude. Figure 42 shows such cross sections, i.e., the intensity in the x–y plane, for a number of higher-order modes. This shows an x–y symmetry for mode indices n and m. We can also see how the size of the intensity distribution increases with the mode index, while the peak intensity decreases. This plot shows the intensity distribution of Hermite-Gauss modes unm. One can see that the intensity distribution becomes wider for larger mode indices and the peak intensity decreases. The mode index defines the number of dark stripes in the respective direction. Similarly, Figure 44 shows the amplitude and phase distribution of several higher-order Hermite-Gauss modes. 
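Before looking at the higher-order mode properties in more detail, here is a small numerical illustration of the beam parameter q introduced in Section 7.6. The helper functions simply implement the relations just listed; the wavelength, waist size and evaluation point are assumed example values.

```python
# Gaussian beam parameter q and the quantities derived from it (Section 7.6).
import numpy as np

lam = 1.064e-6                  # wavelength [m] (assumed)
w0 = 1.0e-3                     # waist radius [m] (assumed)
z0 = 0.0
zR = np.pi * w0**2 / lam
q0 = 1j * zR

def q(z):
    return q0 + z - z0

def w_from_q(q_):
    return np.sqrt(lam / np.pi * abs(q_)**2 / q_.imag)

def w0_from_q(q_):
    return np.sqrt(q_.imag * lam / np.pi)

def zR_from_q(q_):
    return q_.imag

def RC_from_q(q_):
    return abs(q_)**2 / q_.real

z = 5.0                         # evaluation point [m] (assumed)
qz = q(z)
print("q(z)             :", qz)
print("w(z)  from q [m] :", w_from_q(qz))
print("w0    from q [m] :", w0_from_q(qz))
print("zR    from q [m] :", zR_from_q(qz))
print("R_C   from q [m] :", RC_from_q(qz))
print("check 1/q = 1/R_C - i*lam/(pi*w^2):",
      np.isclose(1 / qz, 1 / RC_from_q(qz) - 1j * lam / (np.pi * w_from_q(qz)**2)))
```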
Some further features of Hermite-Gauss modes: The size of the intensity profile of any sum of Hermite-Gauss modes depends on z while its shape remains constant over propagation along the optical axis. The phase distribution of Hermite-Gauss modes shows the curvature (or radius of curvature) of the beam. The curvature depends on z but is equal for all higher-order modes. Note that these are special features of Gaussian beams and not generally true for arbitrary beam shapes. Figure 43, for example, shows the amplitude and phase distribution of a triangular beam at the point where it is (mathematically) created and after a 10 m propagation. Neither the shape is preserved nor does it show a spherical phase distribution. These top plots show a triangular beam shape and phase distribution and the bottom plots the diffraction pattern of this beam after a propagation of z = 5 m. It can be seen that the shape of the triangular beam is not conserved and that the phase front is not spherical. These plots show the amplitude distribution and wave front (phase distribution) of Hermite-Gaussian modes unm (labeled as HGnm in the plot). All plots refer to a beam with λ = 1 µm, ω = 1 mm and distance to waist z = 1 m. The mode index (in one direction) defines the number of zero crossings (along that axis) in the amplitude distribution. One can also see that the phase distribution is the same spherical distribution, regardless of the mode indices. 7.8 Gouy phase The equation for Hermite-Gauss modes shows an extra longitudinal phase lag. This Gouy phase [8, 26, 25] describes the fact that, compared to a plane wave, the Hermite-Gauss modes have a slightly slower phase velocity, especially close to the waist. The Gouy phase can be written as $$\Psi (z) = \arctan \left({{{z - {z_0}} \over {{z_{\rm{R}}}}}} \right),$$ or, using the Gaussian beam parameter, $$\Psi (z) = \arctan \left({{{\Re \{q\}} \over {\Im \{q\}}}} \right).$$ Compared to a plane wave, the phase lag φ of a Hermite-Gauss mode is $$\varphi = (n + m + 1)\Psi (z).$$ With an astigmatic beam, i.e., different beam parameters in the tangential and sagittal planes, this becomes $$\varphi = \left({n + {1 \over 2}} \right){\Psi _t}(z) + \left({m + {1 \over 2}} \right){\Psi _s}(z),$$ with $${\Psi _t}(z) = \arctan \left({{{\Re \{{q_t}\}} \over {\Im \{{q_t}\}}}} \right),$$ as the Gouy phase in the tangential plane (and Ψs is similarly defined in the sagittal plane). 7.9 Laguerre-Gauss modes Laguerre-Gauss modes are another complete set of functions, which solve the paraxial wave equation. They are defined in cylindrical coordinates and can have advantages over Hermite-Gauss modes in the presence of cylindrical symmetry. More recently, Laguerre-Gauss modes are being investigated in a different context: using a pure higher-order Laguerre-Gauss mode instead of the fundamental Gaussian beam can significantly reduce the impact of mirror thermal noise on the sensitivity of gravitational wave detectors [54, 12]. 
Laguerre-Gauss modes are commonly given as [50] $$\begin{array}{*{20}c} {{u_{p,l}}(r, \phi, z) = {1 \over {w(z)}}\sqrt {{{2p!} \over {\pi (\vert l\vert + p)!}}} \exp ({\rm{i}}(2p + \vert l\vert + 1)\Psi (z))}\qquad \qquad \qquad\\ {\times {{\left({{{\sqrt 2 r} \over {w(z)}}} \right)}^{\vert l\vert}}\;\;L_p^{(l)}\left({{{2{r^2}} \over {w{{(z)}^2}}}} \right)\exp \left({- {\rm{i}}\;k{{{r^2}} \over {2q(z)}} + {\rm{i}}l\phi} \right),}\\ \end{array}$$ with r, ϕ and z as the cylindrical coordinates around the optical axis. The letter p is the radial mode index, l the azimuthal mode index8 and \(L_p^{(l)}(x)\) are the associated Laguerre polynomials: $$L_p^{(l)}(x) = {1 \over {p!}}\sum\limits_{j = 0}^p {{{p!} \over {j!}}} \left({\begin{array}{*{20}c} {l + p} \\ {p - j} \\ \end{array}} \right){(- x)^j}.$$ All other parameters (w(z), q(z),…) are defined as above for the Hermite-Gauss modes. The dependence of the Laguerre modes on ϕ as given in Equation (146) results in a spiraling phase front, while the intensity pattern will always show unbroken concentric rings; see Figure 45. These modes are also called helical Laguerre-Gauss modes because of their special phase structure. These plots show the amplitude distribution and wave front (phase distribution) of helical Laguerre-Gauss modes upl. All plots refer to a beam with λ = 1 µm, ω = 1 mm and distance to waist z = 1 m. The reader might be more familiar with a slightly different type of Laguerre modes (compare Figure 46 and Figure 47) that features dark radial lines as well as dark concentric rings. Mathematically, these can be described simply by replacing the phase factor exp(i lϕ) in Equation (146) by a sine or cosine function. For example, an alternative set of Laguerre-Gauss modes is given by [55] $$\begin{array}{*{20}c} {u_{p,l}^{{\rm{alt}}}(r, \phi, z) = {2 \over {w(z)}}\sqrt {{{p!} \over {(1 + {\delta _{0l}})\pi (\vert l\vert + p)!}}} \exp ({\rm{i}}(2p + \vert l\vert + 1)\Psi (z))}\qquad\quad\quad\\ {\times {{\left({{{\sqrt 2 r} \over {w(z)}}} \right)}^{\vert l\vert}}\;\;L_p^{(l)}\left({{{2{r^2}} \over {w{{(z)}^2}}}} \right)\exp \left({- {\rm{i}}\;k{{{r^2}} \over {2q(z)}}} \right)\cos (l\phi){.}}\\\end{array}$$ This type of mode has a spherical phase front, just as the Hermite-Gauss modes. We will refer to this set as sinusoidal Laguerre-Gauss modes throughout this document. Intensity profiles for helical Laguerre-Gauss modes upl. The u00 mode is identical to the Hermite-Gauss mode of order 0. Higher-order modes show a widening of the intensity and decreasing peak intensity. The number of concentric dark rings is given by the radial mode index p. Intensity profiles for sinusoidal Laguerre-Gauss modes \(u_{pl}^{{\rm{alt}}}\). The up0 modes are identical to the helical modes. However, for azimuthal mode indices l > 0 the pattern shows l dark radial lines in addition to the p dark concentric rings. For the purposes of simulation it can be sometimes useful to decompose Laguerre-Gauss modes into Hermite-Gauss modes. The mathematical conversion for helical modes is given as [7, 1] $$u_{n,m}^{LG}(x,y,z) = \sum\limits_{k = 0}^N {{{\rm{i}}^k}b(n,m,k)u_{N - k,k}^{HG}(x,y,z),}$$ with real coefficients $$b(n,m,k) = \sqrt {{{(N - k)!k!} \over {{2^N}n!m!}}} {1 \over {k!}}{({\partial _t})^k}{[{(1 - t)^n}{(1 + t)^m}]_{t = 0}},$$ if N = n + m. This relates to the common definition of Laguerre modes as upl as follows: p = min(n, m) and l = n − m. The coefficients b(n, m, k) can be computed numerically by using Jacobi polynomials. 
Jacobi polynomials can be written in various forms: $$P_n^{\alpha, \beta}(x) = {{{{(- 1)}^n}} \over {{2^n}n!}}{(1 - x)^{- \alpha}}{(1 + x)^{- \beta}}{({\partial _x})^n}\left[{{(1 - x)^{\alpha + n}}{(1 + x)^{\beta + n}}}\right],$$ or $$P_n^{\alpha, \beta}(x) = {1 \over {{2^n}}}\sum\limits_{j = 0}^n {\left({\begin{array}{*{20}c} {n + \alpha}\\ {\;\;j}\\ \end{array}} \right)\left({\begin{array}{*{20}c} {n + \beta}\\ {n - j}\\ \end{array}} \right)} {(x - 1)^{n - j}}{(1 + x)^j},$$ which leads to $$b(n,m,k) = \sqrt {{{(N - k)!k!} \over {{2^N}n!m!}}} {(- 2)^k}P_k^{n - k,m - k}(0).$$ 
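The coefficients b(n, m, k) can also be computed directly from the derivative form given above, because the k-th derivative of (1 − t)^n (1 + t)^m at t = 0 is just k! times the coefficient of t^k of that polynomial. The sketch below uses this route (rather than the Jacobi-polynomial form) in plain Python/NumPy; it is an illustration, not code taken from Finesse.

```python
# Coefficients b(n, m, k) for decomposing helical Laguerre-Gauss modes into
# Hermite-Gauss modes, using the derivative form:
#   b = sqrt((N-k)! k! / (2^N n! m!)) * (1/k!) * d^k/dt^k [(1-t)^n (1+t)^m] at t = 0.
from math import factorial, sqrt
import numpy.polynomial.polynomial as P

def b_coeff(n, m, k):
    N = n + m
    # coefficients of (1 - t)^n (1 + t)^m, ascending powers of t
    poly = P.polymul(P.polypow([1.0, -1.0], n), P.polypow([1.0, 1.0], m))
    ck = poly[k] if k < len(poly) else 0.0      # (1/k!) * k-th derivative at t = 0
    return sqrt(factorial(N - k) * factorial(k) / (2**N * factorial(n) * factorial(m))) * ck

# Example: helical LG mode with p = min(n, m) = 1 and l = n - m = 2, i.e. n = 3, m = 1.
n, m = 3, 1
coeffs = [b_coeff(n, m, k) for k in range(n + m + 1)]
print("b(3,1,k)          :", coeffs)
print("sum of |i^k b|^2  :", sum(c**2 for c in coeffs))   # should be 1 (unitary change of basis)
```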
7.11.2 Reflection at a mirror: The matrix for reflection is given by The reflection at the back surface can be described by the same type of matrix by setting C = 2n2/RC. 7.11.3 Transmission through a beam splitter: A beam splitter is understood as a single surface with an arbitrary angle of incidence α1. The matrices for transmission and reflection are different for the sagittal and tangential planes (Ms and Mt): with α2 given by Snell's law: $${n_1}\sin ({\alpha _1}) = {n_2}\sin ({\alpha _2}),$$ and Δn by $$\Delta n = {{{n_2}\cos ({\alpha _2}) - {n_1}\cos ({\alpha _1})} \over {\cos ({\alpha _1})\cos ({\alpha _2})}}.$$ if the direction of propagation is reversed, the matrix for the sagittal plane is identical and the matrix for the tangential plane can be obtained by changing the coefficients A and D as follows: $$\begin{array}{*{20}c} {A \rightarrow 1/A,} \\ {D \rightarrow 1/D.} \\ \end{array}$$ 7.11.4 Reflection at a beam splitter: The reflection at the front surface of a beam splitter is given by: To describe a reflection at the back surface the matrices have to be changed as follows: $$\begin{array}{*{20}c} {{R_C} \rightarrow - {R_C},}\\ {{n_1} \rightarrow {n_2},}\quad\\ {{\alpha _1}\rightarrow - {\alpha _2}.}\\ \end{array}$$ 7.11.5 Transmission through a thin lens: A thin lens transforms the beam parameter as follows: where f is the focal length. The matrix for the opposite direction of propagation is identical. Here it is assumed that the thin lens is surrounded by 'spaces' with index of refraction n = 1. 7.11.6 Transmission through a free space: As mentioned above, the beam in free space can be described by one base parameter q0. In some cases it is convenient to use a matrix similar to that used for the other components to describe the z-dependency of q(z) = q0 + z. On propagation through a free space of the length L and index of refraction n, the beam parameter is transformed as follows. The matrix for the opposite direction of propagation is identical. 8 Interferometer Matrix with Hermite-Gauss Modes In the plane-wave analysis Section 1.4, a laser beam is described in general by the sum of various frequency components of its electric field $$E(t,z) = \sum\limits_j {{a_j}\;\exp \left({{\rm{i}}({\omega _j}t - {k_j}z)} \right).}$$ Here we include the geometric shape of the beam by describing each frequency component as a sum of Hermite-Gauss modes: $$E(t,x,y,z) = \sum\limits_j {\sum\limits_{n,m} {{a_{jnm}}\;{u_{nm}}(x,y)\;\exp ({\rm{i}}({\omega _j}t - {k_j}z))}.}$$ The shape of such a beam does not change along the z-axis (in the paraxial approximation). More precisely, the spot size and the position of the maximum intensity with respect to the z-axis may change, but the relative intensity distribution across the beam does not change its shape. Each part of the sum may be treated as an independent field that can be described using the equation for plane-waves with only two exceptions: the propagation through free space has to include the Gouy phase shift, and upon reflection or transmission at a mirror or beam splitter the different Hermite-Gauss modes may be coupled (see below). The Gouy phase shift can be included in the simulation in several ways. For example, for reasons of flexibility the Gouy phase has been included in Finesse as a phase shift of the component space. 8.1 Coupling of Hermite-Gauss modes Let us consider two different cavities with different sets of eigenmodes. The first set is characterised by the beam parameter q1 and the second by the parameter q2. 
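As a sketch of how these matrices are used in a beam trace, the function below applies the transformation q2/n2 = (A q1/n1 + B)/(C q1/n1 + D) given above to a list of elements. Only free space and a thin lens are included, using their standard ABCD forms ([[1, L/n], [0, 1]] and [[1, 0], [−1/f, 1]]); the 'telescope' layout and all numerical values are made-up examples, not taken from the text.

```python
# Tracing a Gaussian beam parameter q through a sequence of optical elements with the
# ABCD rule  q2/n2 = (A*q1/n1 + B) / (C*q1/n1 + D).
import numpy as np

def space(L, n=1.0):
    """Free space of length L in a medium of index n (standard matrix [[1, L/n], [0, 1]])."""
    return np.array([[1.0, L / n], [0.0, 1.0]]), n

def thin_lens(f):
    """Thin lens of focal length f in vacuum (standard matrix [[1, 0], [-1/f, 1]])."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]]), 1.0

def propagate(q, n, elements):
    """Apply a list of (matrix, n_out) elements to the reduced beam parameter q/n."""
    for M, n_out in elements:
        A, B, C, D = M.ravel()
        q_red = q / n
        q = n_out * (A * q_red + B) / (C * q_red + D)
        n = n_out
    return q, n

lam = 1.064e-6                          # wavelength [m] (assumed)
w0 = 0.5e-3                             # waist radius [m] (assumed)
q = 1j * np.pi * w0**2 / lam            # beam starts at its waist
q, n = propagate(q, 1.0, [space(1.0), thin_lens(0.5), space(0.75)])

w = np.sqrt(lam / np.pi * abs(q)**2 / q.imag)   # beam size from q (cf. Section 7.6)
print("q after the telescope    :", q)
print("beam size w [m]          :", w)
print("distance to new waist [m]:", -q.real)    # negative: the waist lies behind this plane
```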
A beam with all power in the fundamental mode u00(q1) leaves the first cavity and is injected into the second. Here, two 'misconfigurations' are possible: if the optical axes of the beam and the second cavity do not overlap perfectly, the setup is called misaligned, if the beam size or shape at the second cavity does not match the beam shape and size of the (resonant) fundamental eigenmode (q1(zcav) ≠ q2(zcav)), the beam is then not mode-matched to the second cavity, i.e., there is a mode mismatch. The above misconfigurations can be used in the context of simple beam segments. We consider the case in which the beam parameter for the input light is specified. Ideally, the ABCD matrices then allow one to trace a beam through the optical system by computing the proper beam parameter for each beam segment. In this case, the basis system of Hermite-Gauss modes is transformed in the same way as the beam, so that the modes are not coupled. For example, an input beam described by the beam parameter q1 is passed through several optical components, and at each component the beam parameter is transformed according to the respective ABCD matrix. Thus, the electric field in each beam segment is described by Hermite-Gauss modes based on different beam parameters, but the relative power between the Hermite-Gauss modes with different mode numbers remains constant, i.e., a beam in a u00 mode is described as a pure u00 mode throughout the entire system. In practice, it is usually impossible to compute proper beam parameter for each beam segment as suggested above, especially when the beam passes a certain segment more than once. A simple case that illustrates this point is reflection at a spherical mirror. Let the input beam be described by q1. From Figure 49 we know that the proper beam parameter of the reflected beam is $${q_2} = {{{q_1}} \over {- 2{q_1}/{R_C} + 1}},$$ with RC being the radius of curvature of the mirror. In general, we get q1 ≠ q2 and thus two different 'proper' beam parameters for the same beam segment. Only one special radius of curvature would result in matched beam parameters (q1 = q2). 8.2 Coupling coefficients for Hermite-Gauss modes Hermite-Gauss modes are coupled whenever a beam is not matched to a cavity or to a beam segment or if the beam and the segment are misaligned. This coupling is sometimes referred to as 'scattering into higher-order modes' because in most cases the laser beam is a considered as a pure TEM00 mode and any mode coupling would transfer power from the fundamental into higher-order modes. However, in general, every mode with non-zero power will transfer energy into other modes whenever mismatch or misalignment occur, and this effect also includes the transfer from higher orders into a low order. To compute the amount of coupling the beam must be projected into the base system of the cavity or beam segment it is being injected into. This is always possible, provided that the paraxial approximation holds, because each set of Hermite-Gauss modes, defined by the beam parameter at a position z, forms a complete set. Such a change of the basis system results in a different distribution of light power in the new Hermite-Gauss modes and can be expressed by coupling coefficients that yield the change in the light amplitude and phase with respect to mode number. Let us assume that a beam described by the beam parameter q1 is injected into a segment described by the parameter q2. 
Let the optical axis of the beam be misaligned: the coordinate system of the beam is given by (x, y, z) and the beam travels along the z-axis. The beam segment is parallel to the z′-axis and the coordinate system (x′, y′, z′) is given by rotating the (x, y, z) system around the y-axis by the misalignment angle γ. The coupling coefficients are defined as $${u_{nm}}({q_1})\exp \left({{\rm{i}}(\omega t - kz)} \right) = \sum\limits_{n^{\prime},m^{\prime}} {{k_{n,m,n^{\prime},m^{\prime}}}} {u_{n^{\prime}m^{\prime}}}({q_2})\exp \left({{\rm{i}}(\omega t - kz^{\prime})} \right),$$ where unm(q1) are the Hermite-Gauss modes used to describe the injected beam and \({u_{{n^{\prime}}\,{m^{\prime}}}}({q_2})\) are the 'new' modes that are used to describe the light in the beam segment. Note that including the plane wave phase propagation within the definition of coupling coefficients is very important because it results in coupling coefficients that are independent of the position on the optical axis for which the coupling coefficients are computed. Using the fact that the Hermite-Gauss modes unm are orthonormal, we can compute the coupling coefficients by the convolution [6] $${k_{n,m,n^{\prime},m^{\prime}}} = \exp \left({{\rm{i}}2kz^{\prime}\sin^{2} \left({{\gamma \over 2}} \right)} \right)\int {\int {dx^{\prime}} dy^{\prime}} \;{u_{n^{\prime}m^{\prime}}}\exp ({\rm{i}}\;kx^{\prime}\sin \gamma)u_{nm}^{\ast}.$$ Since the Hermite-Gauss modes can be separated with respect to x and y, the coupling coefficients can also be split into \({k_{nm{n^{\prime}}{m^{\prime}}}} = {k_{n{n^{\prime}}}}{k_{m{m^{\prime}}}}\). These equations are very useful in the paraxial approximation as the coupling coefficients decrease with large mode numbers. In order to be described as paraxial, the angle γ must not be larger than the diffraction angle. In addition, to obtain correct results with a finite number of modes the beam parameters q1 and q2 must not differ too much. The convolution given in Equation (164) can be computed directly using numerical integration. However, this is computationally very expensive. The following is based on the work of Bayer-Helms [6]. Another very good description of coupling coefficients and their derivation can be found in the work of Vinet [55]. In [6] the above projection integral is partly solved and the coupling coefficients are given by simple sums as functions of γ and the mode mismatch parameter K, which are defined by $$K = {1 \over 2}({K_0} + {\rm{i}}\;{K_2}),$$ where K0 = (zR − z′R)/z′R and K2 = ((z − z0) − (z′ − z′0))/z′R. 
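For low mode orders, the projection integral of Equation (164) can also be evaluated by brute-force numerical integration, which is a useful cross-check of the Bayer-Helms expressions given below. The sketch computes the one-dimensional fundamental-mode coefficient k00 for a mode-mismatched and slightly tilted beam; the mode normalisation follows the q-based form of Section 7.6, the overall plane-wave phase factors are dropped, and all parameter values are assumptions, so only the magnitude |k00| should be read off.

```python
# Numerical estimate of the fundamental-mode coupling coefficient |k_00| between
# two beam bases q1 and q2 with a small misalignment angle gamma (1D overlap integral).
import numpy as np

lam = 1.064e-6            # wavelength [m] (assumed)
k = 2 * np.pi / lam

def u0(x, z, w0, z0):
    """Fundamental (n = 0) Hermite-Gauss mode in the q-parameter form, normalised to 1."""
    zR = np.pi * w0**2 / lam
    q = 1j * zR + (z - z0)
    return (2 / np.pi)**0.25 * w0**-0.5 * np.sqrt(1j * zR / q) * np.exp(-1j * k * x**2 / (2 * q))

# Beam 1 (incoming) and beam 2 (basis of the segment it is injected into): assumed values.
w0_1, z0_1 = 1.00e-3, 0.0
w0_2, z0_2 = 1.15e-3, 0.2        # slightly mismatched waist size and position
gamma = 20e-6                     # misalignment angle [rad], well below the diffraction angle
z = 0.0                           # plane in which the overlap is evaluated

x = np.linspace(-8e-3, 8e-3, 20001)
print("norm of u0 (should be 1):", np.trapz(np.abs(u0(x, z, w0_1, z0_1))**2, x))

integrand = u0(x, z, w0_2, z0_2) * np.exp(1j * k * x * np.sin(gamma)) * np.conj(u0(x, z, w0_1, z0_1))
k00 = np.trapz(integrand, x)
print("|k_00| =", abs(k00))       # < 1; the 'missing' power is scattered into other modes
```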
This can also be written using q = i zR + z − z0, as $$K = {{{\rm{i}}(q - q^{\prime})^\ast} \over {2\Im \{q^{\prime}\}}}.$$ The coupling coefficients for misalignment and mismatch (but no lateral displacement) can then be written as $${k_{nn^{\prime}}} = {(- 1)^{n^{\prime}}}{E^{(x)}}{(n!n^{\prime}!)^{1/2}}{(1 + {K_0})^{n/2 + 1/4}}{(1 + K^\ast)^{- (n + n^{\prime} + 1)/2}}\{{S_g} - {S_u}\},$$ $$\begin{array}{*{20}c} {{S_g} = \sum\limits_{\mu = 0}^{[n/2]} {\sum\limits_{\mu^{\prime} = 0}^{[n^{\prime}/2]} {{{{{(- 1)}^\mu}{{\bar X}^{n - 2\mu}}{X^{n^{\prime}- 2\mu^{\prime}}}} \over {(n - 2\mu)!(n^{\prime} - 2\mu^{\prime})!}}\sum\limits_{\sigma = 0}^{\min (\mu, \mu^{\prime})} {{{{{(- 1)}^\sigma}{{\bar F}^{\mu - \sigma}}{F^{\mu^{\prime} - \sigma}}} \over {(2\sigma)!(\mu - \sigma)!(\mu^{\prime} - \sigma)!}},}}}}\\ {{S_u} = \sum\limits_{\mu = 0}^{[(n - 1)/2]} {\sum\limits_{\mu^{\prime} = 0}^{[(n^{\prime} - 1)/2]} {{{{{(- 1)}^\mu}{{\bar X}^{n - 2\mu - 1}}{X^{n^{\prime} - 2\mu^{\prime} - 1}}} \over {(n - 2\mu - 1)!(n^{\prime} - 2\mu^{\prime} - 1)!}}\sum\limits_{\sigma = 0}^{\min (\mu, \mu^{\prime})} {{{{{(- 1)}^\sigma}{{\bar F}^{\mu - \sigma}}{F^{\mu^{\prime} - \sigma}}} \over {(2\sigma + 1)!(\mu - \sigma)!(\mu^{\prime} - \sigma)!}}.}}}}\\ \end{array}$$ The corresponding formula for \({k_{m{m^{\prime}}}}\) can be obtained by replacing the following parameters: n → m, n′ → m′, X, \(\bar X \rightarrow 0\) and E(x) → 1 (see below). The notation [n/2] means $$\left[ {{m \over 2}} \right] = \left\{{\begin{array}{*{20}c} {m/2}\quad & {{\rm{if}}\;m\;{\rm{is}}\;{\rm{even}},} \\ {(m - 1)/2} & {{\rm{if}}\;m\;{\rm{is}}\;{\rm{odd}}.} \\ \end{array}} \right.$$ The other abbreviations used in the above definition are $$\begin{array}{*{20}c} {\bar X = ({\rm{i}}z_{\rm{R}}^\prime - {z^\prime})\sin (\gamma)/(\sqrt {1 + K^\ast} {w_0}),}\\ {X = ({\rm{i}}{z_{\rm{R}}} - {z^\prime})\sin (\gamma)/(\sqrt {1 + K^\ast} {w_0}),}\\ {F = K/(2(1 + {K_0}))},\\ {\bar F = K^\ast/2,}\\ {{E^{(x)}} = \exp \left({- {{X\bar X} \over 2}} \right).}\\ \end{array}$$ In general, the Gaussian beam parameter might be different for the sagittal and tangential planes and a misalignment can be given for both possible axes (around the y-axis and around the x-axis), in this case the coupling coefficients are given by $${k_{nm{n^{\prime}}{m^{\prime}}}} = {k_{nn^{\prime}}}{k_{mm^{\prime}}},$$ where \({k_{n{n^{\prime}}}}\) is given above with $$\begin{array}{*{20}c} {q \rightarrow {q_t}\quad \quad \quad}\\ {{\rm{and}}\quad \quad \quad \quad}\\ {{w_0} \rightarrow {w_{t,0}},{\rm{etc}}.}\\ \end{array}$$ and γ → γy is a rotation about the y-axis. The \({k_{m{m^{\prime}}}}\) can be obtained with the same formula, with the following substitutions: $$\begin{array}{*{20}c} {n \rightarrow m,\quad \quad \quad}\\ {{n^\prime} \rightarrow {m^\prime},\quad \quad \quad}\\ {q \rightarrow {q_s},\quad \quad \quad}\\ {{\rm{thus}}\quad \quad \quad \quad \;}\\ {{w_0} \rightarrow {w_{s,0}},{\rm{etc}}.}\\\end{array}$$ and γ → γx is a rotation about the x-axis. At each component a matrix of coupling coefficients has to be computed for transmission and reflection; see Figure 54. Coupling coefficients for Hermite-Gauss modes: for each optical element and each direction of propagation complex coefficients k for transmission and reflection have to be computed. In this figure k1, k2, k3, k4 each represent a matrix of coefficients \({k_{nm{n^{\prime}}{m^{\prime}}}}\) describing the coupling of un,m into \({u_{{n^{\prime}},{m^{\prime}}}}\). 
8.3.1 Beam parameter This example illustrates a possible use of the beam parameter detector 'bp': the beam radius of the laser beam is plotted as a function of distance to the laser. For this simulation, the interferometer matrix does not need to be solved. 'bp' merely returns the results from the beam tracing algorithm of Finesse. Finesse example: Beam parameter Finesse input file for 'Beam parameter' 8.3.2 Mode cleaner This example uses the 'tem' command to create a laser beam which is a sum of equal parts in u00 and u10 modes. This beam is passed through a triangular cavity, which acts as a mode cleaner. Being resonant for the u00, the cavity transmits this mode and reflects the u10 mode as can be seen in the resulting plots. Finesse input file for 'Mode cleaner' Finesse example: Mode cleaner 8.3.3 LG33 mode Finesse uses the Hermite-Gauss modes as a base system for describing the spatial properties of laser beams. However, Laguerre-Gauss modes can be created using the coefficients given in Equation (149). This example demonstrates this and the use of a 'beam' detector to plot amplitude and phase of a beam cross section. Finesse example: LG33 mode. The ring structure in the phase plot is due to phase jumps, which could be removed by applying a phase 'unwrap'. Finesse input file for 'LG33 mode' 1 In many implementations of numerical matrix solvers the input vector is also called the right-hand side vector. 2 Note that in other publications the tuning or equivalent microscopic displacements are sometimes defined via an optical path-length difference. In that case, a tuning of 2π is used to refer to the change of the optical path length of one wavelength, which, for example, if the reflection at a mirror is described, corresponds to a change of the mirror's position of λ0/2. 3 The signal sidebands are sometimes also called audio sidebands because of their frequency range. 4 The term effective refers to that amount of incident light, which is converted into photo-electrons that are then usefully extracted from the junction (i.e., do not recombine within the device). This fraction is usually referred to as quantum efficiency η of the photodiode. 5 Please note that in the presence of losses the coupling is defined with respect to the transmission and losses. In particular, the impedance-matched case is defined as T1 = T2 · Loss, so that the input power transmission exactly matches the light power lost in one round-trip. 6 Also known as the far-field angle or the divergence of the beam. 7 Please note that this formula from [50] is very compact. Since the parameter q is a complex number, the expression contains at least two complex square roots. The complex square root requires a different algebra than the standard square root for real numbers. Especially the third and fourth factors can not be simplified in any obvious way: \({\left({{{{q_0}} \over {q(z)}}} \right)^{1/2}}{\left({{{{q_0}{q^{\ast}}(z)} \over {q_0^{\ast}q(z)}}} \right)^{n/2}} \neq {\left({{{q_0^{n + 1}{q^{\ast n}}(z)} \over {{q^{n + 1}}(z)q_0^{\ast n}}}} \right)^{1/2}}\)! 8 [50] states that the indices must obey the following relations: 0 ≤ |l| ≤ p. However, that is not the case. We would like to thank our colleagues in the GEO 600 project for many useful discussions over the years. AF acknowledges support from the University of Birmingham. KS acknowledges support from the University of Glasgow and the Albert Einstein Institute, Hannover. 
Some of the illustrations have been prepared using the component library by Alexander Franzen. The Interferometer Simulation Finesse Throughout this document we have provided a number of text files that can be used as input files for the interferometer simulation Finesse [19, 22]. Finesse is a numerical simulation written in the C language; it is available free of charge for Linux, Windows and Macintosh computers and can be obtained online: http://www.gwoptics.org/finesse/. Finesse provides a fast and versatile tool that has proven to be very useful during the design and commissioning of interferometric gravitational-wave detectors. However, the program has been designed to allow the analysis of arbitrary, user-defined optical setups. In addition, it is easy to install and use. Therefore Finesse is well suited to study basic optical properties, such as, the power enhancement in a resonating cavity and modulation-demodulation methods. We encourage the reader to obtain Finesse and to learn its basic usage by running the included example files (and by making use of its extensive manual). The Finesse input files provided in this article are in most cases very simple and illustrate single concepts in interferometry. We believe that even a Finesse novice should be able to use them as starting points to play and explore freely, for example by changing parameters, or by adding further optical components. This type of 'numerical experimentation' can provide insights similar to real experiments, supplementing the understanding through a mathematical analysis with experience and intuitions. Abramochkin, E., and Volostnikov, V., "Beam transformations and nontransformed beams", Opt. Commun., 83, 123–135, (1991). [ADS]. (Cited on page 64.)ADSCrossRefGoogle Scholar Abramowitz, M., and Stegun, I.A., eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, (Dover, New York, 1965), corr. edition. [Google Books]. (Cited on page 23.)zbMATHGoogle Scholar Acernese, F. (Virgo Collaboration), "The Virgo automatic alignment system", Class. Quantum Grav., 23(8), S91–S101, (2006). [DOI]. (Cited on page 6.)CrossRefGoogle Scholar Acernese, F. (Virgo Collaboration), Advanced Virgo Baseline Design, VIR-027A-09, (Virgo, Cascina, 2009). Related online version (cited on 22 September 2009): http://www.virgo.infn.it/advirgo/docs.html. (Cited on page 5.)Google Scholar Advanced LIGO Reference Design, LIGO M060056-08-M, (LIGO, Pasadena, CA, 2007). Related online version (cited on 20 September 2009): http://www.ligo.caltech.edu/docs/M/M060056-08/M060056-08.pdf. (Cited on page 5.) Bayer-Helms, F., "Coupling coefficients of an incident wave and the modes of spherical optical resonator in the case of mismatching and misalignment", Appl. Optics, 23, 1369–1380, (1984). [DOI]. (Cited on pages 70 and 71.)ADSCrossRefGoogle Scholar Beijersbergen, M.W., Allen, L., van der Veen, H.E.L.O., and Woerdman, J.P., "Astigmatic laser mode converters and transfer of orbital angular momentum", Opt. Commun., 96, 123–132, (1993). [DOI]. (Cited on page 64.)ADSCrossRefGoogle Scholar Boyd, R.W., "Intuitive explanation of the phase anomaly of focused light beams", J. Opt. Soc. Am., 70(7), 877–880, (1980). (Cited on page 62.)ADSCrossRefGoogle Scholar Braginsky, V.B., Gorodetsky, M.L., Khalili, F.Y., and Thorne, K.S., "Energetic quantum limit in large-scale interferometers", in Meshkov, S., ed., Gravitational Waves: Third Edoardo Amaldi Conference, Pasadena, California, 12–16 July, 1999, AIP Conference Proceedings, vol. 
523, pp. 180–190, (American Institute of Physics, Melville, NY, 2000). [DOI], [ADS], [arXiv:gr-qc/9907057]. (Cited on page 7.)Google Scholar Caves, C.M., "Quantum-mechanical noise in an interferometer", Phys. Rev. D, 23, 1693–1708, (1981). [DOI]. (Cited on page 7.)ADSCrossRefGoogle Scholar Chelkowski, S., Squeezed Light and Laser Interferometric Gravitational Wave Detectors, Ph.D. Thesis, (Universitat Hannover, Hannover, 2007). Related online version (cited on 28 September 2009): http://edok01.tib.uni-hannover.de/edoks/e01dh07/537859527.pdf. (Cited on page 27.)Google Scholar Chelkowski, S., Hild, S., and Freise, A., "Prospects of higher-order Laguerre-Gauss modes in future gravitational wave detectors", Phys. Rev. D, 79, 122002, 1–11, (2009). [DOI], [arXiv:0901.4931 [gr-qc]]. (Cited on page 62.)Google Scholar Davis, T.A., Direct Methods for Sparse Linear Systems, Fundamentals of Algorithms, vol. 2, (SIAM, Philadelphia, 2006). Related online version (cited on 28 September 2009): http://www.cise.ufl.edu/research/sparse/CSparse/. (Cited on page 12.)CrossRefGoogle Scholar Drever, R.W.P., Hall, J.L., Kowalski, F.V., Hough, J., Ford, G.M., Munley, A.J., and Ward, H., "Laser phase and frequency stabilization using an optical resonator", Appl. Phys. B, 31, 97–105, (1983). [DOI]. (Cited on page 47.)ADSCrossRefGoogle Scholar Drever, R.W.P., Hough, J., Munley, A.J., Lee, S.-A., Spero, R.E., Whitcomb, S.E., Ward, H., Ford, G.M., Hereld, M., Robertson, N.A., Kerr, I., Pugh, J.R., Newton, G.P., Meers, B.J., Brooks III, E.D., and Gürsel, Y., "Gravitational wave detectors using laser interferometers and optical cavities: Ideas, principles and prospects", in Meystre, P., and Scully, M.O., eds., Quantum Optics, Experimental Gravity, and Measurement Theory, Proceedings of the NATO Advanced Study Institute, held August 16–29, 1981 in Bad Windsheim, Germany, NATO ASI Series B, vol. 94, pp. 503–514, (Plenum Press, New York, 1983). (Cited on page 6.)CrossRefGoogle Scholar Fabry, C., and Pérot, A., "Theorie et applications d'une nouvelle methode de spectroscopie interferentielle", Ann. Chim. Phys., 16, 115–144, (1899). (Cited on pages 6 and 36.)zbMATHGoogle Scholar Fattaccioli, D., Boulharts, A., Brillet, A., and Man, C.N., "Sensitivity of multipass and Fabry-Pérot delay lines to small misalignments", J. Optics (Paris), 17(3), 115–127, (1986). [DOI]. (Cited on page 6.)ADSCrossRefGoogle Scholar Forward, R.L., "Wideband laser-interferometer gravitational-radiation experiment", Phys. Rev. D, 17, 379–390, (1978). [DOI]. (Cited on pages 6 and 36.)ADSCrossRefGoogle Scholar Freise, A., "FINESSE: An Interferometer Simulation", personal homepage, Andreas Freise. URL (cited on 16 January 2010): http://www.gwoptics.org/finesse. (Cited on pages 5 and 76.) Freise, A., The Next Generation of Interferometry: Multi-Frequency Optical Modelling, Control Concepts and Implementation, Ph.D. Thesis, (Universität Hannover, Hannover, 2003). Related online version (cited on 28 September 2009): http://edok01.tib.uni-hannover.de/edoks/e01dh03/361006918.pdf. (Cited on page 33.)Google Scholar Freise, A., Bunkowski, A., and Schnabel, R., "Phase and alignment noise in grating interferometers", New J. Phys., 9, 433, (2007). [DOI], [arXiv:0711.0291]. URL (cited on 17 January 2010): http://stacks.iop.org/1367-2630/9/433. (Cited on page 6.)ADSCrossRefGoogle Scholar Freise, A., Heinzel, G., Lück, H., Schilling, R., Willke, B., and Danzmann, K., "Frequency-domain interferometer simulation with higher-order spatial modes", Class. 
Variational autoencoders.
Jeremy Jordan · 19 Mar 2018 · 8 min read

In my introductory post on autoencoders, I discussed various models (undercomplete, sparse, denoising, contractive) which take data as input and discover some latent state representation of that data. More specifically, our input data is converted into an encoding vector where each dimension represents some learned attribute about the data. The most important detail to grasp here is that our encoder network is outputting a single value for each encoding dimension. The decoder network then subsequently takes these values and attempts to recreate the original input.

A variational autoencoder (VAE) provides a probabilistic manner for describing an observation in latent space. Thus, rather than building an encoder which outputs a single value to describe each latent state attribute, we'll formulate our encoder to describe a probability distribution for each latent attribute.

To provide an example, let's suppose we've trained an autoencoder model on a large dataset of faces with an encoding dimension of 6. An ideal autoencoder will learn descriptive attributes of faces such as skin color, whether or not the person is wearing glasses, etc. in an attempt to describe an observation in some compressed representation.

In the example above, we've described the input image in terms of its latent attributes using a single value to describe each attribute. However, we may prefer to represent each latent attribute as a range of possible values. For instance, what single value would you assign for the smile attribute if you feed in a photo of the Mona Lisa? Using a variational autoencoder, we can describe latent attributes in probabilistic terms.

With this approach, we'll now represent each latent attribute for a given input as a probability distribution. When decoding from the latent state, we'll randomly sample from each latent state distribution to generate a vector as input for our decoder model.

Note: For variational autoencoders, the encoder model is sometimes referred to as the recognition model whereas the decoder model is sometimes referred to as the generative model.

By constructing our encoder model to output a range of possible values (a statistical distribution) from which we'll randomly sample to feed into our decoder model, we're essentially enforcing a continuous, smooth latent space representation. For any sampling of the latent distributions, we're expecting our decoder model to be able to accurately reconstruct the input. Thus, values which are nearby to one another in latent space should correspond with very similar reconstructions.

Statistical motivation

Suppose that there exists some hidden variable $z$ which generates an observation $x$. We can only see $x$, but we would like to infer the characteristics of $z$. In other words, we'd like to compute $p\left( {z|x} \right)$.

$$ p\left( {z|x} \right) = \frac{{p\left( {x|z} \right)p\left( z \right)}}{{p\left( x \right)}} $$

Unfortunately, computing $p\left( x \right)$ is quite difficult.

$$ p\left( x \right) = \int {p\left( {x|z} \right)p\left( z \right)dz} $$

This usually turns out to be an intractable distribution. However, we can apply variational inference to estimate this value. Let's approximate $p\left( {z|x} \right)$ by another distribution $q\left( {z|x} \right)$ which we'll define such that it has a tractable distribution.
If we can define the parameters of $q\left( {z|x} \right)$ such that it is very similar to $p\left( {z|x} \right)$, we can use it to perform approximate inference of the intractable distribution.

Recall that the KL divergence is a measure of difference between two probability distributions. Thus, if we wanted to ensure that $q\left( {z|x} \right)$ was similar to $p\left( {z|x} \right)$, we could minimize the KL divergence between the two distributions.

$$ \min KL\left( {q\left( {z|x} \right)||p\left( {z|x} \right)} \right) $$

Dr. Ali Ghodsi goes through a full derivation here, but the result gives us that we can minimize the above expression by maximizing the following:

$$ {E_{q\left( {z|x} \right)}}\log p\left( {x|z} \right) - KL\left( {q\left( {z|x} \right)||p\left( z \right)} \right) $$

The first term represents the reconstruction likelihood and the second term ensures that our learned distribution $q$ is similar to the true prior distribution $p$.

To revisit our graphical model, we can use $q$ to infer the possible hidden variables (i.e. latent state) which were used to generate an observation. We can further construct this model into a neural network architecture where the encoder model learns a mapping from $x$ to $z$ and the decoder model learns a mapping from $z$ back to $x$.

Our loss function for this network will consist of two terms, one which penalizes reconstruction error (which can be thought of as maximizing the reconstruction likelihood as discussed earlier) and a second term which encourages our learned distribution ${q\left( {z|x} \right)}$ to be similar to the true prior distribution ${p\left( z \right)}$, which we'll assume follows a unit Gaussian distribution, for each dimension $j$ of the latent space.

$$ {\cal L}\left( {x,\hat x} \right) + \sum\limits_j {KL\left( {{q_j}\left( {z|x} \right)||p\left( z \right)} \right)} $$

In the previous section, I established the statistical motivation for a variational autoencoder structure. In this section, I'll provide the practical implementation details for building such a model yourself.

Rather than directly outputting values for the latent state as we would in a standard autoencoder, the encoder model of a VAE will output parameters describing a distribution for each dimension in the latent space. Since we're assuming that our prior follows a normal distribution, we'll output two vectors describing the mean and variance of the latent state distributions. If we were to build a true multivariate Gaussian model, we'd need to define a covariance matrix describing how each of the dimensions are correlated. However, we'll make a simplifying assumption that our covariance matrix only has nonzero values on the diagonal, allowing us to describe this information in a simple vector.

Our decoder model will then generate a latent vector by sampling from these defined distributions and proceed to develop a reconstruction of the original input. However, this sampling process requires some extra attention. When training the model, we need to be able to calculate the relationship of each parameter in the network with respect to the final output loss using a technique known as backpropagation. However, we simply cannot do this for a random sampling process.
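To make the two-term objective above concrete, here is a minimal sketch in Python. It is purely illustrative, not code from this post: PyTorch is assumed to be available (the post's further reading uses TensorFlow/Keras instead), the mean-squared-error reconstruction term is an arbitrary choice, and the tensor names are hypothetical. Note the comment at the bottom: the latent vector `z` is obtained by a random draw, which is exactly the step that gives backpropagation trouble.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, log_var):
    # Reconstruction term: how well the decoder output x_hat matches the input x.
    reconstruction_loss = F.mse_loss(x_hat, x, reduction='sum')
    # KL term: closed form for KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dims.
    kl_loss = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return reconstruction_loss + kl_loss

# Conceptually, the decoder input is z ~ N(mu, sigma^2).  A plain random draw of z
# gives no gradient path back to mu and log_var, which is the problem addressed next.
```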
Fortunately, we can leverage a clever idea known as the "reparameterization trick" which suggests that we randomly sample $\varepsilon$ from a unit Gaussian, and then shift the randomly sampled $\varepsilon$ by the latent distribution's mean $\mu$ and scale it by the latent distribution's variance $\sigma$. With this reparameterization, we can now optimize the parameters of the distribution while still maintaining the ability to randomly sample from that distribution.

Note: In order to deal with the fact that the network may learn negative values for $\sigma$, we'll typically have the network learn $\log \sigma$ and exponentiate this value to get the latent distribution's variance.

Visualization of latent space

To understand the implications of a variational autoencoder model and how it differs from standard autoencoder architectures, it's useful to examine the latent space. This blog post introduces a great discussion on the topic, which I'll summarize in this section.

The main benefit of a variational autoencoder is that we're capable of learning smooth latent state representations of the input data. For standard autoencoders, we simply need to learn an encoding which allows us to reproduce the input. As you can see in the left-most figure, focusing only on reconstruction loss does allow us to separate out the classes (in this case, MNIST digits) which should allow our decoder model the ability to reproduce the original handwritten digit, but there's an uneven distribution of data within the latent space. In other words, there are areas in latent space which don't represent any of our observed data. [Figure: latent space visualizations; image credit (modified).]

On the flip side, if we focus only on ensuring that the latent distribution is similar to the prior distribution (through our KL divergence loss term), we end up describing every observation using the same unit Gaussian, which we subsequently sample from to describe the latent dimensions visualized. This effectively treats every observation as having the same characteristics; in other words, we've failed to describe the original data.

However, when the two terms are optimized simultaneously, we're encouraged to describe the latent state for an observation with distributions close to the prior but deviating when necessary to describe salient features of the input.

When I'm constructing a variational autoencoder, I like to inspect the latent dimensions for a few samples from the data to see the characteristics of the distribution. I encourage you to do the same. If we observe that the latent distributions appear to be very tight, we may decide to give higher weight to the KL divergence term with a parameter $\beta>1$, encouraging the network to learn broader distributions. This simple insight has led to the growth of a new class of models - disentangled variational autoencoders. As it turns out, by placing a larger emphasis on the KL divergence term we're also implicitly enforcing that the learned latent dimensions are uncorrelated (through our simplifying assumption of a diagonal covariance matrix).

$$ {\cal L}\left( {x,\hat x} \right) + \beta \sum\limits_j {KL\left( {{q_j}\left( {z|x} \right)||N\left( {0,1} \right)} \right)} $$
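Before moving on to generation, the reparameterization step described above can be sketched in a few lines. Again this is only an illustration under the same assumptions as the earlier snippet (PyTorch available, hypothetical names); the log-variance parameterization matches the note above, and the $\beta$ weight would simply multiply the KL term in the earlier loss sketch.

```python
import torch

def reparameterize(mu, log_var):
    # The network learns log(variance) so it can take any sign; recover the std dev.
    std = torch.exp(0.5 * log_var)
    # Draw epsilon from a unit Gaussian, then shift by mu and scale by std.
    eps = torch.randn_like(std)
    return mu + eps * std   # differentiable w.r.t. mu and log_var
```

Because the randomness now enters only through `eps`, the sampled latent vector is a deterministic, differentiable function of the encoder outputs, so gradients can flow back into them during training.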
Variational autoencoders as a generative model

By sampling from the latent space, we can use the decoder network to form a generative model capable of creating new data similar to what was observed during training. Specifically, we'll sample from the prior distribution ${p\left( z \right)}$ which we assumed follows a unit Gaussian distribution.

The figure below visualizes the data generated by the decoder network of a variational autoencoder trained on the MNIST handwritten digits dataset. Here, we've sampled a grid of values from a two-dimensional Gaussian and displayed the output of our decoder network. As you can see, the distinct digits each exist in different regions of the latent space and smoothly transform from one digit to another. This smooth transformation can be quite useful when you'd like to interpolate between two observations, such as this recent example where Google built a model for interpolating between two music samples.

Lectures:
- Ali Ghodsi: Deep Learning, Variational Autoencoder (Oct 12 2017)
- UC Berkeley Deep Learning DeCal Fall 2017 Day 6: Autoencoders and Representation Learning
- Stanford CS231n: Lecture on Variational Autoencoders

Blogs/videos:
- Building Variational Auto-Encoders in TensorFlow (with great code examples)
- Building Autoencoders in Keras
- Variational Autoencoders - Arxiv Insights
- Intuitively Understanding Variational Autoencoders
- Density Estimation: A Neurotically In-Depth Look At Variational Autoencoders
- Pyro: Variational Autoencoders
- Under the Hood of the Variational Autoencoder
- With Great Power Comes Poor Latent Codes: Representation Learning in VAEs
- Kullback-Leibler Divergence Explained
- Neural Discrete Representation Learning

Papers/books:
- Deep learning book (Chapter 20.10.3): Variational Autoencoders
- Variational Inference: A Review for Statisticians
- A tutorial on variational Bayesian inference
- Auto-Encoding Variational Bayes
- Tutorial on Variational Autoencoders

Papers on my reading list:
- Early Visual Concept Learning with Unsupervised Deep Learning
- Multimodal Unsupervised Image-to-Image Translation (video from paper)
Local privacy protection classification based on human-centric computing

Chunyong Yin (ORCID: 0000-0001-5764-2432)1, Biao Zhou1, Zhichao Yin2 & Jin Wang3

Human-centric computing is becoming an important part of data-driven artificial intelligence (AI), and the importance of data mining under human-centric computing is receiving more and more attention. The rapid development of machine learning has gradually increased its ability to mine data. In this paper, privacy protection is combined with machine learning: logistic regression is adopted under local differential privacy protection to achieve a classification task using noise addition and feature selection. The design idea is mainly divided into three parts: noise addition, feature selection and logistic regression. In the noise addition part, Laplace noise is added to the original data to disturb it. The feature selection part highlights the impact of the noised data on the classifier. The logistic regression part uses logistic regression to implement the classification task. The experimental results show that an accuracy of 85.7% can be achieved for the privacy data by choosing appropriate regularization coefficients.

Human-centric computing is the wave of the future; it not only brings us convenience but also presents new challenges in processing and analyzing massive data. With the rapid development of the Internet and the arrival of the era of big data, our privacy data is also unconsciously leaked [1,2,3,4,5]. From the existing research, the main approaches to privacy protection include data distortion, encryption and blockchain [6]. The classical k-anonymity method was proposed by Sweeney and Samarati [7]. Since this method does not include randomized processing, attackers can still infer privacy information related to individuals from data sets satisfying the anonymity property of this method [8]. At the same time, the k-anonymity method cannot deal with consistency attacks and background knowledge attacks. In view of this, Machanavajjhala et al. proposed an improved algorithm [9]. Privacy protection schemes of this kind do not make reasonable assumptions about the attack model. In 2006, Dwork et al. proposed the differential privacy model to solve this problem [10]. Dwork analyzed in detail the sensitivity calculation method of each query function in the differentially private k-means algorithm, proposed different allocation methods of the privacy budget in two cases, and gave the total sensitivity of the whole query sequence. In [10], differential privacy was proposed: a model under which an attacker cannot distinguish between outputs computed on two data sets that differ in a single record. Because the model does not rely on the attacker's background knowledge and provides a higher level of semantic security for privacy information, it is widely used at present. Human-centered data sources are becoming more and more widespread, and society is at the forefront of the development of artificial intelligence. It can be predicted that future computing will be human-centered computing. Data collected from different electronic devices can be processed and analyzed to better design information systems, improve transportation systems, promote social and economic development, and also provide better personalized services [11, 12].
What needs to be emphasized is that the tremendous progress of data analysis and data mining technology in recent years has enabled attackers to mine a lot of privacy-related information from massive data, so the protection of privacy data is an important and arduous task [13]. With the emergence of big data and the gradual maturity of deep learning technologies [14,15,16,17,18,19], AI is gradually becoming a popular solution for data analysis and prediction [20,21,22,23], cutting-edge technologies and human-centric computing [24]. In [24], Rabaey argues that the miniaturization of sensing, computing and actuation devices, together with the appearance of interfaces that fit the human body, provides the basis for the symbiosis between biological functions and electronic devices. Hence, data-driven artificial intelligence is required to perform data processing and analysis in human-centric computing [25]. In the final analysis, the secure development of AI is the secure development of data. Therefore, in the process of data processing and analysis in AI, the security and privacy issues involved in computation cannot be ignored. Faced with huge amounts of data, especially in the area of health care, much personal privacy data is involved, such as medical history [26, 27], cases, education, family income and so on [28], and protecting such privacy data is all the more important. Ideally, a method not only protects the privacy data but also realizes a certain degree of data mining without exposing it [29]. At present, how to find a balance between protecting privacy data and making the protected data available is a hot research direction. Therefore, privacy protection is a cross-cutting technology that combines multiple disciplines [30]. Mining private data through machine learning technology is a natural combination. In this scheme, a relative balance between data protection and data mining is achieved. In short, the privacy data generated by adding noise is still usable and can be classified by machine learning.

The basic design idea is as follows. Firstly, the original data is preprocessed to eliminate useless information, then local differential privacy is realized by the Laplace mechanism, and then a normalization operation is carried out. Next, in order to highlight the impact of the noised data and improve the classification effect of the classifier, the necessary feature selection is carried out. Finally, classical logistic regression from machine learning is used to realize the classification task. There are three key points: the first is the mechanism of adding noise, the second is the method of feature selection, and the third is the realization of logistic regression. Noise is added to protect data. The main purpose of feature selection is to highlight the impact of the noised data on the subsequent classifier. Logistic regression is the way the classification is achieved.

The rest of the paper is organized as follows: relevant work is presented in "Relevant work and theory". After that, in "Proposed method" the proposed method is introduced in detail. The experiment and result analysis are given in "Experiments". Finally, in "Conclusion", we summarize the research content and look forward to future research directions.

The main contributions are as follows: Compared with the existing research on privacy protection, we do not use the mainstream clustering methods but instead use classification methods.
The experimental results show that choosing a reasonable model can not only protect data but also achieve a certain degree of data mining. As far as the degree of privacy protection is concerned, considering the actual situation, some data need not be protected. At the same time, in order not to destroy the distribution of the real data set and to ensure the authenticity of the classification results, we have not added noise to all of the data. In reality, the data that needs to be protected should often be specified by the user, so we only add noise to the local sensitive data to achieve the purpose of protection. In order to highlight the impact of the local privacy data on the classification results, we perform the necessary feature selection while ensuring that the noised feature is not eliminated. The experimental results show that although this method can reduce the accuracy of classification, the data is still usable in general.

Relevant work and theory

This section introduces the related work and the related concepts of privacy protection. In the related work part, the application of machine learning methods in privacy protection is emphasized. In the related theory part, the key concepts of privacy protection are the focus of the introduction.

Relevant work

In the business and medical fields, there are often large amounts of private data, and data mining and machine learning have to deal with such large-scale data. How to mine and analyze useful information while protecting privacy information is a research topic of privacy protection, so the application of data mining and machine learning technology under differential privacy is an important research direction. Vaidya et al. proposed a naive Bayesian model based on differential privacy [31]. The model calculated the sensitivity of discrete classification data and continuous classification attributes, and realized differential privacy protection by adding noise to the parameters of the naive Bayesian classifier. Another algorithm for differential privacy protection is a non-interactive anonymous data mining algorithm based on generalization technology proposed by Mohammed et al. [32]; its main idea is to add exponential noise to achieve differential privacy protection. To increase the similarity within the same class and the discrimination between different classes, [33] uses the original training samples and their corresponding random-filtering virtual samples, obtained by adding random noise, to construct a new training set, and then exploits the new training set to perform collaborative representation classification. In the field of privacy protection, machine learning mainly focuses on the classification of supervised learning and the clustering of unsupervised learning. Supervised learning methods in machine learning, such as support vector machines and logistic regression, are used to achieve the task of classifying private data [34, 35]. In [36], Jia proposes an approach to preserve the model privacy of data classification and similarity evaluation for distributed systems. The clustering methods of unsupervised learning are more widely used in privacy protection.
Work in this area mainly focuses on wireless sensor networks [37,38,39,40], wireless multi-hop networks [41, 42] and the smart grid [43], where AI technologies are widely used to solve routing, service prediction and service selection problems [44,45,46,47]. In [48], Gao argues that traditional clustering approaches are performed directly on private data and fail to cope with malicious attacks in massive data mining tasks against attackers' arbitrary background knowledge. To address these issues, the authors propose an efficient privacy-preserving hybrid k-means under Spark. In [49], Kai et al. propose a mutual privacy preservation K-means clustering scheme. It neither discloses personal privacy information nor discloses community characteristic data (clusters).

Relevant theory

This part mainly introduces \(\varepsilon\)-differential privacy [50], the Laplace mechanism and Laplace sensitivity [51, 52].

\(\varepsilon\)-differential privacy: For a random algorithm \(M\), let \(P_{m}\) be the set of all the values that algorithm \(M\) can output. If for any pair of adjacent data sets \(D\) and \(D^{'}\), and any subset \(S_{m}\) of \(P_{m}\), algorithm \(M\) satisfies formula (1):

$$\Pr \left[ {M\left( D \right) \in S_{m} } \right] \le e^{\varepsilon } \Pr \left[ {M\left( {D^{'} } \right) \in S_{m} } \right]$$

then the algorithm \(M\) satisfies \(\varepsilon\)-differential privacy, where \(\varepsilon\) is the privacy protection budget. The smaller the parameter \(\varepsilon\) is, the closer the two probability distributions are and the higher the degree of protection; conversely, the larger the parameter \(\varepsilon\) is, the lower the degree of protection.

Laplace mechanism: By adding Laplace noise to the data set to change the real values, differential privacy is satisfied before and after adding noise, thus protecting the privacy data. It is the most widely used noise mechanism for disturbing data to achieve privacy protection. For a query function \(f\), the Laplace mechanism satisfies differential privacy by adding noise following the \(Lap\left( {0,\frac{\nabla f}{\varepsilon }} \right)\) distribution; in short, \(F\left( D \right) = f\left( D \right) + noise\). The noise follows a Laplace distribution. It should be noted that, in order to ensure an unbiased estimate, the mean \(\mu\) of the added Laplace noise should be 0, that is, \(noise\sim Lap(b)\). The probability density of the noise is defined in formula (2):

$$f\left( {x|b} \right) = \frac{1}{2b}\text{e}^{{ - \tfrac{|x|}{b}}}$$

Given \(\nabla f\), the smaller \(\varepsilon\) is, the larger \(b\) is; conversely, the larger \(\varepsilon\) is, the smaller \(b\) is.

Laplace sensitivity \(D\left( F \right)\): For a function \(F\left( C \right) \in R\) over adjacent data sets \(C_{1}\) and \(C_{2}\), the sensitivity of \(F\) is defined in formula (3):

$$D\left( F \right) = \mathop {\hbox{max} }\limits_{{C_{1} ,C_{2} }} \left\| {F\left( {C_{1} } \right) - F\left( {C_{2} } \right)} \right\|_{1}$$

According to the noise level and probability density, the smaller the privacy budget \(\varepsilon\) is, the higher the degree of protection.

Proposed method

The design idea is mainly divided into three parts: noise addition, feature selection and logistic regression. The noise addition part disturbs the original data to protect the privacy data. The feature selection part highlights the impact of the noised data on the classifier, so that it forms a stronger contrast with the original data without noise. The logistic regression part uses supervised logistic regression to implement the classification task.
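Before turning to the details of each part, the noise-addition step just described can be sketched in a few lines of Python. This is only an illustration, not code from the paper: NumPy is assumed to be available, and the column values, the sensitivity and the choice of privacy budget \(\varepsilon\) below are made-up assumptions.

```python
import numpy as np

def add_laplace_noise(values, sensitivity, epsilon, rng=None):
    """Perturb a 1-D array of sensitive values with zero-mean Laplace noise.

    The scale is b = sensitivity / epsilon, so a smaller privacy budget epsilon
    gives a larger b, a stronger perturbation and a higher degree of protection.
    """
    rng = np.random.default_rng() if rng is None else rng
    b = sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=b, size=len(values))
    return values + noise

# Hypothetical usage: protect only the sensitive BMI column and leave the rest untouched.
bmi = np.array([23.5, 27.1, 31.8, 21.0])                 # made-up sample values
noisy_bmi = add_laplace_noise(bmi, sensitivity=1.0, epsilon=1.0)
```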
In this section, we need to note the effect of the regularization coefficient on the classification model. In order to highlight the classification results with and without noise under the same classification model, it is necessary to pay attention to the relationship between the noise intensity and the regularization coefficient, taking into account the features selected by the feature selection process.

Main design ideas

The method presented in this paper is summarized in Fig. 1.

Fig. 1 The simplified structure

As shown in Fig. 1, the preprocessing section deals mainly with useless information in the raw data. Firstly, the noise addition part adds noise satisfying the Laplace mechanism to the data, which aims to protect the local privacy data. Then the feature selection part, on the one hand, highlights the performance of the privacy data in the classifier and, on the other hand, removes redundant features so that the classifier generalizes better. Then normalization is used to improve the convergence rate of the classifier. Finally, the logistic regression part is used as the final classifier to classify the data, and the classification indicators are obtained to evaluate whether the method achieves the desired classification effect and whether it can protect the privacy data while keeping the data usable.

Noise addition part

This section focuses on adding noise to the previously mentioned data set. Firstly, the probability density function of the Laplace distribution is given, and then the influence of the parameters \(\varepsilon\) and \(b\) on the intensity of the added noise and the degree of privacy protection is discussed. The distribution function of the noise under different \(\varepsilon\) is shown in Fig. 2.

Fig. 2 The distribution function

Assuming \(b = \frac{\nabla f}{\varepsilon }\), as shown in Fig. 2, the smaller \(\varepsilon\) is, the larger \(b\) is, the more uniform the added noise, the smaller the probability that the added noise is close to 0, the greater the degree of confusion, and the higher the degree of protection. The privacy budget \(\varepsilon\) can therefore be adjusted through \(b\) to control the intensity of privacy protection.

Feature selection part

In this part, we describe the method used for feature selection. Here we use Random Forest (RF) from ensemble learning, with a Classification and Regression Tree (CART) for each decision tree [53]. Firstly, CART is introduced briefly, and then the flow of feature selection is given. CART forms a tree structure, and the splitting of nodes is based on the minimum \(Gini\) index. The method of constructing a CART is as follows:

Step 1: If the condition for stopping classification is satisfied, the splitting is stopped. The stopping condition is that the \(Gini\) coefficient is less than a threshold or the number of samples is less than a threshold.

Step 2: Otherwise, the split with the minimum \(Gini\) coefficient is selected. The feature used for splitting is preserved and appended to the selected feature set.

Step 3: Steps 1–2 are performed recursively until the splitting is completed.

Step 4: The output is a set of features arranged from strong to weak according to the importance of the features.
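As a concrete illustration of this importance-ranking step (the Gini index used for splitting is defined formally just below), here is a minimal sketch of importance-based feature selection. It is an assumption of this sketch that scikit-learn is available; the function and variable names are hypothetical and do not come from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_features(X, y, feature_names, n_estimators=100, random_state=0):
    """Fit a random forest and return (name, importance) pairs, strongest first."""
    forest = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    forest.fit(X, y)
    order = np.argsort(forest.feature_importances_)[::-1]
    return [(feature_names[i], forest.feature_importances_[i]) for i in order]

# Hypothetical usage: X_noisy is the data set after Laplace noise was added to BMI.
# ranked = rank_features(X_noisy, y, feature_names)
# selected = [name for name, _ in ranked[:k]]   # keep the top-k features, making sure
#                                               # the noised BMI feature is retained
```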
The \(Gini\) index of a node \(t\) is defined in formula (4):

$$Gini\left( t \right) = 1 - \sum\limits_{k} {p\left( {c_{k} |t} \right)^{2} }$$

If \(A\) represents the splitting feature and divides data set \(D\) into \(D_{1}\) and \(D_{2}\), then the \(Gini\) index of set \(D\) is defined in formula (5):

$$Gini\left( {D,A} \right) = \frac{{\left| {D_{1} } \right|}}{\left| D \right|}Gini\left( {D_{1} } \right) + \frac{{\left| {D_{2} } \right|}}{\left| D \right|}Gini\left( {D_{2} } \right)$$

The minimum \(Gini\) index reduces the uncertainty of the samples to the greatest extent, so it is the basis for splitting. In the process of gradual splitting, the features used for splitting are preserved one at a time until all splitting is completed and the feature set for classification is output; the importance of the features is then consistent with the order in which they were stored. Random Forest is an ensemble algorithm of the bagging type. It combines several weak classifiers, and the final result is obtained by voting or taking the mean. The feature selection using the RF follows from the steps above: the forest is built from such CART trees, and the features are output from strong to weak importance.

Logistic regression design part

The essence of logistic regression is to determine the cost function \(J\) for a classification problem, then choose a suitable iterative method to solve for the parameters of the model, and finally verify the model. Logistic regression is well suited to binary classification problems, and training the model is fast. In this part, firstly the linear boundary condition is given, secondly the prediction function is selected and the cost function is constructed, and finally the parameter training method of the model is given. The boundary condition can be expressed as in formula (6):

$$z = \sum\limits_{i = 1}^{n} {\theta_{i} x_{i} }$$

where \(\theta_{i}\) is a parameter to be trained in the model, \(x_{i}\) is the attribute value of the sample and \(z\) is the linear output. The prediction function can be expressed as in formula (7):

$${h_\theta } = \frac{1}{{1 + {e^{ - z}}}} = \frac{1}{{1 + {e^{ - \sum {{\theta _i}{x_i}} }}}}$$

where \(h_{\theta }\) is the predicted probability. The cost function with regularization can be expressed as in formula (8):

$$J\left( \theta \right) = - \frac{1}{m}\sum\limits_{i = 1}^{m} {\left[ {y^{\left( i \right)} \log \left( {h_\theta \left( {x^{\left( i \right)} } \right)} \right) + \left( {1 - y^{\left( i \right)} } \right)\log \left( {1 - h_\theta \left( {x^{\left( i \right)} } \right)} \right)} \right]} + C\sum\limits_{j = 1}^{n} {\theta _j^2 }$$

where \(J\left( \theta \right)\) is the cost function, \(\theta\) is the model parameter vector, and \(C\) is the regularization coefficient. The stochastic gradient descent algorithm is used to minimize the cost function and obtain the model parameters.

Experiments

In this section, the experimental environment, the source and processing of the data set and three different groups of experiments are described.

Experimental environment

The experimental environment includes an Intel(R) Core(TM) i5-4210M CPU @ 2.60 GHz, 4.00 GB RAM, the Windows 10 operating system and the Python programming language. The experimental data set is from the UCI repository. The data set contains 116 samples, each of which has 10 attributes, and the classification label is 1 (healthy controls) or 2 (patients). The data set URL is http://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Coimbra. Part of the data set information is shown in Table 1.
Table 1 Part of the data set

In the experiments, the BMI index is selected as the privacy data, and it is processed by adding noise to form the privacy-protected data.

Evaluation indices of the classification results

In this part, the accuracy (\(ACC\)), the true positive rate (\(TPR\)) and the false positive rate (\(FPR\)) are calculated using the confusion matrix, and then the meaning of the receiver operating characteristic curve (\(ROC\)) and the area under the curve (\(AUC\)) is explained. These indicators serve as the basis for evaluating the classification models. A detailed description of the confusion matrix is given in Table 2.

Table 2 Confusion matrix

The accuracy calculation is shown in formula (9):

$$ACC = \frac{TP + TN}{TP + FN + FP + TN}$$

Because the accuracy of a classification model is relatively one-sided, we also need the values of \(FPR\) and \(TPR\): a set of \(FPR\) and \(TPR\) values is formed according to different thresholds, the \(ROC\) curve is drawn, and the area under the curve, that is, the value of \(AUC\), is calculated. The closer the \(AUC\) value is to 1, the better the classifier performs. The definitions of \(FPR\) and \(TPR\) are shown in formulas (10) and (11):

$$FPR = \frac{FP}{FP + TN}$$

$$TPR = \frac{TP}{TP + FN}$$

The threshold here is the score above which an instance is predicted to be positive: an instance scoring above the threshold is predicted to be positive, and one scoring below it is predicted to be negative. As the threshold decreases, more and more instances are classified into the positive class, but these also include real negative instances, so \(TPR\) and \(FPR\) increase simultaneously and tend to 1. Conversely, as the threshold increases, fewer instances are predicted to be positive, and the values of \(FPR\) and \(TPR\) decrease and tend to 0. Ideally, \(FPR\) is 0 and \(TPR\) is 1.

Experimental results and analysis

In this part, we mainly examine the influence of adding noise or not on the classification effect, the classification effect of the same regularization coefficient \(C\) under different noise intensities, and the classification results of different regularization coefficients under the same noise intensity. The confusion matrix for a given noise intensity of 1.0 is given in Fig. 3.

Fig. 3 Confusion matrix for a given noise intensity of 1.0

As shown in Fig. 3, the probability of correct prediction is relatively high. In particular, the accuracy of predicting real patients to be patients is 89.5%, which is much higher than the 81.2% for healthy controls. However, the probability of predicting real patients to be healthy controls was 18.8%, which indicates that there is a certain number of false positive cases (\(FP\)); this would have a negative impact in a real environment. The analysis shows that the data after noise processing does achieve good accuracy, but there may also be a risk of a high false positive rate. The effect of the data on classification under noisy and noise-free conditions is shown in the form of \(ROC\) curves in Fig. 4.

Fig. 4 Different ROC curves and corresponding accuracy
Firstly, in Fig. 4, the stronger the noise is, the greater its interference with the data; note that a smaller noise intensity value here corresponds to stronger noise, so the larger the noise intensity value is, the closer the new data is to the original data. In the absence of noise, the classification accuracy is 88.6%, while at a noise intensity of 1.0 the classification accuracy is only reduced by about 3%. As the noise interference increases, the accuracy is gradually reduced. When the noise intensity is 0.1, the noise interference is the largest, and the classification accuracy is reduced by about 14.3%. This shows that the interference ability of the noise does affect the classification accuracy of the classifier: the stronger the noise interference, the lower the classification accuracy; conversely, the weaker the noise interference, the higher the classification accuracy. In addition, it should be noted that the trend of the \(AUC\) value is basically consistent with that of the accuracy. The experimental results show that the value of \(AUC\) with or without noise can be above 0.8.

Figure 5 shows the comparison of the accuracy for different regularization coefficients \(C\) without adding noise and the average accuracy after adding noise of different intensities.

Fig. 5 Accuracy trends under different regularization coefficients and with or without added noise

Firstly, as a whole, as the regularization coefficient \(C\) decreases, the accuracy decreases whether noise is added or not. The reason is that, as the regularization coefficient decreases, the penalty on the model parameters weakens, which leads to over-fitting of the classification model and reduces the classification effect. Locally, for a given regularization coefficient, that is, for a given classifier, the classification result without noise is always better than the average case with added noise. From the experimental results, the classification accuracy without noise is higher by 8.6% to 9.7%. This shows that the addition of noise does affect the classification effect of the classifier. However, it is noteworthy that, even with the addition of noise, mining of the privacy data can still be achieved after choosing an appropriate regularization coefficient. For example, in the case of \(C = 10\) with added noise, the average classification accuracy on the privacy data can still reach about 80%.

Conclusion

In this paper, we combine privacy protection with data mining to implement local privacy protection classification based on human-centric computing. First, the necessary preprocessing is performed on the original data. Then, in order to disturb the source data, the protected data is formed by adding noise conforming to the Laplace mechanism to the privacy data. Secondly, in order to highlight the influence of the noised data and to improve the generalization ability of the classifier, the random forest strategy is adopted for feature selection. Then the classic logistic regression model from machine learning is used to implement the classification task. Finally, the experiments show that, after selecting an appropriate regularization coefficient, the accuracy on the noise-added data can reach 85.7%. Considering the actual situation, the deficiency of this paper is that the number of patients predicted as healthy controls is relatively large and the overall prediction accuracy is not high.
The next step is to improve the degree of mining privacy data without affecting the machine learning model as much as possible. We declared that materials described in the manuscript will be freely available to any scientist wishing to use them for non-commercial purposes. AI: classified and regression decision tree ACC: receiver operating characteristic curve area under the curve TP: true positive TN: true negative FP: Zhaojun L, Gang Q, Zhenglin L (2019) A survey on recent advances in vehicular network security, trust, and privacy. IEEE Trans Intell Transp Syst 20(2):760–776 Xiong L, Jian N, Bhuiyan MZA (2018) A robust ECC-based provable secure authentication protocol with privacy preserving for industrial internet of things. IEEE Trans Ind Inf 14(8):3599–3609 Sangaiah AK, Medhane DV, Han T, Hossain MS, Muhammad G (2019) Enforcing position-based confidentiality with machine learning paradigm through mobile edge computing in real-time industrial informatics. IEEE Trans Ind Inf 15(7):4189–4196 Perez AJ, Zeadally S, Jabeur N (2018) Security and privacy in ubiquitous sensor networks. J Inf Process Syst 14(2):286–308 Kang WM, Moon SY, Park JH (2017) An enhanced security framework for home appliances in smart home. Human Centric Comput Inf Sci 7(1):6 Kim HW, Jeong YS (2018) Secure authentication-management human-centric scheme for trusting personal resource information on mobile cloud computing with blockchain. Human Centric Comput Inf Sci 8(1):11 Sweeney L (2002) K-anonymity: a model forprotecting privacy. Int J Uncertainty Fuzziness Knowl Based Syst 10(5):557–570 Sweeney L (2002) Achieving k-anonymity privacy protection using generalization, and suppression. Int J Uncertainty Fuzziness Knowl Based Syst 10(5):571–588 Machanavajjhala A., Gehrke J., Kifer D., and Venkitasubramaniam M. (2006) L-diversity: privacy beyond k-anonymity. In: 22nd international conference on data engineering. IEEE Computer Society Dwork C (2006) Differential privacy. International colloquium on automata, languages, and programming. Springer, Berlin, pp 1–12 Jian S, Chen W, Tong L (2018) Secure data uploading scheme for a smart home system. Inf Sci 453:186–197 Zhou SW, He Y, Xiang SZ, Li KQ, Liu YH (2019) Region-based compressive networked storage with lazy encoding. IEEE Trans Parallel Distrib Syst 30(6):1390–1402 Hu P, Dhelim S, Ning H (2017) Survey on fog computing: architecture, key technologies, applications and open issues. J Netw Comput Appl 98:27–42 He S, Li Z, Tang Y, Liao Z, Wang J, Kim HJ (2019) Parameters compressing in deep learning. Comput Mater Con (accepted) Zhang J, Jin X, Sun J, Wang J, Sangaiah AK (2018) Spatial and semantic convolutional features for robust visual object tracking. Multimedia Tools Appl. https://doi.org/10.1007/s11042-018-6562-8 Zhang J, Jin X, Sun J, Wang J, Li K (2019) Dual model learning combined with multiple feature selection for accurate visual tracking. IEEE Access 7:43956–43969 Zhang J, Wu Y, Feng W, Wang J (2019) Spatially attentive visual tracking using multi-model adaptive response fusion. IEEE Access 7:83873–83887 Zhang J, Wang W, Lu C, Wang J, Sangaiah AK (2019) Lightweight deep network for traffic sign classification. Ann Telecommun. https://doi.org/10.1007/s12243-019-00731-9 Zhang J, Lu C, Li X, Kim H-J, Wang J (2019) A full convolutional network based on DenseNet for remote sensing scene classification. 
Math Biosci Eng 16(5):3345–3367 Yin Y, Chen L, Xu Y, Wan J, Zhang H, Mai Z (2019) QoS prediction for service recommendation with deep feature learning in edge computing environment. Mobile Netw Appl. https://doi.org/10.1007/s11036-019-01241-7 Yin Y, Chen L, Xu Y, Wan J (2018) Location-aware service recommendation with enhanced probabilistic matrix factorization. IEEE Access 6:62815–62825 Yin Y, Xu Y, Xu W, Gao M, Yu L, Pei Y (2017) Collaborative service selection via ensemble learning in mixed mobile network environments. Entropy 19(7):358 Yin Y, Xu W, Xu Y, Li H, Yu L (2017) Collaborative QoS prediction for mobile service with data filtering and SlopeOne model. Mobile Inf Syst 2017:1–14 Rabaey JM (2018) Towards true human-centric computation. Commun Comput 131:73–76 Heng Z, Wenchao M, Jun Q (2019) Distributed load sharing under false data injection attack in an inverter-based microgrid. IEEE Trans Ind Electron 66(2):1543–1551 Guo Kehua, He Yan, Kui Xiaoyan, Sehdev Paramjit, Chi Tao, Zhang Ruifang, Li Jialun (2018) LLTO: towards efficient lesion localization based on template occlusion strategy in intelligent diagnosis. Pattern Recogn Lett 116:225–232 Guo Kehua, Li Ting, Huang Runhe, Kang Jian, Chi Tao (2018) DDA: a deep neural network-based cognitive system for IoT-aided dermatosis discrimination. Ad Hoc Netw 80:95–103 Ozatay M, Verma N (2019) Exploiting emerging sensing technologies toward structure in data for enhancing perception in Human-Centric applications. IEEE Internet Things J 6(2):3411–3422 Gergely A, Luca M, Castelluccia C (2019) Differentially private mixture of generative neural networks. IEEE Trans Knowl Data Eng 31(6):1109–1121 Yin C, Xi J, Sun R, Wang J (2018) Location privacy protection based on differential privacy strategy for big data in industrial internet-of-things. IEEE Trans Ind Inf 14(8):3628–3636 Vaidya J, Shafiq B, Basu A, Hong Y (2013) Differentially private Naive Bayes classification. In: Proceedings of the 2013 IEEE/WIC/ACM international joint conferences on web intelligence (WI) and intelligent agent technologies (IAT) Mohammed N (2011) Differentially private data release for data mining. In: ACM SIGKDD international conference on knowledge discovery and data mining ACM; 2011. pp 493–501 Tang DY, Zhou SW, Yang WJ (2019) Random-filtering based sparse representation parallel face recognition. Multimedia Tools Appl 78(2):1419–1439 Yin C, Ding S, Wang J (2019) Mobile marketing recommendation method based on user location feedback. Human Centric Comput Inf Sci 9(1):14–31 Yin C, Shi L, Sun R, Wang J (2019) Improved collaborative filtering recommendation algorithm based on differential privacy protection. J Supercomput 7:1–14 Jia Q, Guo L, Jin Z (2018) Preserving model privacy for machine learning in distributed systems. IEEE Trans Parallel Distrib Syst 29(8):1808–1822 Wang J, Gao Y, Liu W, Wu W, Lim S (2019) An asynchronous clustering and mobile data gathering schema based on timer mechanism in wireless sensor networks. Comput Mater Continua 58(3):711–725 Wang J, Gao Y, Liu W, Sangaiah AK, Kim H-J (2019) Energy efficient routing algorithm with mobile sink support for wireless sensor networks. Sensors 19(7):1494 Pan J-S, Kong L, Sung T-W, Tsai P-W, Snasel V (2018) Alpha-fraction first strategy for hierarchical wireless sensor networks. J Internet Technol 19(6):1717–1726 Pan J-S, Lee C-Y, Sghaier A, Zeghid M, Xie J (2019) Novel systolization of subquadratic space complexity multipliers based on Toeplitz matrix-vector product approach. 
IEEE Trans Very Large Scale Integr Syst 27(7):1614–1622 He S, Xie K, Xie K, Xu C, Wang J (2019) Interference-aware multi-source transmission in multi-radio and multi-channel wireless network. IEEE Syst J. https://doi.org/10.1109/JSYST.2019.2910409 He S, Xie K, Chen W, Zhang D, Wen J (2018) Energy-aware routing for SWIPT in multi-hop energy-constrained wireless network. IEEE Access 6:17996–18008 He S, Zeng W, Xie K, Yang H, Lai M, Su X (2017) PPNC: privacy preserving scheme for random linear network coding in smart grid. KSII Trans Internet Inf Syst 11(3):1510–1533 Wang J, Gao Y, Yin X, Li F, Kim HJ (2018) An enhanced PEGASIS algorithm with mobile sink support for wireless sensor networks. Wire Commun Mobile Comput. https://doi.org/10.1155/2018/9472075 Nguyen T-T, Pan J-S, Dao T-K (2019) An improved flower pollination algorithm for optimizing layouts of nodes in wireless sensor network. IEEE Access. https://doi.org/10.1109/ACCESS.2019.2921721 Meng Z, Pan J-S, Tseng K-K (2019) PaDE: an enhanced differential Evolution algorithm with novel control parameter adaptstion schemes for numerical optimization. Knowl Based Syst 168:80–99 Pan J-S, Kong L, Sung T-W, Tsai P-W, Snáel V (2018) A clustering scheme for wireless sensor networks based on genetic algorithm and dominating set. J Internet Technol 19(4):1111–1118 Gao Z, Sun Y, Cui X (2018) Privacy-preserving hybrid K-means. Int J Data Warehouse Min 14(2):1–17 Xing K, Hu C, Yu J (2017) Mutual privacy preserving k-means clustering in social participat-ory sensing. IEEE Trans Ind Inf 13(4):2066–2076 Gong M, Pan K, Xie Y (2019) Differential privacy preservation in regression analysis based on relevance. Knowl Based Syst 173:140–149 Wang J, Gao Y, Liu W, Sangaiah AK, Kim HJ (2019) An improved routing schema with special clustering using PSO algorithm for heterogeneous wireless sensor Network. Sensors 19(3):671–688 Wang J, Ju C, Gao Y, Sangaiah AK, Kim G-J (2018) A PSO based energy efficient coverage control algorithm for wireless sensor networks. Comput Mater Continua 56:433–446 Chen W, Xie X, Wang J (2017) A comparative study of logistic model tree, random forest, and classification and regression tree models for spatial prediction of landslide susceptibility. CATENA 151:147–160 It was supported by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX18_1032). This work was supported by the National Natural Science Foundation of China (61772282, 61772454, 61811530332). School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, 210044, China Chunyong Yin & Biao Zhou College of Information Science and Technology, Nanjing Forestry University, Nanjing, 210037, China Zhichao Yin School of Computer & Communication Engineering, Changsha University of Science and Technology, Changsha, 410004, China Jin Wang Chunyong Yin Biao Zhou CY conceptualized the study. BZ performed all experiments and wrote the manuscript. ZY analyzed all the data. JW advised on the manuscript preparation and technical knowledge. All authors read and approved the final manuscript. Correspondence to Jin Wang. Yin, C., Zhou, B., Yin, Z. et al. Local privacy protection classification based on human-centric computing. Hum. Cent. Comput. Inf. Sci. 9, 33 (2019). https://doi.org/10.1186/s13673-019-0195-4 Human-centric computing Data driven artificial intelligence for human-centric computing
Face-centred unit cell: A metal crystallises in a face-centred cubic structure. If the edge of its unit cell is 'a', the closest approach between two atoms in the metallic crystal will be:
(A) $2\sqrt{2}\,a$
(B) $\sqrt{2}\,a$
(C) $\dfrac{a}{\sqrt{2}}$
(D) $2a$

Hint: We know that in the face-centred cubic structure the number of atoms per unit cell is four: the eight corner atoms together contribute one atom and the six face-centred atoms contribute three atoms. FCC is the abbreviation used for the face-centred cubic structure.

Complete step-by-step solution: As we know, for an FCC crystal the closest approach between two atoms in the metallic crystal is equal to half the length of the face diagonal. It can be written as
$\dfrac{1}{2} \times \text{length of face diagonal} \;\;\ldots\left( 1 \right)$
The length of the face diagonal of the face-centred cubic structure is
$\sqrt{2}\,a \;\;\ldots\left( 2 \right)$
where 'a' is the edge of the unit cell. So, substituting (2) into (1) gives the closest approach between the atoms in the metallic crystal:
$\dfrac{1}{2} \times \sqrt{2}\,a = \dfrac{a}{\sqrt{2}}$
Thus, for a metal that crystallises in a face-centred cubic structure with unit cell edge 'a', the closest approach between two atoms in the metallic crystal is $\dfrac{a}{\sqrt{2}}$.
Hence, the correct option for this question is C, that is $\dfrac{a}{\sqrt{2}}$.

Note: Usually, crystals are divided into seven main types, on the basis of the arrangement of atoms or ions in the crystal and the number of atoms present at the corners and at the centre. The study of such structures is normally termed crystallography.
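As a quick numerical check (not part of the original question), the same relation can be evaluated for a real FCC metal; the copper lattice parameter used below, roughly 361 pm, is quoted here only as an illustrative value.

```python
import math

def fcc_closest_approach(a):
    """Closest approach (nearest-neighbour distance) in an FCC metal with edge length a."""
    return a / math.sqrt(2)

# Copper is FCC with a lattice parameter of roughly 361 pm, so the nearest-neighbour
# distance works out to about 255 pm.
print(fcc_closest_approach(361.0))   # ~255.3
```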
Notre Dame Journal of Formal Logic, Volume 58, Number 3 (2017), 397–407.

A Diamond Principle Consistent with AD
Daniel Cunningham

Abstract: We present a diamond principle ◊R concerning all subsets of Θ, the supremum of the ordinals that are the surjective image of R. We prove that ◊R holds in Steel's core model K(R), a canonical inner model for determinacy.

Accepted: 5 January 2015. First available in Project Euclid: 21 April 2017.
doi:10.1215/00294527-2017-0008. Mathematical Reviews number (MathSciNet): MR3681101.
Subjects: Primary: 03E15 (Descriptive set theory); Secondary: 03E45 (Inner models, including constructibility, ordinal definability, and core models), 03E60 (Determinacy principles).
Keywords: diamond principles, determinacy, Steel's core model $\mathbf{K}(\mathbb{R})$.
Citation: Cunningham, Daniel. A Diamond Principle Consistent with AD. Notre Dame J. Formal Logic 58 (2017), no. 3, 397–407. doi:10.1215/00294527-2017-0008. https://projecteuclid.org/euclid.ndjfl/1492761611
CommonCrawl
Optimizing 3-objective portfolio selection with equality constraints and analyzing the effect of varying constraints on the efficient sets
Yue Qi 1, Xiaolin Li 2 and Su Zhang 2
1. China Academy of Corporate Governance & Department of Financial Management, Business School, Nankai University, 94 Weijin Road, Tianjin 300071, China
2. Department of Financial Management, Business School, Nankai University, 94 Weijin Road, Tianjin 300071, China
* Corresponding author: Su Zhang
Received May 2019; Revised August 2019; Published February 2020
Fund Project: The research is supported by the National Social Science Fund of China 2018 (Grant No. 18BGL063)
Abstract: Markowitz proposes portfolio selection as a 2-objective model and emphasizes computing (whole) efficient sets and nondominated sets. Computing the sets has long been a topic in multiple-objective optimization. Researchers have gradually recognized other criteria in addition to variance and expected return and, to formulate the additional criteria, propose multiple-objective portfolio selection. However, computing the corresponding efficient set and nondominated set has not been fully achieved, and discovering and utilizing the sets' properties typically remain open questions. In this paper, we extend Sharpe's and Merton's model by adding a general linear objective and imposing equality constraints. To optimize the model, we analytically derive the minimum-variance surface (defined later), prove it to be a nondegenerate paraboloid, and prove the nondominated set to be a paraboloidal segment. We also analytically derive the efficient set and prove it to be a 2-dimensional translated cone. We then prove that this set subsumes the efficient set of the corresponding traditional model, so the efficient set expands as the general linear objective is added. Furthermore, constraints can be changed or added. We utilize the translated-cone properties and readily compute the effect of the changing constraints on the efficient sets by formulae or linear-equation systems.
Keywords: Multiple-objective optimization, multiple-objective portfolio selection, efficient set, nondominated set, paraboloid, constraint.
Mathematics Subject Classification: Primary: 90B50; Secondary: 90C29.
Citation: Yue Qi, Xiaolin Li, Su Zhang. Optimizing 3-objective portfolio selection with equality constraints and analyzing the effect of varying constraints on the efficient sets. Journal of Industrial & Management Optimization. doi: 10.3934/jimo.2020033
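To make the kind of computation involved concrete, here is a small sketch of the classical building block underlying such models: minimising portfolio variance subject to equality constraints (full investment and a target expected return) by solving the KKT linear system. This is the standard two-objective special case, not the authors' 3-objective formulation, and the covariance matrix, expected returns and target below are made-up illustrative values.

```python
# Minimum-variance portfolio under equality constraints via the KKT system.
# All numerical inputs are illustrative assumptions, not data from the paper.
import numpy as np

Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.12]])   # covariance matrix (assumed)
mu = np.array([0.05, 0.07, 0.10])        # expected returns (assumed)
target = 0.08                            # required expected return (assumed)

n = len(mu)
A = np.vstack([np.ones(n), mu])          # equality constraints A x = b
b = np.array([1.0, target])              # full investment, target return

# Minimise x' Sigma x subject to A x = b:
#   [2*Sigma  A^T] [x  ]   [0]
#   [A        0  ] [lam] = [b]
KKT = np.block([[2 * Sigma, A.T],
                [A, np.zeros((A.shape[0], A.shape[0]))]])
rhs = np.concatenate([np.zeros(n), b])

x = np.linalg.solve(KKT, rhs)[:n]        # optimal weights
print(x, x @ mu, x @ Sigma @ x)          # weights, achieved return, variance
```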
Banking & Finance, 70 (2016), 23-37. doi: 10.2139/ssrn.2736324. Google Scholar B. Aouni, M. Doumpos, B. Pérez-Gladish and R. E. Steuer, On the increasing importance of multiple criteria decision aid methods for portfolio selection, J. Oper. Research Society, 69 (2018), 1525-1542. doi: 10.1080/01605682.2018.1475118. Google Scholar F. D. Arditti, Risk and the required return on equity, J. Finance, 22 (1967), 19-36. doi: 10.1111/j.1540-6261.1967.tb01651.x. Google Scholar C. A. Bana e Costa and J. O. Soares, Multicriteria approaches for portfolio selection: An overview, Rev. Financial Markets, 4 (2001), 19-26. Google Scholar P. Behr, A. Guettler and F. Truebenbach, Using industry momentum to improve portfolio performance, J. Banking & Finance, 36 (2012), 1414-1423. doi: 10.1016/j.jbankfin.2011.12.007. Google Scholar M. J. Best, An algorithm for the solution of the parametric quadratic programming problem, in Applied Mathematics and Parallel Computing, Physica, Heidelberg, 1996, 57–76. doi: 10.1007/978-3-642-99789-1_5. Google Scholar A. Bilbao-Terol, M. Arenas-Parra, V. Cañal-Fernández and C. Bilbao-Terol, Selection of socially responsible portfolios using hedonic prices, in Operations Research Proceedings 2012, Operations Research Proceedings, Springer, Cham, 2014. doi: 10.1007/978-3-319-00795-3_8. Google Scholar F. Black, Capital market equilibrium with restricted borrowing, J. Business, 45 (1972), 444-455. doi: 10.1086/295472. Google Scholar Z. Bodie, A. Kane and A. J. Marcus, Investments, McGraw-Hill Education, New York, 2018. Google Scholar M. W. Brandt, P. Santa-Clara and R. Valkanov, Parametric portfolio policies: Exploiting characteristics in the cross-section of equity returns, Rev. Financial Studies, 22 (2009), 3411-3447. doi: 10.3386/w10996. Google Scholar P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods, Springer Series in Statistics, Springer-Verlag, New York, 1987. doi: 10.1007/978-1-4899-0004-3. Google Scholar O. Bunn, A. Staal, J. Zhuang, A. Lazanas, C. Ural and R. Shiller, Es-cape-ing from overvalued sectors: Sector selection based on the cyclically adjusted price-earnings (CAPE) ratio, J. Portfolio Management, 41 (2014), 16-33. doi: 10.3905/jpm.2014.41.1.016. Google Scholar G. Capelle-Blancard and S. Monjon, The performance of socially responsible funds: Does the screening process matter?, European Financial Management, 20 (2014), 494-520. doi: 10.1111/j.1468-036X.2012.00643.x. Google Scholar L. K. Chan, J. Karceski and J. Lakonishok, The risk and return from factors, J. Financial Quantitative Anal., 33 (1998), 159-188. doi: 10.3386/w6098. Google Scholar G. Chow, Portfolio selection based on return, risk, and relative performance, Financial Analysts Journal, 51 (1995), 54-60. doi: 10.2469/faj.v51.n2.1881. Google Scholar T. Chow, E. Kose and F. Li, The impact of constraints on minimum-variance portfolios, Financial Analysts Journal, 72 (2016), 52-70. doi: 10.2469/faj.v72.n2.5. Google Scholar V. DeMiguel, L. Garlappi, F. J. Nogales and R. Uppal, A generalized approach to portfolio optimization: Improving performance by constraining portfolio norms, Management Science, 55 (2009), 798-812. Google Scholar V. DeMiguel, L. Garlappi and R. Uppal, Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy?, Rev. Financial Studies, 22 (2009), 1915-1953. doi: 10.1093/acprof:oso/9780199744282.003.0034. Google Scholar G. Dorfleitner, M. Leidl and J. Reeder, Theory of social returns in portfolio choice with application to microfinance, J. 
Asset Management, 13 (2012), 384-400. doi: 10.1057/jam.2012.18. Google Scholar P. H. Dybvig, H. K. Farnsworth and J. N. Carpenter, Portfolio performance and agency, Rev. Financial Studies, 23 (2010), 1-23. doi: 10.1093/rfs/hhp056. Google Scholar M. Ehrgott, K. Klamroth and C. Schwehm, An MCDM approach to portfolio optimization, European J. Oper. Res., 155 (2004), 752-770. doi: 10.1016/S0377-2217(02)00881-0. Google Scholar E. J. Elton, M. J. Gruber, S. J. Brown and W. N. Goetzmann, Modern Portfolio Theory and Investment Analysis, John Wiley & Sons, New York, 2014. Google Scholar F. J. Fabozzi, S. Focardi and C. Jonas, Trends in quantitative equity management: Survey results, Quantitative Finance, 7 (2007), 115-122. doi: 10.1080/14697680701195941. Google Scholar E. F. Fama, Foundations of Finance: Portfolio Decisions and Securities Prices, Basic Books, Inc., New York, 1976. doi: 10.2307/2553407. Google Scholar E. F. Fama and K. R. French, The cross-section of expected stock returns, J. Finance, 47 (1992), 427-465. doi: 10.1111/j.1540-6261.1992.tb04398.x. Google Scholar E. F. Fama and K. R. French, International tests of a five-factor asset pricing model, J. Financial Economics, 123 (2017), 441-463. doi: 10.1016/j.jfineco.2016.11.004. Google Scholar M. A. Ferreira and P. Matos, The colors of investors' money: The role of institutional investors around the world, J. Financial Economics, 88 (2008), 499-533. doi: 10.1016/j.jfineco.2007.07.003. Google Scholar A. M. Geoffrion, Proper efficiency and the theory of vector maximization, J. Math. Anal. Appl., 22 (1968), 618-630. doi: 10.1016/0022-247X(68)90201-1. Google Scholar R. R. Grauer and F. C. Shen, Do constraints improve portfolio performance?, J. Banking & Finance, 24 (2000), 1253-1274. doi: 10.1016/S0378-4266(99)00069-2. Google Scholar J. B. Guerard and A. Mark, The optimization of efficient portfolios: The case for an R & D quadratic term, Research in Finance, 20 (2003), 217-247. doi: 10.1016/S0196-3821(03)20011-3. Google Scholar C. R. Harvey, J. C. Liechty, M. W. Liechty and P. Müller, Portfolio selection with higher moments, Quant. Finance, 10 (2010), 469-485. doi: 10.1080/14697681003756877. Google Scholar M. Hirschberger, Y. Qi and R. E. Steuer, Large-scale MV efficient frontier computation via a procedure of parametric quadratic programming, European J. Oper. Res., 204 (2010), 581-588. doi: 10.1016/j.ejor.2009.11.016. Google Scholar M. Hirschberger, R. E. Steuer, S. Utz, M. Wimmer and Y. Qi, Computing the nondominated surface in tri-criterion portfolio selection, Oper. Res., 61 (2013), 169-183. doi: 10.1287/opre.1120.1140. Google Scholar C. Huang and R. H. Litzenberger, Foundations for Financial Economics, North-Holland Publishing Co., New York, 1988. Google Scholar R. Jagannathan and T. Ma, Risk reduction in large portfolios: Why imposing the wrong constraints helps, J. Finance, 58 (2003), 1651-1684. doi: 10.3386/w8922. Google Scholar C. P. Jones and G. R. Jensen, Investments: Analysis and Management, John Wiley & Sons, New York, 2016. Google Scholar B. D. Jordan, T. W. Miller and S. D. Dolvin, Fundamentals of Investments: Valuation and Management, McGraw-Hill Education, New York, 2015. Google Scholar P. Jorion, Portfolio optimization with tracking-error constraints, Financial Analysts Journal, 59 (2003), 70-82. doi: 10.2469/faj.v59.n5.2565. Google Scholar C. Kirby and B. Ostdiek, It's all in the timing: Simple active portfolio strategies that outperform naïve diversification, J. Financial and Quantitative Analysis, 47 (2012), 437-467. 
doi: 10.2139/ssrn.1530022. Google Scholar M. Kritzman, S. Page and D. Turkington, In defense of optimization: The fallacy of 1/$N$, Financial Analysts Journal, 66 (2010), 31-39. doi: 10.2469/faj.v66.n2.6. Google Scholar P. D. Lax, Linear Algebra and Its Applications, Pure and Applied Mathematics, John Wiley & Sons, Inc., Hoboken, NJ, 2007. Google Scholar A. W. Lo, C. Petrov and M. Wierzbicki, It's 11pm – Do you know where your liquidity is? The mean-variance-liquidity frontier, J. Investment Management, 1 (2003), 55-93. doi: 10.1142/9789812700865_0003. Google Scholar H. M. Markowitz, Foundations of portfolio selection, J. Finance, 46 (1991), 469-477. Google Scholar H. M. Markowitz, The optimization of a quadratic function subject to linear constraints, Naval Res. Logist. Quart., 3 (1956), 111-133. doi: 10.1002/nav.3800030110. Google Scholar H. M. Markowitz, Portfolio selection, J. Finance, 7 (1952), 77-91. doi: 10.1111/j.1540-6261.1952.tb01525.x. Google Scholar H. M. Markowitz, Portfolio Selection: Efficient Diversification of Investments, Monograph, 16, John Wiley & Sons, Inc., New York; Chapman & Hall, Ltd., London, 1959. Google Scholar H. M. Markowitz, Mean-Variance Analysis in Portfolio Choice and Capital Markets, Basil Blackwell, Oxford, 1987. Google Scholar M. Masmoudi and F. B. Abdelaziz, Portfolio selection problem: A review of deterministic and stochastic multiple objective programming models, Ann. Oper. Res., 267 (2018), 335-352. doi: 10.1007/s10479-017-2466-7. Google Scholar H. B. Mayo, Investments: An Introduction, Cengage Learning, Mason, OH, 2017. Google Scholar R. C. Merton, An analytical derivation of the efficient portfolio frontier, J. Financial and Quantitative Analysis, 7 (1972), 1851-1872. doi: 10.2307/2329621. Google Scholar K. Metaxiotis and K. Liagkouras, Multiobjective evolutionary algorithms for portfolio management: A comprehensive literature review, Expert Systems with Applications, 39 (2012), 11685-11698. doi: 10.1016/j.eswa.2012.04.053. Google Scholar A. Ponsich, A. L. Jaimes and C. A. C. Coello, A survey on multiobjective evolutionary algorithms for the solution of the portfolio optimization problem and other finance and economics applications, IEEE Transac. Evol. Comput., 17 (2013), 321-344. doi: 10.1109/TEVC.2012.2196800. Google Scholar Y. Qi, Parametrically computing efficient frontiers of portfolio selection and reporting and utilizing the piecewise-segment structure, preprint, J. Oper. Res. Soc.. doi: 10.1080/01605682.2019.1623477. Google Scholar Y. Qi, On outperforming social-screening-indexing by multiple-objective portfolio selection, Ann. Oper. Res., 267 (2018), 493-513. doi: 10.1007/s10479-018-2921-0. Google Scholar Y. Qi, R. E. Steuer and M. Wimmer, An analytical derivation of the efficient surface in portfolio selection with three criteria, Ann. Oper. Res., 251 (2017), 161-177. doi: 10.1007/s10479-015-1900-y. Google Scholar F. K. Reilly, K. C. Brown and S. Leeds, Investment Analysis and Portfolio Management, Cengage Learning, Mason, OH, 2018. Google Scholar R. Roll, A critique of the asset pricing theory's tests Part Ⅰ : On past and potential testability of the theory, J. Financial Economics, 4 (1977), 129-176. doi: 10.1016/0304-405X(77)90009-5. Google Scholar D. Roman, K. Darby-Dowman and G. Mitra, Mean-risk models using two risk measures: A multi-objective approach, Quant. Finance, 7 (2007), 443-458. doi: 10.1080/14697680701448456. Google Scholar T. L. Saaty, P. C. Rogers and R. Pell, Portfolio selection through hierarchies, J. 
Figure 1. Major methods to solve (1) and (3) in the central and right parts, respectively
Figure 4. The minimum-variance surface
Figure 2. An efficient set and efficient sets under changing constraints in $\mathbb{R}^n$
Figure 3. The existence of many nondominated portfolios for $z_1 = 1$ of (4) for the proof of Theorem 4.5
Figure 5. The nondominated set
Figure 6. The existence of zero-covariance portfolio $\mathbf{z}^{zcp}$ for (2)
Table 1. The result of the hypotheses (24)-(27)
For (24): mean for $\mathbf{x}^e$: 0.0052; mean for $\mathbf{x}^n$: 0.0054; p-value: 0.9585; accept $H_0$
For (25): mean for $\mathbf{x}^e$: 0.0022; mean for $\mathbf{x}^n$: 0.0020; p-value: 0.0130; reject $H_0$
For (26): mean for $\mathbf{x}^e$: 0.0052; mean for $\mathbf{x}^p$: 0.0055; p-value: 0.9268; accept $H_0$
For (27): mean for $\mathbf{x}^e$: 0.0022; mean for $\mathbf{x}^p$: 0.0019; p-value: 0.0085; reject $H_0$
CommonCrawl
Communications on Pure & Applied Analysis, May 2016, 15(3): 893-906. doi: 10.3934/cpaa.2016.15.893
Qualitative properties of solutions to an integral system associated with the Bessel potential
Lu Chen 1, Zhao Liu 1 and Guozhen Lu 2
1. School of Mathematical Sciences, Beijing Normal University, Beijing 100875, China
2. Department of Mathematics, Wayne State University, Detroit, MI 48202
Received August 2015; Revised November 2015; Published February 2016
Abstract: In this paper, we study a differential system associated with the Bessel potential: \begin{eqnarray}\begin{cases} (I-\Delta)^{\frac{\alpha}{2}}u(x)=f_1(u(x),v(x)),\\ (I-\Delta)^{\frac{\alpha}{2}}v(x)=f_2(u(x),v(x)), \end{cases}\end{eqnarray} where $f_1(u(x),v(x))=\lambda_1u^{p_1}(x)+\mu_1v^{q_1}(x)+\gamma_1u^{\alpha_1}(x)v^{\beta_1}(x)$, $f_2(u(x),v(x))=\lambda_2u^{p_2}(x)+\mu_2v^{q_2}(x)+\gamma_2u^{\alpha_2}(x)v^{\beta_2}(x)$, $I$ is the identity operator and $\Delta=\sum_{j=1}^{n}\frac{\partial^2}{\partial x^2_j}$ is the Laplacian operator in $\mathbb{R}^n$. Under some appropriate conditions, this differential system is equivalent to an integral system of the Bessel potential type. By the regularity lifting method developed in [4] and [18], we obtain the regularity of solutions to the integral system. We then apply the moving planes method to obtain radial symmetry and monotonicity of positive solutions. We also establish the uniqueness theorem for radially symmetric solutions. Our nonlinear terms $f_1(u(x), v(x))$ and $f_2(u(x), v(x))$ are quite general, and our results substantially extend the earlier ones even in the case of a single equation.
Keywords: regularity, uniqueness, method of moving planes in integral forms, radial symmetry, Bessel potential.
Mathematics Subject Classification: Primary: 35J48; Secondary: 35B06, 45G1.
Citation: Lu Chen, Zhao Liu, Guozhen Lu. Qualitative properties of solutions to an integral system associated with the Bessel potential. Communications on Pure & Applied Analysis, 2016, 15 (3): 893-906. doi: 10.3934/cpaa.2016.15.893
References:
[1] J. Bao, N. Lam and G. Lu, Polyharmonic equations with critical exponential growth in the whole space $\mathbb{R}^n$, Discrete Contin. Dyn. Syst., 36 (2016), 577.
[2] L. Caffarelli, B. Gidas and J. Spruck, Asymptotic symmetry and local behavior of semilinear elliptic equation with critical Sobolev growth, Commun. Pure Appl. Math., 42 (1989), 271. doi: 10.1002/cpa.3160420304.
[3] A. Chang and P. Yang, On uniqueness of solutions of nth order differential equations in conformal geometry, Math. Res. Lett., 4 (1997), 91. doi: 10.4310/MRL.1997.v4.n1.a9.
[4] W. Chen and C. Li, Methods on Nonlinear Elliptic Equations, AIMS Book Series on Diff. Equa. and Dyn. Sys., (2010).
[5] W. Chen and C. Li, Classification of solutions of some nonlinear elliptic equations, Duke Math. J., 63 (1991), 615. doi: 10.1215/S0012-7094-91-06325-8.
[6] W. Chen and C. Li, Regularity of solutions for a system of integral equations, Commun. Pure Appl. Anal., 4 (2005), 1.
[7] W. Chen, C. Li and B. Ou, Classification of solutions for an integral equation, Comm. Pure Appl. Math., 59 (2006), 330. doi: 10.1002/cpa.20116.
[8] W. Chen, C. Li and B. Ou, Classification of solutions for a system of integral equations, Comm. Partial Differential Equations, 30 (2005), 59.
[9] Y. Fang and W. Chen, A Liouville type theorem for poly-harmonic Dirichlet problems in a half space, Adv. Math., 229 (2012), 2835. doi: 10.1016/j.aim.2012.01.018.
[10] B. Gidas, W. Ni and L. Nirenberg, Symmetry and related properties via the maximum principle, Comm. Math. Phys., 68 (1979), 209.
[11] B. Gidas, W. Ni and L. Nirenberg, Symmetry of positive solutions of nonlinear elliptic equations in $\mathbb{R}^n$, Mathematical Analysis and Applications, (1981).
[12] X. Han and G. Lu, Regularity of solutions to an integral equation associated with Bessel potential, Commun. Pure Appl. Anal., 10 (2011), 1111. doi: 10.3934/cpaa.2011.10.1111.
[13] X. Han, G. Lu and J. Zhu, Characterization of balls in terms of Bessel-potential integral equation, J. Differential Equations, 252 (2012), 1589.
[14] C. Jin and C. Li, Symmetry of solution to some systems of integral equations, Proc. Amer. Math. Soc., 134 (2006), 1661. doi: 10.1090/S0002-9939-05-08411-X.
[15] M. K. Kwong, Uniqueness of positive solutions of $\Delta u-u+u^p=0$ in $\mathbb{R}^n$, Arch. Ration. Mech. Anal., 105 (1989), 243. doi: 10.1007/BF00251502.
[16] N. Lam and G. Lu, Existence of nontrivial solutions to polyharmonic equations with subcritical and critical exponential growth, Discrete Contin. Dyn. Syst., 32 (2012), 2187. doi: 10.3934/dcds.2012.32.2187.
[17] C. Li and J. Lim, The singularity analysis of solutions to some integral equations, Commun. Pure Appl. Anal., 6 (2007), 453. doi: 10.3934/cpaa.2007.6.453.
[18] C. Ma, W. Chen and C. Li, Regularity of solutions for an integral system of Wolff type, Adv. Math., 226 (2011), 2676.
[19] L. Ma and D. Chen, Radial symmetry and monotonicity for an integral equation, J. Math. Anal. Appl., 342 (2008), 943. doi: 10.1016/j.jmaa.2007.12.064.
[20] W. Reichel, Characterization of balls by Riesz-potentials, Ann. Mat., 188 (2009), 235. doi: 10.1007/s10231-008-0073-6.
[21] J. Serrin, A symmetry problem in potential theory, Arch. Ration. Mech. Anal., 43 (1971), 304.
[22] J. Smoller, Shock Waves and Reaction-Diffusion Equations, Grundlehren der Mathematischen Wissenschaften, (1983).
[23] E. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton Ser. Appl. Math., (1970).
CommonCrawl
Annals of Probability, Volume 26, Number 1 (January 1998), 316-345.
No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices
Z. D. Bai, Jack W. Silverstein
DOI: 10.1214/aop/1022855421
Abstract: Let $B_n = (1/N)T_n^{1/2}X_n X_n^* T_n^{1/2}$, where $X_n$ is $n \times N$ with i.i.d. complex standardized entries having finite fourth moment and $T_n^{1/2}$ is a Hermitian square root of the nonnegative definite Hermitian matrix $T_n$. It is known that, as $n \to \infty$, if $n/N$ converges to a positive number and the empirical distribution of the eigenvalues of $T_n$ converges to a proper probability distribution, then the empirical distribution of the eigenvalues of $B_n$ converges a.s. to a nonrandom limit. In this paper we prove that, under certain conditions on the eigenvalues of $T_n$, for any closed interval outside the support of the limit, with probability 1 there will be no eigenvalues in this interval for all $n$ sufficiently large.
Citation: Z. D. Bai, Jack W. Silverstein. "No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices." Ann. Probab. 26 (1) 316-345, January 1998. https://doi.org/10.1214/aop/1022855421
First available in Project Euclid: 31 May 2002
zbMATH: 0937.60017; MathSciNet: MR1617051
Digital Object Identifier: 10.1214/aop/1022855421
Subject classification: Primary: 60F15; Secondary: 62H99
Keywords: empirical distribution function of eigenvalues, random matrix, Stieltjes transform
Rights: Copyright © 1998 Institute of Mathematical Statistics
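For the simplest case $T_n = I$, the statement can be illustrated numerically: the limiting spectral distribution is then the Marchenko-Pastur law with support $[(1-\sqrt{c})^2, (1+\sqrt{c})^2]$ for $c = n/N \le 1$, and the extreme eigenvalues of $B_n$ stay near that support. The sketch below is only an illustration of this special case; the matrix sizes, the use of real Gaussian entries and the random seed are arbitrary choices, not taken from the paper.

```python
# Illustration for T_n = I: eigenvalues of B_n = (1/N) X X^T versus the
# Marchenko-Pastur support [(1 - sqrt(c))^2, (1 + sqrt(c))^2], c = n/N.
# Sizes and the use of real Gaussian entries are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, N = 400, 1600                     # c = n/N = 0.25
X = rng.standard_normal((n, N))      # i.i.d. standardized entries, finite 4th moment

B = (X @ X.T) / N
eigs = np.linalg.eigvalsh(B)

c = n / N
lower, upper = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(eigs.min(), eigs.max())        # empirically close to the edges below
print(lower, upper)                  # 0.25, 2.25
```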
CommonCrawl
Discrete & Continuous Dynamical Systems - B, June 2016, 21(4): 1259-1277. doi: 10.3934/dcdsb.2016.21.1259
Attractors and entropy bounds for a nonlinear RDEs with distributed delay in unbounded domains
Dalibor Pražák 1 and Jakub Slavík 2
1. Department of Mathematical Analysis, Charles University, Prague, Sokolovská 83, 186 75 Prague 8
2. Department of Mathematical Analysis, Charles University, Prague, Sokolovská 83, 186 75 Praha 8, Czech Republic
Received May 2015; Revised December 2015; Published March 2016
Abstract: A nonlinear reaction-diffusion problem with a general, both spatially and delay distributed reaction term is considered in an unbounded domain $\mathbb{R}^N$. The existence of a unique weak solution is proved. A locally compact attractor together with an entropy bound is also established.
Keywords: Kolmogorov's $\varepsilon$-entropy, unbounded domain, attractor, nonlinear reaction-diffusion equation, distributed delay.
Mathematics Subject Classification: Primary: 37L30; Secondary: 35B41, 35R09, 35K5.
Citation: Dalibor Pražák, Jakub Slavík. Attractors and entropy bounds for a nonlinear RDEs with distributed delay in unbounded domains. Discrete & Continuous Dynamical Systems - B, 2016, 21 (4): 1259-1277. doi: 10.3934/dcdsb.2016.21.1259
CommonCrawl
Mathematical Control & Related Fields, Early access May 2021. doi: 10.3934/mcrf.2021028
On the nonuniqueness and instability of solutions of tracking-type optimal control problems
Constantin Christof 1 and Dominik Hafemeyer 1
1. Department of Mathematics, Technische Universität München, Boltzmannstr. 3, 85748 Garching b. München, Germany
* Corresponding author: Constantin Christof
Received July 2020; Revised March 2021; Early access May 2021
Fund Project: This research was conducted within the International Research Training Group IGDK 1754, funded by the German Science Foundation (DFG) and the Austrian Science Fund (FWF) under project number 188264188/GRK1754.
Abstract: We study tracking-type optimal control problems that involve a non-affine, weak-to-weak continuous control-to-state mapping, a desired state $y_d$, and a desired control $u_d$. It is proved that such problems are always nonuniquely solvable for certain choices of the tuple $(y_d, u_d)$ and unstable in the sense that the set of solutions (interpreted as a multivalued function of $(y_d, u_d)$) does not admit a continuous selection.
Keywords: Optimal control, nonuniqueness, global solutions, nonlinear operators, Chebyshev set, tracking-type problem, well-posedness.
Mathematics Subject Classification: 49J27, 49K40, 49N45, 90C26.
Citation: Constantin Christof, Dominik Hafemeyer. On the nonuniqueness and instability of solutions of tracking-type optimal control problems. Mathematical Control & Related Fields. doi: 10.3934/mcrf.2021028
CommonCrawl
At what altitude would the air be too thin to carry a sound wave? A related question When does an aerobraking space craft create a sonic boom? has spawned a couple of answers, but so far no compelling answers. It is a common belief that in space there is no sound, How far do you have to be from Earth to be "in space"? does not have a definitive answer. But the Kármán line, at an altitude of 100 km (62 mi) above sea level is often given, it seems unlike that this is the exact altitude that air becomes too thin to carry a sound wave. The top voted answer at What is the minimum pressure of a medium for which a sound wave can exist? seems to imply there is someplace where sound would fall below human thresholds. But does not relate it to an elevation. thermodynamics acoustics atmospheric-science plasma-physics James JenkinsJames Jenkins $\begingroup$ "In space, no-one can hear you scream" -- or rumble $\endgroup$ – Carl Witthoft Jul 7 '15 at 16:28 $\begingroup$ Are you asking about audible sound waves? $\endgroup$ – jinawee Jul 7 '15 at 17:03 $\begingroup$ Compression waves exist across interstellar distances in gas clouds that have extremely low densities. Indeed, the cosmic microwave background shows a modulation that was created by the "sound" of the big bang across the entire universe. Can humans "hear" that? With the proper instrumentation one can make it audible. Is there a difference between your ear and instrumentation? Not really. It's an arbitrary dividing line as far as "sound" is concerned. $\endgroup$ – CuriousOne Jul 7 '15 at 18:09 $\begingroup$ @Carl Compare the mean free path to the wavelength... $\endgroup$ – dmckee --- ex-moderator kitten Jul 7 '15 at 22:00 $\begingroup$ @CarlWitthoft: I remember that the mean free path in interstellar medium is something on the order of an astronomical unit, so the shortest possible compression waves are a large multiple of that, probably on the order of the size of the solar system or so. I could be a couple of orders of magnitude off. :-) $\endgroup$ – CuriousOne Jul 8 '15 at 0:32 First, I am going to provide a little background on equivalent pressures at different altitudes from Earth's surface. Layers of Earth's Atmosphere Troposphere to Mesosphere At sea level, the neutral atmosphere of Earth has a pressure of ~$10^{5}$ Pa (or ~1000 mbars). The below image from https://en.wikipedia.org/wiki/File:Comparison_US_standard_atmosphere_1962.svg shows the broad range of temperatures/pressures of Earth's atmosphere. The region where the atmosphere transitions from mostly neutral to mostly ionized gas (called a plasma) is called the ionosphere. The altitudes defining this region vary (due to solar variability), but are generally defined as ranging from ~60-1000 km. The free electron number density in the ionosphere varies greatly from ~$10^{3} - 10^{6}$ # $cm^{-3}$ (or number of particles per cubic centimeter). The temperature varies from few 100 K to ~1500 K. Thus, if we treated it like an ideal gas the thermal pressure of the charged particles would range from a few $10^{-12}$ Pa to few $10^{-8}$ Pa. Thus, the ratio of sea level pressure to the plasma constituents would be ~$10^{13} - 10^{17}$. Plasmasphere The region immediately surrounding the ionosphere is called the plasmasphere, which can extend to altitudes as low as a few $R_{E}$ up to ~6 $R_{E}$. The density ranges from several 100 # $cm^{-3}$ down to ~10 # $cm^{-3}$ and temperatures vary greatly, from ~6000-35,000 K. 
Again, these ranges correspond to thermal pressures of $10^{-13}$ Pa to few $10^{-11}$ Pa. Outer Magnetosphere The "best" vacuum that we can easily access is the Earth's outer magnetosphere, which has a density ranging from ~0.01-1.0 particles per cubic centimeter. The temperatures in the outer magnetosphere can vary greatly from ~$10^{5}$ K to greater than $10^{9}$ K (i.e., if one converts radiation belt particle energies, e.g., 100s of keV, to an equivalent temperature). Thus, the range of equivalent ideal gas thermal pressures would be few $10^{-14}$ Pa to a few $10^{-8}$ Pa. The solar wind - the supersonic flow of plasma from the sun's upper atmosphere - has densities and temperatures ranging from ~0.5-50 # $cm^{-3}$ and ~$10^{4}$ K up to ~$10^{6}$ K, respectively (for references, see https://physics.stackexchange.com/a/179057/59023). Thus, the range of equivalent ideal gas thermal pressures would be few ~$10^{-14} - 10^{-10}$ Pa. For all practical purposes, there are no regions of space completely devoid of some kind of sound. Long/Detailed Answer Interestingly, one can have the traditional sound wave (i.e., a longitudinal oscillation mediated by gas particle collisions) propagate into the upper atmosphere and even into the ionosphere. The point where such a sound wave would experience strong damping is where the collisional mean free path becomes too large to support the oscillations, i.e., this would occur when the average time between collisions becomes comparable to the wave frequency. Thus, the oscillations would have no restoring force and would damp out (for an electromagnetic analogy, see evanescent waves). It is a common belief that in space there is no sound...? No, there are sound waves that start in space can propagate in almost zero pressures, well, less than $10^{-14}$ Pa is as close to vacuum as one needs since the best vacuums we can produce in a lab is ~$10^{-11}$ Pa. There are multiple sound waves in space, including ion acoustic waves and magnetosonic waves. Ion acoustic waves have been seen all the way out at Saturn, where the solar wind density and temperature can be as low as ~0.1 # $cm^{-3}$ and less than ~$10^{4}$ K, or thermal pressures below $10^{-14}$ Pa (note that the ram or dynamic pressure is generally ~2-4 orders of magnitude higher owing to the high speed of the solar wind). These modes have been observed as far out as Neptune, throughout the solar wind, and as in close as ~0.3 AU. There is no reason to expect that such modes would not exist in the interstellar medium, where densities and temperatures can be as low as ~0.1 # $cm^{-3}$ and ~$10^{3}$ K, corresponding to $10^{-15}$ Pa. The intra galaxy cluster medium is even more tenuous but much hotter, with densities and temperatures as low as ~$10^{-4}$ # $cm^{-3}$ and ~$10^{7}$ K, corresponding to $10^{-14}$ Pa (e.g., see arXiv e-print 1406.4410). Again, the ubiquity of ion acoustic waves in the interplanetary medium suggests we should expect these in nearly all regions of space. Update/Edit Let me ask the question in a slightly different manner. At what altitude would I no longer be able to hear a 100 dB source (ignoring suffication)? The intensity of sound decreases as $I\left( r \right) \propto r^{-2}$ while sound pressure decreases as $P\left( r \right) \propto r^{-1}$. The hearing threshold is a function of frequency, because the human ear does not have a flat frequency response, but it is generally accepted as being ~20 $\mu$Pa for 1 atmosphere and 25$^{\circ} C$ at 1000 Hz. 
The sound pressure level (measured in dB) is given by: $$ L_{p}\left( r \right) = 20 \ \log_{10} \left( \frac{ P\left( r \right) }{ P_{o} } \right) $$ where we set $P_{o}$ ~ 20 $\mu$Pa. Then a 100 dB source corresponds to ~2 Pa at the source. This would drop to $P_{o}$ at a distance of ~$10^{5}$ m, ignoring any acoustic impedance or losses and assuming the pressure and temperature are the same as the reference parameters for $P_{o}$. The reference sound intensity, $I_{o}$, depends upon the characteristic acoustic impedance, $z_{o}$, as $I_{o} = P_{o}^{2}/z_{o}$. We know that $z_{o} = \rho \ C_{s}$, where $\rho$ is the mass density and $C_{s}$ is the speed of sound. We can model $\rho = \rho \left( h \right)$ using a known atmospheric scale height and an exponential decay (which reproduces the orange line the figure above) and take a set of values from the blue in the figure above for $C_{s}$ (see table below). Then we find that $z_{o}$ ranges from ~0.003-416 Pa s/m from 0-100 km altitude. If we use the human hearing threshold for $P_{o}$, then $I_{o}$ ranges from ~$10^{-13} - 10^{-7}$ W $m^{2}$. Altitude [km] | Cs [m/s] | Density [kg/m^3] | z_o [Pa s/m] | I_o [W/m^2] 10 | 300.0 | 3.736e-01 | 1.121e+02 | 3.569e-12 60 | 320.0 | 1.042e-03 | 3.334e-01 | 1.200e-09 100 | 287.0 | 9.420e-06 | 2.703e-03 | 1.480e-07 Since $I_{o}$ increases as we increase altitude, then the intensity of our source would have to increase as well to maintain its initial $L_{o}$ = 100 dB level (i.e., $I_{src}\left( h \right) = I_{o}\left( h \right) 10^{L_{o}/10}$). The intensity at the source, $I_{src}$, then ranges from ~0.01-1500 W $m^{2}$. Let's assume we use the same intensity at sea level and bring the speaker up in altitude, then the intensity level at the source drops with increasing altitude as: $$ L_{i,src}\left( h \right) = 10 \ \log_{10} \left( \frac{ I_{src}\left( 0 \right) }{ I_{o}\left( h \right) } \right) $$ Then $L_{i,src}$ varies from 100 dB at sea level to ~94 dB by 10 km, ~79 dB by 50 km, and ~48 dB by 100 km. We estimate the intensity level at a given distance away from the source as: $$ L_{r}\left( h, r \right) = L_{i,src}\left( h \right) + 20 \ \log_{10} \left( \frac{ 1 }{ r } \right) $$ where we have used 1 m as a normalizing length defining at the source. In the following, we examine the decrease in intensity levels with distance at three different altitudes, 10 km, 50 km, and 100 km. If we move ~3 m away from the source, the intensity levels drop to ~85 dB, ~65 dB, and ~39 dB for, respectively. At ~10 m away, these intensities drop to ~74 dB, ~54 dB, and ~28 dB, respectively. At ~50 m away, these intensities drop to ~60 dB, ~40 dB, and ~14 dB, respectively. And at ~150 m away, the intensities drop to ~51 dB, ~31 dB, and ~5 dB, respectively. For comparison at sea level, the intensities would be ~90 dB, ~66 dB, and ~56 dB at distances of ~3 m, ~50 m, and ~150 m, respectively. Thus, at 100 km altitude one need only move a little over 100 m away from the source before the intensity level drops below the hearing threshold (i.e., ~5 dB for a 20 year old male at 1000 Hz). The model only went to 100 km but even so, our source would become difficult to hear if we moved a little more than ~100 m from it. Given that the density decreases exponentially with an e-folding distance of only ~8.5 km (pressure does so similarly as well), if we extrapolate our estimates for $L_{i,src}\left( h \right)$ then the value drops to ~10 dB by ~177 km. 
So by ~200 km a human probably could not hear a source ~1 m away that produced a 100 dB, 1000 Hz intensity level at sea level. honeste_viverehoneste_vivere $\begingroup$ I think you should address the commonly-understood meaning of "no sound in space", which is that a person in space (presumably in a spacesuit or spaceship, so they can live) isn't going to hear sounds produced by anything they're not in contact with — the sound doesn't travel through the intervening vacuum. Probably acoustic impedance is the right way to go about it. So to cast OP's question more concretely: "how high up do I have to go before a loudspeaker playing a 100dBA tone, 1 meter away, becomes inaudible?" That should be answerable, and (I'd say) somewhere between 10 and 1000 km. $\endgroup$ – hobbs Jul 3 '16 at 22:08 $\begingroup$ @hobbs - Done... $\endgroup$ – honeste_vivere Jul 4 '16 at 20:45 $\begingroup$ Shock waves produced by collisions of gas clouds within a galaxy or between colliding galaxies can sometimes lead to regions of star formation. I'm just curious, is that an example of acoustic waves at low density in space as well? $\endgroup$ – uhoh Oct 5 '16 at 0:43 $\begingroup$ @uhoh - That is often assumed, yes. Many of those arguments are based upon the other assumption that if the temperature of the gas is below ~13.6 eV (i.e., first ionization energy of hydrogen), then is must be neutral. However, plasmas can exist well below T ~ 13 eV in an ionized state because their collective behavior allows them to remain quasi-neutral. You may be correct, but double-check why the neutral-gas assumption is being made to make sure. $\endgroup$ – honeste_vivere Oct 5 '16 at 14:34 $\begingroup$ I've quoted you in this answer; feel free to adjust or to post another answer, thanks! $\endgroup$ – uhoh Sep 10 '20 at 2:23 Not the answer you're looking for? Browse other questions tagged thermodynamics acoustics atmospheric-science plasma-physics or ask your own question. What is the speed of sound in space? How can a black hole produce sound? What is the minimum pressure of a medium for which a sound wave can exist? Is it possible to noise cancel a sonic boom? What is the loudest possible sound? How much power and energy is (actually) in a 230 dB "click" from a whale? Why is sound attenuation greater in dry air than humid air? What is the max frequency of sound in a given medium? Doppler shift and change in intensity of a sound wave Atmospheric pressure And heat convection Do asteroids create sonic boom while entering the atmosphere? Is it possible to create an audible sound source in mid air by intersecting ultrasonic sound beams? Is work done by sound wave on air particles? Do we hear sounds differently on the highest mountains? Using sinusoids to represent sound waves More detail on why pressure increases at the bottom of a column of gas Why does the speed of sound relate to temperature in increasing altitude? Why is it a sonic "boom" and not a sonic "boooooooooooooooo…m"?
CommonCrawl
Focus on: All days Sep 22, 2019 Sep 23, 2019 Sep 24, 2019 Sep 25, 2019 Sep 26, 2019 Sep 27, 2019 All sessions EIC Fundamental Symmetry Tests Introduction New Applications Polarimetry Polarized Gas Targets Polarized Neutrons Polarized Sources Retrospective Solid Polarized Targets Hide Contributions Sep 22, 2019, 3:00 PM → Sep 27, 2019, 4:00 PM US/Eastern The Workshop on Polarized Sources,Targets, and Polarimetry has been a tradition for more than 20 years, moving between Europe, USA, and Asia. The 18th International Workshop on Polarized Sources, Targets, and Polarimetry (PSTP 2019) will take place at Knoxville, Tennessee. The workshop addresses technical challenges and accomplishments related to polarized gas/solid targets, polarized electron/positron/ion/neutron sources, polarimetry, and applications of polarized techniques. Due to the extraordinary circumstances that many are experiencing, we will be accepting late submissions. If you would like to submit a contribution, but do not believe that you will be able to complete it before the end of March, please inform the chairperson. Sun, Sep 22 Mon, Sep 23 Tue, Sep 24 Thu, Sep 26 Fri, Sep 27 Hotel Check In Introduction: Welcome and Introduction Convener: Josh Pierce Greeting/Logistics Speaker: Josh Pierce Welcome_logistics.pdf Welcome_logistics.pptx Welcome_part2.pdf Welcome_part2.pptx Convener: Christopher Keith (Jefferson Lab) DonAndDonal_lite.pptx PSTP2019_ckeith.pdf Solid Polarized Targets 1 K refrigerator for the CLAS12 Polarized Target: Design, Construction, and First Results A dynamically polarized target of protons and deuterons in irradiated NH3 and ND3 will be employed with the CLAS12 detector system to explore the spin structure of the nucleon in Hall B at Jefferson Lab. This target will feature a versatile horizontal 1 K refrigerator that has been constructed by a collaboration composed of Christopher Newport University, Old Dominion University, the University of Virginia, and the JLab Target Group. A description of the challenges involved with designing the target for the CLAS12 experiments and the collaboration's solutions to them will be presented. These include a modular and compact design of the 1 K refrigerator and its ancillary equipment, as well as a novel mechanism for loading the target samples. Initial test results of the system will also be included. Speaker: Mr James Brock (Thomas Jefferson National Accelerator Facility) CLAS12 PolTar Material Loading/Unloading PSTP 1 K refrigerator for the CLAS12 Polarized Target.pdf Magnetic Field Requirements for the CLAS12 Polarized Target Upcoming spin structure experiments in Hall B at Jefferson Lab will employ a new dynamically polarized target inside the CLAS12 detector system. Protons and deuterons in irradiated NH3 and ND3 will be polarized at 1 K using the 5 T field of the CLAS12 solenoidal magnet. For optimum polarization, the field uniformity requirements are around 100 ppm over the volume of the 12 cm3 target sample. I will present field map results for the solenoid, and discuss methods to improve the uniformity utilizing thin superconducting shim coils integrated within the 1 K refrigerator. I will also demonstrate that this method to adjust the 5 T field also enables the simultaneous opposite polarization of two adjacent target cells. 
Speaker: Victoria Lagerquist (Old Dominion University, Jefferson Lab) Lagerquist PSTP 2019.pdf NMR Measurements for JLab's Solid Polarized Targets Solid polarized targets rely on continuous-wave Nuclear Magnetic Resonance techniques to provide measurements of the enhanced polarization provided under Dynamic Nuclear Polarization. Upcoming polarized target experiments in Jefferson Lab's Hall B present challenging conditions which would benefit from improvements to traditional NMR techniques. For decades, JLab has relied upon Liverpool Q-meters for NMR measurements, but these are aging and no longer produced. The polarized target group at Bochum has successfully produced replacement Q-meters with modern components, and we are following their example, exploring new designs for Q-meter systems. We are currently testing a prototype of our own Q-meter system, which hews closely to the designs of the Liverpool and Bochum systems with a few incremental improvements. At the same time, we are pursuing the possibility of an all-digital Q-meter system, eschewing an analog mixer for fast digitization and FPGA analysis. We will discuss the challenges presented by the new Hall B target, lay out our changes to the traditional Q-meter, and show results of initial tests of our designs. Speaker: James Maxwell (Jefferson Lab) maxwell_pstp_2019.pdf La-139 polarized target study for NOPTREX In the $^{139}$La(n,$\gamma$)$^{140}$La reaction, a T-violating asymmetry is expected to be enhanced by about 6 orders of magnitude[1]. NOPTREX (Neutron Optics for Parity and Time Reversal EXperiment) collaboration is planning to search for the T-violation in this reaction, where a polarized target of $^{139}$La nuclear spin is indispensable. Basically, we are keeping two choices, Brute Force (BF) and Dynamic Nuclear Polarization (DNP) for realizing the polarized target, but each of them has some difficulties. The BF method needs an advanced and complex cryogenic system. Achieving the 50% polarization in $^{139}$La, for example, requires the high magnetic field of 17 T and the low temperature of 10 mK. The DNP do not need such hard cryogenic system, but strongly depends on characteristics of the target materials. The single crystal of Nd3+:LaAlO3 is promising target material because the crystal structure enables us to suppress the quadrupole relaxation and to give a sufficiently narrow linewidth in an ESR spectrum, which are essential for performing the DNP [2]. Since 2018, we have started to study the physical properties of LaAlO$_3$ crystal in Tohoku University and to prepare cryogenetic system for DNP study in RCNP. In this presentation, we will report the current status of $^{139}$La polarized target study for the T-violation search. [1] T. Okudaira, et. al., Phys. Rev. C 97, 034622 (2018). [2] Y. Takahashi, H.M. Shimizu, and T. Yabuzaki, Nucl. Instrum. Methods Phys. Res. A Vol. 336 Issue 3, p.p. 583-586 (1993). Speaker: Mr Kohei Ishizaki (Nagoya Univ.) PSTP2019_kishizaki.pdf Fundamental Symmetry Tests Convener: Leah Broussard (Oak Ridge National Laboratory) Search for Electric Dipole Moments of charged particles with polarised beams in storage rings The Electric Dipole Moment (EDM) of elementary particles, including hadrons, is considered as one of the most powerful tools to study CP-violation beyond the Standard Model. Such CP-violating mechanisms are searched for to explain the dominance of matter over anti-matter in our universe. 
Up to now EDM experiments concentrated on neutral systems, namely neutron, atoms and molecules. Storage rings off􏰀er the possibility to measure EDMs of charged particles by observing the in􏰁fluence of the EDM on the spin motion. A dedicated program is underway at the COSY storage ring to develop the required experimental and technical tools. A step-wise approach starting with a proof-of-principle experiment at the existing storage ring Cooler Synchrotron COSY at Forschungszentrum Jülich, followed by an electrostatic prototype ring allowing for a simultaneous operation of counter circulating beams in order to cancel systematic e􏰀ffects, to the design of a dedicated 500 m circumference storage ring will be presented. Speaker: Prof. Paolo Lenisa (University of Ferrara and INFN - Ferrara (Italy)) PSTP19_LENISA.pdf A Search for Axion-like Particles with a Horizontally Polarized Beam In a Storage Ring A new method has been demonstrated using the storage ring COSY to search for an axion-like particle by scanning for a resonance in the horizontal-plane rotation of the deuteron beam polarization. If an electric dipole moment (EDM) is present on the nucleus, the radial electric field that exists in the particle frame will create a rotation of the polarization out of the horizontal plane and into the vertical direction. If that EDM oscillates due to the presence of an axion-like field in synchronization with the rotation of the polarization, then the vertical rotation will accumulate near the resonance, producing a measurable vertical polarization component. In the spring of 2019, we used a 0.97-GeV/c vector-polarized deuteron beam to successfully demonstrate the procedure for the search. The phase of the oscillating EDM with respect to the rotation of the polarized beam in unknown. In order to be sensitive to both cosine and sine components of the oscillation, we prepared four bunches for the ring with different polarization directions. Starting with vertical polarization following injection into the ring, an RF solenoid operating on the $(1 + G\gamma )$ harmonic of the beam revolution frequency was used to rotate the polarization into the horizontal plane. This yielded a polarization pattern in which two of the bunches had polarizations that were nearly orthogonal. By looking separately for signals on both bunches, a signal would be found for any value of the axion phase. Beam polarizations were measured using the WASA Forward Detector. In order to improve the horizontal polarization lifetime, the beam was electron cooled as well as bunched. Once the orbit was established with minimal steering corrections, the ring sextupole magnets were adjusted to maximize the horizontal polarization lifetime. All scans were made with lifetimes in excess of 500 s. The sensitivity to an axion was tested and calibrated using the magnetic field of a horizontally mounted RF Wien filter to create vertical polarization jumps during a frequency scan of COSY. In a series of scans spanning a 1.5% change in the neighborhood of 120 kHz, no signals were seen that did not fit the statistical distribution that arises from event counting data collection. In this case, the sensitivity to an oscillating EDM approached 10$^{−22}$ e$\cdot$cm. Speaker: Dr Edward Stephenson (Indiana University) AxionSearch-Stephenson.pdf Resonant Axion Searches with \textsuperscript{3}He Axions are CP-odd scalar particles appearing in many extensions of the Standard Model. 
In particular, the Peccei-Quinn axion can explain the smallness of the neutron electric dipole moment and is also a promising Dark Matter candidate. Axions also generate macroscopic P-odd and T-odd spin-dependent interactions which can be sought in sensitive laboratory experiments. As the axion's coupling to ordinary matter is extraordinarily weak, most searches for its effect have looked very carefully for the direct evidence of cosmologicial axions. This talk will instead introduce a set of experiments that aim to measure fresh, locally sourced Axions by using a periodically modulated mass to drive precession in a hyperpolarized gas sample. The Axion Resonant InterAction DetectioN Experiment (ARIADNE) is designed to search for axion-mediated spin-dependent interactions between nuclei at sub-millimeter ranges. The experiment involves a rotating tungsten mass to generate the axion field, and a dense ensemble of laser-polarized \textsuperscript{3}He nuclei surrounded by a superconducting shield layer to detect the axion field by NMR. This novel technique will allow measurement of axions in the $100\,\mu\mathrm{eV}$ to $10\,\mathrm{meV}$ mass range, filling the remaining gaps in the traditional ``axion window.'' A preliminary version of the experiment with less sensitivity but zero cryogenics is being developed for the magnetically shielded room in Physikalisch--Technische Bundesanstalt (PTB) in Berlin. Like ARIADNE, this apparatus will use a mass rotating at the sample's Larmour frequency in an attempt to observe Axions via a resonant enhancement. In this talk, I will first introduce the measurement technique before discussing the ongoing development of these experiments. Speaker: Austin Reid (Indiana University) Spin-Dependent Sub-Millimeter Fifth Force Search Using Ferrimagnetic Test Masses Macroscopic forces of nature beyond gravity and electromagnetism arise in many frameworks attempting to unify General Relativity and the Standard Model. We describe an experimental search for spin-dependent fifth forces in the sub-millimeter range. The experiment uses planar mechanical oscillators as test masses, which have been augmented with polarized rare earth iron garnets. These materials exhibit orbital compensation of the magnetism associated with the electron spins, substantially reducing the magnetic backgrounds. We describe the essential properties of the test masses and the progress of the apparatus developed to make optimal use of them, including a radiative cooling system, and discuss the experimental sensitivity. Speaker: Josh Long (Indiana University Bloomington) long_5th_forces_pstp_20190923.pptx Shielding Charged Particle Beams Momentum measurements in the forward direction at collider experiments are inherently difficult as the deflection of charged particles to be observed requires a magnetic field component that is perpendicular to the propagation direction of those particles. This, in turn, would jeopardize the quality of the colliding beam particles. To overcome this difficulty we propose a magnetic cloak that is passively shielding the beam particles from any transverse magnetic field component and furthermore, maintain the character of the magnetic field. This would allow introducing dipole magnets in the forward region of any experiment at a collider, for instance, the Electron-Ion Collider. 
We present a possible setup and show the design parameters, fabrication, and limitations of a magnetic field cloak Speaker: Klaus Dehmelt (Stony Brook University) ShieldingChargedBeams.pdf ShieldingChargedBeams.pptx Study of quantum spin correlations of relativistic electrons An experiment investigating quantum spin correlations of relativistic electrons will be presented. The project aims at the first measurement of the quantum spin correlation function (and the corresponding probabilities) for a pair of relativistic particles with mass. Theoretical studies revealed unexpected properties of entangled systems in the relativistic energy range, but in all of the correlation experiments performed until now the energy of the particles was insufficient to observe relativistic effects. This measurement will be the first attempt to verify the predictions of relativistic quantum mechanics in the domain of spin correlations. The measurement will be carried out on a pair of electrons in the final state of Møller scattering (electron pairs under study will originate from polarized electron beam scattering off atomic electrons of an unpolarized target). The measurement regards correlations of spin projections on chosen directions for the final state pair. The detector consists of two Mott polarimeters, in which the spins of both Møller electrons are measured simultaneously. Results and conclusions from the test measurements at Mainzer Mikrotron will be discussed. For testing purposes measurements were carried out with half of the setup, which can be used as a single polarimeter, allowing to measure the polarization of the beam, as well as the mean polarization of Møller electrons. Speaker: Michal Dragowski (University of Warsaw, Faculty of Physics) pstp1.pdf Polarization REsearch for Fusion Experiments and Reactors - The PREFER collaboration: purposes and present status The PREFER (Polarization REsearch for Fusion Experiments and Reactors) collaboration aims to address the know-hows in different fields and techniques to the challenging bet on fusion with polarized fuel. The efforts are focused on a variaty of duties and purposes, which are under the responsability of different institutes and research groups (presented here by the representative of the research center in the author list). Starting from still open questions in the fusion reaction physics, as an example the study of d-d spin dependent cross sections - Vasilyev, till the production of polarized ion acceleration by laser - Büscher, there are many connections between the research groups involved. The collaboration is facing also the production of nuclear polarized molecules, recombined by polarized atomic beam - Engels, and its condensation and transport - G. Ciullo/ M. Statera. Other chances of production are investigated: spin separation of molecules, in pMBS (polarized Molecule Beam Sources ) - Toporkov) and from photodissociation - Rakitizis. The status of the different fields under investigation and the connections between the topics and the different research groups will be provided. Speaker: Prof. G. Ciullo (INFN - Ferrara and Ferrara University -- 44122 Ferrara (Italy)) PREFER_PSTP19_Ciullo_rid_screen.pdf Polarized Sources Convener: Matthew Poelker (Jefferson Lab) Lifetime Measurements of GaAs photocathodes at the Upgraded Injector Test Facility at Jefferson Lab* Photocathodes based on GaAs can be used in photo-electron sources to supply spin-polarized, high-current electron beams for various applications. 
An activation, adding a thin surface layer, is needed to achieve negative electron affinity (NEA) for such cathodes. Typically, Cs is used in combination with an oxidant. Previous studies have suggested that the addition of Li to this process can increase the quantum efficiency (QE) of the cathode as well as the lifetime of the cathode surface layer, both crucial parameters for photo-electron source operation. Recently, first lifetime studies of bulk GaAs photocathodes activated with Cs, NF3, and Li have been conducted using the photo-electron gun of the Upgraded Injector Test Facility (UITF) at the Thomas Jefferson National Accelerator Facility (JLab), extracting beam currents of up to 100 µA. We will present the results of these measurements as well as planned measurements at the Institut für Kernphysik of Technische Universität Darmstadt. *Work supported by DFG (GRK 2128 "AccelencE"), BMBF (05H18RDRB1), and through the Helmholtz Graduate School for Hadron and Ion Research for FAIR. Speaker: Maximilian Herbert 2019_PSTP_Talk_Herbert.pdf 2019_PSTP_Talk_Herbert.pptx New Simulations for Ion Production and Back-Bombardment in GaAs Photo-guns GaAs-based DC high voltage photo-guns used at accelerators with extensive user programs must exhibit long photocathode operating lifetime. Achieving this goal represents a significant challenge for proposed high average current facilities that must operate at tens of milliamperes or more. Specifically, the operating lifetime is dominated by ion back-bombardment of the photocathode from ionized residual gas. While numerous experiments have been performed to characterize the operating lifetime under various conditions [1], detailed simulations of the ion back-bombardment mechanism that explains these experiments are lacking. Recently, a new user routine was implemented using the code General Particle Tracer (GPT) to simulate electron impact ionization of residual beamline gas and simultaneously track the incident electron, secondary electron, and the newly formed ion. This new routine was benchmarked against analytical calculations and then applied to experiments performed at the CEBAF injector at the Thomas Jefferson National Accelerator Facility. These simulations were performed using detailed 3D field maps produced with CST Microwave Studio describing the photo-gun electrostatics. In the first experiment, the electrically isolated anode of the CEBAF photo-gun was attached to a positive voltage power supply and biased to different voltages to study the effectiveness of limiting ions from entering the cathode-anode gap. In the second experiment, the size of the drive laser was varied in order to distribute the deleterious ions over a larger area of the photocathode (experimental results reported at PSTP17 in Daejeon, South Korea [2]). Discussion of these experiments and the application of this new GPT routine to model the experiments will be reported at the workshop. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177. [1] J. Grames, R. Suleiman, P. A. Adderley, J. Clark, J. Hansknecht, D. Machie, M. Poelker, and M. L. Stutzman, "Charge and fluence lifetime measurements of a dc high voltage GaAs photogun at high average current" Phys. Rev. ST Accel. Beams 14, 043501 (2011). [2] J. Grames, P. Adderley, J. Hansknecht, R. Kazimi, M. Poelker, D. Moser, M. Stutzman, R. Suleiman, S. 
Zhang "Milliampere beam studies using high polarization photocathodes at the CEBAF Photoinjector", in Proc. of 2017 International Workshop on Polarized Sources, Targets and Polarimeters (PSTP17), Oct 15 - 20, 2017, Daejeon, South Korea. Speaker: Joshua Yoskowitz (Old Dominion University) Cryogenic GaAs cathodes development for improved lifetimes GaAs photocathodes provide a source of highly polarized electron beams. To ensure reliable operation for high current applications it is necessary to increase charge lifetime. To improve the local vacuum condition around the cathode the use of a cryogenic sub-volume is proposed. It is expected that the cryogenic adsorption of reactive residual-gas molecules yield an enhanced lifetime of the negative-electron-affinity surface of the cathode. Additional cooling of the cathode itself allows a higher laser power to be deposited in the material, resulting in higher possible beam currents. Implementation and first measurements are planned to be conducted at the TU-Darmstadt Photo-CATCH test set-up to investigate the operational parameters of the new source. Supported in parts by BMBF (05H18RDRB1) and by DFG (RTG 2128 "Accelence"). Speaker: Tobias Eggert (TU Darmstadt) PSTP 2019 Presentation Tobias Eggert.pdf Design of a Compact Photon Source for Compton Scattering from Solid Polarized Targets In order to facilitate Real Compton scattering studies off of nucleons a novel, high intensity, compact photon source (CPS) was developed for Halls A and/or C at Jefferson Lab. This source will provide a factor of 30 figure of merit improvement over existing technology. While developed for use in nuclear/particle physics experiments, the CPS design can be adapted to other fields. The source design as well as the main technical challenges associated with it will be discussed. Speaker: Prof. Gabriel Niculescu gn_cps_tenn_2019.pdf Convener: Dave Gaskell Hall A Moller polarimeter: overview and recent polarization measurements In this talk I will give an overview of the Hall A Moller Polarimeter at Jefferson Lab and I will present recent results on the beam polarization measurements taken during the PREX II experiment. Speaker: Simona Malace polarimetry_workshop_19_malace.pdf Moller Polarimetry Simulation for Jefferson Lab Hall-A The recently developed Jefferson Lab Hall-A moller polarimeter Geant4-based simulation [MolPol] is a vital tool in understanding the analyzing power of the polarimeter for parity experiments ranging from PREX-II at 1 GeV to future 11 GeV experiments. I'll discuss the application's role in the development of optics solutions, understanding of e- transportation through the polarimeter and the calculation of the analyzing power of a given optics tune; additionally, MolPol application development, issues and future challenges will be touched upon. Results against recently taken data at 2.137 GeV and 0.95 GeV show that data is qualitatively consistent with MolPol expectations. Speaker: King Eric (Syracuse University) 2019_PTSP__JLab_Hall-A_Moller_Polarimetry_MolPol.pdf Development of a Dedicated Precision Polarimeter for Charged Particle EDM searches at COSY The international JEDI (Jülich Electric Dipole moment Investigation) collaboration is preparing a first-ever direct measurement of the deuteron Electric Dipole Moment (EDM), using the COSY storage ring at Forschungszentrum Jülich (Germany). 
A new polarimeter is required to detect the very slow and minuscule polarization change with time: starting in 2016, we have designed, built and commissioned a new modular type storage ring EDM polarimeter based on LYSO inorganic scintillator crystals. The polarimeter concept exploits LYSO modules (3x3x8 cm3), individually coupled to modern large area SiPM arrays which are operating at low voltage. The detector system and its vacuum system have radial symmetry and a thin exit window, making the polarimeter very efficient for online up-down and left-right asymmetry measurements. After several tests at the external COSY beam, we have recently installed the complete system in the COSY ring for use with internal beams. We are planning to commission the detector at various polarized beam conditions together with the WASA polarimeter. After that, it will be employed as the polarimeter for JEDI and possibly other users. In this talk, I will summarize the achievements of our group and discuss the latest results. Speaker: Irakli Keshelashvili (Forschungszentrum Juelich GmbH) i.keshelashvili_PSTP2019.pdf A Polarized Target for E1039 (SPINQUEST) at FERMILAB In this presentation, the design of the polarized target to be used in the SPINQUEST experiment at FermiLab will be discussed. The polarized target, consisting of either NH3 or ND3, is centered in a 5T magnet field and polarized by 140 GHz microwaves. A new NMR system will measure the degree of polarization. The evaporation of the helium surrounding the target, caused by the intense beam of protons necessary for the experiment, is accommodated by a 14,000 m3/hr Roots pumping system. NH3 polarizations above 90% have been achieved, comparable to what has been achieved in other systems. Speaker: Prof. Donald Crabb (University of Virginia) PSTP2019_DGC_talk.pdf PSTP2019_DGC_talk.ppt A New Target Polarization Measurement System for the Fermilab Polarized Drell-Yan SpinQuest Experiment The Liverpool Q-meters were developed in the late 70s and became a de facto industry standard for NMR-based polarization measurements of polarized solid targets. However, it is becoming increasingly more difficult to produce the required number of q-meter channels as the components have become obsolete. The Los Alamos National Laboratory (LANL) group has developed a new NMR-based polarization measuring system following the basic Liverpool design. The new Q-meter will have multiple improvements, such as remote tuning and compact design. These improvements present opportunities for achieving a higher figure of merit for experiments exploiting polarized solid targets by potentially increasing the accuracy of the polarization measurements. The new LANL Q-meter is intended to be used in Fermilab SpinQuest/E1039 experiment which is part of the continuing world-wide effort to shed light on the nucleon spin composition puzzle. The current status of this work will be presented. Speaker: Mikhail Yurov (Los Alamos National Laboratory) Spin-Polarization using Microwave Induced Dynamic Nuclear Polarization The UVA-LANL polarized target system consists of a 5T, split-coil, superconducting magnet and uses a 140 GHz microwave source to provide highly polarized protons and deuterons via dynamic nuclear polarization (DNP). The DNP process leverages the large discrepancy between the electron and proton magnetic moments, along with Zeeman splitting in the magnetic field, and spin-spin coupling to pump protons (deuterons) into a highly polarized state. 
For my presentation, I will give a brief overview of the the UVA-LANL target system with a focus on the microwave system, its role in the DNP process, and the challenges of providing consistently high average polarization in an experimental setting. Speaker: Joshua Hoskins (University of Virginia) pstp2019.pdf Thermal Analysis and Simulation of the Superconducting Magnet in the SpinQuest Experiment at Fermilab The SpinQuest experiment at Fermilab aims to measure the Sivers asymmetry for the $\bar{u}$ and $\bar{d}$ sea quarks in the nucleon using the Drell-Yan process. The experiment will use a 5 T magnet, a $^4$He evaporation fridge with a large pumping system and 140 GHz microwaves to produce transversely polarized NH$_3$ and ND$_3$ targets. The proposed beam intensity is 1.5 $\times$ 10$^{12}$ of 120 GeV proton/sec. A quench simulation in the superconducting magnet is performed to determine the maximum intensity of the proton beam before the magnet transition to the resistive state. In this presentation a GEANT based simulation used to calculate the heat deposited in the magnet is discussed and the subsequent cooling processes which are modeled using the COMSOL Multiphysics are presented. Speaker: Zulkaida Akbar (University of Virginia) ptsp_ZulkaidaAkbar_Final.pdf Polarized Neutrons Convener: Fankang Li (Oak Ridge National Lab) Developing Wide Angle Spherical Neutron Polarimetry at Oak Ridge National Laboratory Spherical Neutron Polarimetry (SNP) analyzes complex magnetic structures through distinguishing contributions from nuclear-magnetic interference and chiral structure in addition to nuclear magnetic scattering separation. This analysis is achieved through determining all components in the polarization transfer process. Currently, wide-angle SNP is being realized at Oak Ridge National Laboratory (ORNL) for multiple beamlines including: the polarized triple-axis spectrometer (HB-1) and general-purpose small angle neutron scattering instrument (CG-2) at the High Flux Isotope Reactor (HFIR), as well as the hybrid spectrometer (HYSPEC) at the Spallation Neutron Source (SNS). The SNP device consists of three units: incoming/outgoing neutron polarization control, sample environment and a zero-field chamber. The incoming/outgoing neutron polarization regions use high-T_c superconducting YBCO films and mu-metal to achieve full control of neutron polarization. The device was transported and tested at the University of Missouri research reactor (MURR). We report the test results and provide a new method for placing the device on a time-of-flight beamline. Speaker: Nicolas Silva (ORAU) New Methods for Understanding Precision in Spherical Neutron Polarimetry Spherical neutron polarimetry (SNP) is a powerful neutron scattering technique that can unambiguously determine complex types of magnetic order in materials such as chiral antiferromagnets, multiferroics, superconductors, and magnetoelectrics. Currently, four polarimeter designs exist worldwide and several instruments operate year round that utilize them. Of the three existing SNP designs, two have been developed for wide-angle, single crystal diffraction and they are known as CRYOPAD and MuPAD. The most recent effort in SNP development has been in North America producing CryoCUP and SANPA geared for small-angle neutron scattering applications. As the number of SNP instruments grows, the demand for high precision SNP measurements increases with it. Recently a new strategy for precision calibration of and SNP apparatus has be proposed. 
Preliminary results published this year in the development of SANPA at the NIST Center for Neutron Research suggest that a new high precision calibration method will yield a deeper understanding of neutron polarization manipulation in the laboratory environment and provide an improved method for comparing different SNP instruments. Here we will discuss an international effort to begin pushing the precision limit with existing SNP designs. Speaker: Jacob Tosado (University of Maryland) Developing Neutron Polarimetry for time-of-flight Neutron Polarized neutron experiment probes magnetic structures and distinguish contribution from incoherent scattering, nuclear scattering and magnetic scattering1. While polarized neutron measurement are widely used among current neutron beamlines, the applications are usually limited to depolarization measurement among a single direction. More complexed polarimetry technique such as xyz polarimetry2 or spherical neutron polarimetry3 limit their application to single wavelength beamline. This limitation is caused by the neutron polarization manipulation process of polarimetry measurement, which involves controlled neutron precession. To expand the application of Neutron Polarimetry onto time-of-flight neutron, it is necessary to explore new experiment method and develop new magnetic field restraining equipment4. In this presentation, we introduce our research plan and effort to explore the combination of neutron polarimetry and time-of-flight neutron. The research is performed through designing new polarimetry instrument using high-Tc superconductor and developing new algorithm of polarimetry analysis through time-of-flight neutron. 1 M. Blume, Phys Rev 130 (5), 1670 (1963). 2 O. Scharpf and H. Capellmann, Phys Status Solidi A 135 (2), 359 (1993). 3 V. Hutanu, W. Luberstetter, E. Bourgeat-Lami, M. Meven, A. Sazonov, A. Steffen, G. Heger, G. Roth, and E. Lelievre-Berna, Rev Sci Instrum 87 (10) (2016). 4 T. Wang, S. R. Parnell, W. A. Hamilton, F. Li, A. L. Washington, D. V. Baxter, and R. Pynn, Rev Sci Instrum 87 (3) (2016). Speaker: Tianhao Wang (China Spallation Neutron Source) PSTP SNP presentation.pptx Polarimetry of NIST NG-C Neutron Beam for the aCORN Experiment The aCORN experiment measured the electron-antineutrino angular correlation coefficient (the 'a' coefficient) in free neutron decay on the NG-C neutron beam at the National Institute of Standards and Technology (NIST). Though the NIST neutron beams are expected to be unpolarized, an earlier run of the experiment found a small polarization on the NG-6 beamline. The aCORN measurement is quite sensitive to neutron polarization, and the beam polarization was used as a blind on the aCORN results. To unveil the blind, the beam polarization was measured using a 3He cell polarized by spin-exchange optical pumping (SEOP). After a brief overview of the aCORN experiment, the neutron polarimetry system will be discussed. Speaker: Gordon Jones (Hamilton College) aCORN_Pn_OakRidge19_9_20_19.pdf Polarized neutron diffraction at DEMAND (HB-3A) Understanding the interactions leading to magnetic quantum phenomena in a wide range of quantum materials is important for development of new quantum materials and future technologies. 
Quantifying these interactions and the resultant magnetic matter in quantum materials that exhibit exotic matter such as quantum spin liquids, topological insulators, and Weyl semimetals, is currently being limited by a range of challenges including the lack of sizable crystals, limited sample environment conditions, and the ability to disentangle the intrinsic quantum phenomena versus effects from defects and site-disorder. Polarized neutron diffraction could play an important role in exploring magnetic matter and interactions. We recently upgraded the HB-3A four-circle diffractometer by installing a large area position sensitive detector and integrated a set of extreme sample environment equipment for quantum material study purpose and renamed the HB-3A as DEMAND (Dimensional Extreme sample environment Magnetic Neutron Diffractometer). In this presentation, we will focus on the polarized neutron diffraction capabilities recently developed at DEMAND, which used both the S-bender supermirror and the He-3 polarizers that are still under development. The research was supported by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, Early Career Research Program Award KC0402010, under Contract DE-AC05-00OR22725 and the U.S. DOE, Office of Science User Facility operated by the Oak Ridge National Laboratory. Speaker: Dr Huibo Cao (Oak Ridge National Laboratory) Development of Polarized 3He Neutron Spin Filters at Oak Ridge National Laboratory Nuclear spin-polarized 3He is widely used in many scientific areas. One of the applications is used as a neutron spin filter to polarize neutrons. Among all the neutron polarizing techniques, nuclear-spin-polarized 3He neutron spin filters have shown great flexibility and versatility because of its highly spin-dependent neutron absorption cross section, large neutron acceptance angle and working over a broad neutron wavelength band. At the Oak Ridge National Laboratory, 3He is routinely polarized via spin-exchange optical pumping (SEOP). Over the last several years, various SEOP-based systems have been developed to suit the needs of different neutron instruments at the Spallation Neutron Source (SNS) and the High Flux Isotope Reactor (HFIR). Particularly, great efforts have been made in developing in situ systems to address the problem of 3He polarization decay in the drop-in cell setup. Because most instruments at SNS and HFIR have very limited space, the in situ system development has been focused on having a compact form factor design tailored to each individual beamline while still achieving high 3He polarization. We report the development and optimization of polarized 3He spin filters at ORNL and present the latest results. Speaker: Dr Chenyang Jiang (Oak Ridge National Laboratory) PSTP2019.pptx Polarized 3He for neutron scattering applications 3He has a strong spin-dependent neutron absorption cross section and polarized 3He gas by optical pumping can be employed to effectively polarize and spin-analyze large area, widely divergent, and broadband neutron beams. The adiabatic fast passage (AFP) nuclear magnetic resonance (NMR) technique allows to invert the 3He nuclear polarization, hence the neutron polarization. 
These unique features of a 3He neutron spin filter (NSF) together with the recent advancements in the 3He NSF technique have made many new polarized neutron measurement capabilities possible, including thermal triple axis spectrometry, small-angle neutron scattering, wide-angle polarization analysis, and diffuse reflectometry. I will present an overview of the recent development of the 3He NSF technique at the NIST Center for Neutron Research. I will discuss a substantial effort towards improving the 3He polarization close to the theoretical limit (~95%) set by the anisotropic spin exchange optical pumping (SEOP). I will show how one can achieve a nearly lossless 3He polarization inversion to address the need of inverting the polarization in an order of minutes. I will present a recent development of a large fully-reblown "horseshoe"-shaped SEOP cell necessary for a neutron spin analyzer of a wide-angle polarization analysis capability with a simultaneous scattering angle coverage of 240 degrees. I will discuss the development of compact magnetostatic cavity devices that provide a homogeneous magnetic field to maintain the 3He polarization with field gradients on the order of 10-4 cm-1 for a cell volume of 1000 cm3. The polarized neutron measurement capabilities developed have played an important role to uncover the nature of magnetism in complex materials in condensed matter physics and materials science. Examples of such scientific applications will be presented. Speaker: Dr Wangchun Chen (NIST) Amplification of neutron spin rotation due to the spin-orbit interaction in silicon As a neutron scatters from a target nucleus, there is a small but measurable effect caused by the interaction of the neutron's magnetic dipole moment with that of the partially screened electric field of the nucleus. This spin-orbit interaction is typically referred to as Schwinger scattering [1] and induces a small rotation of the neutron's spin on the order of 10$^{-4}$ rad for Bragg diffraction from silicon [2]. In our experiment, neutrons undergo greater than 100 successive Bragg reflections from the walls of a slotted, perfect-silicon crystal to amplify the total spin rotation. A magnetic field is employed to insure constructive addition as the neutron undergoes this series of reflections. The strength of the spin-orbit interaction, which is directly proportional to the electric field, was determined by measuring the rotation of the neutron's spin-polarization vector. Two approaches were employed for the polarizing and analyzing the monochromatic cold neutron beam: supermirrors [3] and remotely polarized $^3$He-based neutron spin filters [4]. Whereas better statistics were obtained with supermirrors, this approach presented a systematic effect associated with a small transverse polarization component due to the need for adiabatic rotation of the neutron spin. At the expense of statistics, this systematic effect was eliminated with spin filters. Our measurements show good agreement with the expected variation of this rotation with the applied magnetic field, whereas the magnitude of the rotation is $\approx$40\% larger than expected. [1] J. Schwinger. Phys. Rev. 73, 407 (1948). [2] C.G. Shull. Phys. Rev. Lett. 10, 297 (1963). [3] F. Mezei, Commun. Phys. 1, 81–85 (1976). [4] W.C. Chen et al, Journal of Physics: Conf. Series 294, 012003 (2011). 
Speaker: Dr Thomas Gentile (NIST) mdm_pstp.pdf mdm_pstp.pptx Design and performance of a superconducting neutron resonance spin flipper Despite the challenges, neutron resonance spin echo (NRSE) still holds the promise to improve upon neutron spin echo (NSE) for measurement of slow dynamics in materials. In particular, the modulated intensity with zero effort (MIEZE) configuration allows for the measurement of depolarizing samples and is naturally suited for combination with small angle neutron scattering (SANS) as a result of there being no spin manipulations performed after the sample. The application of NRSE and MIEZE require a radio frequency (RF) spin flipper with high efficiency, and for use in time-of-flight instruments an adiabatic spin flip is desirable. We present a bootstrap RF neutron spin flipper using high temperature superconducting (HTS) technology, with adiabatic spin flipping capability. A frequency of 2MHz has been achieved, which would produce an effective field integral of 0.35 Tm for a meter of separation in a NRSE spectrometer at the current device specifications. In bootstrap mode, the self-cancellation of Larmor phase aberrations can be achieved by the appropriate selection of the polarity of the gradient coils and has been observed. Speaker: Ryan Dadisman (Oak Ridge National Laboratory) PSTP_2019.pptx Convener: Dr Christopher Keith (Jefferson Lab) Small-angle neutron scattering, reflectometry, and diffractometry using proton-polarized samples Scattering length neutrons for cold protons remarkably depends on relative direction of their spins. Thus, scattering pattern of polarized neutrons varies as a function of proton-polarization (P$_H$) of samples. This technique called spin-contrast-variation (SCV) enables us to determine detailed structure of composite materials from the P$_H$-dependent multiple scatterings. Since Stuhrmann et al. firstly demonstrated in 1989, the SCV technique has been applied to small-angle neutron scattering (SANS) measurements. We have also carried out SCV-SANS measurements of variety of samples in Japan Research Reactor (JRR-3) and Japan Proton Accelerator Research Complex (J-PARC). Recently, we newly applied the SCV technique to neutron reflectometry to study surface and interface structure of multi-layered thin-films. Now, we are developing SCV neutron powder diffractometry to determine polycrystalline structure. Speaker: takayuki kumada (JAEA) 190822PSTP2019_kumada2.pptx 190822PSTP2019_kumada2_submit.pdf Status and Outlook for the Dynamic Nuclear Polarization Program at ORNL The spin dependence of the neutron scattering cross section, especially for hydrogen, makes Dynamic Nuclear Polarization a powerful technique for improving neutron diffraction measurements, especially for biological and soft matter systems. Oak Ridge National Laboratory has demonstrated the application of this technique to Neutron Macromolecular Crystallography, with an eye towards DNP become a normal part of the user program for the Second Target Station that is being built at the SNS. The status of the current system will be discussed, as will the plans for future measurements using Small Angle Neutron Scattering, and the plans for DNP at the Second Target Station. hfir_lunch.pdf Experiments with Frozen Spin Target at MAMI The A2-Collaboration at the Mainz Microtron MAMI measures photon absorption cross sections using circularly and linearly polarized 'Bremsstrahlung' photons up to an energy of ~1.5GeV. 
We use a 4π detection system with the 'Crystal Ball' as the central part. We have developed a Frozen Spin Target in close collaboration with the polarized target group of the Joint Institute for Nuclear Research in Dubna, Russia. The $^3$He/$^4$He dilution refrigerator provides temperatures down to 25 mK. Both longitudinally and transversely polarized protons and deuterons are possible with the help of superconducting holding coils. In this talk our experimental program using Frozen Spin Targets will be described. Speaker: Andreas Thomas (Universität Mainz) Thomas-PSTP-Knoxville-2019.pdf Thomas-PSTP-Knoxville-2019.pptx Polarized target at COMPASS In 2018 the COMPASS experiment at CERN applied a transversely polarized solid proton target with a negative pion beam to measure the Sivers asymmetry using the Drell-Yan process. The target system consists of a 50 mK dilution refrigerator, a 2.5 T solenoid magnet, and two sets of 70 GHz microwave systems. Solid NH$_3$ beads of target material were contained in two 55 cm long target cells with a 4 cm diameter. The longitudinal polarization of the target is obtained by the DNP method. After polarizing for 1 day, the spin was oriented perpendicular to the beam direction by using a 0.6 T dipole magnet and the data was taken for 6 days. I will present the results of the proton polarization, the relaxation time during the data taking and the radiation damage of the target material. In 2021 the experiment will exchange the NH$_3$ target material for $^6$LiD as a polarized deuteron target in order to perform the SIDIS program with a muon beam. I will also present the status of the preparation. Speaker: Dr Norihiro Doshita doshita-COMPASSPT-cp.pdf ORNL Tour Social Dinner Callhoun's 400 Neyland Drive Bar Opens at 5:30 Dinner is at 6:00 Convener: Chenyang Jiang (ORNL) Polarized 3He neutron spin filter activities at the J-PARC spallation neutron source Polarized neutron scattering is a powerful tool to study topics ranging from materials to fundamental physics, and polarized $^3$He gas is now playing an important role at high flux neutron sources as neutron spin filters (NSF). At the spallation neutron source in J-PARC (Japan Proton Accelerator Research Complex), a polarized inelastic neutron spectrometer, POLANO, is now under commissioning, and an in-situ polarized $^3$He NSF is about to be installed in the instrument for the incident neutron beam polarization. The in-situ polarized $^3$He NSF was originally designed and built for POLANO, but it can be used in other instruments with minimal modification because of its compact size and versatility. We will present some techniques developed for the in-situ polarized $^3$He NSF as well as other measurements carried out with $^3$He NSF at J-PARC. Speaker: Takashi Ino (KEK) pstp2019_ino.pdf Developing Polarized neutron capability at the China Spallation Neutron Source Polarized neutron techniques have been widely developed and integrated into many instruments at major neutron sources around the world. At the China Spallation Neutron Source (CSNS), the polarized neutron and sample environment groups are developing a complete system to support the ongoing beamline design and construction efforts. In this presentation, we will introduce the current development of a polarized 3He system [1,2] at the CSNS.
The polarized 3He neutron spin filter system, both off-situ and in-situ, shall provide a reliable universal polarized neutron source to the current beamlines at CSNS, which include Small Angle Neutron Scattering (SANS), neutron reflectometry and powder diffraction. The spin filter will also aid in the advancement of future beamlines through testing and developing customized systems. We shall also give a brief introduction to the future plan of polarized neutron technologies developed based on the polarized 3He spin filter. [1] C. Y. Jiang, X. Tong, D. R. Brown, W. T. Lee, H. Ambaye, J. W. Craig, L. Crow, H. Culbertson, R. Goyette, M. K. Graves-Brook, M. E. Hagen, B. Kadron, V. Lauter, L. W. McCollum, J. L. Robertson, B. Winn, and A. E. Vandegrift, Physics Procedia 42, 191 (2013). [2] X. Tong, C. Y. Jiang, V. Lauter, H. Ambaye, D. Brown, L. Crow, T. R. Gentile, R. Goyette, W. T. Lee, A. Parizzi, and J. L. Robertson, Rev Sci Instrum 83 (7) (2012). Speaker: Zachary Buck (China Spallation Neutron Source) Buck_PSTP 2019.pptx High performance in-situ $^3$He polarizers for neutrons In-situ polarization can provide the highest performance over time for polarized $^3$He, where $^3$He polarizations in excess of 80% can be maintained. The polarization rates and magnitude achieved are aided by using high performance $^3$He cells produced entirely in house and techniques such as hybrid spin-exchange optical pumping and chipped volume Bragg grating narrowed laser diode array bars. For the magnetic environments we normally use so-called magic boxes, which give very high 3He lifetime performance and good isolation from external magnetic fields due to their geometry: they create a magnetic field transverse to the beam propagation direction, which also allows decoupling of the optical pumping light path from the orthogonal neutron beam path. As an example, recently for a user experiment on the ROT effect in $^{235}$U one of our polarizers gave a $^3$He polarization in excess of 81% for over 20 days with a polarization build-up time of 7 hours; this corresponded to a neutron polarization of 99.3% at 22% neutron transmission at 1.15 Å. Speaker: Earl Babcock (JCNS at the MLZ) NOPTREX: Polarized $^3$He Neutron Spin Filter and Polarized Xenon Pseudomagnetic Precession The Neutron OPtics Time Reversal Experiment (NOPTREX) collaboration is working towards a sensitive search for time reversal violation in polarized neutron transmission through polarized heavy nuclei. The experiment requires an intense, stable polarized neutron beam at the resonance energies of interest near 1 eV. We have recently constructed a $^3$He neutron spin filter at Indiana University which makes use of the very large spin dependent neutron absorption cross-section of $^3$He to polarize neutrons. We polarize $^3$He gas using spin-exchange optical pumping (SEOP). We have combined our laser optics and oven with a $\mu$-metal shielded solenoid and a $^3$He gas cell from ORNL to realize our polarizer. We also discuss a planned experiment to measure neutron pseudomagnetic precession in polarized xenon gas. $^{131}$Xe is one of the nuclei of interest for the NOPTREX test, and this measurement will help us determine a significant systematic error related to spin dependent components in polarized neutron-nucleus transmission and also measure the spin-dependent scattering amplitudes of both $^{129}$Xe and $^{131}$Xe for the first time.
This experiment will use a Neutron Spin Echo spectrometer to measure pseudomagnetic precession and an existing SEOP system to polarize both $^{129}$Xe and $^{131}$Xe. Speaker: Hao Lu (Indiana University Bloomington) PSTP2019HaoLu.pdf Polarized Gas Targets Polarized 3He Target for JLab 12 GeV Era Since most of the $^{3}He$ spin is carried by the unpaired neutron, polarized $^{3}He$ targets have been widely used as an effective polarized neutron target in electron scattering experiments to study the spin structure of the neutron. Over the past couple of decades, polarized $^{3}He$ targets have been successfully utilized in thirteen electron scattering experiments during the JLab 6 GeV era. At JLab, a technique called Spin-Exchange Optical Pumping (SEOP) is used to polarize the $^{3}He$ target. For the past decade, several developments, including a Rb-K hybrid alkali system and high-power narrow-linewidth diode lasers, were implemented in the polarized $^{3}He$ target in order to reach higher 3He polarization with world record luminosity. As JLab completed its 12 GeV upgrade in 2017, there are seven upcoming approved polarized $^{3}He$ target experiments. Upgrades of the target with a convection cell and Pulsed Nuclear Magnetic Resonance (PNMR) polarimetry were completed for the first upcoming 12 GeV era experiment $A_{1}^{n}$ (E12-06-110), in collaboration with $d_{2}^{n}$ (E12-06-121), in JLab Hall C. For the typical $10^{22}/cm^{2}$ high-density target used in this collaboration experiment, the maximum polarization reached over 50% under a $30 \mu A$ electron beam, and thus a luminosity of $10^{36}/cm^{2}/s$ will be achieved. Speaker: Mingyu Chen (University of Virginia) PSTP_Mingyu_Chen_2019.pdf LHC-spin: a polarized internal target for the LHC We discuss the application of an open storage cell as a gas target for the proposed LHC fixed-target experiment LHC-spin. The target provides a high areal density at minimum gas input, which may be polarized 1H, 2H, or 3He gas or heavy inert gases in a wide mass range. For the study of single-spin asymmetries in pp interactions, luminosities of nearly 10$^{33}$ cm$^{-2}$ s$^{-1}$ can be produced with existing techniques. Speaker: Paolo Lenisa (University of Ferrara and INFN - Ferrara (Italy)) LHCb_target_Lenisa.pptx Polarized Atomic Hydrogen Target at MESA One aim for the new electron accelerator MESA is to measure the weak mixing angle in electron-proton scattering to a precision of 0.14 %. The beam polarization significantly contributes to this measurement. The Møller polarimeter proposed by V. Luppov and E. Chudakov opens the way to reach a sufficiently accurate determination of polarization. At the moment the polarized atomic hydrogen target is under construction. The current R&D status is presented. Speaker: Dr Valery Tyukin (KPH, JGU Mainz, Germany) Presentation-PSTP-2019-09-17.pdf A Double scattering polarimeter for the P2 experiment at MESA The P2 Experiment at the new Mainz Energy-recovering Superconducting Accelerator (MESA) aims at measuring the weak mixing angle sin² θ_W at low Q² with high precision. Therefore the polarization of the incident electron beam has to be known with a very high accuracy (< 0.5%). Conventional Mott polarimeters are limited by uncertainties in the extrapolation and theoretical calculations required to determine $S_{\mathrm{eff}}$.
The Double Scattering (Mott-) Polarimeter (DSP) presented in this talk offers an alternative method for the calibration of the target foils by using double Mott scattering, allowing a high precision in the determination of the effective analyzing power of the scattering process by relying only on asymmetry measurements on two target foils. First results that were achieved with 100 keV beam energy, the injection energy of MESA, are presented. Speaker: Matthias Molitor (Mainz) IMPROVEMENTS TO THE LASER POLARIZATION MEASUREMENT INSIDE A FABRY-PEROT CAVITY The Compton polarimeter at Jefferson Lab's experimental Hall A provides a continuous, non-invasive measurement of electron beam polarization via electron-photon scattering. The electron beam passing through the polarimeter intercepts green laser light stored in a Fabry-Perot cavity. Scattered electrons are detected in an electron detector while backscattered photons are detected in a GSO crystal calorimeter. For an accurate beam polarization measurement, the laser polarization inside the Fabry-Perot cavity must be well known. We have performed studies to optimize the laser polarization inside the cavity and to know it precisely. I will discuss the methods and results from these investigations. Speakers: Sachinthani Premathilake (University Of Virginia), Dave Gaskell compton_laser_polarization.pdf Photon Detector for Compton Polarimetry in the PREX-II Experiment The Jefferson Lab Continuous Electron Beam Accelerator Facility's experimental Hall A employs a Compton polarimeter to measure incoming beam polarization for parity violating electron scattering experiments. The polarimeter operates by amplifying green laser light in a Fabry-Perot cavity, which then Compton scatters off the incoming electron beam. The scattered photons are then passed through a scintillating GSO (Gadolinium Oxyorthosilicate) crystal which creates light which registers in a photomultiplier tube. The polarization measurement is conducted by taking advantage of the helicity dependence of Compton scattering. By measuring the integrated signal from photons scattered while the beam is in different helicity states, we generate a differential asymmetry between these states, which then yields information about the electron beam's longitudinal polarization. Measuring the asymmetry requires a robust background subtraction of helicity-correlated asymmetry as well as identifying the Compton edge from observing spectra. This measurement aids in minimizing a key source of systematic error in many parity-violating electron scattering experiments. This talk will cover the integrating photon detector analysis as well as the recent results from the PREX-II experiment. Speaker: Adam Zec (University of Virginia) pstp_AJZ_2019.pdf Polarized Electron and Hadron Beams at JLEIC Synchrotron radiation plays an important role in the polarization dynamics of an electron beam in the energy range of the Jefferson Lab Electron-Ion Collider (JLEIC). High polarization of the JLEIC electron beam is achieved using two design features. The first is a continuous full-energy top-off of the stored electron beam by a highly-polarized beam from CEBAF. The second is an arrangement of vertical spin orientations alternately parallel and anti-parallel to the dipole fields in the two arcs of the figure-8 collider ring to neutralize the radiative Sokolov-Ternov effect on the electron polarization and compensate the energy dependence of the spin tune.
For hadrons, the JLEIC figure-8 ring design compensates the primary effect of the ring arcs on their spins, i.e. the ring is "transparent" to the spins. This allows for efficient preservation of the source polarization as well as maintenance, control and manipulation of the stored beam polarization of any hadrons, including deuterons, using only additionally introduced weak magnetic field integrals that do not perturb the beam dynamics. The criterion for polarization stability is that the spin rotation induced by the weak field integrals must be much greater than that caused by lattice imperfections and beam emittances. We present the results of theoretical and numerical studies of the electron and hadron polarization dynamics in JLEIC. Speaker: Vasiliy Morozov (Jefferson Lab, Newport News, VA, USA) Morozov_Pol_Ele_Had_JLEIC_26sep19.pdf Polarized Electron and Hadron Beams at eRHIC The electron-ion collider (EIC) eRHIC at BNL aims at a luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$ in collisions of polarized electron and polarized proton, deuteron, and 3He beams. We will present an overview of the proposed facility with an emphasis on generation and acceleration of the polarized beams and the expected polarization performance. Speaker: Christoph Montag (BNL) Polarized-beams-at eRHIC-Montag.pptx Development of a Polarized 3He++ Ion Source for the EIC The capability of accelerating a high-intensity polarized $^{3}$He ion beam would provide an effective polarized neutron beam for new high-energy QCD studies of nucleon structure. This development is essential for the future Electron Ion Collider, which could use a polarized $^{3}$He ion beam to probe the spin structure of the neutron. The proposed polarized $^{3}$He ion source is based on the Electron Beam Ion Source (EBIS) currently in operation at Brookhaven National Laboratory. $^{3}$He gas would be polarized within the 5 T field of the EBIS solenoid via Metastability Exchange Optical Pumping (MEOP) and then pulsed into the EBIS vacuum and drift tube system, where the $^{3}$He will be ionized by the 10 A electron beam. The goal of the polarized $^{3}$He ion source is to achieve $2.5 \times 10^{11}$ $^{3}$He$^{++}$/pulse at 70% polarization. An upgrade of the EBIS is currently underway. An absolute polarimeter and spin-rotator are being developed to measure the $^{3}$He ion polarization at 6 MeV after initial acceleration out of the EBIS. The source is being developed through collaboration between BNL and MIT. Speaker: Matthew Musgrave (MIT) PSTP_2019_Musgrave.pptx Compton polarimetry for EIC Compton polarimetry is the prime candidate for electron polarization measurement since it is virtually non-invasive and can reach a very good level of accuracy, with the best measurements at the 0.4% accuracy level. It is especially suitable at high energy since the analyzing power grows with electron energy. I will present the current Compton Polarimeters available at Jefferson Laboratory and also give a summary of the studies which were done for Compton Polarimetry in the context of the Electron Ion Collider. Speaker: Alexandre Camsonne (Jefferson Laboratory) Compton-EIC-pstp2019_final.pdf Proton Polarimetry at an Electron-Ion Collider The broad physics program at a future electron-ion collider is, in part, based on the availability of high electron and proton beam polarizations. Proton polarimetry will have to include an absolute normalization as well as fast measurements of the polarization of the bunched beam.
The required high luminosities in combination with short bunch spacing represent specific challenges. Additionally, the polarization direction has to be determined at the experimental interaction point, where spin rotators allow for a choice of transverse or longitudinal polarization. This talk will summarize methods that have been successfully employed for the high-energy proton beams at RHIC and discuss possible improvements to meet the demands of an electron-ion collider. Also, other options will be discussed that can be helpful in a lepton-proton collider. For example, new tools may be based on recent experimental confirmation of spin dependent neutron production in ultra-peripheral proton-ion collisions. Speaker: Oleg Eyser (Brookhaven National Lab.) 2019-09-26_HadronPolarimetry_EIC.pdf The prospects on hadronic polarimetry at eRHIC In the eRHIC high-luminosity collider proposal the number of ion bunches will be increased and the bunch spacing will be reduced from the current 107 ns (RHIC) to 34.8 ns at the first stage and finally to 8.7 ns. This beam timing structure will be a challenge for elastic event identification in the RHIC CNI (Coulomb Nuclear Interference) polarimeters, and an essential upgrade of the polarimeters is required. In this paper, we will discuss possible solutions to this problem. Speaker: Andrei Poblaguev (Brookhaven National Laboratory) PSTP2019_Poblaguev.pdf Techniques and Systematic Effects of a Critically Dressed System Critical dressing, the simultaneous dressing of two spin species to the same effective Larmor frequency, is a technique that can, in principle, improve the sensitivity to small frequency shifts. The benefits of spin dressing and thus critical dressing are achieved at the expense of generating a large (relative to the holding field $B_{0}$), homogeneous oscillating field. Due to inevitable imperfections of the fields generated and current supplied by the power supply, the benefits of spin dressing may be lost from the additional relaxation and noise generated by the dressing field imperfections. In this analysis the subjects of relaxation, frequency shifts, and phase noise are approached with simulations and theory. Analytical predictions are made from a new quasi-quantum model that includes gradients in the holding field $B_{0}=\omega _{0}/\gamma $ and dressing field $B_{1}=\omega _{1}/\gamma $, where $B_{1}$ is oscillating at frequency $\omega $, as well as noise generated by the power supply. It is found that irreversible DC gradient relaxation can be canceled by an AC spin dressing gradient in the Redfield regime. Furthermore, it is shown that there is no frequency shift linear in $E$ generated by gradients in the dressing field. Critically dressed modulation techniques that extend the relaxation time by orders of magnitude are considered, and applications to tipping pulses are investigated. Speaker: Dr Christopher Swank (Caltech) PSTP_2019_Swank_Spin_Dressing.pptx Measurement of Neutron Polarization and Transmission for the SNS nEDM Experiment. The existence and size of a neutron electric dipole moment (nEDM) remain an important question in particle and cosmological physics. The SNS nEDM experiment proposes a new limit for the nEDM search by using ultra-cold neutrons (UCN) in a bath of superfluid helium. The experiment uses polarized 8.9 Å neutrons to create polarized UCN in situ in superfluid helium via superthermal downscattering.
This process requires the 8.9 Å neutrons to retain their polarization as they pass through the magnetic shielding and nEDM cryostat windows. This talk will describe a setup to measure the neutron polarization loss from the magnetic shielding and cryostat windows. Speaker: Mr Kavish Imam (University of Tennessee) Imam_PSTP2019.pdf Preparing a Polarimetry Measurement for the Nab Experiment The Nab experiment at the Fundamental Neutron Physics Beamline (FnPB) at the Spallation Neutron Source (SNS) aims to make precision measurements of the electron-neutrino correlation and Fierz interference term associated with the beta decay of free neutrons. Residual polarization of the incident beam presents a potential source of systematic error in this measurement. In order to understand and mitigate these effects we must measure the beam polarization and the efficiency of our newly designed Neutron Spin Flipper. If we use $^3$He polarizers to accomplish these measurements, it will require careful control of the magnetic environment along the beam line in order to assure adiabatic spin transport of the neutrons and prolong the polarization lifetime of the $^3$He cells. However, the space for incorporating the necessary components is limited, and careful magnet construction is required to obtain the requisite magnetic fields. Speaker: Chelsea Hendrus (The University of Michigan) HendrusC_PTSP19.pdf Neutron Spin Rotation: Neutron Optical Polarimetry as a Probe of Fundamental Physics The Neutron Spin Rotation (NSR) slow neutron polarimeter is an apparatus designed to measure and constrain fundamental interactions to high precision through the use of neutron optical techniques. This apparatus was initially constructed to search for parity-violating spin rotation of neutrons transmitted through liquid $^{4}\text{He}$. This experiment placed a limit on the rotation angle per unit length of $d\phi/dz =[+2.1 \pm 8.3 (\textit{stat.})\,^{+2.9}_{-0.2} (\textit{sys.})]\times10^{-7}$ rad/m. These data have been used to constrain light $Z'$ bosons, in-matter gravitational torsion, and nonmetricity. A second target system operated with the same polarimeter was designed to search for an axial coupling of neutrons $g_{A}^{2}$ to light $Z'$ bosons, placing limits on the rotation angle $\phi=[1.4\pm 2.3(\textit{stat.}) \pm 2.8(\textit{sys.})]\times10^{-5}$ rad and improving $g_{A}^{2}$ bounds. The polarimeter and targets are being upgraded for future measurements planned for the NG-C beam at the NIST Center for Neutron Research. The precision should be sufficient to see the Standard Model contribution to n-$^{4}$He spin rotation and improve the limits on $g_{A}^{2}$ by about two orders of magnitude. An overview of the apparatus will be presented, along with details of both target systems' design and performance. [1] W. M. Snow, et al., Rev. Sci. Instrum. 86, 055101 (2015) Speaker: Kyle Steffen (Indiana University) PSTP2019_KyleSteffen.pdf Testing Frozen-Spin HD with electrons at Jefferson Lab – status update Highly-polarized, frozen-spin targets of solid Hydrogen Deuteride (HDice) have been successfully used with photon beams for nuclear physics measurements for over a decade. With Jefferson Lab's upgrade to 12 GeV, a new effort has begun to expand the physics reach using HDice targets with electron beams. Three "high impact" experiments, which plan to utilize transversely polarized HDice targets and electron beams to study nucleon structure, have been approved by the JLab PAC with "A" ratings.
Testing HDice targets with electron beams (eHD) is scheduled to begin this fall at JLab's Upgraded Injection Test Facility (UITF). Preparations for these eHD tests are well underway. The experimental design and the status of major components will be reported, along with the anticipated schedule. Speaker: Dr Xiangdong Wei (Jefferson Lab) PSTP2019-Xiangdong Wei-20190927.pdf Progress on the Reconstruction of a Dilution Refrigerator In this talk I will discuss the dilution refrigerator used in our lab for Polarized Target Physics. It was originally constructed at CERN by Niinikoski in the mid-1970s, modified by the Helmholtz Zentrum Geesthacht laboratory for neutron studies, and finally obtained by us at UVA for use in the HIGS collaboration. I will discuss the changes that have been made after we started using it. In November 2017, we discovered a leak in the still between two separate vacuum spaces and had no choice but to build a new dilution unit. I will lay out the progress we have made so far, what is currently happening, and the next steps that need to be taken. Speaker: Mr Matthew Roberts (University of Virginia) Dynamic Nuclear Polarization with Solid-State mm-Waves, 3D-Printed Components, and SDR-based NMR A new dynamic nuclear polarization (DNP) target system has recently come on-line at the University of New Hampshire. DNP is driven by a novel solid-state 140 GHz mm-wave source with quasi-optics transmission and low-loss (<0.1 dB/m) overmoded waveguide that is insensitive to magnetic fields. We have also developed a method to 3D print with Kel-F, which was used to produce target material cups and is being used to study quasi-optical properties of Kel-F lenses. Other off-the-shelf 3D printed materials have been found to survive multiple 1 K temperature cycles and are utilized in target stick construction. Polarization measurements are made and cross-compared on a Liverpool Q-meter, a LANL Q-meter, and a low-cost software-defined-radio based vector network analyzer. An overview of this system and current progress will be presented. Speaker: Elena Long (University of New Hampshire) ELong-PSTP.pptx Ack.pdf Precision absolute polarimeter development for the 3He++ ion beam at 5.0-6.0 MeV energy We have previously developed a concept for a polarized 3He ion source based on the existing Electron Beam Ionization Source (EBIS) at Brookhaven National Laboratory (BNL). Successful tests of polarizing 3He in a high magnetic field have led to the development of the Extended EBIS upgrade. The spin-rotator and 3He++ beam polarimetry development is also in progress in collaboration with MIT. There is a unique opportunity for precision measurements of the absolute 3He++ polarization at beam energies of 5.0-6.0 MeV after the EBIS Linac. It was shown in Ref. [1] that the analyzing power for the elastic scattering of spin-1/2 particles (3He) on spin-0 particles (4He) can reach the maximum theoretical value |P| = 1 at some point (Ebeam, θCM). Using the experimental data [2], several such points were established for 3He + 4He elastic scattering, including P = +1 at a beam energy E ≈ 5.3 MeV and θ (center of mass) ≈ 91°. Therefore, the main effort of this R&D will be the development of a precision absolute polarimeter for measurements of the 3He++ beam polarization produced in the EBIS, as a reference for further polarization measurements (and possible polarization losses) along the accelerator chain. The polarimeter vacuum system is integrated into the spin-rotator transport line.
The 3He++ ion beam will enter the scattering chamber through a thin window to minimize beam energy losses. The scattering chamber is filled with 4He gas at a pressure of ~5 torr. Silicon strip detectors will be used for energy and TOF measurements of the scattered 3He and recoil 4He nuclei (in coincidence) for the identification of the scattering kinematics with analyzing power AN ~ 1. Two sets of detectors will measure both nuclei and the left-right asymmetry at the spin-flip. The status of polarimeter development (vacuum system, scattering chamber, thin window, Si-strip detectors and WFD-based DAQ) will be presented. [1] R. J. Spiger and T. A. Tombrello, "Scattering of He3 by He4 and of He4 by Tritium", Phys. Rev. 163 (4), pp. 964–984 (1967). [2] G. R. Plattner and A. D. Bacher, "Absolute calibration of spin 1/2 polarization", Physics Letters 36B (3), pp. 211–214 (1971). Speaker: Grigor Atoian (Brookhaven National Laboratory) Atoian_PSTP-2019.pptx Laser-Driven Polarized Deuterium Source Polarized light ion beams are essential to the physics program for a future electron-ion collider (EIC), and polarized deuterons have been identified as essential tools to probe the sea quark and gluon distributions in studies of hadronization. Polarized deuterons form an especially unique system for study by providing a combination of quark and nuclear physics, and thus can yield new insights in the understanding of hadron structure that cannot be achieved with other polarized nuclei. Both the Jefferson Lab Electron-Ion Collider (JLEIC) and the electron-ion collider at Brookhaven National Lab (eRHIC) machine design concepts have integrated polarized deuteron transport into their designs, but there are currently no operational polarized deuteron beam sources in the United States. A new method for production of neutral polarized atomic deuterium beams is discussed. The method utilizes infrared stimulated Raman adiabatic passage (IR STIRAP) in the production of polarized deuterium halide molecules, from which polarized deuterium atoms can be accessed through photodissociation. The method has the potential to generate neutral polarized atomic deuterium beams with densities that are orders of magnitude greater than those in existing devices. Speaker: Amy Sy (JLab) Sy_PSTP2019_09272019.pdf Improved robustness of GaAs-based photocathodes activated by Cs, Sb, and O2 GaAs-based photocathodes are widely used to produce highly spin polarized electron beams at high currents. Spin polarized photoelectrons can escape into vacuum only when the GaAs surface is activated to Negative Electron Affinity (NEA). The NEA surface is notorious for extreme vacuum sensitivity, and this results in rapid QE degradation. We activated GaAs samples by unconventional methods using Cs, Sb, and oxygen. We present successful NEA activation on the GaAs surface and more than an order of magnitude improvement in charge extraction lifetime compared to the standard Cs-O2 activation, without significant loss in spin polarization. Speaker: Jai Kwan Bae (Cornell) 9-22-2019 PSTP 2019 GaAs CsSb.pdf
SpringerPlus
A secure online image trading system for untrusted cloud environments
Khairul Munadi, Fitri Arnia, Mohd Syaryadhi, Masaaki Fujiyoshi & Hitoshi Kiya
SpringerPlus volume 4, Article number: 277 (2015)

Abstract
In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to the true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant feature. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.

Introduction
With the advancement of the Internet, multimedia content trading has become increasingly popular. As multimedia contents, such as audio, image, and video, are available in digital form, they benefit from ease of manipulation, duplication, publishing, and distribution. Despite these benefits, illegal use of multimedia data tends to grow significantly unless proper protection is implemented. One important and challenging task in multimedia content trading, including image trading, is privacy protection (Lu et al. 2009, 2010; Premaratne and Premaratne 2012; Troncoso-Pastoriza and Perez-Gonzales 2013). Most existing work in this area has focused on access control and secure data transmission (Lu et al. 2009; Iacono and Torkian 2013). The aim is to prevent unauthorized users from accessing the data and to enable secure data exchange. However, once stored on the server, the data are left unprotected. This makes the user's private content vulnerable to untrustworthy server providers, as well as intruders.

In line with the Internet, the concept of cloud computing has also garnered increasing interest. The cloud provides computing and storage services to users via the Internet (Jeong and Park 2012). Public clouds offer these services to both organizations and individuals, while requiring no infrastructure or maintenance investment on the user's part. Therefore, more applications and services are expected to rely on cloud resources in the future. However, privacy problems in the cloud environment need rigorous attention because the data can easily be distributed among different servers in different locations (Curran et al. 2012; Modi et al. 2013). The Internet and cloud technology have undoubtedly pushed image trading to become commercially feasible for more individuals and small-scale business entities.
Therefore, the privacy protection of image content on the cloud server is an important consideration. Currently, various types of images, ranging from photos to art, graphics, and historical images, are traded online in the conventional way. The trading process has been exclusively conducted over the Internet, where images can be purchased and delivered online. Nevertheless, this conventional system has a serious drawback on the server side. Images stored on the server are left unprotected, allowing illegal access and use by untrusted server providers and intruders. Hence, a new mechanism for secure online image trading is necessary.

Based on the current practices of image trading and the wide availability of cloud servers, we argue that the following requirements should be satisfied to enable a secure image trading system running in an untrusted cloud environment:

1. The system must provide privacy protection to the stored data. Images in cloud storage should be protected such that, even if untrusted parties break the server's access control, they cannot reach the true image content.
2. The system should provide a limited-content preview for display on various devices. To attract potential buyers, a portion of the content should be freely available for viewing. Because display dimensions differ among devices, various reduced-size images are required.
3. The system must match the reduced-size images to the privacy-protected images.
4. The system needs to be compatible with compression standards. Because images are stored in compressed format, the image trading system should accommodate images compressed by specific standards.

Unfortunately, very few image trading schemes satisfy all these requirements. Most of the existing works (Lu et al. 2009, 2010; Premaratne and Premaratne 2012; Troncoso-Pastoriza and Perez-Gonzales 2013; Iacono and Torkian 2013; Kiya and Ito 2008; Okada et al. 2009, 2010; Liu et al. 2013; Sae-Tang et al. 2014; Zhang and Cheng 2014; Cheng et al. 2014) have separately and independently focused on a subset of these considerations.

The present paper introduces a conceptual framework for a secure image trading system in an untrusted cloud environment that satisfies all the above requirements. We focus on Joint Photographic Experts Group (JPEG) (Wallace 1992) images, which are widely and popularly used in various applications. A trading activity involves three main parties: an image publisher, a server provider, and an image buyer. The proposed scheme facilitates secure server storage by visually protecting the publisher's images, thus preventing access to the true image content by untrustworthy server providers and unauthorized users. Reduced-size images that serve as queries are displayed on a user interface, providing a limited-content preview for potential buyers. Our target application is the online market, in which small content publishers sell their stock images over the Internet.

The remainder of the paper is organized as follows. "Related work" briefly reviews related works in the proposed research area. "Preliminaries" introduces the preliminary information, including a review of conventional repositories for image trading and their shortcomings, the Discrete Cosine Transform (DCT) and the JPEG standard, DCT-based scrambling for visual protection, and the structural similarity (SSIM) index that measures the degree of image scrambling. "Proposed framework" describes the conceptual framework of the proposed scheme.
Simulation results are presented in "Simulation results", and concluding remarks are given in "Conclusions".

Related work
The requirements formulated in "Introduction" can be divided into two main research categories: the secure storage of images on a public cloud server, and efficient image matching in visually protected (encrypted) domains for retrieval and content preview purposes.

Among the earlier works on image trading systems, the authors in Okada et al. (2009, 2010), Liu et al. (2013) proposed a framework that offers privacy or content protection. In their mechanism, an image is decomposed into two components with different levels of importance. One component is sent directly to a consumer; the other is first routed to an arbitrator or trusted third party (TTP) for fingerprinting and then sent to the consumer. This approach is impractical for several reasons. First, the consumer receives two image components, increasing the memory and bandwidth usage. In addition, the approach requires a TTP and assumes that images are stored on a proprietary and trusted server. An extension of the above proposal, which no longer separates an image into several components, was presented in Sae-Tang et al. (2014). This method specifically handles JPEG 2000 images. Although it removes image decomposition, it retains the TTP requirement, thus adding technical complexity to small content publishers.

Client-side encryptions for cloud storage have also been proposed (Iacono and Torkian 2013; Lu et al. 2009, 2010; Cheng et al. 2014). For instance, the approach in Iacono and Torkian (2013) encrypts the data file and changes the file structure, thus increasing the difficulty of indexing and searching the encrypted data. In Lu et al. (2009, 2010) and Cheng et al. (2014), features are extracted from plaintext images and encrypted by the image owners. The encrypted features and images are then stored on a server equipped with a table of the mapping relationships between them. When the user makes a query, the features from the plaintext query image are extracted and encrypted, and then sent to the server, where their similarity to the features encrypted in the database is calculated. This implies that feature extraction/encryption and image encryption are performed separately, incurring additional computational resources and complexities. The histogram-based retrieval of Zhang and Cheng (2014) reduces the necessity of feature extraction/encryption. The images stored on a server are simply encrypted by permuting DCT coefficients and are compatible with the JPEG file format. Similarity between an encrypted query and an encrypted image is determined by calculating the distances of DCT coefficient histograms. However, this process requires nearly full JPEG decoding (up to inverse quantization) and proposes no mechanism for content preview. Therefore, how a potential buyer could select an image for purchase is not clarified.

An initial attempt to formulate a secure online image trading system was presented in Munadi et al. (2013), although no clear framework was described for a cloud environment context. This study also lacked a descriptive comparison with a conventional image trading system. Moreover, the experiments and analysis were based on a small dataset.

Figure 1 A conventional image trading system.
Preliminaries
In this section, we present some background information that is necessary to formulate our proposed framework, including a review of conventional image trading systems and their shortcomings, the DCT and the JPEG standard, image scrambling in the DCT domain, and the SSIM index, which measures the degree of scrambling.

Conventional model of image trading
Most current applications that enable commercial transactions of images are strongly reliant on access control. Buyers obtain privileged access to the image repository after payment. Figure 1 illustrates a typical image repository and trading system in the conventional approach. An image publisher normally uses third-party services to host his/her commercial images. Potential buyers can browse a thumbnail collection, which provides small representations of the images. If the buyer is pleased with an image, he/she will pay an agreed price and receive an access key in return. The buyer will then be able to download the original-size or full-resolution image. Alternatively, the image can be electronically sent to the buyer by the server. A practical application of this concept is best described by the digital image libraries available on several websites (KITLV; Getty Images; Corbis; iStock).

In terms of privacy, this conventional scheme is confronted with at least two serious threats or attacks, which can originate from internal and external sources, as depicted in Figure 2. The types of threats/attacks can be described as follows:

External threats. Unauthorized users present an external threat to the image repository. Illegal access may be obtained under various conditions, such as lack of authentication, weak access control, and malicious attacks. When access is obtained by an unauthorized user, it becomes difficult to prevent illegal use of the images.

Internal threats. A server provider often has the highest access privileges for the stored data, such as commercial images, with no risk of detection. Therefore, a malicious provider presents an internal threat to the stored data, leading to the illegal use of images, such as theft or illegal distribution.

A cloud-based image trading framework that considers the above-mentioned issues is proposed herein.

Figure 2 Source of threats/attacks in a cloud storage service, adapted from Iacono and Torkian (2013).

DCT and JPEG
The JPEG compression standard is based on the DCT, which transforms spatial data into the frequency domain. The encoding procedure is illustrated in Figure 3 and can be summarized as follows. An original image is partitioned into 8×8 non-overlapped blocks. The two-dimensional Forward Discrete Cosine Transform (FDCT) of Eq. (1) is applied to each block, resulting in 1 DC and 63 AC coefficients.

$$\begin{aligned} F_{uv} = \frac{C_{u}C_{v}}{4}\sum _{i=0}^{7}\sum _{j=0}^{7}\cos \frac{(2i+1)u\pi }{16}\cos \frac{(2j+1)v\pi }{16}f(i,j) \end{aligned}$$

$$\begin{aligned} C_{u},C_{v} = \left\{ \begin{array}{l l} \frac{1}{\sqrt{2}} &{} \quad \text {for}\,\,u,v = 0 \\ 1 &{} \quad \text {otherwise} \end{array} \right. \end{aligned}$$

For coding, an 8×8 array of DCT coefficients is reorganized into a one-dimensional list based on a zigzag order. The order starts with the DC coefficient and places the coefficients with the lowest spatial frequencies at the lower indices.
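To make Eq. (1) and the zigzag ordering concrete, the following is a minimal NumPy sketch rather than code from the paper. The nested loops simply mirror the double sum of Eq. (1); the toy sample block and the level shift by 128, as used in baseline JPEG, are assumptions for the demo.

```python
import numpy as np

def fdct_8x8(block):
    """Two-dimensional forward DCT of an 8x8 block, written exactly as Eq. (1)."""
    F = np.zeros((8, 8))
    i = np.arange(8)
    for u in range(8):
        for v in range(8):
            cu = 1 / np.sqrt(2) if u == 0 else 1.0
            cv = 1 / np.sqrt(2) if v == 0 else 1.0
            basis = np.outer(np.cos((2 * i + 1) * u * np.pi / 16),
                             np.cos((2 * i + 1) * v * np.pi / 16))
            F[u, v] = cu * cv / 4 * np.sum(basis * block)
    return F

def zigzag(coeffs):
    """List the 64 coefficients in zigzag order: DC first, low frequencies early."""
    order = sorted(((u, v) for u in range(8) for v in range(8)),
                   key=lambda t: (t[0] + t[1], t[0] if (t[0] + t[1]) % 2 else -t[0]))
    return [coeffs[u, v] for u, v in order]

block = np.arange(64, dtype=float).reshape(8, 8)   # a toy pixel block (assumption)
coeffs = fdct_8x8(block - 128)                     # level shift by 128 as in baseline JPEG
print(coeffs[0, 0], zigzag(coeffs)[:4])            # DC term and the first zigzag entries
```

In practice a separable or matrix form of the DCT is used for speed; the direct double sum is kept here only because it matches Eq. (1) term for term.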
Note that higher-frequency components generally represent the fine details of an image and are less sensitive to human vision. Hence, they can be quantized more coarsely than the lower-frequency components, and may be discarded with negligible effect on image quality. After quantization, Differential Pulse Code Modulation (DPCM) is applied to the DC coefficient, and the AC coefficients are run-length coded (RLC). As the final stage, all the coefficients are entropy encoded using Huffman or arithmetic coding. The output of the entropy encoder and some additional information, such as the header and markers, form the JPEG bitstream.

Figure 3 JPEG encoder.

DCT based scrambling

Figure 4 Sample of scrambled images. a DC coefficients are scrambled, \(SSIM=0.3882\). b Blocks of AC coefficients are scrambled, \(SSIM = 0.1464\). c Blocks of \(8\times 8\) coefficients are scrambled, \(SSIM = 0.1519\). d DC and AC coefficients are separately scrambled, \(SSIM = 0.1435\).

There are several approaches to visually protect the images, either in the spatial or the transformed domain. Because we are dealing with JPEG-coded images, it is preferable to consider available techniques that work in the DCT domain, such as those proposed in Weng and Preneel (2007), Khan et al. (2010a, b) and Torrubia and Mora (2003). These methods exploit the DCT coefficients to achieve various degrees of perceptual degradation, either by scrambling blocks of coefficients or by scrambling the individual DC and AC coefficients independently. The scrambling process can be further combined with an encryption technique to increase the level of protection.

The degree of perceptual degradation itself can be measured using the SSIM index. Assuming two images, \(X\) and \(Y\), as the comparison objects, the SSIM index is defined as follows (Wang et al. 2004; Weng and Preneel 2007):

$$\begin{aligned} SSIM(X,Y) = [l(X,Y)]^\alpha \cdot [c(X,Y)]^\beta \cdot [s(X,Y)]^\gamma \end{aligned}$$

where \(X\) represents the original image and \(Y\) represents the scrambled version of the original image. Functions \(l()\), \(c()\), and \(s()\) correspond to luminance, contrast, and structural similarity, respectively, and \(\alpha\), \(\beta\), and \(\gamma\) are the weighting factors. A simplified form of the SSIM index can be written as:

$$\begin{aligned} SSIM(X,Y) = \frac{(2\mu _{X}\mu _{Y}+C_{1})(2\sigma _{XY}+C_{2})}{(\mu _{X}^2+\mu _{Y}^2+C_{1})(\sigma _{X}^2+\sigma _{Y}^2+C_{2})} \end{aligned}$$

where \(\mu\) is the mean intensity, \(\sigma\) represents the (co)variance, and \(C_{1}\), \(C_{2}\) are numerical stability constants (Wang et al. 2004; Weng and Preneel 2007). The value of SSIM ranges from 0 to 1, with a value of 1 indicating that \(X\) and \(Y\) are identical. Samples of DCT-based scrambled images with their respective SSIM values are shown in Figure 4. As shown, different degrees of visual degradation can be obtained by applying different arrangements of the DCT coefficients. The image with the lowest SSIM value is considered the most visually protected.

Proposed framework

Figure 5 Proposed framework.

In this section, we describe a conceptual image trading framework for an untrusted cloud environment that satisfies all the requirements mentioned in "Introduction". The proposed framework enables secure online trading, and allows the images to be securely stored on the cloud servers after being visually protected and to be retrieved in their protected state. The following description is based on the scheme illustrated in Figure 5.
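Before walking through the framework steps, a quick side note on the SSIM measure introduced above: the simplified form can be evaluated with a few lines of NumPy to check how strongly a scrambled image has been degraded. The sketch below is illustrative only and not part of the paper; it computes the index globally over the whole image (the usual SSIM averages it over local windows), and the stability constants C1 = (0.01L)^2 and C2 = (0.03L)^2 for dynamic range L are an assumed common choice.

```python
import numpy as np

def ssim_global(x, y, dynamic_range=255.0):
    """Simplified SSIM between a grayscale image x and its scrambled version y."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * dynamic_range) ** 2      # assumed stability constants
    c2 = (0.03 * dynamic_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

A value near 1 means the scrambled image still resembles the original, while values close to 0, as in Figure 4b-d, indicate strong visual protection.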
Original images owned by an image publisher are first encoded and visually protected by means of scrambling in the DCT domain (1a). At the same time, thumbnails are generated by resizing the original images to any required sizes for viewing on a display device (1b). The protected images are then uploaded and stored on a cloud repository server. In this manner, the true visual content of the original images cannot be accessed by the server provider. Thumbnails can be stored on the same server, and are publicly accessible through the website. A potential image buyer will browse the thumbnail library and choose images of interest, which also serve as queries (2). When a query image is submitted, the thumbnail is matched with the protected images by comparing the moment invariants of the thumbnail and of the DC-image generated from the protected images (3). After this matching process, the server will return the matched image, which can then be downloaded or sent to the potential buyer (4). However, the matched image remains visually protected unless a key is granted by the image publisher after payment or other authorization (5). Using an authentic key, the buyer will be able to decode and descramble the data, resulting in the true traded image (6).

Scrambling process

Figure 6 A simplified diagram of JPEG-based image scrambling.

The main purpose of image scrambling is to provide visual protection so that the true content is perceptually meaningless or degraded. Therefore, the images are secure against ill-intentioned parties who may have access to the server, such as a hosting provider or hackers. Depending on the degree of scrambling, visual protection can be achieved by applying existing scrambling techniques that work in the DCT domain, such as those proposed in Kiya and Ito (2008) and Khan et al. (2010a, b). A simplified diagram of a JPEG-based image scrambling for visual protection is shown in Figure 6, in which a block-based permutation is applied to the quantized DCT coefficients. Descrambling is simply the reverse process, provided that the same key as in the scrambling process is available.

DC image generation and thumbnails

Figure 7 DC-image generation.

It is known that the DC coefficient of each 8×8 array of DCT coefficients is actually an average value of the 64 pixels within the corresponding block. Hence, it contains very rich visual information. An image constructed from DC components is a reduced-size version that is visually similar to the original. Therefore, the DC image itself is a rich feature descriptor that can be exploited for matching purposes. The process of generating a DC-image from DCT coefficients is illustrated in Figure 7. Initially, an image is partitioned into 8×8 non-overlapped blocks (referred to as a tile or a block), and a forward DCT function is applied to each block. The DC coefficient of each block represents the local average intensity and holds most of the block energy. DC coefficients from all of the blocks are then arranged according to the order of the original blocks, resulting in a reduced-size image (\(\frac{1}{64}\) of the original image) referred to as a DC-image. In relation to the JPEG standard, it is worth noting that the DC coefficients can be directly extracted from the JPEG bitstream without the need for full JPEG decoding (Arnia et al. 2009), and the DC-image can be generated accordingly. However, thumbnails for preview or browsing purposes can be produced by downscaling the original images to the sizes best suited to the dimensions of the display devices.
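As a rough illustration of the two steps above, the sketch below scrambles an array of quantized 8×8 DCT blocks with a keyed pseudo-random permutation and assembles the DC-image from the DC coefficients. Everything here is an assumption made for the sake of a self-contained example: it works on a coefficient array rather than a real JPEG codestream, the NumPy permutation merely stands in for whichever scrambling/encryption method is actually chosen (it is not cryptographically strong), and the DC coefficient of every block is kept at its original tile so that the DC-image survives scrambling.

```python
import numpy as np

def scramble_ac_blocks(dct_blocks, key):
    """Visually protect an image by permuting its 8x8 quantized DCT blocks.

    `dct_blocks` has shape (rows, cols, 8, 8). The AC content of the blocks is
    shuffled by a keyed permutation, while every DC coefficient stays at its tile.
    """
    n = dct_blocks.shape[0] * dct_blocks.shape[1]
    perm = np.random.default_rng(key).permutation(n)
    out = dct_blocks.reshape(n, 8, 8)[perm].reshape(dct_blocks.shape)
    out[..., 0, 0] = dct_blocks[..., 0, 0]      # keep DC at its original position
    return out

def descramble_ac_blocks(scrambled, key):
    """Reverse process; it only works when the same key as for scrambling is used."""
    n = scrambled.shape[0] * scrambled.shape[1]
    perm = np.random.default_rng(key).permutation(n)
    inverse = np.argsort(perm)
    dc = scrambled[..., 0, 0].copy()            # the DC coefficients never moved
    out = scrambled.reshape(n, 8, 8)[inverse].reshape(scrambled.shape)
    out[..., 0, 0] = dc
    return out

def dc_image(dct_blocks):
    """Build the reduced-size DC-image (1/64 of the original) from the DC terms."""
    return dct_blocks[..., 0, 0]
```

A publisher-side call might then look like `protected = scramble_ac_blocks(blocks, key=1234)`, with the buyer later running `descramble_ac_blocks(protected, key=1234)` once the key has been granted; `key=1234` is, of course, only a placeholder.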
Image matching
In this section, an image matching technique and its corresponding matching distance are described. We exploit the seven Hu moments (Ming-Kuei 1962) for matching purposes. The moments of an image, with pixel intensities \(I(x,y)\) and of size \(M\times N\), are defined by:

$$\begin{aligned} m_{pq} = \sum _{y=0}^{M-1} \sum _{x=0}^{N-1}x^py^qI(x,y) \end{aligned}$$

Rather than Eq. (4), the central moments

$$\begin{aligned} \mu _{pq} = \sum _{y=0}^{M-1} \sum _{x=0}^{N-1}(x-\bar{x})^p(y-\bar{y})^qI(x,y) \end{aligned}$$

with

$$\begin{aligned} \bar{x}=\frac{m_{10}}{m_{00}}, \quad \bar{y}=\frac{m_{01}}{m_{00}} \end{aligned}$$

are often used, which are invariant to translation. Furthermore, normalized central moments are defined by:

$$\begin{aligned} \eta _{pq} = \frac{\mu _{pq}}{\mu ^\gamma _{00}} \end{aligned}$$

where

$$\begin{aligned} \gamma = \frac{p+q}{2} + 1,\quad p+q=2,3,\dots \end{aligned}$$

and these are also invariant to changes in scale. Algebraic combinations of these moments can provide more attractive features. The most popular are those offered by Hu, which are independent of various transformations. Hu's original moment invariants (Ming-Kuei 1962; Huang and Leng 2010) are given by:

$$\begin{aligned}M_{1} &= \mu _{20} + \mu _{02}\\M_{2} &= \left( \mu _{20} - \mu _{02}\right) ^2 + 4\mu ^2_{11}\\M_{3} &= \left( \mu _{30} - 3\mu _{12}\right) ^2 + \left( 3\mu _{21} - \mu _{03}\right) ^2\\M_{4} &= (\mu _{30} + \mu _{12})^2 + (\mu _{21} + \mu _{03})^2\\M_{5} &= (\mu _{30} - 3\mu _{12})(\mu _{30} + \mu _{12})[(\mu _{30} + \mu _{12})^2 - 3(\mu _{21} + \mu _{03})^2] \\&\quad + (3\mu _{21} - \mu _{03})(\mu _{21} + \mu _{03})[3(\mu _{30} + \mu _{12})^2 - (\mu _{21} + \mu _{03})^2] \\M_{6} &= (\mu _{20} - \mu _{02})[(\mu _{30} + \mu _{12})^2 - (\mu _{21} + \mu _{03})^2] + 4\mu _{11}(\mu _{30} + \mu _{12})(\mu _{21} + \mu _{03}) \\M_{7} &= (3\mu _{21} - \mu _{03})(\mu _{30} + \mu _{12})[(\mu _{30} + \mu _{12})^2 - 3(\mu _{21} + \mu _{03})^2] \\&\quad - (\mu _{30} - 3\mu _{12})(\mu _{21} + \mu _{03})[3(\mu _{30} + \mu _{12})^2 - (\mu _{21} + \mu _{03})^2] \end{aligned}$$

Image matching between thumbnails and visually protected images involves calculating the moment distance, \(d\), between the thumbnails and the DC component of the visually protected images. We define the distance as:

$$\begin{aligned} d(a,b) = \sum _{j=1}^{7}|M_j^a - M_j^b| \end{aligned}$$

where \(a\) and \(b\) denote the thumbnail and the DC image, respectively, and \(M\) represents Hu's moments. The matching process proceeds as follows:

1. The moments of a thumbnail image are calculated.
2. DC coefficients from each block of the visually protected JPEG bitstream are extracted to generate the DC image.
3. The moments of the DC images are calculated.
4. The moment distances between the query and the DC images are calculated using Eq. (7).
5. The minimum value of \(d(a,b)\) corresponds to image matching.

Key sharing
Once authorization has been requested, a corresponding scramble key is sent to the buyer by the image publisher. The true image content is accessible to the image buyer after proper decoding that includes the unscrambling process using the given key. Various options are available for delivering the scramble key to a buyer. For instance, it could be attached to the system and use the same cloud server, or a system built on a different and independent server, or it could be accomplished by other online means, such as email.
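Pulling Eqs. (4)-(7) together, the sketch below computes Hu's seven invariants and picks, for a query thumbnail, the DC-image with the smallest moment distance. It is an illustrative NumPy implementation rather than the authors' code; in particular, it applies the invariants to the normalized central moments of Eq. (6), which is what provides the scale invariance needed when the thumbnail and the DC-image differ in size.

```python
import numpy as np

def hu_moments(img):
    """Hu's seven moment invariants of a grayscale image, following Eqs. (4)-(6)."""
    img = img.astype(np.float64)
    rows, cols = img.shape
    y, x = np.mgrid[0:rows, 0:cols]              # y = row index, x = column index

    def m(p, q):                                 # raw moments, Eq. (4)
        return float(np.sum((x ** p) * (y ** q) * img))

    m00 = m(0, 0)
    xb, yb = m(1, 0) / m00, m(0, 1) / m00

    def eta(p, q):                               # normalized central moments, Eqs. (5)-(6)
        mu_pq = np.sum(((x - xb) ** p) * ((y - yb) ** q) * img)
        return mu_pq / (m00 ** ((p + q) / 2 + 1))

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])

def moment_distance(a, b):
    """Eq. (7): sum of absolute differences between two Hu moment vectors."""
    return float(np.sum(np.abs(hu_moments(a) - hu_moments(b))))

def best_match(thumbnail, dc_images):
    """Return the index of the DC-image closest to the query thumbnail."""
    query = hu_moments(thumbnail)
    distances = [float(np.sum(np.abs(query - hu_moments(dc)))) for dc in dc_images]
    return int(np.argmin(distances)), distances
```

On the server side, steps 3 and 4 of the matching process then reduce to a single call such as `best_match(thumbnail, dc_images)` over the stored collection.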
Figure 8 Ten sample images taken from a dataset of 100 images and used as queries in the simulation.

Simulation results
Simulations were mainly conducted to verify the matching performance between thumbnails of various sizes, which serve as query images, and their corresponding DC-images extracted from the visually protected images. These images were assumed to be stored on the server and available for trading. The moment distance defined in Eq. (7) was used as the matching metric.

Simulation conditions
The experiment was conducted using a dataset of 100 images with an original size of 512 × 512 pixels. Ten samples used as query images are shown in Figure 8. Using four different thumbnail sizes for viewing, four separate experiments were carried out. In each experiment, thumbnails were generated by rescaling the original images by factors of 0.125, 0.1875, 0.25, and 0.391. This resulted in images of size 64 × 64, 96 × 96, 128 × 128, and 200 × 200 pixels, respectively. As described in "DCT based scrambling", block-based scrambling of the DCT coefficients was performed to produce visually protected images. For simplicity, we scrambled only blocks of AC coefficients while preserving the original positions of the DC coefficients. The size of the DC images, constructed using the DC coefficients of the protected images, was 64 × 64 pixels. These protected images and thumbnails were assumed to be stored on the same server.

Figure 9 shows an example of the images generated in the simulations. The image size was scaled to represent thumbnails for content preview (browsing), a DC image, and a visually protected image. For comparison purposes, we also calculated the distance between the thumbnails and the visually protected images.

Figure 9 Scaled size of thumbnails with different dimensions, a DC-image, and a visually protected image.

Table 1 Distance values of the matching process between 10 thumbnails (query images) and their corresponding visually protected images. The thumbnail size was 64 × 64 pixels
Table 2 Distance values of the matching process between 10 thumbnails (query images) and their corresponding visually protected images. The thumbnail size was 200 × 200 pixels
Table 3 Distance values of the matching process between 10 thumbnails (query images) and their corresponding DC-images. The thumbnail size was 64 × 64 pixels
Table 4 Distance values of the matching process between 10 thumbnails (query images) and their corresponding DC-images. The thumbnail size was 200 × 200 pixels

The results of each set of query images are presented in Tables 1, 2, 3 and 4. There are 100 matching runs presented in each table. The first two tables present the matching distances between the thumbnails (query images) and the visually protected images, and the last two present the matching distances between the thumbnails (query images) and the DC-images generated from the visually protected images. Simulations using a dataset of 100 images with four different sizes of query images resulted in 40,000 matching attempts between the thumbnails and the visually protected images, and 40,000 matching attempts between the thumbnails and the DC-images.

In Tables 1 and 2, we present the matching distances between the thumbnails and the visually protected images. The sizes of the thumbnails are 64 × 64 and 200 × 200 pixels, respectively. As can be seen, the distance values vary and are much higher than zero.
These results confirmed that the visual content of the thumbnails and of their corresponding visually protected images is no longer identical after DCT-based scrambling. Moreover, the proposed distance measure is not applicable to a direct matching between a thumbnail and a visually protected image. Table 3 summarizes the matching results between the thumbnail and the DC images of the same size. In this case, the displayed image for browsing and the DC image generated from the visually protected image were the same size, i.e., 64 × 64 pixels. In contrast to the above results, the distances between the thumbnails and their corresponding DC images were very close to zero (bold values), i.e., less than 0.2. The matching results between the thumbnail and DC images of different sizes are presented in Table 4. In this case, the thumbnail took its largest size, 200 × 200 pixels, whereas the size of the DC image was 64 × 64 pixels. Similar to the results in Table 3, the distance values were very small (bold values). Note that the distance values between all thumbnails of various sizes and the DC images were close to zero. This is confirmed by the averaged value of all the matching distances, as presented in Table 5. From the above results, we can make several concluding observations. Despite its simplicity, the proposed system offers both visual protection and a content preview of the traded images. The proposed moment distance performed satisfactorily in retrieving the target images, with all queries for each experiment returning the correct visually protected images. This means that the matching performance was not affected by the variation in thumbnail size. Thus, thumbnails could be adjusted according to the size of display device. Table 5 Averaged distance values between all the thumbnails (query images) of various sizes and their corresponding DC images We have presented a conceptual framework for secure online image trading in a cloud environment. The traded images were visually protected in the DCT domain, and stored on an untrusted server. Thumbnails of original images were publicly accessible through the website and served as queries. Image matching between the thumbnails and protected images was achieved by comparing the moment invariants of the thumbnails and of the DC-image generated from the protected images. The proposed moment distance enabled the target images to be differentiated from other protected images in the database. Arnia F, Munadi K, Fujiyoshi M, Kiya H (2009) Efficient content-based copy detection using signs of DCT coefficient. In: IEEE symposium on industrial electronics and applications, 2009 (ISIEA 2009), vol 1, pp 494–499, 4–6 Oct 2009 Cheng B, Zhuo L, Bai Y, Peng Y, Zhang J (2014) Secure Index Construction for Privacy-Preserving Large-Scale Image Retrieval. In: Proceedings of IEEE fourth international conference on big data and cloud computing (BdCloud), pp 116–120 Corbis. http://www.corbisimages.com/. Accessed 1 Sept 2014 Getty Images. http://www.gettyimages.com/. Accessed 1 Sept 2014 Huang Z, Leng J (2010) Analysis of Hu's moment invariants on image scaling and rotation. In: Proceedings of IEEE ICCET, Chengdu, China, pp 476–480 Iacono LL, Torkian D (2013) A system-oriented approach to full-text search on encrypted cloud storage. In: International conference on cloud and service computing (CSC), pp 24–29 iStock. http://www.istockphoto.com/. Accessed 1 Sept 2014 Jeong H, Park J (2012) An efficient cloud storage model for cloud computing environment. 
In: Proceedings of international conference on advances in grid and pervasive computing, vol 7296, pp 370–376 Curran K, Carlin S, Adams M (2012) Security issues in cloud computing. In: Cloud computing for teaching and learning: strategies for design and implementation. IGI Global, Hershey, Pennsylvania, USA, pp 200–208 Kiya H, Ito I (2008) Image matching between scrambled images for secure data management. In: Proceedings of 16th EUSIPCO, Lausanne, Switzerland, August 25–29, 2008 KITLV, Universiteit Leiden. Digital Image Library. http://media-kitlv.nl/. Accessed 1 Sept 2014 Khan MI, Jeoti V, Khan MA (2010a) Perceptual encryption of JPEG compressed images using DCT coefficients and splitting of DC coefficients into bitplanes. In: 2010 international conference on intelligent and advanced systems (ICIAS), ICIAS2010, Kuala Lumpur, Malaysia, pp 1–6, 15–17 June 2010 Khan MI, Jeoti V, Malik AS (2010b) On perceptual encryption: variants of DCT block scrambling scheme for JPEG compressed images. In: Kim T-H, Pal SK, Grosky WI, Pissinou N, Shih TK, Slezak D (eds) FGIT-SIP/MulGraB, communications in computer and information science. vol 123, Springer, New York, pp 212–223 Liu SC, Fujiyoshi M, Kiya H (2013) An image trading system using amplitude-only images for privacy- and copyright-protection. IEICE Trans Fundam E96-A(6):1245–1252 Lu W, Varna AL, Swaminatahan A, Wu M (2009) Secure image retrieval through feature protection. In: Proceedings of IEEE ICASSP, pp 1533–1536 Lu W, Varna AL, Swaminatahan A, Wu M (2010) Security analysis for privacy preserving search of multimedia. In: Proceeding of IEEE ICIP, Hongkong, pp 2093–2096 Ming-Kuei H (1962) Visual pattern recognition by moment invariants. IRE Trans Inf Theory 8:179–187 Modi C, Patel D, Borisaniya B, Patel A, Rajarajan M (2013) A survey on security issues and solutions at different layers of cloud computing. J Supercomput 63:561–592 Munadi K, Syaryadhi M, Arnia F, Fujiyoshi M, Kiya H (2013) Secure online image trading scheme using DCT coefficients and moment invariants feature. In: Proceedings of IEEE 17th international symposium on consumer electronics (ISCE), Taiwan, pp 291–292 Okada M, Okabe Y, Uehara T (2009) Security analysis on privacy-secure image trading framework using blind watermarking. In: Proceedings of IEEE ninth annual international symposium on applications and the internet, pp 243–246 Okada M, Okabe Y, Uehara T (2010) A web-based privacy-secure content trading system for small content providers using semi-blind digital watermarking. In: Proceedings of annual IEEE consumer communications and networking conference, Las Vegas, USA, pp 1–2 Premaratne P, Premaratne M (2012) Key-based scrambling for secure image communication. In: Gupta P, Huang D, Premaratne P, Zhang X (eds) Emerging intelligent computing technology and applications. Springer, Berlin, pp 259–263 Sae-Tang W, Liu S, Fujiyoshi M, Kiya H (2014) A copyright- and privacy-protected image trading system using fingerprinting in discrete wavelet domain with JPEG 2000. IEICE Trans Fundam E97-A(1):2107–2113 Troncoso-Pastoriza JR, Perez-Gonzales F (2013) Secure signal processing in the cloud: enabling technologies for privacy-preserving multimedia cloud processing. IEEE Signal Process Mag 30(2):29–41 Torrubia A, Mora F (2003) Perceptual cryptography of JPEG compressed images on the JFIF bit-stream domain. In: Proceedings of international conference on consumer electronics (ICCE), pp 58–59 Wallace GK (1992) The JPEG still picture compression standard. 
IEEE Trans Consum Electron 38(1):xviii–xxxiv Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612 Weng L, Preneel B (2007) On encryption and authentication of the DC DCT coefficient. In: Proceedings of the second international conference on signal processing and multimedia applications (SIGMAP), pp 375–379 Zhang X, Cheng H (2014) Histogram-based retrieval for encrypted JPEG images. In: Proceedings of IEEE China Summit and international conference on signal and information processing (ChinaSIP), pp 446–449 KM conceived the conceptual framework. KM, MF, and HK developed the research design. FA and MS prepared and ran the simulations, KM and FA wrote the paper. KM, MF, and HK reviewed the paper. All authors read and approved the final manuscript. The work reported in this paper is the result of research projects partially funded by the Directorate General of Higher Education (DGHE) of the Republic of Indonesia, under the International Research Collaboration and Scientific Publication Scheme year 2014. Compliance with ethical guidelines Competing interests The authors declare that they have no competing interests. Image source disclosure Most of images used in this paper are obtained from the USC-SIPI Image Database (http://sipi.usc.edu/database/), which is freely available for research purposes. The database is maintained by Signal and Image Processing Institute, the University of Southern California. Copyright information of the images can be found at http://sipi.usc.edu/database/copyright.php. Department of Electrical Engineering, Syiah Kuala University, Jalan Tgk. Syech Abdurrauf No. 7, 23111, Banda Aceh, Indonesia Khairul Munadi , Fitri Arnia & Mohd Syaryadhi Graduate School of System Design, Tokyo Metropolitan University, 6-6 Asahigaoka, Hino-shi, Tokyo, 191-0065, Japan Masaaki Fujiyoshi & Hitoshi Kiya Search for Khairul Munadi in: Search for Fitri Arnia in: Search for Mohd Syaryadhi in: Search for Masaaki Fujiyoshi in: Search for Hitoshi Kiya in: Correspondence to Khairul Munadi. Munadi, K., Arnia, F., Syaryadhi, M. et al. A secure online image trading system for untrusted cloud environments. SpringerPlus 4, 277 (2015) doi:10.1186/s40064-015-1052-1 Image trading Image scrambling
CommonCrawl
Strategies for improving production performance of probiotic Pediococcus acidilactici viable cell by overcoming lactic acid inhibition Majdiah Othman1, Arbakariya B. Ariff1,2, Helmi Wasoh1,2, Mohd Rizal Kapri2 & Murni Halim ORCID: orcid.org/0000-0002-5744-21471,2 AMB Express volume 7, Article number: 215 (2017) Cite this article Lactic acid bacteria are industrially important microorganisms recognized for fermentative ability mostly in their probiotic benefits as well as lactic acid production for various applications. Fermentation conditions such as concentration of initial glucose in the culture, concentration of lactic acid accumulated in the culture, types of pH control strategy, types of aeration mode and different agitation speed had influenced the cultivation performance of batch fermentation of Pediococcus acidilactici. The maximum viable cell concentration obtained in constant fed-batch fermentation at a feeding rate of 0.015 L/h was 6.1 times higher with 1.6 times reduction in lactic acid accumulation compared to batch fermentation. Anion exchange resin, IRA 67 was found to have the highest selectivity towards lactic acid compared to other components studied. Fed-batch fermentation of P. acidilactici coupled with lactic acid removal system using IRA 67 resin showed 55.5 and 9.1 times of improvement in maximum viable cell concentration compared to fermentation without resin for batch and fed-batch mode respectively. The improvement of the P. acidilactici growth in the constant fed-batch fermentation indicated the use of minimal and simple process control equipment is an effective approach for reducing by-product inhibition. Further improvement in the cultivation performance of P. acidilactici in fed-bath fermentation with in situ addition of anion-exchange resin significantly helped to enhance the growth of P. acidilactici by reducing the inhibitory effect of lactic acid and thus increasing probiotic production. Lactic acid bacteria (LAB) have recently attracted greater attention for industrial use due to their microorganisms beneficial to human or animals and probiotic properties (Hwang et al. 2011). Pediococcus acidilactici is one of the probiotics which commonly found as one of the normal flora of alimentary tract, oral cavity gastro intestinal tract and has been widely used in the food industry (Halim et al. 2017). Foods that contain probiotic in an appropriate amount are beneficial to human for instance in enhancing immune system, antibiotics production and prevention of mucosal infection in children. Probiotic products are recommended to contain at least 107 living microorganisms per mL or per g (Elsayed et al. 2014). In addition, foods and tablets consist of LAB have been manufactured and consumed clinically. Therefore, it is necessary for a suitable commercial product to contain a high density of viable cells for efficient effects (Cui et al. 2016). High cell density during fermentation is important in order to achieve targeted biomass, lactic acid and other metabolites productivity. Nonetheless, the conditions of the fermentation medium such as pH, temperature, substrates and by-products are the main influences on the microbial growth and product formation (Boon et al. 2007). Generally, conducive biochemical and biophysical conditions are important for bacteria to grow and express normal metabolic activities. 
Biophysical conditions such as temperature, pH, redox potential, water activity and the presence of inhibitory compounds gave a broad range of variations among LAB strains. A suitable biochemical condition for LAB to grow can be obtained from nutrients provided by the culture media (Hayek and Ibrahim 2013). Batch, fed-batch and continuous are three fermentation modes commonly used for biomass production in microbial fermentation. Among these, batch fermentation is identified to be the most frequently used for fermentation process due to the simplicity of the process where no substrates components are required to be added during the fermentation process except neutralizing agents for pH controlling (Abdel-Rahman et al. 2013). The closed system of batch fermentation minimize the risk of contamination and in turn produce a high lactic acid concentration as compared to other fermentation modes (Hofvendahl and Hahn-Hägerdal 2000). However, batch fermentation has the drawbacks of producing low cell concentrations mainly due to the limited nutrient levels (Abdel-Rahman et al. 2013) and the LAB culture often suffers from substrate and product inhibitions in which contributes to the low productivity (Hujanen et al. 2001). Fed-batch fermentation is normally used in order to avoid substrate-level inhibition in microbial fermentation (Bai et al. 2003). In comparison, batch and fed-batch fermentations may produce higher lactic acid concentrations than a continuous fermentation. Often, this is due to the complete consumption of substrates available in the batch and fed-batch fermentations, whereas there are always remaining residual concentrations of substrates in the continuous fermentation (Hofvendahl and Hahn-Hägerdal 2000; Abdel-Rahman et al. 2013). In-situ removal of lactic acid from fermentation broth is another interesting strategy to reduce lactic acid product inhibition in the fermentation of LAB (Cui et al. 2016; Garret et al. 2015; Jianlong et al. 1994). Nonetheless, most of the studies on the extractive fermentation of lactic acid using anion exchange resin are focusing on the improvement of lactic acid production instead of cell biomass (Boonmee et al. 2016). Limited literature is therefore available on the combination of both lactic acid extractive fermentation using anion exchange resin and fed-batch fermentation aiming at the improvement of LAB biomass. The purpose of this study was to investigate the effects of fermentation conditions on the growth of batch fermentation of P. acidilactici and to improve its cultivation performance through the application of constant fed-batch fermentation in 2 L stirred tank bioreactor. Fermentation conditions such as concentration of initial glucose in the culture, concentration of lactic acid accumulated in the culture, types of pH control strategy, types of aeration mode and different agitation speed were studied on batch fermentation of P. acidilactici. Whilst for the constant fed-batch fermentation, different feeding rates were used to study the effects of feeding rate on viable cells production and lactic acid accumulated from cultivation of P. acidilactici. Lactic acid extractive fermentation using anion exchange resin, IRA 67 was conducted on fed-batch fermentation of P. acidilactici to further improve its growth. The results of this study may provide several innovative strategies for improvement of biomass production in LAB fermentation. 
Microorganism, culture maintenance and inoculum preparation Pediococcus acidilactici DSM 20238 used in this study was obtained from Bioprocessing and Biomanufacturing Research Center, Universiti Putra Malaysia which was initially purchased from DSMZ-German Collection of Microorganisms and Cell Cultures. This strain was used throughout the study due to its capability to produce lactic acid as a sole product. De Man, Rogosa and Sharpe (MRS) medium was used for growing of P. acidilactici strain and for maintenance of the culture. The MRS medium was purchased from Merck Millipore, Germany. The compositions of MRS medium in deionized water (g/L) were: peptone from casein 10.0, meat extract 8.0, yeast extract 4.0, d(+) glucose 20.0, di-potassium hydrogen phosphate 2.0, Tween 80 1.0, di-ammonium hydrogen citrate 2.0, sodium acetate 5.0, magnesium sulfate 0.2, and manganese sulfate 0.04. The strain was stored at − 20 °C in 15% (v/v) glycerol. The strain was inoculated into 50 mL MRS broth in 250 mL non-baffled Erlenmeyer flasks and incubated in an incubator shaker (Certomat® BS-1 Braun, Germany) at 37 °C and agitated at 200 rpm for 24 h. The bacterial cells were harvested by centrifugation (Eppendorf Centrifuge 5810 R) at 10,000 rpm for 10 min. The bacterial pellets were resuspended in 15% (v/v) glycerol and stored at − 20 °C for maintenance. The 15% (v/v) glycerol was prepared by mixing with distilled water and autoclaving at 121 °C and 15 psi for 15 min for sterilization purpose. Inoculum was prepared by growing one loopful of P. acidilactici culture from MRS agar plate in 250 mL non-baffled Erlenmeyer flasks containing 50 mL MRS medium. The strain was incubated at 37 °C in an orbital shaker, which was agitated at 200 rpm for 10 h to be used as inoculum in fermentation process. Batch fermentation For preliminary experiments, the fermentations were done in 500 mL non-baffled Erlenmeyer flasks containing 100 mL MRS medium with composition as previously described. Different glucose concentrations (0–20 g/L) were used in MRS medium to study the effect of glucose consumption and lactic acid accumulated in the culture on growth of P. acidilactici. Different lactic acid concentrations (5–15 g/L) were added into the culture during exponential phase of the bacterial growth to investigate the lactic acid inhibition towards the growth of P. acidilactici. Batch cultivation was carried out in a 2 L stirred tank bioreactor (BIOSTAT, B. Braun Biotech International, GmbH, Germany). The bioreactor vessel was made of borosilicate glass and the top-plate of the bioreactor was made of stainless steel. The bioreactor was equipped with a thermostat jacket system for temperature controlling within the bioreactor by means of an external double wall with a circulation pump. The bioreactor was connected with temperature, pH and dissolved oxygen monitoring and control system. pH of the culture was controlled by addition of sodium hydroxide (NaOH) or hydrochloric acid (HCl) accordingly. Alkali, acid and feed medium were added into the culture using peristaltic pumps. Antifoam was not added into the bioreactor since foaming was not a problem during fermentation. The bioreactor was equipped with a six blade Rushton turbine for mixing system. Batch cultivations were carried out in a 2 L stirred tank bioreactor containing 1 L of MRS medium. The culture in bioreactor was also inoculated with 10% (v/v) inoculum similar to the shake flask culture. 
The fermentations were carried out under facultative condition, agitated at 300 rpm, at 37 °C and the pH was not controlled but monitored throughout the fermentation, unless stated otherwise. The effect of fermentation conditions with and without pH control on the fermentation performance of P. acidilactici was studied by running the fermentation at both non-regulated pH and regulated pH value of 5.7 ± 0.2 (equivalent to the initial pH value of MRS broth). The initial pH values of these two conditions were adjusted by 1 M NaOH or HCl. Due to the lactic acid production during fermentation of P. acidilactici, the pH was maintained with 1 M NaOH during the fermentation for pH control condition. The effect of aeration on the fermentation performance of P. acidilactici was studied by running the fermentation with and without air supply. Due to the facultative characteristic of P. acidilactici, two different modes of aeration were selected to study the effect of aeration on the growth of P. acidilactici. For the anaerobic condition, the fermentation was done by sparging the medium with nitrogen gas at 0.1 vvm (0.15 L/min) until the dissolved oxygen level reached zero value prior to fermentation. Nitrogen gas was stopped once the culture was inoculated, and the fermentation was allowed to proceed without aeration. For the facultative condition, the fermentation was done by sparging the medium with oxygen gas at 0.1 vvm (0.15 L/min) until the dissolved oxygen level achieved 100% prior to fermentation. Oxygen gas was stopped once the culture was inoculated, then the fermentation proceeded without aeration. Finally, the effect of different mixing rate on the fermentation performance of P. acidilactici, was evaluated at three different mixing speeds of 200, 300 and 400 rpm. Constant fed-batch fermentation Constant fed-batch fermentations were conducted in a 2 L stirred tank bioreactor at three different feeding rates of 0.008, 0.015 and 0.03 L/h. Basically the constant fed-batch fermentations were conducted in two phases. In the initial phase, P. acidilactici was cultured in a batch fermentation mode with a working volume of 0.9 L until glucose was exhausted. Once the glucose was exhausted, which occurred approximately at 12 h of the initial batch fermentation, the second phase of the fermentation was started with fed-batch mode. During the second phase, the fed-batch was initiated with 100 mL of concentrated glucose (10 g/L) was being added continuously into the bioreactor at a constant feeding rate to make up the total volume of 1 L. Anion-exchange resin selectivity towards lactic acid A weak base anion-exchange resin (Amberlite IRA 67, Cat. No. 476633) was used in this study for in situ removal of lactic acid from the culture. The resin was purchased from Sigma, Germany and was sterilized using ultraviolet radiation before used in the experiments. The tubes were centrifuged at 10,000 rpm for 10 min to separate resins from the solution to be used for determination of remaining lactic acid in the solution. Resin can be reused by washing the resin with distilled water followed by regenerating with 4% NaOH to elute the lactic acid from the resin at an ambient temperature. The theory of the Langmuir isotherm model assumes adsorption only occurs at specific homogeneous adsorbent sites and no further adsorption can take place at the same site once a lactic acid molecule adsorbs on the site (Gao et al. 2010). 
The specific uptake capacity of lactic acid (q) for the Langmuir isotherm model was determined by using Eq. 1: $$ q = \frac{(Ci - Ceq)}{X} $$ From Eq. 1, q (g of lactic acid/g of biosorbent) is the specific uptake capacity of lactic acid, Ci (g/L) is the initial concentration of lactic acid, Ceq (g/L) is the equilibrium concentration of lactic acid and X (g/L) is the concentration of biosorbent in solution. Sorption equilibrium studies for a single component present in MRS media (glucose and sodium acetate) and organic acid other than lactic acid (acetic acid) were conducted to study the selectivity of IRA 67 anion-exchange resin towards these components as compared to lactic acid. IRA 67 resin (10 g/L) were added into 15 mL falcon tubes with each separately contained 10 mL glucose, sodium acetate and acetic acid with initial concentration of 5 g/L. The tubes were agitated at 200 rpm on a shaker for 24 h until the sorption reached equilibrium. The adsorption capacity of IRA 67 resin for glucose, sodium acetate and acetic acid was calculated using Eq. 1 by determining the remaining of these components in the solution. The experiments were done in triplicate. Constant fed-batch fermentation coupled with lactic acid extractive fermentation using anion-exchange resin The cultivation performance of P. acidilactici in the constant fed-batch fermentation coupled with extractive fermentation using the condition of dispersed anion-exchange resin were conducted at feeding rate of 0.015 L/h. The sterilized an anion-exchange resin was aseptically added (10 g/L) into the culture during inoculation. Fermentation culture was centrifuged at 10,000 rpm for 10 min to separate resins from the culture to be used for further application. Resin can be reused by washing and regenerating as mentioned in the previous section. Throughout the fermentations, 5 mL of culture samples were withdrawn at time intervals for analysis. Cell growth was determined using colony forming unit (CFU) (Ming et al. 2016). Samples were serially diluted (103–1010) with 0.9% (w/v) sterile saline water (NaCI) and plated onto MRS agar plate. Samples were incubated for 24 h at 37 °C and were analyzed for CFU/mL by calculating the total number of colonies on a plate and multiplied by the dilution involved. The supernatant was separated from the broth by centrifugation at 10,000 rpm for 10 min to be used for glucose and lactic acid determination. The concentrations of glucose and lactic acid were determined by using reverse-phase high performance liquid chromatography (RP-HPLC) (Waters 2695, Separations Module and Waters 2410, Refractive Index Detector). The RP-HPLC analysis was done using a shodex SH-1011 column (7 μm, 8 mm × 300 mm) connected with shodex SH-G guard column (7 μm, 6 × 50 mm). The mobile phase solvent used was 5 mM sulphuric acid, the temperature was maintained at 60 °C and the flow rate was 1.0 mL/min. Empower software was used for data processing. The experiments data were statistically analysed by using MS Office Excel 2010 and SPSS software Version 21. The data represents averages of at least three replicates. The results were represented as mean value ± standard deviation. Unpaired T test and one way analysis of variance (One way ANOVA) were used to determine the significance among treatment means. Significance was declared at P < 0.05. Effect of glucose concentration on growth of P. 
acidilactici and lactic acid accumulation Varying the initial glucose concentration from 0 to 20 g/L in batch fermentation was found to affect the growth of P. acidilactici (Fig. 1). Increased in initial glucose concentration from 0 to 10 g/L increased viable cell concentration of P. acidilactici (from 9.4 × 108 to 1.4 × 1010 cfu/mL) by about 14.9 times whilst at glucose concentration of above 10–20 g/L viable cell concentration of P. acidilactici was reduced by about 1.4 times (from 1.4 × 1010 to 1.0 × 1010 cfu/mL). Viable cell concentration obtained and lactic acid accumulated at the end of the fermentation were significantly different (P < 0.05) at different initial glucose concentrations studied. Although the growth of P. acidilactici was slightly reduced as glucose concentration was increased from above 10 to 20 g/L, however, lactic acid accumulation and glucose consumption were still increased from 5.91 to 8.71 g/L and 8.77 to 13.59 g/L, respectively. Effect of glucose concentration on growth of P. acidilactici and lactic acid accumulation. The fermentation was conducted in 500 mL shake flask, at 200 rpm. The data are the average of triplicate experiments. The error bars represent the standard deviations about the mean (n = 3) Effect of lactic acid concentration on growth inhibition of P. acidilactici Different lactic acid concentrations (0, 5, 10 and 15 g/L) were added to the culture at exponential phase of growth (8 h) to study the effect of lactic acid on viability of P. acidilactici after exponential phase. Lactic acid was added at exponential phase instead of at the beginning of the fermentation due to the failure of cells to grow in highly acidic condition during initial or lag phase (data not shown). As tabulated in Table 1, final viable cell concentration obtained at the end of the fermentation was significantly different (P < 0.05) at different lactic acid concentrations added. Final viable cell concentration of P. acidilactici was decreased proportionally with increased of lactic acid concentration. The final viable cell concentration reduced from 9.1 × 109 to 8.2 × 107 cfu/mL when 5 g/L of lactic acid was added. The final viable cell concentration was then further reduced to 8.4 × 104 cfu/mL when 10 g/L of lactic acid was added and no cells were survived when 15 g/L of lactic acid was added to the culture. Table 1 Viability of P. acidilactici in 500 mL shake flask at different lactic acid concentrations Effect of pH control strategy on growth of P. acidilactici in batch cultivation using 2 L stirred tank bioreactor In order to study the inhibitory effect of pH reduction due to lactic acid production from fermentation of P. acidilactici on its growth, batch fermentations of P. acidilactici with and without pH control conditions were conducted. The time course of the fermentation is shown in Fig. 2 and the performance of the cultivation is summarized in Table 2. In the culture without pH control condition, the maximum viable cell concentration (1.5 × 1012 cfu/mL) was 8.3 times higher than the culture with pH control condition (1.8 × 1011 cfu/mL). Furthermore, viable cell yield and viable cell productivity obtained in the culture without pH control were also 8.9 times and 8.0 times respectively, higher than that obtained in the culture with pH control. Maximum viable cell concentration, viable cell yield and viable cell productivity obtained were significantly different (P < 0.05) at both with and without pH control conditions. 
These results hence proved that the addition of NaOH to control the pH was not effective in reducing end-product inhibition in fermentation of P. acidilactici. There were significant differences (P < 0.05) in lactic acid produced and lactic acid productivity for both types of cultures. Lactic acid produced in fermentation without pH control condition (13.17 g/L) was 1.0 time higher compared to the fermentation with pH control condition (12.72 g/L) and lactic acid productivity in fermentation without pH control condition (0.54 g/L h) was 1.03 times higher compared to the fermentation with pH control condition (0.52 g/L h). Lactic acid accumulated was slightly lower in fermentation with NaOH addition due to lactic acid being neutralized. However, lactic acid yield was similar for both types of cultures (0.90 g/gGlucose), showing no significant difference (P < 0.05) in lactic acid yield for both with and without pH control conditions. The time course of batch fermentation of P. acidilactici in 2 L stirred tank bioreactor a without pH control, b with pH control at pH 5.7. The fermentation was conducted at 300 rpm. The error bars represent the standard deviations about the mean (n = 3) Table 2 Effect of culture pH on growth of P. acidilactici in batch fermentation using 2 L stirred tank bioreactor Effect of aeration on growth of P. acidilactici in batch cultivation using 2 L stirred tank bioreactor For facultative microorganisms such as P. acidilactici, facultative condition was shown to improve the cultivation performance of P. acidilactici compared to anaerobic condition (Fig. 3). As shown in Table 3, maximum viable cell concentration, viable cell yield and viable cell productivity obtained were slightly higher in facultative condition (1.7 × 1012 cfu/mL, 1.9 × 1014 cfu/gGlucose and 1.4 × 1011 cfu/mL h, respectively) compared to anaerobic condition (1.2 × 1011 cfu/mL, 1.3 × 1013 cfu/gGlucose and 9.9 × 109 cfu/mL h, respectively). Maximum viable cell concentration, viable cell yield and productivity obtained were significantly different (P < 0.05) at both facultative and anaerobic conditions. Similar to the results of maximum viable cell concentration, yield and productivity, the same trend was obtained for lactic acid production, yield and productivity, where the results were slightly higher in facultative condition (12.98 g/L, 0.91 g/gGlucose and 0.53 g/L h respectively) compared to anaerobic condition (11.32 g/L, 0.86 g/gGlucose and 0.46 g/L h respectively). There were significant differences (P < 0.05) in lactic acid production, yield and productivity obtained at both facultative and anaerobic fermentations. The time course of batch fermentation of P. acidilactici in 2 L stirred tank bioreactor at condition of a facultative, b anaerobic. The fermentation was conducted at 300 rpm. The error bars represent the standard deviations about the mean (n = 3) Table 3 Effect of aeration on growth of P. acidilactici in batch fermentation using 2 L stirred tank bioreactor Effect of agitation speed on growth of P. acidilactici in batch cultivation using 2 L stirred tank bioreactor The growth profiles for batch fermentation of P. acidilactici in 2 L stirred tank bioreactor at different agitation speeds are shown in Fig. 4. Increased in agitation speed from 200 to 300 rpm showed improvement in maximum viable cell concentration, viable cell yield, viable cell productivity, lactic acid production, lactic acid yield and lactic acid productivity (Table 4). 
However, as the agitation speed was increased from 300 to 400 rpm, the cultivation performance of P. acidilactici was declined. Results from this study demonstrated that agitation speed had an influence on growth of P. acidilactici. Of all the three agitation speeds studied, 300 rpm showed the highest maximum viable cell concentration (1.8 × 1012 cfu/mL) with improvement of 1.5 times and 1.8 times compared to 200 (1.2 × 1012 cfu/mL) and 400 rpm (1.0 × 1012 cfu/mL), respectively. This shows that agitation speed of 300 rpm provides suitable degree of mixing for growth of P. acidilactici compared to other agitation speeds. Glucose consumption by P. acidilactici was found to be increased from 200 (68.2%) to 300 rpm (71%) and then reduced at 400 rpm (63%). The highest viable cell yield was obtained at 300 rpm (2.0 × 1014 cfu/gGlucose), followed by 200 (1.4 × 1014 cfu/gGlucose) and 400 rpm (1.2 × 1014 cfu/gGlucose). This trend was similar to the viable cell productivity where 300 rpm showed the highest viable cell productivity (1.5 × 1011 cfu/mL h) compared to 200 (9.9 × 1010 cfu/mL h) and 400 rpm (8.3 × 1010 cfu/mL h). Maximum viable cell concentrations, viable cell yield and productivity obtained were significantly different (P < 0.05) at all of the three agitation speeds studied. The highest total lactic acid produced from the fermentation of P. acidilactici was found at 300 rpm (13.21 g/L) followed by 200 (12.71 g/L) and 400 rpm (11.77 g/L), showing significant difference (P < 0.05) in lactic acid production for all of the agitation speeds studied. Nonetheless, lactic acid yield obtained were almost similar at all of the three agitation speeds, showing no significant difference (P < 0.05) for all of the cultures. As for the lactic acid productivity, cultures at 200 and 300 rpm showed almost similar results with no significant difference (P < 0.05), whilst culture at 400 rpm attained the lowest lactic acid productivity. The time course of batch fermentation of P. acidilactici in 2 L stirred tank bioreactor at agitation speed of a 200 rpm, b 300 rpm and c 400 rpm. The error bars represent the standard deviations about the mean (n = 3) Table 4 Effect of agitation speed on growth of P. acidilactici in batch fermentation using 2 L stirred tank bioreactor Effect of feed rate in constant fed-batch fermentation on growth of P. acidilactici using 2 L stirred tank bioreactor Figure 5 depicted the time course of constant fed-batch fermentation of P. acidilactici in 2 L stirred tank bioreactor with different feeding rate of limiting substrate. As tabulated in Table 5, increased feeding rate from 0.008 to 0.015 L/h, showed 5.8 times improvement in maximum viable cell concentration from 1.9 × 1012 to 1.1 × 1013 cfu/mL. However, as the feeding rate was further increased from 0.015 to 0.03 L/h, maximum viable cell concentration obtained was reduced 6.9 times equivalent to 1.6 × 1012 cfu/mL. Maximum viable cell concentration obtained was significantly different (P < 0.05) at all of the three feeding rate. The highest viable cell yield was obtained at 0.015 L/h (1.2 × 1015 cfu/gGlucose), followed by 0.008 (2.1 × 1014 cfu/gGlucose) and 0.03 L/h (1.8 × 1014 cfu/gGlucose). Whilst, the highest viable cell productivity was obtained at 0.015 L/h (7.9 × 1011 cfu/mL h) followed by 0.03 (1.1 × 1011 cfu/mL h) and 0.008 L/h (1.1 × 1010 cfu/mL h). Viable cell yield and viable cell productivity obtained were significantly different (P < 0.05) at all of the three feeding rate studied. 
As for the lactic acid production, the highest lactic acid produced from constant fed-batch fermentation of P. acidilactici was found at feeding rate 0.008 L/h (9.81 g/L) followed by 0.03 (9.25 g/L) and 0.015 L/h (8.12 g/L). Nevertheless, lactic acid yield at feeding rate of 0.008 and 0.03 L/h were almost similar (1.09 and 1.05 g/gGlucose respectively) with no significant difference (P < 0.05), whilst lactic acid yield at feeding rate of 0.015 L/h was slightly lower (0.89 g/gGlucose) showing significant difference (P < 0.05) for lactic acid yield at feeding rate of 0.015 L/h. As for the lactic acid productivity, culture with feeding rate of 0.008 and 0.015 L/h showed almost similar results (0.26 and 0.28 g/L h) with no significant difference (P < 0.05) at both feeding rate, whilst culture at 0.03 L/h showed the highest lactic acid productivity. Improvement of 6.1 times in maximum viable cell concentration and reduction of 1.6 times in lactic acid production were achieved in constant fed-batch fermentation as compared to batch fermentation. There were significant differences (P < 0.05) in maximum viable cell concentration, viable cell yield, viable cell productivity, lactic acid production, yield and productivity obtained for both batch and fed-batch fermentations. The time course of constant fed-batch fermentation of P. acidilactici in 2 L stirred tank bioreactor at feeding rate of a 0.008 L/h, b 0.015 L/h and c 0.03 L/h. The error bars represent the standard deviations about the mean (n = 3) Table 5 Effect of feeding rate on growth of P. acidilactici in constant fed-batch fermentation using 2 L stirred tank bioreactor Table 6 shows the sorption study of lactic acid, acetic acid, sodium acetate and glucose on IRA 67 resin when these components were present separately and in a pure form. From the sorption study and the amount of components adsorbed as calculated from Eq. 1, it was found that the adsorption of lactic acid on Amberlite IRA 67 resin was significantly different (P < 0.05) from the adsorption of acetic acid, sodium acetate and glucose. It was also found that the highest adsorbed component by IRA 67 resin was lactic acid, followed by acetic acid, sodium acetate and glucose. Table 6 Selectivity of Amberlite IRA 67 resin (10 g/L) towards lactic acid, acetic acid, glucose and sodium acetate Figure 6 shows the results for fed-batch fermentation coupled with extractive fermentation using IRA 67 anion-exchange resin that was conducted at the best fermentation conditions as observed from the previous sections (glucose concentration of not more than 10 g/L, fermentation without pH control, fermentation with facultative condition, agitation speed of 300 rpm and feeding rate of 0.015 L/h). As tabulated in Table 7, the growth of P. acidilactici with resin addition for fed-batch fermentation was found to be improved by 9.1 times (1.0 × 1014 cfu/mL) compared to the conventional fed-batch fermentation without resin addition (1.1 × 1013 cfu/mL). However, the time taken for the culture to reach maximum viable cell concentration was 6 h longer in the fed-batch fermentation with resin compared to without resin. Viable cell yield and viable cell productivity obtained in the culture with resin were 8.5 times and 8.6 times higher, respectively, than that obtained in the culture without resin. There were significant differences (P < 0.05) in maximum viable cell concentration, yield and productivity for both fed-batch fermentations with and without resin addition. 
Lactic acid accumulated in the culture was lower in the fed-batch fermentation with resin addition (8.78 g/L) compared to fed-batch fermentation without resin addition (9.62 g/L). However, the total lactic acid produced was higher in fed-batch fermentation with resin (12.24 g/L) compared to fed-batch fermentation without resin (9.62 g/L). On the other hand, lactic acid yield for fed-batch fermentation with resin was lower (0.72 g/gGlucose) than fed-batch fermentation without resin (0.84 g/gGlucose) and lactic acid productivity was higher in the culture with resin (0.43 g/L h) compared to the culture without resin (0.33 g/L h). There were significant differences (P < 0.05) in lactic acid production, yield and productivity obtained for both fed-batch fermentations with and without resin addition. The time course of constant fed-batch fermentation of P. acidilactici in 2 L stirred tank bioreactor with in situ addition of 10 g/L IRA 67 resin. The error bar represents the standard deviation about the mean (n = 3) Table 7 Effect of resin addition on growth of P. acidilactici in fed-batch fermentation using 2 L stirred tank bioreactor Reduced viable cell concentration with increased glucose concentration was due to high lactic acid accumulation in the culture since P. acidilactici undergo homofermentation. Generally, lactic acid conversion rate from sugar surpass 80% of the theoretical yield for homofermentative bacteria (Liu et al. 2013) and fermentation in lactobacilli tends to redirect its carbon flux from cell built-up to lactic acid production (Ming et al. 2016). The results from the present study (Table 1) are similar to the findings of Monteagudo et al. (1997), who reported that there was an inhibition on bacterial growth by lactic acid when the lactic acid was rapidly being produced after the exponential phase of the growth. Reduced in final viable cell concentration of P. acidilactici was associated with high lactic acid concentration in the culture. This was due to the acidification of cytoplasm and failure of proton motive forces (Wee et al. 2006). The acidification of cytoplasm is causes by the undissociated lactic acid that passes through the bacterial membrane and dissociates inside the cell. The undissociated lactic acid is soluble within the cytoplasmic membrane whilst the dissociated lactate is insoluble. Eventually, this affects the transmembrane pH gradient and reduces the amount of energy that may be used for cell growth. Since lactic acid produced from fermentation causes acidification of medium and cytoplasm which inhibits the bacterial growth, therefore lactic acid produced in the culture must be either neutralized or removed as it is formed in order to maintain the pH within the optimal range (pH 5–7) (Nomura et al. 1987; Roberto et al. 2007). The maximum viable cell concentration of P. acidilactici reduced when NaOH was added because the additional ions from NaOH increased the osmotic pressure of the medium, causing reduced cell growth (Cui et al. 2016). In addition, the accumulated lactic acid was not completely neutralized by NaOH because weak organic acids such as lactic acid and acetic acid are less dissociated in solution at any pH values compared to strong acids (Lund et al. 2014). Therefore, only part of the lactic acid was dissociated into lactate ions and the undissociated lactic acid which is membrane soluble will enter the cytoplasm via simple diffusion and dissociate inside the cell causing acidification of cytoplasm (Wee et al. 2006). 
In addition, the dissociated lactate ions can combine with external protons present and enter the cytoplasm in undissociated lactic acid form (Lund et al. 2014). The diffusion of lactic acid into the cytoplasm causes lactic acid to be rapidly dissociated releasing protons and anions within the cytoplasm (Broadbent et al. 2010). If the acidification of cytoplasm exceeds the buffering capacity of cytoplasmic and capabilities of efflux systems, the internal pH of the cell will drop and eventually causing failure in maintaining pH gradient and thus damage the cellular functions. Facultative condition was shown to improve the cultivation performance of P. acidilactici compared to anaerobic condition, and the result is in agreement with the observation reported by Smetankova et al. (2012) where improved growth in the presence of oxygen was observed in cultivation of three wild strains of Lactobacillus plantarum compared to anaerobic condition. This was due to the lactate dehydrogenase (LDH) being used in conversion of pyruvate to lactate under anaerobic condition, reducing regeneration of coenzyme nicotinamide adenine dinucleotide (NAD+) by LDH, where NAD+ is needed in metabolism of sugar. Whereas, under facultative or aerobic conditions, instead of LDH, nicotinamide adenine dinucleotide (NADH) oxidases and NADH peroxidases are being used for regeneration of NAD+. This causing the microorganism to redirect the flux, producing more adenosine triphosphate (ATP) and thus, providing more energy for growth (Condon 1987). Oxygen is an important factor for survival and mortality of aerobic microorganism but not to the facultative anaerobe (Duwat et al. 2001). A few studies have revealed that under fermentation condition, oxygen was shown to contribute toxic effects on Lactococcus lactis by inhibiting its growth and survival (Duwat et al. 1995; Condon 1987). Prolonged aeration of lactococcal cultures can cause DNA alteration and cell death. Formation of hydroxyl radicals and hydrogen peroxide may be the cause of the oxygen toxicity (Anders et al. 1970). Therefore, in this study, oxygen was not supply throughout the fermentation, instead, oxygen was only supply at the beginning of the fermentation before inoculation until dissolved oxygen level reached 100% and then the aeration was stopped to create facultative condition for the fermentation to progress facultatively. Furthermore, P. acidilactici is categorized as facultative anaerobe (Papagianni and Anastasiadou 2009), hence, the influence of DOT or oxygen transfer rate contributed by agitation speed on growth of P. acidilactici may not be crucial. As illustrated in Fig. 5b, it can be observed that glucose concentration was maintained at a very low level (< 1.0 g/L) during the fed-batch fermentation with constant feeding of glucose at 0.015 L/h feeding rate. It can be concluded that the glucose consumption rate and the propagation of cells have achieved a quasi-steady state due to cell growth rate (dX/dt) = substrate consumption rate (dS/dt) = 0 for 6 h. This situation happened when substrate added was utilized by the cells as it was added into the culture. When the glucose addition was stopped, the cells functioned as resting cells. However, lactic acid production was still continued until the culture reached death phase at approximately 22 h of fermentation. 
Although glucose concentration in the culture at feeding rate of 0.008 L/h was similar to the culture at 0.015 L/h, however, maximum viable cells concentration obtained at constant feeding rate of 0.008 L/h was significantly (P < 0.05) lower (Fig. 5a). This may be due to the slow feeding of glucose which restricted the growth or growth inhibition by lactic acid accumulated in the culture. These assumptions are supported by the declining in viable cell concentration as observed at approximately 20 h of fermentation although glucose was still being added into the culture. When high feeding rate (0.03 L/h) was used, maximum viable cell concentration obtained was the lowest compared to other two feeding rate studied. It is tempting to speculate that fast addition of glucose at 0.03 L/h feeding rate has contributed to cells dilution and thus reduced maximum viable cell concentration in the culture. This assumption is supported by the high glucose accumulation recorded as soon as the feeding rate was started and declined viable cell concentration observed at approximately 16 h of fermentation. This situation, in a way, indicated that glucose consumption by P. acidilactici was affected when glucose feeding rate exceeded both substrate consumption rate and cell growth rate, which in turn, inhibited the growth. Based on the results obtained, application of fed-batch fermentation reduced the product inhibition caused by undissociated lactic acid accumulation in the culture. The improvement of cultivation performance of P. acidilactici in the constant fed-batch fermentation was achieved due to the suitable environment conditions created by this method, such as glucose concentration of below inhibitory level and low lactic acid production due to glucose metabolic flux towards cell growth instead of lactic acid production. Fed-batch fermentation has the advantage of reducing extended lag phase of low cell density in batch fermentation (Aguirre-Ezkauriatza et al. 2010). In addition, substrate exhaustion can also be assured in fed-batch fermentation. Often, the achievement of high biomass concentrations in fed-batch fermentation managed to partially overcome the strong inhibition effect of lactic acid. Results from this study showed that the application of fed-batch fermentation in cultivation of P. acidilactici has significantly reduced the accumulation of lactic acid in the culture with significant increased in the viable cell number. This finding is in agreement with the study conducted by Ming et al. (2016), who found that glucose metabolic flux towards cell growth was increased with a reduction in lactic acid production through fed-batch fermentation of Lactobacillus salivarius I 24. In past studies conducted, improvement of productivity and final yield of targeted product with shorter fermentation time in different fermentation media has been successfully achieved through the application of fed-batch fermentation (Lee et al. 2007). Approximately two times higher final cell concentration and beyond 100% improvement in cell growth rate can be obtained by the application of fed-batch fermentation compared to batch fermentation. Besides, the application of fed-batch fermentation has also been widely applied in improving growth and biomass production of LAB such as in the production of Lactobacillus plantarum LP02 biomass isolated from infant feces with potential cholesterol lowering ability by Hwang et al. (2011) and improvement of cell mass production of Lactobacillus delbrueckii sp. 
bulgaricus WICC-B-02, a newly isolated probiotic strain from mother's milk by Elsayed et al. (2014). Amberlite IRA 67 anion-exchange resin was selected as the candidate to explore fed-batch couple with extractive fermentation of lactic acid is mainly due to the capability of this resin to effectively adsorb lactic acid in lactic acid fermentation (Garret et al. 2015). Amberlite IRA 67 is based on a matrix of cross linked acrylic gel which is more hydrophilic than styrenic resins and their selectivity to most of organic acids is higher due to their matrix (Arup 1995). Although the presence of other components (acetic acid, sodium acetate and glucose) did interfere with the adsorption capacity of IRA 67 resin towards lactic acid, however the affinity of IRA 67 resin was higher towards lactic acid than the other components studied. Fed-batch mode was found to be superior to the batch mode of fermentation for the growth of P. acidilactici. Hence, the fed-batch fermentation of P. acidilactici was further extended with the application of in situ lactic acid removal system. Lactic acid accumulated in the culture was lower in the fed-batch fermentation with resin addition compared to fed-batch fermentation without resin addition due to the adsorption of lactic acid by the resin. The results of this study clearly shown that the in situ addition of anion-exchange resin in the fed-batch fermentation significantly helped to enhance the growth of P. acidilactici by reducing the inhibitory effect of lactic acid. ATP: CFU: colony forming unit DOT: dissolved oxygen tension HCI: MRS: De Man Rogosa and Sharpe NAD+ : nicotinamide adenine dinucleotide NADH: NaOH: RP-HPLC: reverse-phase high performance liquid chromatography rotation per minute v/v: volume per volume vvm: volumetric air flow rate w/v: weight/volume Abdel-Rahman MA, Tashiro Y, Sonomoto K (2013) Recent advances in lactic acid production by microbial fermentation processes. Biotechnol Adv 31:877–902 Aguirre-Ezkauriatza EJ, Aguilar-Yáñez JM, Ramírez-Medrano A, Alvarez MM (2010) Production of probiotic biomass (Lactobacillus casei) in goat milk whey: comparison of batch, continuous and fed-batch cultures. Bioresour Technol 101:2837–2844 Anders RF, Hogg DM, Jago GR (1970) Formation of hydrogen peroxide by group N streptococci and its effect on their growth and metabolism. Appl Microbiol 19:608–612 Arup KS (1995) Sorption and desorption behavior of natural organic matter, ion exchange technology: advances in pollution control. Technomic Publishing Company, Lancaster, pp 149–189 Bai M, Wei Q, Yan ZH, Zhao XM, Li XG, Xu SM (2003) Fed-batch fermentation of Lactobacillus lactis for hyper-production of l-lactic acid. Biotechnol Lett 25:1833–1835 Boon BL, Heng JT, Eng SC (2007) Fed-batch fermentation of lactic acid bacteria to improve biomass production: a theoretical approach. J Appl Sci 7:2211–2215 Boonmee M, Cotano O, Amnuaypanich S, Grisadanurak N (2016) Improved lactic acid production by in situ removal of lactic acid during fermentation and a proposed scheme for its recovery. Arab J Sci Eng 41:2067–2075 Broadbent JR, Larsen RL, Deibel V, Steele JL (2010) Physiological and transcriptional response of Lactobacillus casei ATCC 334 to acid stress. J Bacteriol 192:2445–2458 Condon S (1987) Responses of lactic acid bacteria to oxygen. FEMS Microbiol Rev 46:269–280 Cui S, Zhao J, Zhang H, Chen W (2016) High-density culture of Lactobacillus plantarum coupled with a lactic acid removal system with anion-exchange resins. 
Biochem Eng J 115:80–84 Duwat P, Ehrlich SD, Gruss A (1995) The recA gene of Lactococcus lactis: characterization and involvement in oxidative and thermal stress. Mol Microbiol 17:1121–1131 Duwat P, Sourice S, Cesselin B, Lambert G, Vido K, Gaudu P, Leloir Y, Violet F, Loubiere P, Gruss A (2001) Respiration capacity of the fermenting bacterium Lactococcus lactis and its positive effects on growth and survival. J Bacteriol 183:4509–4516 Elsayed EA, Othman NZ, Malek R, Tang T, Enshasy HE (2014) Improvement of cell mass production of Lactobacillus delbrueckii sp. bulgaricus WICC-B-02: a newly isolated probiotic strain from mother's milk. J Appl Pharm Sci 4:8–14 Gao Q, Liu F, Zhang T, Zhang J, Jia S, Yu C, Jiang K, Gao N (2010) The role of lactic acid adsorption by ion exchange chromatography. PLoS ONE 5(11):1–8 Garret BG, Srivinas K, Ahring BK (2015) Performance and stability of Amberlite™ IRA-67 ion exchange resin for product extraction and pH control during homolactic fermentation of corn stover sugars. Biochem Eng J 94:1–8 Halim M, Mustafa NAM, Othman M, Wasoh H, Kapri MR, Ariff AB (2017) Effect of encapsulant and cryoprotectant on the viability of probiotic Pediococcus acidilactici ATCC 8042 during freeze-drying and exposure to high acidity, bile salts and heat. LWT Food Sc Technol 81:210–216 Hayek SA, Ibrahim SA (2013) Current limitations and challenges with lactic acid bacteria: a review. Food Nutr Sci 4:73–87 Hofvendahl K, Hahn-Hägerdal B (2000) Factors affecting the fermentative lactic acid production from renewable resources. Enzyme Microb Technol 26:87–107 Hujanen M, Linko S, Linko YY, Leisola M (2001) Optimisation of media and cultivation conditions for l(+)(S)-lactic acid production by Lactobacillus casei NRRL B-441. Appl Microbiol Biotechnol 56:126–130 Hwang CF, Chen JN, Huang YT, Mao ZY (2011) Biomass production of Lactobacillus plantarum LP02 isolated from infant feces with potential cholesterol lowering ability. Afr J Biotechnol 10:7010–7020 Jianlong W, Ping L, Ding Z (1994) Extractive fermentation of lactic acid by immobilized, Lactobacillus casei using ion-exchange resin. Biotechnol Tech 8:905–908 Lee BB, Tham HJ, Chan ES (2007) Fed-batch fermentation of lactic acid bacteria to improve biomass production: a theoretical approach. J Appl Sci 7:2011–2215 Liu J, Wang Q, Zou H, Liu Y, Wang J, Gan K, Xiang J (2013) Glucose metabolic flux distribution of Lactobacillus amylophilus during lactic acid production using kitchen waste saccharified solution. Microb Biotechnol 6:685–693 Lund P, Tramonti A, Biase DD (2014) Coping with low pH: molecular strategies in neutralophilic bacteria. FEMS Microbiol Rev 38:1091–1125 Ming LC, Halim M, Rahim RA, Wan HY, Ariff AB (2016) Strategies in fed-batch cultivation on the production performance of Lactobacillus salivarius I 24 viable cells. Food Sci Biotechnol 25:1393–1398 Monteagudo JM, Rodriguez L, Rincon J, Fuertes J (1997) Kinetics of lactic acid fermentation by Lactobacillus delbrueckii grown on beet molasses. J Chem Technol Biotechnol 68:271–276 Nomura Y, Iwahara M, Hongo M (1987) Lactic acid production by electrodialysis fermentation using immobilized growing cells. Biotechnol Bioeng 30:788–793 Papagianni M, Anastasiadou S (2009) Pediocins: the bacteriocins of Pediococci sources, production, properties and applications. 
Microb Cell Fact 8:1–16 Roberto I, Mussatto S, Mancilha I, Fernandes M (2007) The effects of pH and nutrient supplementation of brewer's spent grain cellulosic hydrolysate for lactic acid production by Lactobacillus delbrueckii. J Biotechnol 131:181–182 Smetankova J, Hladikova Z, Valach F, Zimanova M, Kohajdova Z, Greif G, Greifova M (2012) Influence of aerobic and anaerobic conditions on the growth and metabolism of selected strains of Lactobacillus plantarum. Acta Chimica Slovaca 5:204–210 Wee YJ, Kim JN, Ryu HW (2006) Biotechnological production of lactic acid and its recent applications. Food Technol Biotechnol 44:163–172 MO conducted the study, collected the data, analysed and interpreted the data, and wrote the manuscript. MH designed the study, analysed and interpreted the data, and revised the manuscript. ABA, HW and MRK guided the study design. All authors read and approved the final manuscript. The authors are grateful and thank Universiti Putra Malaysia for the research facilities provided and the Ministry of Higher Education Malaysia for the Fundamental Research Grant Scheme (5524586) (FRGS/2/2014/SG05/UPM/02/7) funding throughout this research work. All the data are presented in the main paper. This study was funded by the Ministry of Higher Education Malaysia for the Fundamental Research Grant Scheme (5524586) (FRGS/2/2014/SG05/UPM/02/7). The funding body has no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. Department of Bioprocess Technology, Faculty of Biotechnology and Biomolecular Sciences, Universiti Putra Malaysia, 43400, Serdang, Selangor, Malaysia: Majdiah Othman, Arbakariya B. Ariff, Helmi Wasoh & Murni Halim. Bioprocessing and Biomanufacturing Research Center, Faculty of Biotechnology and Biomolecular Sciences, Universiti Putra Malaysia, 43400, Serdang, Selangor, Malaysia: Arbakariya B. Ariff & Mohd Rizal Kapri. Correspondence to Murni Halim. Othman, M., Ariff, A.B., Wasoh, H. et al. Strategies for improving production performance of probiotic Pediococcus acidilactici viable cell by overcoming lactic acid inhibition. AMB Expr 7, 215 (2017) doi:10.1186/s13568-017-0519-6 Extractive fermentation Fed-batch fermentation End-product inhibition Anion exchange resin
Adaptive couple-resolution blocking protocol for repeated tag identification in RFID systems Sunwoong Choi & Hae-il Choi RFID applications such as monitoring an object for a long time need to identify tags repeatedly within the scope of the reader. The re-identification process can be improved by using the information obtained from the previous tag identification process. The couple-resolution blocking (CRB) protocol utilizes a blocking technique that prevents staying tags from colliding with newly arriving tags. Staying tags can be efficiently re-identified by utilizing the retained information. After all staying tags are separately identified, arriving tags are identified. In this paper, we argue that CRB may work more poorly than protocols which do not consider repeated tag identification, such as the query tree (QT) and collision tree (CT) protocols, when only a few tags stay. To tackle this problem, we propose an adaptive CRB (ACRB) protocol. In ACRB, the reader estimates the tag staying ratio during the re-identification process for staying tags. If the estimated ratio is lower than a certain threshold, the blocking technique is immediately abandoned. Instead, staying tags and arriving tags are identified together without considering the retained information. In addition, we propose to improve CRB further by using the CT protocol instead of the QT protocol. Through computer simulation, we show that ACRB improves the identification efficiency of CRB, especially when the tag staying ratio is low. A radio frequency identification (RFID) system consists of a reader and multiple tags. The reader broadcasts query messages and identifies the tags based on the reply messages from the tags. Since the tags typically reply over a shared wireless medium and multiple tags can reply simultaneously to the reader, tag collisions may occur at the reader. To resolve this collision problem and to successfully identify all the tags in RFID systems, many tag anti-collision protocols have been proposed. Generally, anti-collision protocols are categorized into two classes: ALOHA-based and tree-based protocols. In ALOHA-based protocols such as dynamic framed-slotted ALOHA (DFSA) [1] and enhanced DFSA (EDFSA) [2], each tag defers for some random time before replying. On the other hand, tree-based protocols continuously split the set of tags into two subsets each time a collision occurs. For the splitting, the binary tree (BT) protocol [3] uses a random number while the query tree (QT) protocol [4] uses tag IDs. The collision tree (CT) protocol [5] enhances QT by using the Manchester code, which is used to detect the collided bit [1]; a minimal sketch of the QT splitting idea is given below. In many RFID applications, the RFID reader may repeatedly identify staying tags, which stay in the reader's communication range, for object tracking, locating, and monitoring. For that purpose, many protocols have been proposed. The basic idea of all those protocols is to retain the information obtained from the previous tag identification process, called the last frame. Myung et al. [6–8] proposed two protocols, the adaptive query splitting protocol (AQS) and the adaptive binary splitting protocol (ABS). Using the retained information, AQS and ABS can avoid collisions among staying tags. Lai et al. [9, 10] proposed a blocking technique which prevents staying tags from colliding with newly arriving tags in the current frame. Using the blocking technique, the single resolution blocking ABS (SRB) and the pair resolution blocking ABS (PRB) were proposed based on ABS.
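Before turning to the blocking protocols that build on these schemes, the following is a minimal, illustrative sketch of QT-style identification over binary-string tag IDs. It is not the authors' implementation: the channel model is idealized (the reader only learns idle/readable/collision per slot) and the initial one-bit prefixes are an assumption.

```python
def query_tree_identify(tag_ids):
    """Identify all tags by repeatedly splitting colliding prefixes (QT idea).

    Returns (identified_ids, slot_count); one slot = one prefix query plus
    the responses of every tag whose ID starts with that prefix.
    """
    identified, slots = [], 0
    pending = ["0", "1"]                      # assumed initial prefixes
    while pending:
        prefix = pending.pop()
        slots += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:              # readable slot: tag identified
            identified.append(responders[0])
        elif len(responders) > 1:             # collision: extend prefix by one bit
            pending += [prefix + "0", prefix + "1"]
        # zero responders: idle slot, nothing to do
    return identified, slots

# Example with four 4-bit IDs; the slot count depends on the ID distribution,
# but for random IDs it is roughly 2.9*N - 1 on average [4].
print(query_tree_identify(["0000", "0010", "1001", "1100"]))
```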
Another blocking protocol, couple-resolution blocking (CRB) [11], was proposed based on AQS, which imposes fewer system requirements than ABS. Because collisions between staying tags and arriving tags are prevented, the couple-resolution technique becomes possible. Tags are coupled by a query that includes two tags' ID prefixes. If both tags stay, they simultaneously transmit their IDs and those responses collide. The reader, however, can recognize the two tags from the collision since none of the arriving tags is involved in it. Recently, the hybrid blocking algorithm (HBA) [12] was proposed to alleviate the overlapping staying tag problem between multiple neighboring readers in dense RFID systems. In this paper, we notice that CRB may work poorly when most tags have left and only a few tags stay. The time for re-identifying the staying tags in CRB depends on the number of tags identified in the last frame, not on the actual number of tags in the current frame. Therefore, all queries transmitted for identifying staying tags in the first phase can be a waste when there are no staying tags. Our objective is to improve the performance of CRB when the tag staying ratio is low. The remainder of the paper is organized as follows. In Section 2, we provide the problem statement and review the CRB protocol. Section 3 provides the motivation for the paper. We propose the adaptive CRB (ACRB) protocol in Section 4. In Section 5, the results obtained from computer simulations are given. Then, we conclude with a discussion of our results. In many RFID applications, the reader may repeatedly identify tags in its communication range. In that case, tags can be identified more efficiently using the information obtained from the last frame. Let the set S_L be the RFID tags identified in the last frame. Let the set S_C be the RFID tags which should be identified in the current frame. Our problem is to construct the set S_C with the knowledge of the set S_L. We define two important types of tags: staying tags and arriving tags. A tag a_s is called a staying tag in the current frame if a_s ∈ S_C ∩ S_L. A tag a_a is called an arriving tag in the current frame if a_a ∈ S_C − S_L. Let N represent the number of tags identified in the last frame; N = n(S_L), where n(S) means the number of elements of the set S. Let N_s and N_a represent the number of staying tags and the number of arriving tags, respectively; N_s = n(S_C ∩ S_L) and N_a = n(S_C − S_L). Let the tag staying ratio, R_s, and the tag arriving ratio, R_a, be the ratio of the number of staying tags and the ratio of the number of arriving tags, respectively, to the number of tags identified in the last frame; R_s = N_s/N and R_a = N_a/N. Our objective is to minimize the time for constructing the set S_C, measured in slots. A slot is the duration in which the reader transmits a query to the tags and the queried tags respond to the reader. For simplicity, we assume that all slots have the same duration (a small illustrative sketch of this bookkeeping is given below). CRB protocol Our proposed protocol, ACRB, is based on the CRB protocol [11]. Thus, we first briefly introduce the CRB protocol in this section. The CRB protocol adopts the blocking technique, which prevents staying tags from colliding with arriving tags. For that purpose, each identification frame is divided into two phases. In the first phase, only staying tags are involved. After all staying tags are identified, arriving tags are identified in the second phase.
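Before stepping through CRB's two phases in detail, the following minimal sketch makes the bookkeeping of the problem statement above concrete. The set and ratio names mirror the notation used in the paper; the code itself is only an illustration, not part of the protocol.

```python
def frame_statistics(last_frame_ids, current_frame_ids):
    """Compute N, N_s, N_a, R_s and R_a from the tag sets of two frames."""
    S_L, S_C = set(last_frame_ids), set(current_frame_ids)
    N = len(S_L)                    # tags identified in the last frame
    N_s = len(S_C & S_L)            # staying tags
    N_a = len(S_C - S_L)            # arriving tags
    return {"N": N, "N_s": N_s, "N_a": N_a,
            "R_s": N_s / N if N else 0.0,
            "R_a": N_a / N if N else 0.0}

# The frame pair used in the Table 1 example below: tag 1100 leaves,
# tags 0101 and 1111 arrive.
print(frame_statistics(["0000", "0010", "1001", "1100"],
                       ["0000", "0010", "0101", "1001", "1111"]))
# -> N = 4, N_s = 3, N_a = 2, R_s = 0.75, R_a = 0.5
```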
Note that each tag is itself able to determine whether it is a staying tag or an arriving tag using the reader's ID and the frame number. Staying tags are involved only in the first phase, not in the second phase. By preventing collisions between staying tags and arriving tags, the couple-resolution technique becomes possible in the first phase. In the first phase, CRB checks whether the tags identified in the last frame still stay. Figure 1 shows the flow diagram of the CRB protocol. The CRB reader has a queue, Q, which is constructed by the QueryConstruction function at the beginning of the first phase. The QueryConstruction function finds the readable queries for this frame using all recognized tags' IDs stored in the last frame. A query in the first phase includes two tags' ID prefixes. If both of the queried tags stay, they simultaneously transmit their IDs and those responses collide. The key idea of the CRB protocol is that the reader can nevertheless recognize the two tags from the collision, since none of the arriving (unknown) tags is involved in this collision thanks to the blocking technique. Procedure of the CRB According to the received response, the CRB reader can obtain information as follows (a minimal code sketch of this decoding is given below, after the Table 1 example): No tag response: both of the queried tags have left. One tag response: one tag stays and the other has left. The responding tag is identified. Collision: both of the queried tags stay. The two queried tags are identified. After all staying tags are identified, arriving tags are identified in the second phase. In the second phase, CRB operates as QT [4] except that it prepares initial prefixes in advance using the QueryInsertion function. The CRB reader estimates the number of arriving tags and then generates a complete binary tree which has the same number of leaf nodes. The prefixes of the leaf nodes are used to identify arriving tags in the second phase. Table 1 shows an example of the identification process using the CRB protocol. Suppose four tags whose IDs are 0000, 0010, 1001, and 1100 have been identified in the ith frame, f_i. We are interested in the identification process in the next frame, f_{i+1}. Table 1 shows the identification process in f_{i+1}, where tag 1100 has left and two tags, 0101 and 1111, have newly arrived. Hence, S_L = {0000, 0010, 1001, 1100} and S_C = {0000, 0010, 0101, 1001, 1111}. Table 1 An example of CRB: the procedure in f_{i+1} In the first phase, the reader checks whether the tags in S_L stay. In the first slot, the query includes two ID prefixes, 000 and 001, to identify a pair of tags. Since both tags stay, the responses from the tags collide, and the reader thus concludes that the two tags 0000 and 0010 stay. The second query includes two ID prefixes, 10 and 11. Since tag 1100 has left, the reader receives only a response from tag 1001. Once all tags in S_L are checked, the reader transits into the second phase. In the second phase, QT is employed and begins with the prepared prefixes. We assume that the number of arriving tags is correctly estimated as 2, and thus the reader prepares two prefixes, 0 and 1, in advance. The arriving tags 0101 and 1111 are identified in the third and fourth slots, respectively. Finally, the reader correctly detects all tags in S_C. We argue that CRB may work more poorly than QT. CRB checks whether every tag identified in the last frame still stays in the current frame. The checking process for staying tags is efficient because two tags are identified with just one query.
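The following is the promised sketch of how one first-phase couple query can be decoded. It assumes the blocking technique is in force, i.e. only staying tags respond in the first phase; the function and variable names are ours, not the authors'.

```python
def resolve_couple_query(prefix_a, prefix_b, responding_tags):
    """Decode one first-phase CRB slot for a couple query.

    responding_tags must contain only staying tags (arriving tags keep
    silent in the first phase because of the blocking technique), so even
    a collision identifies both couple members.
    """
    responders = [t for t in responding_tags
                  if t.startswith(prefix_a) or t.startswith(prefix_b)]
    if not responders:
        return [], "no response: both queried tags have left"
    if len(responders) == 1:
        return responders, "one response: one tag stays, the other has left"
    return responders, "collision: both queried tags stay and are identified"

# Frame f_{i+1} of Table 1: the staying tags are 0000, 0010 and 1001, while
# the arriving tags 0101 and 1111 are blocked and do not respond here.
staying = ["0000", "0010", "1001"]
print(resolve_couple_query("000", "001", staying))   # collision -> 0000 and 0010
print(resolve_couple_query("10", "11", staying))     # single response -> 1001
```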
However, note that the length of the checking process for staying tags depends on the number of tags identified in the last frame, not on the actual number of tags in the current frame. This means that all queries transmitted for identifying staying tags in the first phase become a waste when there are no staying tags. The number of slots in the first phase is always ⌊(N + 1)/2⌋ because one query contains two tags' ID prefixes, where N represents the number of tags identified in the last frame. Thus, the total number of slots required to identify all tags in S_C with the CRB protocol is as follows:
$$ T_{\mathrm{CRB}}(N, N_{\mathrm{s}}, N_{\mathrm{a}}) = T_{\mathrm{CRB}}(\text{first phase}) + T_{\mathrm{CRB}}(\text{second phase}) = \left\lfloor (N+1)/2 \right\rfloor + T_{\mathrm{QT}}(N_{\mathrm{a}}). $$
To keep the equation simple, Eq. (1) ignores three command slots: the first-phase command, the second-phase command, and the command terminating the frame. They are considered in Section 5. On the other hand, QT does not utilize any information retained from the last frame. Its performance depends on the actual number of tags in the current frame. Thus, the total number of slots required to identify all tags in S_C with the QT protocol is as follows:
$$ T_{\mathrm{QT}}(N, N_{\mathrm{s}}, N_{\mathrm{a}}) = T_{\mathrm{QT}}(N_{\mathrm{s}} + N_{\mathrm{a}}). $$
From Eqs. (1) and (2), it is clear that if the number of staying tags, N_s, is very small, then the CRB protocol needs more slots than the QT protocol. From this intuition, we aim to design an adaptive CRB protocol which adapts well to the tag staying ratio. It is well known that T_QT(N) ≈ 2.9N − 1 [4], and when N is sufficiently large, ⌊(N + 1)/2⌋ ≈ N/2. Then, we can obtain the condition under which CRB needs more slots than QT,
$$ N/2 + 2.9 N_{\mathrm{a}} - 1 \ge 2.9 (N_{\mathrm{s}} + N_{\mathrm{a}}) - 1. $$
The condition can be expressed in terms of the tag staying ratio, and we thus have
$$ N_{\mathrm{s}} / N \le 0.17. $$
Figure 2 shows the performance of the CRB and QT protocols according to the tag staying ratio. In order to focus on the performance of the first phase, we assume that there are no arriving tags. We can observe that the number of slots required by the CRB protocol is not affected by the tag staying ratio. On the other hand, QT requires a number of slots linearly proportional to the tag staying ratio. Note that the QT protocol shows better performance than CRB when the tag staying ratio is below 0.17. This matches well with the analysis result; a short numerical check of this break-even point is given below. Performance of CRB and QT according to the tag staying ratio In this paper, we propose to transit into the second phase immediately when the tag staying ratio is lower than a certain threshold. The tag staying ratio can be estimated during the re-identification process for staying tags. Adaptive couple-resolution blocking protocol Here, we propose our new adaptive couple-resolution blocking protocol, ACRB, which is based on the CRB protocol. The ACRB is different from the CRB in two ways. First, a fast transition algorithm is adopted. ACRB transits from the first phase into the second phase immediately when the couple-resolution for the staying tags is not advantageous.
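Whether couple-resolution is still worthwhile is exactly the break-even question analyzed in the motivation above. As a quick numerical check of that threshold, the sketch below plugs the approximation T_QT(N) ≈ 2.9N − 1 into Eqs. (1) and (2); it is a back-of-the-envelope check under those approximations, not a slot-level simulation, and the function names are ours.

```python
def slots_qt(n):
    """Approximate QT cost, T_QT(n) ~ 2.9*n - 1 for n > 0 (assumed 0 for n = 0)."""
    return 2.9 * n - 1 if n > 0 else 0

def slots_crb(N, N_a):
    """Eq. (1): the first phase always checks the N last-frame tags in couples.
    N_s does not appear on the right-hand side of Eq. (1), so it is omitted here."""
    return (N + 1) // 2 + slots_qt(N_a)

def slots_plain_qt(N_s, N_a):
    """Eq. (2): plain QT reads the actual current population, ignoring the last frame."""
    return slots_qt(N_s + N_a)

N, N_a = 500, 0
for R_s in (0.10, 0.17, 0.25, 0.50):
    N_s = int(R_s * N)
    print(f"R_s={R_s:.2f}  CRB={slots_crb(N, N_a):6.1f}  QT={slots_plain_qt(N_s, N_a):6.1f}")
# CRB stays near N/2 = 250 slots regardless of R_s, while QT costs about
# 2.9*N_s slots, so the two curves cross near R_s = 1/5.8, i.e. about 0.17.
```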
In addition to the fast transition, ACRB uses the CT protocol, rather than the QT protocol, in the second phase. By adopting the CT protocol, the performance of ACRB in identifying arriving tags can be improved. Moreover, it is noteworthy that the range where the fast transition is advantageous is also widened. Fast transition algorithm We propose to transit into the second phase immediately when the tag staying ratio is significantly low. The ACRB reader estimates the tag staying ratio in the first phase. If the estimated ratio is lower than a certain threshold, the blocking technique is immediately abandoned. Instead of re-identifying staying tags separately using the couple-resolution technique, staying tags and arriving tags are identified together without considering the retained information. The problem is to decide whether the tag staying ratio is sufficiently low so that it is more advantageous to transit into the second phase. We assume that staying tags are independently determined by Bernoulli trials. The probability is constant within a frame, and it equals the tag staying ratio, R_s = N_s/N. Let X denote the number of staying tags in a sample of size n. X can be viewed as a binomial random variable with parameters (n, R_s), that is,
$$ P(X=i) = \binom{n}{i} R_{\mathrm{s}}^{\,i} (1-R_{\mathrm{s}})^{n-i}. $$
Our objective is to make the correct decision between R_s ≥ p_0 (null hypothesis, denoted by H_0) and R_s < p_0 (alternative hypothesis, denoted by H_1). Note that p_0 is the tag staying ratio threshold which determines whether the fast transition is more advantageous or not; p_0 = 0.17 when the QT protocol is used in the second phase, from Eq. (3). It is clear that we wish to reject H_0 when X is small. We can reject H_0 at the α level of significance when
$$ X \le k^{*} $$
where k* is the largest value of k for which \( \sum_{i=0}^{k} P(X=i \mid R_{\mathrm{s}}=p_0) \le \alpha \) [13]. That is,
$$ k^{*} = \max \left\{ k : \sum_{i=0}^{k} P(X=i \mid R_{\mathrm{s}}=p_0) \le \alpha \right\}. $$
In testing H_0 versus H_1, there are two types of errors that can be made: H_0 can be falsely accepted (miss detection) or H_0 can be falsely rejected (false alarm). Note that the fast transition is triggered only when H_0 is rejected. Miss detection need not be worried about, because its consequence is simply to keep operating the CRB algorithm. On the other hand, a false alarm should be carefully considered: the fast transition may increase the identification time in that case. In fact, X is a hypergeometric random variable because the n sampled tags are randomly chosen "without replacement" out of N tags, of which N_s (= N R_s) tags are staying tags and the others are leaving tags. Unfortunately, it is difficult to calculate the probability of the hypergeometric distribution when N p_0 is not an integer. We use the binomial distribution since it is known that a hypergeometric distribution can be approximated by a binomial distribution if N is large compared to n [13]. Figure 3 compares k* obtained from both distributions. We observe that the two curves match well when the sample size is small. The effect of the fast transition gets more significant when the decision is made with a sample of smaller size. Also, note that the k* values obtained from the binomial distribution are smaller than those from the hypergeometric distribution. This reduces the probability of false alarm.
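For reference, the threshold k* of Eq. (5) can be computed directly from the binomial tail. The short sketch below does this with a plain cumulative sum; the function name and the −1 convention for "no admissible k" are our own assumptions, and small-n values can be compared against Fig. 3.

```python
from math import comb

def k_star(n, p0=0.17, alpha=0.01):
    """Largest k with sum_{i=0..k} P(X = i | R_s = p0) <= alpha, X ~ Binomial(n, p0).

    Returns -1 when even P(X = 0) already exceeds alpha, i.e. no sample
    outcome would let the reader reject H0 yet (assumed convention).
    """
    cumulative, best_k = 0.0, -1
    for i in range(n + 1):
        cumulative += comb(n, i) * p0 ** i * (1 - p0) ** (n - i)
        if cumulative <= alpha:
            best_k = i
        else:
            break
    return best_k

# The fast transition fires once the count of staying tags n_s among the
# n tags queried so far satisfies n_s <= k_star(n).
for n in (10, 20, 40, 80, 160):
    print(n, k_star(n))
```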
k* obtained from hypergeometric distribution and binomial distribution (N = 500, p_0 = 0.17, α = 0.01) Procedure of fast transition To implement the fast transition algorithm, we conduct the hypothesis test during the identification process in the first phase. Table 2 presents the operations of the ACRB reader. The identification procedure of ACRB is divided into two phases, like CRB. In the first phase, only staying tags are identified (lines 3–26), whereas in the second phase, arriving tags as well as the unidentified staying tags are identified (lines 27–41). The lines that differ from the CRB protocol are marked with comments. Table 2 Reader's operation of the ACRB protocol The ACRB reader counts the number of staying tags, n_s, out of the n queried tags. Clearly, n is updated when queries are transmitted (lines 9 and 12). n_s can be updated according to the tags' responses (lines 17 and 20). When tag responses are received at the reader, n_s is updated as follows: Collision: n_s = n_s + 2. One tag response: n_s = n_s + 1. Next, we calculate k* according to Eq. (5) (line 22). The time complexity of the calculation is bounded by O(n), since k must be less than or equal to n. The calculation does not need many iterations because a small value is used for the significance level, α. From k* and n_s, we can determine whether the fast transition is advantageous or not. If n_s ≤ k* (line 23), the fast transition is triggered and, as a result, the second phase immediately begins. The QueryConstruction function prepares the prefixes for the first phase using all the recognized tag IDs from the last frame (line 1). The QueryInsertion function generates a complete binary tree whose number of leaf nodes equals the estimated number of arriving tags (line 28). Since both functions are exactly the same as those of the CRB protocol, the details are omitted in this paper. ACRB using collision tree protocol and fast transition In addition to the fast transition, we propose to improve the CRB protocol further by using the CT protocol [5, 14] in the second phase, rather than the QT protocol. The CT protocol is similar to the QT protocol, but the CT reader can identify the bit where the collision occurred using Manchester coding. Using this property, CT extends the length of the prefix in the query up to the first collided bit, instead of by just one bit as in QT. The lines that differ from QT are marked with comments in Table 2 (lines 34 and 35). It is well known that CT reduces the number of total slots by about 30% compared to QT; T_CT(N) = 2N − 1 (N > 0) and T_CT(0) = 1 [5, 14]. It is noteworthy that the range where the fast transition is advantageous increases by using CT. The total number of slots required to identify all tags in S_C with the CT protocol is as follows:
$$ T_{\mathrm{CT}}(N, N_{\mathrm{s}}, N_{\mathrm{a}}) = T_{\mathrm{CT}}(N_{\mathrm{s}} + N_{\mathrm{a}}). $$
On the other hand, Eq. (1) should be changed when the CT protocol is adopted instead of QT. The total number of slots required to identify all tags in S_C with the CRB protocol is then as follows:
$$ T_{\mathrm{CRB}}(N, N_{\mathrm{s}}, N_{\mathrm{a}}) = \left\lfloor (N+1)/2 \right\rfloor + T_{\mathrm{CT}}(N_{\mathrm{a}}). $$
From Eqs. (6) and (7), we can obtain the condition that CRB needs more slots than CT,
$$ N_{\mathrm{s}} / N \le 0.25. $$
Note that the critical value for the fast transition, p_0, increases from 0.17 to 0.25 when CT is used instead of QT. If we could find another protocol that improves upon CT, then the range where the fast transition is advantageous would increase further. ACRB tag algorithm The CRB protocol uses the blocking technique, which prevents staying tags from colliding with arriving tags. For that purpose, staying tags are involved only in the first phase and do not respond in the second phase. However, when the proposed fast transition is triggered, some staying tags may not be identified in the first phase. Thus, unidentified staying tags must be given another chance to be identified. Therefore, the tag algorithm of CRB should be slightly modified to allow unidentified staying tags to participate in the second phase. Table 3 represents the tag's operation in ACRB. When a query message is received, a tag checks the variable isResponsible, as shown in line 13. In the first phase, only staying tags become responsible. After receiving the second-phase command, arriving tags become responsible. Table 3 Tag's operation of the ACRB protocol One line is added for the ACRB protocol (line 15). It keeps staying tags which did not have a chance to respond in the first phase active even in the second phase. We evaluate the performance of the proposed ACRB protocol and compare it with the original CRB. We focus on the number of total slots required to identify all tags in the (i + 1)th frame, f_{i+1}, assuming that N tags have been identified in the previous frame f_i. Our objective is to reduce the number of total slots because it signifies the identification delay in recognizing all of the tags. The simulation setup is as follows. There are one reader and multiple tags in the simulation area. Each tag has a 96-bit ID, which is uniquely and randomly chosen from a uniform distribution. We run the simulation for each case 1000 times and use the average results. We make the same assumption as [11] that CRB and ACRB can correctly estimate the number of arriving tags, in order to compare the two protocols fairly. Impact of tag staying ratio First, we investigate the impact of the tag staying ratio, R_s. Figure 4 shows the performance of CRB, CRB with fast transition, and ACRB in terms of the number of total slots in f_{i+1}. CRB with fast transition uses the fast transition in the first phase, while ACRB uses CT instead of QT in the second phase as well as the fast transition. We set N to 500 and vary R_s from 0 to 1. The significance level, α, is set to 0.01. We assume that there are no arriving tags in order to focus on the performance of the first phase. Performance of ACRB protocol according to the tag staying ratio We observe that the number of total slots with the CRB protocol is not affected by the tag staying ratio, while CRB with fast transition shows better performance when the tag staying ratio is low. ACRB improves the performance of CRB with fast transition further. It is noteworthy that the range where the fast transition is advantageous increases compared with CRB with fast transition. This matches well with the analytical result that the critical value for the fast transition, p_0, increases from 0.17 to 0.25 when CT is used instead of QT. Note that the simulation results match well with the theoretical approach. Including the three command slots, the number of total slots in CRB matches Eq. (7) with N = 500 and N_a = 0. When R_s = 0.1, we examine the simulation results in more detail.
On average, the fast transition is triggered after 12 tags are identified during 63.5 slots in the first phase when CRB with fast transition is used. The remaining 38 unidentified tags are identified by QT during 111.5 slots in the second phase. This matches well with the analysis of the QT protocol. With ACRB, the fast transition is triggered after 4 tags are identified during 22 slots in the first phase on average. This is because the critical value for the fast transition of ACRB is larger than that of CRB with fast transition. The remaining 46 unidentified tags are identified by CT during 91 slots in the second phase, which matches well with the analysis of the CT protocol. Figure 5 shows the CDF of the number of total slots over all 1000 simulation runs for ACRB when R_s is 0.1, 0.2, 0.3, and 0.4. More detailed statistics are given in Table 4. When R_s is 0.4, it is observed that the fast transition is not activated. This means that ACRB acts as CRB in all 1000 runs. CRB always requires 254 slots: one slot for the first-phase command, 250 slots for queries in the first phase, one slot for the second-phase command, one slot for the query in the second phase, and one slot for the command terminating the frame. When R_s is 0.1 or 0.2, the fast transition is activated in all 1000 runs and, as a result, the maximum number of total slots is less than 254. When R_s is 0.3, the maximum number of total slots is 317, larger than 254, although the average value of 254.47 is similar; this is a negative consequence of the false alarms that occurred in eight runs. As R_s increases, the probability of false alarm is reduced; however, the negative consequence of a false alarm gets greater. When R_s is 0.35, the maximum number of total slots is 362, much larger than 254. CDF of the number of total slots in all 1000 simulation runs for ACRB when R_s is 0.1, 0.2, 0.3, and 0.4 Table 4 Statistics in all 1000 runs for ACRB when N = 500, R_a = 0, α = 0.01 Impact of significance level The proposed fast transition algorithm has one operational parameter, α, which determines the significance level for the fast transition decision. Selecting α involves a trade-off between the decision time and the probability of false alarm. Figure 6 shows the impact of α. We can observe that the proposed ACRB transits more quickly and needs fewer total slots with a larger α when the tag staying ratio is less than 0.25. Impact of α However, more slots may be required when the tag staying ratio is larger than 0.25, due to false alarms. The probability of false alarm is presented in Fig. 7. It is clear that the probability of false alarm gets higher as the tag staying ratio approaches 0.25 and as the significance level, α, increases. In this paper, we set α to 0.01 by default. Probability of false alarm Impact of tag arriving ratio So far, we have assumed that there are no arriving tags in order to focus on the first phase. The performance of the second phase highly depends on the number of arriving tags. We now consider the impact of the tag arriving ratio on the performance. Figure 8 shows the impact of the tag arriving ratio, R_a. We consider R_a = 0, 0.5, and 1; that is, the numbers of arriving tags are 0, 250, and 500, respectively. ACRB shows better performance than the original CRB for all tag arriving ratios. Impact of the tag arriving ratio, R_a Note that ACRB requires fewer slots than CRB even when the tag staying ratio is greater than 0.25, provided the tag arriving ratio is greater than 0. This results from the operation in the second phase.
ACRB employs CT instead of QT in the second phase. Therefore, ACRB can identify arriving tags more efficiently than CRB. Number of bits sent by a reader The overhead on the reader is compared with CRB in terms of the number of bits sent by the reader, as in [11]. A larger number of bits sent by the reader means more overhead for the reader, and it also causes longer signal transmission time and longer identification. Figure 9 shows the number of bits sent by the reader, assuming that the lengths of the reader's ID, the frame number, and the commands are 8 bits each. We observe that ACRB shows better performance than the original CRB. This is because the fast transition alleviates the problem that queries for leaving tags are wasted in the first phase. More detailed data are given in Table 5 to summarize the comparison of CRB with the proposed ACRB. We observe that ACRB never performs worse than CRB for any R_a and R_s: the number of total slots in ACRB is no greater than that in CRB, and the number of bits sent by the reader in ACRB is no greater than that in CRB. When R_s is less than 0.3, ACRB shows better performance than CRB, because the fast transition is triggered in that range. Also, the enhancement gets greater as R_a increases. ACRB uses CT instead of QT, and the effect becomes greater as the number of tags to be identified in the second phase increases. Table 5 Performance comparison with CRB when N = 500, α = 0.01 Impact of the number of tags The identification performance highly depends on the number of tags. It is trivial that more slots are required as the number of tags increases. We consider N = 100 (Fig. 10) and 1000 (Fig. 11). Both figures show similar patterns, and ACRB improves on the original CRB in all cases. Impact of the number of tags (N = 100) Impact of the number of tags (N = 1000) The blocking technique, which separates staying tags and arriving tags, is not helpful when the tag staying ratio is small. We showed that the CRB protocol, which uses the blocking technique, may work more poorly than protocols such as QT and CT, which do not utilize the information retained from the last frame and do not distinguish staying tags from arriving tags. In this paper, we proposed the ACRB protocol, which uses a fast transition algorithm. The ACRB reader estimates the tag staying ratio while re-identifying staying tags. If the estimated tag staying ratio is lower than a certain threshold, it immediately transits into the second phase. Also, we proposed to use CT instead of QT in the second phase. Through various simulations, we showed that the proposed ACRB protocol improves the performance of the CRB protocol, especially when the tag staying ratio is low. K Finkenzeller, RFID handbook: fundamentals and applications in contactless smart cards and identification, (John Wiley & Sons, Chichester, 2003), pp. 206–219 S Lee, S Joo, C Lee, An enhanced dynamic framed slotted ALOHA algorithm for RFID tag identification, in International Conference on Mobile and Ubiquitous Systems: Networking and Services (IEEE, San Diego, 2005), pp. 166–172 D R Hush, C Wood, Analysis of tree algorithms for RFID arbitration, in IEEE International Symposium on Information Theory (IEEE, Cambridge, MA, 1998), pp. 107–107 C Law, K Lee, K Siu, Efficient memoryless protocol for tag identification. In ACM DIAL-M 2000 (ACM, Boston, 2000), pp. 75–84 X Jia, Q Feng, C Ma, An efficient anti-collision protocol for RFID tag identification. IEEE Commun. Lett.
14(11), 1014–1016 (2010) J Myung, W Lee, TK Shih, An adaptive memoryless protocol for RFID tag collision arbitration. IEEE T. Multimedia 8(5), 1096–1101 (2006) J Myung, W Lee, J Srivastava, Adaptive binary splitting for efficient RFID tag anti-collision. IEEE Commun. Lett. 10(3), 144–146 (2006) J Myung, W Lee, J Srivastava, TK Shih, Tag-splitting: adaptive collision arbitration protocols for RFID tag identification. IEEE T. Parall. Distr. Syst. 18(6), 763–775 (2007) YC Lai, CC Lin, A pair-resolution blocking algorithm on adaptive binary splitting for RFID tag identification. IEEE Commun. Lett. 12(6), 432–434 (2008) YC Lai, CC Lin, Two blocking algorithms on adaptive binary splitting: single and pair resolutions for RFID tag identification. IEEE/ACM Trans. Network. 17(3), 962–975 (2009) YC Lai, CC Lin, Two couple-resolution blocking protocols on adaptive query splitting for RFID tag identification. IEEE Trans. Mob. Comput. 11(10), 1450–1463 (2012) Y Hu, I Chang, J Li, Hybrid blocking algorithm for identification of overlapping staying tags between multiple neighboring readers in RFID systems. IEEE Sens. Journ. 15(7), 4076–4085 (2015) S Ross, Probability and statistics for engineers and scientists, 4th edn. (Elsevier Academic Press, Burlington, 2009), pp. 158, 325–326 X Jia, Q Feng, L Yu, Stability analysis of an efficient anti-collision protocol for RFID tag identification. IEEE Trans. Commun. 60(8), 2285–2294 (2012) This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0023856) and by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIP) (No. 2016R1A5A1012966). School of Electrical Engineering, Kookmin University, Seoul, 02707, Republic of Korea Sunwoong Choi & Hae-il Choi Sunwoong Choi Hae-il Choi Correspondence to Sunwoong Choi. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Choi, S., Choi, Hi. Adaptive couple-resolution blocking protocol for repeated tag identification in RFID systems. J Wireless Com Network 2016, 279 (2016). https://doi.org/10.1186/s13638-016-0775-1 Anti-collision algorithm Repeated tag identification Couple-resolution blocking protocol (CRB)
May 2015, 35(5): 2067-2078. doi: 10.3934/dcds.2015.35.2067 Wolff type potential estimates and application to nonlinear equations with negative exponents Yutian Lei, Jiangsu Key Laboratory for NSLSCS, School of Mathematical Sciences, Nanjing Normal University, Nanjing, 210023 Received December 2013 Revised September 2014 Published December 2014 In this paper, we are concerned with the positive continuous entire solutions of the Wolff type integral equation $$ u(x)=c(x)W_{\beta,\gamma}(u^{-p})(x), \quad u>0 ~in~ R^n, $$ where $n \geq 1$, $p>0$, $\gamma>1$, $\beta>0$ and $\beta\gamma \neq n$. In addition, $c(x)$ is a double bounded function. Such an integral equation is related to the study of the conformal geometry and nonlinear PDEs, such as $\gamma$-Laplace equations and $k$-Hessian equations with negative exponents. By some Wolff type potential integral estimates, we obtain the asymptotic rates and the integrability of positive solutions, and discuss the existence and nonexistence results of the radial solutions. Keywords: asymptotic behavior, Wolff potential, $\gamma$-Laplace equation, $k$-Hessian equation, conformal geometry. Mathematics Subject Classification: 35J60, 35J92, 45G05, 45M0. Citation: Yutian Lei. Wolff type potential estimates and application to nonlinear equations with negative exponents. Discrete & Continuous Dynamical Systems, 2015, 35 (5) : 2067-2078. doi: 10.3934/dcds.2015.35.2067
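For context (this note is ours, not part of the original abstract): in this line of work the Wolff potential $W_{\beta,\gamma}$ of a nonnegative function $f$ is usually defined, in the Hedberg-Wolff tradition, as
$$ W_{\beta,\gamma}(f)(x) = \int_{0}^{\infty} \left( \frac{\int_{B_{t}(x)} f(y)\,dy}{t^{\,n-\beta\gamma}} \right)^{\frac{1}{\gamma-1}} \frac{dt}{t}, \qquad \gamma > 1,\ \beta > 0, $$
where $B_t(x)$ denotes the ball of radius $t$ centered at $x$.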