Water induced sediment levitation enhances downslope transport on Mars

Jan Raack, Susan J. Conway, Clémence Herny, Matthew R. Balme, Sabrina Carpy & Manish R. Patel

Nature Communications volume 8, Article number: 1151 (2017)

On Mars, locally warm surface temperatures (~293 K) occur, leading to the possibility of (transient) liquid water on the surface. However, water exposed to the martian atmosphere will boil, and the sediment transport capacity of such unstable water is not well understood. Here, we present laboratory studies of a newly recognized transport mechanism: "levitation" of saturated sediment bodies on a cushion of vapor released by boiling. Sediment transport where this mechanism is active is about nine times greater than without this effect, reducing the amount of water required to transport comparable sediment volumes by nearly an order of magnitude. Our calculations show that the effect of levitation could persist up to ~48 times longer under reduced martian gravity. Sediment levitation must therefore be considered when evaluating the formation of recent and present-day martian mass wasting features, as much less water may be required to form such features than previously thought.

Downslope sediment transport can occur by dry granular flow, or alternatively can be supported by a fluid, e.g., a gas and/or a liquid. The physical properties of the interstitial fluid determine the flow behavior, which in turn influences the transport capacity of the flow1, 2 and its final morphology. In planetary science it is extremely rare to catch sediment transport "in action", and therefore the final morphology and morphometry of the flow, often in conjunction with terrestrial analogs, are used to infer the process and the supporting fluid. On Mars, this line of reasoning has been used to infer that gullies are created by the action of liquid water3,4,5,6,7 acting over timescales of potentially millions of years4, 8,9,10. One proposed source of water to form these gullies is shallow11 or deep aquifers12. Aquifer-based hypotheses have not been favored because gullies have been identified on isolated highs where groundwater is less likely to occur3, 13,14,15,16,17. Another proposed source is the melting of snow or ice under current climatic conditions18 or in the recent past3, 10, 19. Non-water hypotheses include CO2-sublimation gas-supported flows20 and dry granular flows21. Due to their occurrence in different climatic regions (from the polar regions to the mid-latitudes), their different morphologies, and their different ages, gullies could be formed by a variety and/or combination of different mechanisms, and none of the above-mentioned processes has yet been completely ruled out. The present-day activity of gullies was first detected in the form of the appearance of low-relief, digitate, light-toned deposits22.
More recent observations include incision of channels, formation of deposits with meter-scale relief23,24,25, and dark sediment deposits within existing gullies24, 26. On sand dunes, ongoing formation and growth of both classic and linear gullies17, 27 as well as the seasonal occurrence of dark flows28, 29 have been observed27, 30, 31. Often, but not always, found in association with gullies are dark recurring slope lineae (RSL)32, which are characterized by their annual (re)appearance, seasonal growth during peak annual temperatures, and fading in the colder months32,33,34. These present-day surface activities have been linked to several different formation mechanisms, including liquid water (e.g., overland flow or debris flow)17, 22, 34, 35, CO2 frost sublimation and sediment fluidization23, 26, 30, 31, liquid "cryobrines" acting in a similar way to liquid water32, 36, or dry avalanches37, 38. However, we can only distinguish between these different hypotheses if we understand their associated sediment transport processes, and so we need to understand whether flows animated by liquid water behave in a similar fashion on Mars as they do on Earth. This may not be the case, because liquid water is unstable34 under modern martian conditions. Previous experimental work has shown that transient water under freezing conditions behaves slightly differently to stable water on Earth39, 40, and under warmer conditions this difference is exaggerated further41. Remote sensing and climate models have shown that maximum surface temperatures of up to ~300 K can occur during summer on Mars at equatorial and southern mid-latitudes32, 42, and even in the south polar regions maximum surface temperatures of up to ~280 K are possible26, meaning that transient liquid water is a possibility. As an example, detailed surface temperature analyses show that RSL only lengthen when temperatures exceed 273 K34. Ojha et al.33 reported mid-afternoon maximum surface temperatures between 252 and 290 K from the Thermal Emission Imaging System at active RSL sites. Investigations with the Thermal Emission Spectrometer at RSL sites during the same solar longitudes have shown maximum surface temperatures of ~296–298 K35. Based on these data sets, we chose two surface temperatures to investigate the contribution of transient water to downslope transport under martian environmental conditions: flows onto "cold" sediment (~278 K), and flows onto "warm" sediment (~297 K). Our experiments reveal for the first time a transport mechanism of wet sediment levitation that occurs under low atmospheric pressures but not at terrestrial pressures. This sediment levitation effect is caused by the boiling of transient water, comparable to the Leidenfrost effect, and itself triggers further sediment movement by grain avalanches. These transport mechanisms enhance the volume of transported sediment by up to nine times and therefore reduce the required amount of liquid water to ~11% of that needed to transport the same volume of sediment without the levitation effect. Numerical scaling for gravity suggests this effect is greater for lower gravity, leading to an even greater sediment transport potential on Mars. Hence, the effect of levitation can have a direct influence on the estimated water budget for recent and present-day mass wasting processes on Mars, in that the amount of water needed to transport sediment could be much smaller than previously thought.
The experimental apparatus comprises a 0.9 × 0.4 m test-section containing a 5 cm deep sediment bed. The test section is inclined at 25° and located inside the Open University's Mars simulation chamber40, 41, maintained at an average pressure of ~9 mbar. For each experiment, pure water was introduced near the top of the slope at 1.5 cm above the sediment bed and the resulting flow behavior was observed. The water was pumped into the chamber from an external reservoir, allowing the temperature to be maintained at ~278 K and the flow rate at ~11 ml s−1 (see Table 1). The sediment consisted of sand (~63–200 μm grain diameter). Each run was performed in triplicate, and all experiments were recorded with three cameras. Digital elevation models (DEMs) of the bed were created both before and after each run using multiview digital photogrammetry. Table 1 provides full details of the experimental conditions.

Table 1 Summary of measured and controlled variables

Water flow experiments

During the "cold" experiments water flowed over the surface of the sediment and also infiltrated into the sediment. Entrained sediment was transported downslope, depositing a series of lobes that migrated laterally over time, comparable to flows under terrestrial conditions40. The majority of the sediment was transported by overland flow of water (~98%; Fig. 1a–c, Supplementary Movies 1, 2). Boiling of the water was identified by the observation of bubbles at the surface. Occasional millimeter-sized, damp "pellets" of sediment were ejected by the boiling water as it infiltrated the bed. These ejected pellets transported negligible volumes of sediment (~2%). The majority of the sediment transported in the "cold" experiments was by overland flow, confined to a zone with a maximum average width of ~9.2 cm and a downslope length of ~36.5 cm (Fig. 1b).

Fig. 1: Image, map, and elevation data at the end-state of experiments. a, d Orthophotographs (0.2 mm pix−1) of "cold" (a) and "warm" (d) experiments. b, e Hillshaded relief from DEM (1 mm pix−1) overlain by process-zone maps giving the spatial extent of the different transport types (blue = overland flow, green = percolation, red = pellets, yellow = dry avalanches/saltation) for "cold" (b) and "warm" (e) experiments, and c, f elevation difference between start and end of "cold" (c) and "warm" (f) experiments. Flow direction is from top to bottom and the same scale is used for all images.

The volume of sediment transported during the "warm" experiments was nearly nine times greater than that during the "cold" experiments, corresponding to an increase in the sediment transport rate of the flow from ~0.13 cm3 ml−1 for "cold" experiments to ~1.18 cm3 ml−1 for "warm" experiments (Table 1, Fig. 2). Thus, to transport the same volume of sediment in the "warm" experiments as in the "cold" experiments, only ~11% of the volume of water is required. The increase in the volume transported for a given water volume in the "warm" experiments is caused by three processes: (1) transport of sediment by ballistic ejection of sediment and millimeter-sized sediment pellets, (2) transport of sediment by "levitation" of millimeter-sized to centimeter-sized sediment pellets with very rapid downslope transport, and (3) dry avalanches of sediment triggered by the ejected grains and levitating pellets. The combined effect of these processes accounted for about 96% of the total sediment transport, with overland flow being only a minor component, in contrast to the "cold" experiments.
Fig. 2: Mean volumes of transported sediment. a Mean volumes are divided into "warm" experiments (left bar) and "cold" experiments (right bar), and subdivided into different transport types (blue = overland flow, green = percolation, red = pellets, yellow = dry avalanches/saltation) (see also Fig. 1). The mean of the total error (Measurement Error) is presented on top of the bars; errors at the side of the bars represent the mean of the total errors (Measurement Error) for each individual transport type. More information on the error calculations can be found in Table 1, the methods section, and in Supplementary Table 3. b Re-scaled "cold" experiments.

Saltation and levitation processes

The following sequence of events was reconstructed from the video footage: in the "warm" experiments, when the water came into contact with the sediment, boiling-induced saltation of the sediment created a continuous fountain of ejected grains until the sediment became saturated (after about 30 s; Fig. 3a–e, k, l; Supplementary Movies 3–5). In the very first seconds of the experiment, numerous saturated sediment pellets detached from the source area and rolled/slid quickly down the test bed (often to the end) with very little direct surface contact (Supplementary Movies 3, 4, 6). These pellets ranged in size from 0.5 to ~50 mm and were observed to travel at average speeds of ~46 cm s−1. This is more than twice the speed of pellets in the "cold" experiments (~19 cm s−1; Table 1). We conclude that the pellets in the "warm" experiments partially levitate on a cushion of gas produced by boiling, via a mechanism comparable to the Leidenfrost effect (Fig. 3a), which enhances their downslope velocity. The gas released at the base of these pellets causes erosion of loose dry sediment, as shown by tracks leading to isolated pellets, and by the formation of a short-lived transportation channel carved by a rapid series of levitating pellets in the first seconds of the experiment (Fig. 3b–d, i, j; Supplementary Movies 3, 6). The transient channel was approximately 5-cm wide and had a curvilinear shape. Due to the short length of the test bed and the fast material transport, this transient channel was backfilled within the first seven seconds (Supplementary Movies 3, 6).

Fig. 3: Example of transport processes. Frames from video of a "warm" temperature experiment (Run 5). Images a 1 s, b 3 s, c 4 s, d 10 s, e 44 s, f 60 s, g 122 s, and h 303 s after the start of the experiment. White arrows point to sand saltation plumes, black arrows point to levitated pellets of wet sediment (a, i), to dry material superposing the channel (d), and to the last observed dry avalanche (h). i–l Show detailed excerpts of b–e, respectively. Contrast and brightness were adjusted on all images individually for clarity. Note for scale that the metallic tray is 0.9-m long and 0.4-m wide.

Grain avalanches

In the "warm" experiments, the saltating sediment and levitating pellets triggered grain avalanches that propagated downslope (Fig. 1d–f, Supplementary Movies 3, 4). Grain avalanches and grain ejections occurred over the same time period (up to ~138 s), with some very late grain avalanches observed after 528 s for run 6. During the "cold" experiments no such movements were detectable. About 56% of all transported sediment was moved by these dry avalanches (Table 1, Fig. 2). The effect of sand saltation and grain flows caused by boiling liquid water was first reported by Massé et al.41, who used a melting ice block as a water source, giving a very low water flow rate of 1–5 ml min−1.
They observed the formation of arcuate ridges caused by intergranular wet flow and the ejection of sand grains at the contact between the wet and dry sediment: these phenomena (but not the ridges) were also observed in our experiments. Saltation and flow ceased in their experiments once the water supply was removed41. In our "warm" experiments, though, saltation from the saturated sediment body continued for a mean of ~78 s after the water was stopped. This implies that the sediment in our experiments was supersaturated, and percolation continued after removal of the water source. Supersaturation requires the water release rate to be faster than the infiltration rate (hence flow rates higher than those in Massé et al.41), suggesting this may be a limiting condition for sediment levitation.

Liquid overland flow

Liquid overland flow occurred in both "warm" and "cold" experiments, but only began in the "warm" runs at a mean of ~20 s into the experiments. The total downslope extent of the overland flow in the "warm" runs was ~76% (~8.4 cm shorter, Table 1) and the average width ~80% (~1.8 cm narrower, Table 1) of the "cold" experiments (Fig. 1, Supplementary Movies 2, 4). The average propagation rates were very similar (~0.74 cm s−1 for the "warm" experiments, ~0.61 cm s−1 for the "cold" experiments). The average volume of sediment mobilized by overland flow in the "warm" experiments was about half that in the "cold" experiments, due to the shorter time for which this process was active.

Scaling to martian gravity

In our laboratory experiments we were unable to simulate the effect of martian gravity on these processes. Massé et al.41 found that saltation induced by boiling is more effective under martian gravity than terrestrial gravity, resulting in three times more sediment transport. We do not repeat their calculations, but instead focus our attention on the effect of gravity on the levitation of pellets, in order to assess whether sediment transport via this mechanism would be more or less efficient than observed in our experiments for otherwise similar conditions. Below we derive equations to describe the levitation force produced by the boiling gas, and then we apply these equations to understand the effect of gravitational acceleration on the levitation duration and the size of objects levitated. We follow the reasoning and calculations of Diniega et al.43, who considered the levitation of a sublimating CO2 ice block on Earth and on Mars. We assume that the wet sand pellet can be treated as a block with a width D = 2R (m), a thickness H (m), and an aspect ratio of D/H, lying on the dry sand test bed (Fig. 4). The temperature at the surface of the wet sand pellet is set at the evaporation temperature of liquid water T_e for the relevant atmospheric pressure p. We assume that the temperature of the test bed T_0 exceeds the evaporation temperature. We assume that the gas escapes uniformly from the bottom of the object, perpendicular to the surface of the test bed. The object experiences two opposing forces (Fig. 4). The force W due to the weight of the object is

$$W = g\,\rho_{\rm ws} H A,$$

where g is the local gravity, \(\rho_{\rm ws}\) is the wet sand density, H is the height of the sand pellet, and A is the area in contact with the test bed. As in Diniega et al.43 we consider two shapes: (I) rectangular, if R << L (length in m), then the problem can be solved in 2D and A = 2RL; or (II) cylindrical, where the problem is solved in 3D and A = πR².
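For readers who want to experiment with these expressions, the weight term and the two contact-area options can be written down directly. The following Python sketch only illustrates the definitions above; the numerical values (pellet radius, wet sand density, gravity) are assumptions chosen for the example, not measured quantities from the experiments.

```python
import math

def contact_area(R, L=None):
    """Contact area A: rectangular footprint 2*R*L if a length L is given,
    otherwise a cylindrical footprint pi*R**2."""
    return 2.0 * R * L if L is not None else math.pi * R**2

def weight(g, rho_ws, H, A):
    """Weight W = g * rho_ws * H * A of a wet sand pellet of thickness H."""
    return g * rho_ws * H * A

# Illustrative values only: a flattened cylindrical pellet with R = 1 cm,
# H/D = 0.75, an assumed wet sand density of ~2000 kg m^-3, terrestrial gravity.
R = 0.01                      # m
H = 0.75 * (2.0 * R)          # m
A = contact_area(R)           # m^2, cylindrical case
print(weight(9.81, 2000.0, H, A))   # -> weight in newtons
```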
The contact between the block and the sand bed results in frictional forces that prevent the block from falling. The friction force F_T is proportional to the normal force N_z:

$$F_{\rm T} = \mu N_{\rm z},$$

where the coefficient of proportionality μ is the Coulomb friction coefficient. Moreover, there is no motion of the pellet if \(F_{\rm T} > W \sin\theta\). To determine if motion can start, we need to consider the normal force N_z, which is the resultant of the weight W and the levitation force F_e, defined in the normal direction z as follows:

$$N_{\rm z} = W \cos\theta - F_{\rm e},$$

where θ is the slope angle and F_e is the force due to the gas escaping by evaporation of liquid water during boiling43, defined as:

$$F_{\rm e} = C_{\rm f}\, A\, \frac{R\, u_0\, \nu}{k}.$$

Fig. 4: Schematic representation of a block on an inclined plane. The block represents a wet sand pellet in contact with dry sand with T_0 > T_e. The block is subject to three forces in competition: the weight W, the friction force F_T, and the levitation force F_e. The z-axis is oriented so that z ≥ 0 indicates increasing depth into the dry sand, and the x-axis is oriented parallel to the sand surface so that absolute values of x less than 1 represent the interior of the wet sand pellet43.

The levitation force F_e is therefore proportional to the dynamic pressure \(R u_0 \nu / k\) (described below), the surface area A, and an aerodynamic coefficient C_f. The aerodynamic coefficient is complex to evaluate because it depends on the shape of the object (rectangular, spherical, oval, etc.), its roughness, and the slope angle. By making some simplifying assumptions, Diniega et al.43 have shown that this coefficient can be deduced from the calculation of the total force of the fluid exerted on a sublimating CO2 block placed on a flat floor. Thus, for a rectangle (A = 2LR), their calculation of the force after integration gives \(F_{\rm e} = 2LR \cdot \left(\frac{\pi}{4}\right) \cdot \left(\frac{R\, u_0\, \nu}{k}\right)\), from which \(C_{\rm f} = \pi/4\) for a rectangular object; the corresponding result for a cylindrical object (A = πR²) gives \(C_{\rm f} = 4/(3\pi)\). In our case the determination of C_f is non-trivial. The pellets are irregular objects of cohesive sand supersaturated with water. The surface of the pellets is not smooth, as could reasonably be assumed for a block of CO2 ice. Moreover, the shape of our objects depends on the experiment considered and can be very variable according to the temperature conditions of the experiment. Finally, we must consider the slope, which will favor the levitation effect and tend to increase this coefficient, whereas increasing roughness will decrease it. For these reasons we have chosen to estimate the value of the aerodynamic coefficient C_f using our experimental results. We estimated the size and shape of the pellets from the videos and orthophotos of the experiments at 297 and 278 K. The pellet sizes range from 0.5 to 50 mm. They have irregular shapes and are often flattened, with an aspect ratio H/D = ~0.75. Therefore we know that for experiments at a sediment temperature of 297 K, the boiling effect is strong enough to move centimeter-sized pellets for the duration of several seconds.
At 278 K, centimeter-sized pellets are not levitated, while millimeter-sized pellets are observed to levitate for a few seconds. We tuned the aerodynamic coefficient to match these experimental observations. We find that C_f ranges from approximately 1.45 to 7.3. We then used the corresponding value of the aerodynamic coefficient C_f in our calculations for Mars to evaluate the influence of Mars' reduced gravity on pellet levitation. The dynamic pressure \(R u_0 \nu / k\) depends on the length R, the sand permeability k, the gas viscosity ν, and the mean velocity u_0 of the gas escaping from the surface A of the block, which is defined as follows:

$$u_0 = \frac{q}{E_{\rm v}\, \rho_{\rm g}},$$

where q (W m−2) is the heat flux by thermal conduction, E_v is the enthalpy of evaporation for water, and \(\rho_{\rm g}\) is the volatile gas density. The heat flow is obtained by solving the heat equation44. The integration of the solution gives us the heat flux q from the sand bed to the block by conduction:

$$q(t) = \left. \lambda \frac{\partial T}{\partial z} \right|_{z=0} = \left( T_0 - T_{\rm e} \right) \sqrt{\frac{\lambda\, C_{\rm p}\, \rho_{\rm s}}{\pi t}},$$

where λ is the thermal conductivity of the sand, C_p is the heat capacity of the sand, \(\rho_{\rm s}\) is the sand density, and t is the time (s). Along a slope, a block will move if the friction force is overcome by the weight force in the x-direction (Fig. 4):

$$\mu \left( W \cos\theta - F_{\rm e} \right) < W \sin\theta \;\Rightarrow\; \tan\theta > \mu - \frac{\mu F_{\rm e}}{W \cos\theta}.$$

Determination of the Coulomb friction coefficient μ is non-trivial. There is no empirical method to determine this coefficient and we have no experimental measurements that allow us to calculate it directly. As μ is a coefficient, its sign is imposed, so the sense of the inequality depends only on the sign of \(\left( 1 - \frac{F_{\rm e}}{W \cos\theta} \right)\) and therefore on the ratio \(F_{\rm e} / W \cos\theta\). Below the angle of repose of the sediment, if \(F_{\rm e} > W \cos\theta\), then the pellets will move. Increasing slope will tend to reduce the threshold value F_T required to start movement. We calculated the evolution with time of the ratio of the levitation force F_e to the weight force W cos θ of a block on a slope of 25°, both for the low pressure environment of our chamber experiments at different temperatures and for equivalent conditions but using martian gravity. The parameters used for these calculations are presented in Supplementary Table 1. We assume that not all of the heat flux due to conduction is used for the change of state of the water contained in the pellets. Figure 5 shows that the levitation force produced by boiling is about 4.6 times higher at 297 K than at 278 K, which is consistent with our experimental results. For pellets with an aspect ratio of 0.75, a circular basal area (results are similar for a rectangular base), and a sand temperature of 278 K, our calculations predict that levitation should not occur for pellets of R = 1 cm and should persist for about 2 s for R = 0.1 cm; these predictions are consistent with our experimental observations (Supplementary Movies 1, 2).
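A minimal numerical sketch of this force-ratio calculation is given below (Python). It is not the authors' code and does not use the parameters of Supplementary Table 1; every physical value (permeability, gas viscosity and density, thermal properties, densities, C_f, the boiling temperature at chamber pressure) is a placeholder assumption, chosen only so that the structure of the calculation is clear.

```python
import math

def heat_flux(t, T0, Te, lam, Cp, rho_s):
    """Conductive heat flux q(t) = (T0 - Te) * sqrt(lam*Cp*rho_s / (pi*t))."""
    return (T0 - Te) * math.sqrt(lam * Cp * rho_s / (math.pi * t))

def levitation_ratio(t, T0, Te, g, theta_deg, R, aspect,
                     rho_ws, rho_g, Cf, k, nu, Ev, lam, Cp, rho_s):
    """Ratio F_e / (W cos theta) for a cylindrical pellet at time t (seconds)."""
    A = math.pi * R**2                    # contact area (cylindrical)
    H = aspect * 2.0 * R                  # thickness from the aspect ratio H/D
    W = g * rho_ws * H * A                # weight
    u0 = heat_flux(t, T0, Te, lam, Cp, rho_s) / (Ev * rho_g)  # gas escape velocity
    Fe = Cf * A * (R * u0 * nu / k)       # levitation force
    return Fe / (W * math.cos(math.radians(theta_deg)))

# Placeholder parameters (assumed for illustration, NOT Supplementary Table 1):
params = dict(T0=297.0,      # K, "warm" sand bed
              Te=276.0,      # K, assumed boiling point of water near chamber pressure
              g=9.81, theta_deg=25.0, R=0.01, aspect=0.75,
              rho_ws=2000.0, # kg m^-3, wet sand (assumed)
              rho_g=0.007,   # kg m^-3, water vapor at ~9 mbar (assumed)
              Cf=1.5,        # within the tuned range 1.45-7.3
              k=1e-10,       # m^2, sand permeability (assumed)
              nu=1e-5,       # Pa s, vapor dynamic viscosity (assumed)
              Ev=2.45e6,     # J kg^-1, enthalpy of evaporation of water
              lam=0.3, Cp=800.0, rho_s=1500.0)  # sand thermal properties (assumed)

# Because q(t) decays as t**-0.5, the ratio also decays as 1/sqrt(t); levitation
# ends when the ratio drops below 1, so a ratio that starts ~7 times higher stays
# above unity roughly 7**2 ~ 50 times longer, one way to read the Mars scaling below.
for t in (1.0, 2.0, 5.0, 10.0):
    print(t, round(levitation_ratio(t, **params), 2))
```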
For the same aspect ratio, at a sediment temperature of 297 K, our calculations predict that levitation should persist for 5 s for R = 1 cm, which is of the same order of magnitude as the levitation duration observed in our experiments (Supplementary Movies 3, 4, 6). For R = 0.1 cm, we predict that levitation can last for 51.5 s, but many pellets of this size hit the end of the tray, so this prediction cannot be validated by our experimental data.

Fig. 5: Evolution in the ratio of the levitation force to the weight force. The ratio of the levitation force F_e to the weight force W cos θ of a block on a slope of 25° with time is calculated for the physical parameters of the martian surface and of our experiments in the Mars simulation chamber (Supplementary Table 1), using a cylindrical geometry. Levitation of the block occurs when the ratio is greater than 1 (dashed black line). a Ratio calculated for different sand temperatures T_s and sizes of block for parameters of the Mars simulation chamber and H/D = 0.75. b Ratio calculated for different temperatures T_s for parameters of the martian surface and of the Mars simulation chamber and H/D = 0.75 with R = 0.01 m.

In our calculations we found that the levitation force produced by boiling is about 6.8 times stronger on Mars than it is in our simulation experiments (Fig. 5b). This results in an increased duration of levitation and the possibility for larger pellets to be transported than under terrestrial gravity, even at a relatively low surface temperature of 278 K. Therefore, the reduction of gravitational acceleration acts in favor of the levitation of pellets, as well as of any mass wasting triggered by the boiling of transient water41. Similar equations applied to the levitation of CO2 blocks over sand have also shown that the levitation process is less intense on Earth than on Mars, and further that a denser atmosphere also tends to inhibit levitation43. The temperature of the sediment plays an important role in the physics of boiling because it sets the temperature gradient between the surface and the object, which drives the heat flux powering the levitation45. Scaling to the lower martian gravity has revealed several important differences with respect to our experimental results: (1) for any given temperature condition larger sediment pellets should be levitated, (2) pellets should levitate for longer, (3) pellets should displace more sediment and create larger "channels", and (4) significant pellet-levitation should occur even under our "cold" experiment conditions. The combination of larger pellets and the longer duration of levitation would result in a significantly larger spatial area being affected by the flow than under terrestrial gravitational acceleration. The trajectory of the majority of pellets in our "warm" experiments was interrupted by collision with the end of the test-section. A detailed reconstruction of their trajectories is beyond the scope of this work, but given the relatively high speeds of the pellets we can conservatively estimate a 2 m runout for our "warm" experiments. As the speed of the pellets is partly driven by gravitational acceleration, we would expect equivalent pellets on Mars to travel more slowly (at worst at ~1/3 the speed observed in our experiments); hence their runout would likely be contained within our test-section at around 60–70 cm.
Our simple calculations predict up to 48 times longer levitation of pellets under martian gravity, which is likely to be an overestimate, but even levitation lasting 10 times longer would result in a decameter-scale extent of sediment disturbance, which should be visible in remote sensing images. We do not anticipate that the sediment pellets themselves could reach a size readily visible in remote sensing images. The sediment transport directly engendered by pellet levitation (excluding the secondary dry granular avalanches) is defined by the number and size of pellets that are released. The maximum size of pellets that can be levitated is set by the levitation force generated by boiling, yet it is likely that the combination of flow rate and infiltration rate also influences the actual sizes and numbers of pellets that are released. Because infiltration is driven by gravity, for the same quantity of water the infiltration would be 1/3 slower; however, further experiments would need to be performed to understand the exact relation between pellet size/number, flow rate, and infiltration rate. To conclude, our calculations show that the levitation force is about 6.8 times stronger on Mars (Fig. 5b), resulting in levitation lasting up to 48 times longer. This would allow levitating sediments to travel decameters downslope, even with the relatively small amounts of water used in our experiments. Such disturbances could be detectable in remote sensing data, although the detailed morphologies would be unresolvable. Importantly, the calculations show that sediment levitation would be viable on Mars even under conditions similar to our "cold" experiments, so this process could be widely applicable on Mars today and in the recent past. The driver for the enhanced transport in the "warm" runs is the combination of a rapid delivery of water to the surface and the relatively warm sediment temperature. Our experimental results do not assume a particular source of this water, and below we discuss how our results might apply to the various source mechanisms already proposed. Mechanisms that deliver water rapidly to the martian surface are summarized in the context of gullies by Heldmann and Mellon5 and Heldmann et al.6. For example, aquifer release4, 6, 12, 22 is one possibility for rapid water release, but is unlikely to explain mass wasting occurring near the top of isolated dunes and massifs or crater rims3, 15, 17, 18, although this mechanism cannot be ruled out for every gully on the surface of Mars and has recently been invoked to explain RSL34, 46. Our experimental results are consistent with the large range of proposed aquifer discharge rates in terms of flow rate, and also with the requirement of relatively warm surface temperatures to melt the confining plug47. In the recent past, for about 20% of the spin/orbital conditions (high eccentricity, high obliquity, perihelion close to the solstice) between 5 and 10 Ma ago48, 49, climatic conditions favored the formation of a near-surface ice-bearing layer that freezes and thaws regularly (e.g., daily) due to small surface temperature variations around the freezing point50, leading to the possible availability of transient liquid water51. This transient liquid water could be a possible source for our observed transportation mechanism, if the parameters are correct (e.g., a flow rate high enough to overcome infiltration, low pressure).
Water trapped as ice is common today, as inferred from a variety of remote sensing data and modeling, both within and on the martian surface52, 53, and across a wide latitudinal range from the poles54,55,56 to low latitudes around ~25°57. For example, ice is thought to be a primary component of the mid-latitude atmospherically derived dust-ice mantle58, as well as of mid-latitude glaciers (e.g., rock glaciers59). Under current climatic conditions melting would have to occur within or below insulating layers such as dust, regolith, or ice15, 18, 60 to mitigate the effects of sublimation. Even today, locally warm surface temperatures above the frost point are possible on Mars26, 32, 33, 35, 42. In order for our results to be relevant, such melting would have to be rapid enough to reach the flow rates in our experiments, or, more likely, melt could accumulate beneath or within a protective layer before being released to the surface suddenly. The main argument against water-based hypotheses for presently active features is that they all require significant volumes of liquid water and/or brines3, 4, 17, 22, 34,35,36, 46, 61. Our newly identified process opens up the possibility that the amount of liquid water needed for present-day and geologically recent downslope transportation on Mars has been overestimated. Furthermore, our experiments could help to answer still open questions about the formation of RSL. Although comparison with them shows that their general growth speed is relatively low (0.25–1.86 cm sol−1)46 compared to the sediment transportation processes observed in our experiments, there is the possibility that RSL could propagate rapidly but episodically (e.g., only during the peak temperatures at noon34), in which case their flow rates should be comparable to those used in our experiments (approximately 7 ml s−1, as calculated from values presented by Stillman et al.46). The processes revealed by our experiments could also help to address the problem that RSL require a high water budget if they propagate by infiltration alone (about 1.5–5.6 m3 m−1, as calculated by Stillman et al.46). The length of RSL could be achieved with ~10 times less water (a value not scaled to martian gravity, which would make it even larger based on our calculations) if boiling is causing saltation, pellet levitation, and granular avalanches. However, our experiments concern the transport mechanisms of unstable water at the martian surface and do not inform the contentious debate about how this water was produced or brought to the surface, which is discussed in other papers5, 6, 34, 46, 60. The sediment transport processes we describe here are applicable anywhere in the solar system where a liquid would be unstable, and where temperature differences between the liquid and the substrate could occur. Our experiments have the potential to help us understand mass wasting on bodies such as Titan62 and Vesta63. However, Mars has the ideal combination of environmental parameters for this process to operate with water, as well as possessing many well-documented mass-wasting landforms that it might help explain. The amount of water required to move sediment on martian hillslopes may have been overestimated due to the absence of the relevant sediment transport processes on Earth, whose landscape is used as the basis of planetary comparison.
This conclusion demonstrates the unique capability of laboratory experiments for exploring and understanding planetary surface processes, and shows how, on Mars, even a little water can go a long way.

Detailed experimental setup and parameter justification

The experiments were performed in a 2-m long and 1-m wide Mars simulation chamber located at the Open University. Two vacuum pumps were used to reduce the pressure to 7 mbar, at which pressure all experiments were started. Due to the rapid release of water vapor, average pressures for the 60 s of water flow were around 9 mbar for each experiment. The pressure within the chamber was measured with a Pirani gauge and logged every second. The sediment used was a natural eolian fine silica sand, with D_50 = 230.1 µm and minor components of clay and silt, previously used in similar experiments in the Mars simulation chamber at the Open University40, 41, 64. We chose this sediment because it is broadly consistent with sediments that are found on Mars41, 65 and its unimodal nature aids the development of physical models. A ~5-cm deep sand bed was placed in a rectangular metallic tray (0.9-m long, 0.4-m wide, and 0.1-m deep). This thickness was chosen to avoid spill-over of sediment at the end and sides of the test bed during "warm" experiments (which would influence our transport volume measurements), and to have sufficient material to avoid exposing the underlying tray upon erosion of the substrate. The angle was set to 25°, which is within the range of slope angles reported for gullies on Mars5, 7, 16 and a compromise between the different slope angles observed at contemporary active mass wasting sites, e.g., dark flows within polar gullies (~15°)26, linear dune gullies (~10°–20°)17, 61, and RSL (~28°–35°)32, 33, 35, 46. Our chosen angle is below the angle of repose for martian and terrestrial sand dunes (between 30° and 35° based on remote sensing studies66, and ~30° based on experiments67), and hence the movements we measure are not related to dry granular flows (slip face avalanches). The water outlet was placed 1.5 cm above the sediment surface, 8 cm from the top wall of the tray. The height of the water outlet was chosen to be as close to the surface as possible, yet high enough so as not to interfere with the subsequent sediment ejection. The temperature of the sediment was monitored every second using four thermocouples placed 8 cm from the edges and 20 cm from the top/bottom, at a height of ~2 cm within the sediment. Two thermocouples were used to monitor the water temperature inside the water reservoir located outside the chamber. All experiments were recorded with three different cameras: two webcams in the interior of the chamber, and one video camera outside the chamber. Each experiment was defined as a 60 s flow of water onto the sediment, with a water volume between 620 and 670 ml, resulting in flow rates between 10.3 and 11.2 ml s−1. This flow rate is intermediate between Conway et al.40 (~80 ml s−1) and Massé et al.41 (~1–5 ml min−1), in order to (a) obtain erosion by overland flow under terrestrial (or non-boiling) conditions, (b) minimize boundary effects (i.e., contact with the tray edges) under boiling conditions, and also (c) obtain a steady and reproducible flow rate. The instruments were left recording and the chamber was kept at low pressure for at least a further 10 min after the end of water flow.
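As a quick sanity check on these numbers (illustration only), the stated volumes and run duration reproduce the quoted flow rates and sit between the two earlier studies; the conversion below is just arithmetic, not data from the experiments.

```python
# Flow rate from delivered volume over the 60 s run (ml s^-1).
for volume_ml in (620.0, 670.0):
    print(round(volume_ml / 60.0, 1))        # -> 10.3 and 11.2

conway_rate = 80.0                           # ml s^-1 (Conway et al.)
masse_rate_range = (1.0 / 60.0, 5.0 / 60.0)  # 1-5 ml min^-1 expressed in ml s^-1
```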
Averaged values and standard deviations for pressure, water temperature, and surface temperature were calculated over the 60 s of water flow and are presented in Table 1. Our experimental installation is somewhat comparable to that of Coleman et al.68, with the main difference being that their experiments were performed under terrestrial pressures and not under low martian pressures, which are fundamental for observing the transport mechanism investigated in this work.

Production of DEMs

Before and after each experiment the sediment test bed was photographed ~40 times from a down-looking (nadir) viewpoint to construct a 3D model using the "Structure-from-Motion"69 software Agisoft PhotoScan. Twelve fixed targets of 2.67-cm diameter were positioned within the models and marked by standard black-on-white printed target markers. These targets allowed the resulting 3D models to be scaled and coregistered. Root mean square errors and reprojection errors can be found in Supplementary Table 2. DEMs at 1 mm pix−1 and orthophotos at 0.2 mm pix−1 were exported to ESRI ArcGIS. The values for the volume of erosion and deposition were calculated by differencing the before and after DEMs. In order to estimate the volume transported we summed the erosion and deposition volumes and divided this number by two.

Mapping of transport mechanisms

In ArcGIS, all surface changes were manually mapped using the orthophotos from before and after the experiments and a hillshaded visualization of the DEM from after the experiments. Videos were also used for additional identification of the different transportation mechanisms to improve the mapping. As seen in the map of Fig. 1, we used four different units for transport types: (1) overland flow (blue unit), characterized by visible erosion and deposition of sediment via liquid water flows on top of the surface, identifiable by comparing the before and after photos, inspection of the hillshade, and video observation; (2) percolation (green unit), regions where liquid water infiltrated and wetted the sediment (wetted sediment bodies), identified by the darker color in the after images, yet lacking visible transport by entrainment; (3) dry avalanches/saltation (yellow unit), characterized by the movement of dry sand (dry landslides) with no visible influence of wet sediment (no change in color); and (4) pellets (red unit), wetted sediment bodies that were ejected or detached from the source area and levitated/rolled over the surface, identified by a color and elevation change, and by video observation. We used the mapped outlines to partition the volumes of sediment eroded and deposited among these different processes. This method of mapping surface changes in planview only provides a crude estimate of the partitioning. For example, pellets deposited early in the experiments can be subsequently buried by dry flows; hence, using our mapping scheme, these pellets would be included in the volume assigned to the "dry avalanches/saltation" category. Therefore, we performed additional manipulations (including error calculations, described below) to improve this partitioning (Supplementary Table 3).

Error calculations

"Interpolation" for the overland flow in "warm" experiments: here, we adjusted the volumes of the overland flow category calculated in planview to account for the fact that the surface was lowered by other processes.
Video observations show that the area mapped as overland flow on the basis of the "after" DEM was heavily eroded at the beginning of the experiment by pellet ejection and dry avalanches/saltation (Supplementary Movies 3, 4). These effects are not taken into account when simply calculating volumes based on our planview mapping. Hence, we performed an adjustment by assuming that the initial surface for the overland flow could be adequately approximated by an interpolated "natural neighbor" surface fitted to elevations extracted from the "after" DEM within a 2-mm buffer outside the digitized boundary (instead of using the "before" DEM as the initial surface). To quantitatively assess the uncertainty of this assumption we calculated the mean volume attributed to overland flow for the "cold" experiments using the interpolation method, as described above, and compared it to the mean volume derived using the original method based on the "before" DEM. We then scaled this uncertainty to the smaller area covered by the overland flow in the "warm" experiments. The "Interpolation Error" ranged between ~3.1 and ~3.4 cm3 for the three "warm" experiments. Application of the interpolation method described above leaves a certain volume of material unaccounted for. This volume was arbitrarily partitioned 50–50% between pellets and dry avalanches/saltation, because we do not know the exact partitioning. We consider this 50–50 partitioning to represent the maximum uncertainty on the volume partitioning. The "Superposition Error" ranged from 14 to ~15 cm3 for the three "warm" experiments performed. In order to assess the "Measurement Error" associated with our volume calculations we performed test measurements on surfaces undisturbed by the flows within a fixed rectangular area (~46 cm2). We made one test measurement per experiment. The resulting transport volumes were then scaled to the areas covered by particular transport types (Fig. 1b, e) to obtain the errors in their total volumes (Supplementary Table 3). If our volume calculations were perfect, the test areas would give zero volume change. We took this approach because the uncertainty on volume calculations performed by differencing DEMs arises from a number of sources (photo quality, target misplacement, blunders in point matching, differences in lighting and texture, etc.), which are difficult to assess individually. The "Measurement Error" varied between ~1% and ~31% for the whole area of the flows (Supplementary Table 3). For calculation of the "Total error of Runs" we scaled the "Measurement Error" to the total area. "Interpolation Error" and "Superposition Error" have no influence on the total error and only apply to the subdivision of the total volume into the different transport types (Fig. 2, Table 1, Supplementary Table 3). Mean errors for each transport type and for the "Total error of Runs" were calculated using the method of error propagation.

Definition of overland flow

The overland flow runoff length and width maxima were measured in ArcGIS. The runoff length is defined as the maximum linear distance between the most upslope sediment disturbance (uppermost sediment erosion via liquid water) and the lowermost sediment disturbance (lowermost sediment deposition via liquid water). The runoff width of the overland flow is defined as the maximum linear distance between the rims of the overland flow, perpendicular to the runoff length. These values are also presented in Table 1.
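The volume-differencing and control-area error estimate described in the sections above can be prototyped in a few lines. The sketch below (Python/NumPy) is not the ArcGIS workflow used in the study; the array sizes and noise levels are synthetic, invented purely to show the bookkeeping.

```python
import numpy as np

def transported_volume(dem_before, dem_after, cell_area):
    """Half the sum of erosion and deposition volumes from two co-registered DEMs."""
    dz = dem_after - dem_before
    deposition = dz[dz > 0].sum() * cell_area
    erosion = -dz[dz < 0].sum() * cell_area
    return 0.5 * (erosion + deposition)

# Synthetic 1 mm pix^-1 DEMs of a 0.4 x 0.9 m bed (heights in metres, values invented).
rng = np.random.default_rng(0)
before = rng.normal(0.0, 1e-3, size=(400, 900))
after = before + rng.normal(0.0, 5e-4, size=before.shape)
cell_area = 1e-3 * 1e-3                                 # m^2 per 1 mm cell

total = transported_volume(before, after, cell_area)
print(f"apparent transported volume: {total * 1e6:.1f} cm^3")

# A "measurement error" in the same spirit: the apparent change over an undisturbed
# control patch, scaled up to the full mapped area.
patch = (slice(0, 50), slice(0, 50))
control = transported_volume(before[patch], after[patch], cell_area)
scaled = control * before.size / (50 * 50)
print(f"control-area volume scaled to full area: {scaled * 1e6:.1f} cm^3")
```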
All relevant data are available in the article and Supplementary Information files, or are available from the corresponding authors upon reasonable request.

References

Hecht, M. H. Metastability of liquid water on Mars. Icarus 156, 373–386 (2002). Heldmann, J. L. et al. Formation of Martian gullies by the action of liquid water flowing under current Martian environmental conditions. J. Geophys. Res. 110, E05004 (2005). Costard, F., Forget, F., Mangold, N. & Peulvast, J. P. Formation of recent Martian debris flows by melting of near-surface ground ice at high obliquity. Science 295, 110–113 (2002). Malin, M. C. & Edgett, K. S. Evidence for recent groundwater seepage and surface runoff on Mars. Science 288, 2330–2335 (2000). Heldmann, J. L. & Mellon, M. T. Observation of Martian gullies and constraints on potential formation mechanisms. Icarus 168, 285–304 (2004). Heldmann, J. L., Carlsson, E., Johansson, H., Mellon, M. T. & Toon, O. B. Observations of Martian gullies and constraints on potential formation mechanisms II. The northern hemisphere. Icarus 188, 324–344 (2007). Conway, S. J., Balme, M. R., Kreslavsky, M. A., Murray, J. B. & Towner, M. C. The comparison of topographic long profiles of gullies on Earth to gullies on Mars: a signal of water on Mars. Icarus 253, 189–204 (2015). Reiss, D., van Gasselt, S., Neukum, G. & Jaumann, R. Absolute dune ages and implications for the time of formation of gullies in Nirgal Vallis, Mars. J. Geophys. Res. 119, E06007 (2004). Schon, S. C., Head, J. W. & Fassett, C. I. Unique chronostratigraphic marker in depositional fan stratigraphy on Mars: evidence for ca. 1.25 Ma gully activity and surficial meltwater origin. Geology 37, 207–210 (2009). Raack, J., Reiss, D. & Hiesinger, H. Gullies and their relationships to the dust-ice mantle in the northwestern Argyre Basin, Mars. Icarus 219, 129–141 (2012). Mellon, M. T. & Phillips, R. J. Recent gullies on Mars and the source of liquid water. J. Geophys. Res. 106, 23165–23179 (2001). Gaidos, E. J. Cryovolcanism and the recent flow of liquid water on Mars. Icarus 153, 218–223 (2001). Mangold, N., Costard, F. & Forget, F. Debris flows over sand dunes on Mars: evidence for liquid water. J. Geophys. Res. 108, 5027 (2003). Reiss, D. & Jaumann, R. Recent debris flow on Mars: seasonal observations of the Russell Crater dune field. Geophys. Res. Lett. 30, 1321 (2003). Balme, M. et al. Orientation and distribution of recent gullies in the southern hemisphere of Mars: observations from High Resolution Stereo Camera/Mars Express (HRSC/MEX) and Mars Orbiter Camera/Mars Global Surveyor (MOC/MGS) data. J. Geophys. Res. 111, E05001 (2006). Dickson, J. L., Head, J. W. & Kreslavsky, M. Martian gullies in the southern mid-latitudes of Mars: evidence for climate-controlled formation of young fluvial features based upon local and global topography. Icarus 188, 315–323 (2007). Reiss, D., Erkeling, G., Bauch, K. E. & Hiesinger, H. Evidence for present day gully activity on the Russell crater dune field, Mars. Geophys. Res. Lett. 37, L06203 (2010). Christensen, P. R. Formation of recent Martian gullies through melting of extensive water-rich snow deposits. Nature 422, 45–48 (2003). Arfstrom, J. & Hartmann, W. K. Martian flow features, moraine-like ridges, and gullies: terrestrial analogs and interrelationships. Icarus 174, 321–335 (2005). Hoffman, N.
Active polar gullies on Mars and the role of carbon dioxide. Astrobiology 2, 313–323 (2002). Treiman, A. H. Geologic settings of Martian gullies: implications for their origins. J. Geophys. Res. 108, 8031 (2003). Malin, M. C., Edgett, K. S., Posiolova, L. V., McColley, S. M. & Noe Dobrea, E. Z. Present-day impact cratering rate and contemporary gully activity on Mars. Science 314, 1573–1577 (2006). Dundas, C. M., McEwen, A. S., Diniega, S., Byrne, S. & Martinez-Alonso, S. New and recent gully activity on Mars as seen by HiRISE. Geophys. Res. Lett. 37, L07202 (2010). Dundas, C. M., Diniega, S., Hansen, C. J., Byrne, S. & McEwen, A. S. Seasonal activity and morphological changes in Martian gullies. Icarus 220, 124–143 (2012). Dundas, C. M., Diniega, S. & McEwen, A. S. Long-term monitoring of Martian gully formation and evolution with MRO/HiRISE. Icarus 251, 244–263 (2015). Raack, J. et al. Present-day seasonal gully activity in a south polar pit (Sisyphi Cavi) on Mars. Icarus 251, 226–243 (2015). Jouannic, G. et al. Morphological and mechanical characterization of gullies in a periglacial environment: the case of the Russell crater dune (Mars). Planet Space Sci. 71, 38–54 (2012). Fenton, L. K., Bandfield, J. L. & Ward, A. W. Aeolian processes in Proctor Crater on Mars: sedimentary history as analysed from multiple data sets. J. Geophys. Res. 108, 5129 (2003). Kereszturi, A. et al. Recent rheologic processes on dark polar dunes of Mars: driven by interfacial water? Icarus 201, 492–503 (2009). Diniega, S., Byrne, S., Bridges, N. T., Dundas, C. M. & McEwen, A. S. Seasonality of present-day Martian dune-gully activity. Geology 38, 1047–1050 (2010). Hansen, C. J. et al. Seasonal erosion and restoration of Mars' Northern polar dunes. Science 331, 575–578 (2011). McEwen, A. S. et al. Seasonal flows on warm Martian slopes. Science 333, 740–743 (2011). Ojha, L. et al. HiRISE observations of recurring slope lineae (RSL) during southern summer on Mars. Icarus 231, 365–376 (2014). Stillman, D. E., Michaels, T. I., Grimm, R. E. & Harrison, K. P. New observations of Martian southern mid-latitude recurring slope lineae (RSL) imply formation by freshwater subsurface flows. Icarus 233, 328–341 (2014). Chojnacki, M. et al. Geologic context of recurring slope lineae in Melas and Coprates Chasmata, Mars. J. Geophys. Res. Planets 121, 1204–1231 (2016). Ojha, L. et al. Spectral evidence for hydrated salts in recurring slope lineae on Mars. Nat. Geosci. 8, 829–832 (2015). Sullivan, R., Thomas, P., Veverka, J., Malin, M. & Edgett, K. S. Mass movement slope streaks imaged by the Mars Orbiter Camera. J. Geophys. Res. 106, 23607–23633 (2001). Schmidt, F., Andrieu, F., Costard, F., Kocifaj, M. & Meresescu, A. G. Formation of recurring slope lineae on Mars by rarefied gas-triggered granular flows. Nat. Geosci. 10, 270–274 (2017). Sears, D. W. G. & Moore, S. R. On laboratory simulation and the evaporation rate of water on Mars. Geophys. Res. Lett. 32, L16202 (2005). Conway, S. J., Lamb, M. P., Balme, M. R., Towner, M. C. & Murray, J. B. Enhanced runout and erosion by overland flow at low pressure and sub-freezing conditions: experiments and application to Mars. Icarus 211, 443–457 (2011). Massé, M. et al. Transport processes induced by metastable boiling water under Martian surface conditions. Nat. Geosci. 9, 425–428 (2016). Haberle, R. M. et al. On the possibility of liquid water on present-day Mars. J. Geophys. Res. 106, 23317–23326 (2001). Diniega, S. et al.
A new dry hypothesis for the formation of Martian linear gullies. Icarus 225, 526–537 (2013). Turcotte, D. L. & Schubert, G. Geodynamics. 456 (Cambridge University Press, Cambridge, 2002). Cengel, Y. A. & Ghajar, A. J. Heat and Mass Transfer: Fundamentals and Applications. 992 (McGraw-Hill Education, New York, 2014). Stillman, D. E., Michaels, T. I., Grimm, R. E. & Hanley, J. Observations and modeling of northern mid-latitude recurring slope lineae (RSL) suggest recharge by a present-day Martian briny aquifer. Icarus 265, 125–138 (2016). Goldspiel, J. M. & Squyres, S. W. Groundwater discharge and gully formation on Martian slopes. Icarus 211, 238–258 (2011). Laskar, J. et al. Long term evolution and chaotic diffusion of the insolation quantities on Mars. Icarus 170, 343–364 (2004). McKay, C. P. et al. The icebreaker life mission to Mars: a search for biomolecular evidence for life. Astrobiology 13, 334–353 (2013). Kreslavsky, M. A., Head, J. W. & Marchant, D. R. Periods of active permafrost layer formation during the geological history of Mars: implications for circum-polar and mid-latitude surface processes. Planet Space Sci. 56, 289–302 (2008). Richardson, M. I. & Mischna, M. A. Long-term evolution of transient liquid water on Mars. J. Geophys. Res. 110, E03003 (2005). Squyres, S. W. & Carr, M. H. Geomorphic evidence for the distribution of ground ice on Mars. Science 231, 249–252 (1986). Boynton, W. V. et al. Distribution of hydrogen in the near surface of Mars: evidence for subsurface ice deposits. Science 297, 81–85 (2002). Kieffer, H. H., Chase, S. C. Jr, Martin, T. Z., Miner, E. D. & Palluconi, F. D. Martian north pole summer temperatures: dirty water ice. Science 194, 1341–1344 (1976). Bibring, J.-P. et al. Perennial water ice identified in the south polar cap of Mars. Nature 428, 627–630 (2004). Appéré, T. et al. Winter and spring evolution of northern seasonal deposits on Mars from OMEGA on Mars Express. J. Geophys. Res. 116, E05001 (2011). Vincendon, M. et al. Near-tropical subsurface ice on Mars. Geophys. Res. Lett. 37, L01202 (2010). Mustard, J. F., Cooper, C. D. & Rifkin, M. K. Evidence for recent climate change on Mars from the identification of youthful near-surface ground ice. Nature 412, 411–414 (2001). Head, J. W. et al. Tropical to mid-latitude snow and ice accumulation, flow and glaciation on Mars. Nature 434, 346–351 (2005). Grimm, R. E., Harrison, K. P. & Stillman, D. E. Water budgets of Martian recurring slope lineae. Icarus 233, 316–327 (2014). Pasquon, K., Gargani, J., Massé, M. & Conway, S. J. Present-day formation and seasonal evolution of linear dune gullies on Mars. Icarus 274, 195–210 (2016). Elachi, C. et al. Cassini radar views the surface of Titan. Science 308, 970–974 (2005). Scully, J. E. C. et al. Geomorphological evidence for transient water flow on Vesta. Earth Planet Sci. Lett. 411, 151–163 (2015). Jouannic, G. et al. Laboratory simulation of debris flows over sand dunes: insights into gully-formation (Mars). Geomorphology 231, 101–115 (2015). Cousin, A. et al. Compositions of coarse and fine particles in Martian soils at gale: a window into the production of soils. Icarus 249, 22–42 (2015). Atwood-Stone, C. & McEwen, A. S. Avalanche slope angles in low-gravity environments from active Martian sand dunes. Geophys. Res. Lett. 40, 2929–2934 (2013). Kleinhans, M. G., Markies, H., de Vet, S. J., in 't Veld, A. C. & Postema, F. N. Static and dynamic angles of repose in loose granular materials under reduced gravity. J. Geophys. Res. 116, E11004 (2011). 
Coleman, K. A., Dixon, J. C., Howe, K. L., Roe, L. A. & Chevrier, V. Experimental simulation of Martian gully forms. Planet Space Sci. 57, 711–716 (2009). Westoby, M. J., Brasington, J., Glasser, N. F., Hambrey, M. J. & Reynolds, J. M. 'Structure-from-motion' photogrammetry: a low-cost, effective tool for geoscience applications. Geomorphology 179, 300–314 (2012).

Acknowledgements

C.H. and the laboratory work were funded by Europlanet (Europlanet 2020 RI has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 654208), J.R. was funded by a Horizon 2020 Marie Skłodowska-Curie Individual Fellowship (H2020-MSCA-IF-2014-657452), and M.R.B.'s contribution to this work was partly funded by the UK Science and Technology Facilities Council (ST/L000776/1). S.J.C. was partially supported by the French Space Agency CNES. M.R.P. was partly funded by the EU Europlanet program (grant agreement No 654208) and by the UK Science and Technology Facilities Council (ST/P001262/1). We acknowledge the help of J.P. Mason, who greatly improved our pressure measurements within the Mars simulation chamber.

Author affiliations

School of Physical Sciences, Faculty of Science, Technology, Engineering & Mathematics, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK: Jan Raack, Matthew R. Balme & Manish R. Patel. Laboratoire de Planétologie et Géodynamique—UMR CNRS 6112, Université de Nantes, 2 rue de la Houssinière—BP 92208, 44322, Nantes Cedex 3, France: Susan J. Conway & Sabrina Carpy. Physikalisches Institut, Universität Bern, Sidlerstrasse 5, 3012, Bern, Switzerland: Clémence Herny. Space Science and Technology Department, STFC Rutherford Appleton Laboratory, Harwell Campus, Didcot, OX11 0QX, UK: Manish R. Patel.

Author contributions

The methodology and experiments were planned by S.J.C. and conducted by C.H. and J.R. with significant advice and support from S.J.C. and M.R.P. Data analysis was done by J.R. and C.H. with advice from S.J.C. and M.R.B. The manuscript was prepared by J.R. with significant help and support from S.J.C., M.R.B., C.H., and M.R.P. Scaling calculations were done by C.H., S.C. and S.J.C. Correspondence to Jan Raack.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Raack, J., Conway, S.J., Herny, C. et al. Water induced sediment levitation enhances downslope transport on Mars. Nat Commun 8, 1151 (2017).
DOI: https://doi.org/10.1038/s41467-017-01213-z
Experimental evidence for lava-like mud flows under Martian surface conditions. Petr Brož & Ondřej Krýza, Nature Geoscience (2020).
Gas Giant Station Design
What would, roughly, a scientific research station look like on a gas giant? Use Saturn as a baseline. The station would float because of a "balloon" carrying nothing. Vacuum is the lifting "gas", as the atmosphere of most gas giants is already hydrogen/helium. The pressure at the average elevation of the station would be 1 bar, or just a tiny bit less than 1 atm. The average temperature would be around -139 C. Power is generated by fusion of atmospheric hydrogen and helium, as Saturn is too far away from the sun for solar power to be meaningful. The stations will be floating along in the wind, so they will experience a stiff breeze at most, because they aren't actually going that fast relative to the wind. Materials come from regular shipments from drop bots delivered by mass drivers in the asteroid belt/moons. (I haven't decided yet.) Drop bots are large machines that are mainly cargo pods built to withstand atmospheric entry, with helicopter blades that extend when they slow down enough. 23rd century tech, use your best judgement.
Some ideas I had, which might not be the most practical: Would it have a huge central sphere with the station built on a strip around its center? Would it be a number of small balloons around the edges of the base? I feel like that'd be sacrificing buoyancy for redundancy.
science-based architecture Desolationgame
If you would explain the downvote I could fix the problem? – Desolationgame Apr 11 '16 at 22:30
A huge central sphere with the station built on a strip around its center sounds cool, but I think it would tip over easily. Better to go with a tried-and-true dirigible / blimp design, with the "station" underneath the "balloon". No way it would tip over then since all the weight is on the bottom. – BrettFromLA Apr 11 '16 at 22:36
I'm guessing the downvote is because the question is too specific (is option A or option B better?) and doesn't do a good job of explaining the overarching issue. Are you asking about the feasibility of having a vacuum chamber for buoyancy (it seems rather implausible to me)? Are you worried about balance and stability? – Rob Watts Apr 11 '16 at 22:36
@RobWatts I'm not asking about feasibility or anything, I'm asking how such a station would be designed. The two examples I gave weren't options, but just ideas. I'll edit the question to be more clear. – Desolationgame Apr 11 '16 at 22:40
The downvote is mine. Your question lacks basic details like "Where in Saturn's troposphere will the research station be situated?", "What kind of power sources are available and what do they run on?", "How should the station deal with the insane wind speeds on Saturn, if they need to deal with them at all?"... basically, there's no detail about the context of this station, its occupants, their roles or goals or the technologies available. – Green Apr 12 '16 at 12:52
Maintain altitude using a heated gas envelope, much like a hot air balloon on Earth
While it's true that a vacuum is much less dense than atmospheric hydrogen at 1 bar, the pressure vessel/flight envelope required to maintain buoyancy adds unnecessary weight. The primary problem with a "lifting vacuum" is that there's 1 bar of pressure pushing inwards on the envelope but nothing pushing back out.
Thus the envelope alone must withstand a uniform 1 bar of compression plus fluctuations in atmospheric pressure plus wind loads plus a safety margin for unplanned ascents or descents. While doing this is probably possible, I believe there are easier ways.
Heat the Hydrogen
Instead of creating a hard vacuum, use a gas envelope to contain heated hydrogen. Getting heat is both easy and essential. Since fusion power is available, heated hydrogen is abundantly available. As opposed to a hard vacuum in the flight envelope, filling it with heated hydrogen does a couple of beneficial things. It gives shape to the envelope as well as structural integrity. It prevents ammonia ice build-up. Just as water ice build-up on airplanes causes significant problems, so would the build-up of ammonia ice cause problems on the surface of the space station. If the envelope is heated above the melting point of ammonia ice then no ice can accumulate. If the shape of the envelope is variable then inflating or underinflating parts of the envelope will allow the research station some degree of mobility. An envelope that must only maintain tension loads can be made much, much lighter than a structure dealing with compression loads. This is the same difference in "lightness" observed between suspension bridges vs older stone bridges. We want this envelope and the associated research station to be as light as possible. In the event of reactor failure, the residual heat in the envelope will give the scientists/engineers on the research platform time to solve the problem before they lose buoyancy.
Envelope Constraints
The envelope should have the following characteristics:
Volume: Sufficient to maintain station buoyancy at any altitude between 0.1 bar and 10 bar. This gives an altitude range of just above the water ice clouds at the bottom of the troposphere all the way to the top of the troposphere. This should include a 50% safety margin on envelope volume. Being able to fly to 0.1 bar means that rendezvous with incoming supply ships can be done above the clouds and using visual flight rules instead of instruments. Being able to see where you're going is always preferable.
Abrasion resistance: Even though the relative winds experienced by the station should be relatively small, ammonia ice and ammonium hydrosulfide ice may be abrasive. Over a long enough period this could cause the envelope to lose pressure and fail. While upwellings of water-ice-bearing air are uncommon, the envelope should be able to handle exposure to water ice crystals as well.
Chemical resistance: I'm unfamiliar with the chemical properties of the three ices in Saturn's atmosphere but the envelope should be able to resist these effects as well.
Redundancy: Should the envelope fail, causing an unplanned descent into Saturn's atmosphere, a back-up envelope will be very handy to have. This should be inflatable in fairly rapid order.
Station Constraints
The station will be positioned at the bottom of the envelope for stability's sake. Aside from the envelope, the research station has the same concerns as a space station, with reduced requirements for cooling. However, there are a couple of things to mention.
Redundant Power Supplies: With this design, heat equals life, both for the envelope and for the human occupants. It is therefore essential that the fusion powerplants be online at all times.
Hydrogen Processing: These will need to run all the time, preferably with multiple units running at the same time. Sulfur can be discarded back into the atmosphere.
Excess hydrogen can be stored in the envelope or low pressure tanks on the station. Nitrogen can be used to supplement the station's atmosphere. Oxygen Processing: The station must be able to position itself low enough in the atmosphere to gather water ice. This provides infinite oxygen supplies. All other metals will need to be delivered from off-world. GreenGreen $\begingroup$ Fun fact: the "surface gravity" in Saturn's troposphere is approximately 0.3g. $\endgroup$ – Green Apr 12 '16 at 17:46 $\begingroup$ Great answer! I wish I could mark two answers, but I'll prioritize yours for detail. $\endgroup$ – Desolationgame Apr 12 '16 at 17:49 $\begingroup$ I was basically starting to write this answer, but got bogged down in the (fun, actually) details of just how hard it would be to get a vacuum balloon to work. $\endgroup$ – Rob Watts Apr 12 '16 at 17:53 $\begingroup$ @RobWatts, I made it as far as "pressure vessels aren't buoyant at 1 bar." then quit. I appreciate the work you put in your answer. $\endgroup$ – Green Apr 12 '16 at 17:55 Let's look at some of the difficulties your gas giant research station will need to deal with. To start with, let's take a look at the idea of using a vacuum balloon. 1 bar is equal to 0.1MPa. Using the formulas supplied by an answer on physics.SE about pressure in a sphere, the formula for the required thickness of the sphere is $t = p r / (2 \sigma)$, where $\sigma$ is the compressive strength in MPa, $p$ is the pressure (0.1MPa), and $r$ is the radius of the sphere. The mass of the shell is roughly $t\times 4\pi r^2\times\rho$ where $\rho$ is the density of the material. Using 0.19kg/m^3 as the density of Saturn's atmosphere at that point, we get $0.19\times \frac{4}{3}\pi r^3$ kilograms displaced by the sphere. Combining these formulas, we can get the mass of the shell as a function of the compressive strength and density of the shell material and the radius of the sphere - $0.4\pi r^3\rho /(2\sigma)$. Now let's look at the ratio of the mass of the sphere to the mass of the atmosphere displaced - $\frac{0.4\pi r^3\rho/(2\sigma)}{0.19\frac{4}{3}\pi r^3}\approx \frac{3\rho}{4\sigma}$. Fortunately for us the formula has managed to simplify quite nicely. In order to float, this ratio needs to be less than one - it needs to displace more mass than it weighs. This could be reached by a hypothetical material with a compressive strength of 1MPa and a density of 1kg/m^3. Unfortunately, this isn't good news for the vacuum balloons. Steel has a compressive strength somewhere around 300MPa, but has a density of about 8000kg/m^3. That leaves the ratio at about 20, nowhere near being able to support even itself much less a research station. The shell could be thinner by using internal supports, but even then you wouldn't be able to get an over 95% reduction in the total amount of steel being used. Also, you'll notice that all radius contributions cancelled out in the final equation. This means that making the sphere larger or smaller makes no difference. A hot-air balloon really is the way to go. Unlike needing something that will resist compression, like steel, you can use a more flexible material with a high tensile strength, like kevlar or graphene. Graphene has a ridiculously high tensile strength, but by itself it's not airtight. It's quite reasonable that by the 23rd century we will have developed either a graphene-like compound or something that we can coat graphene with to have a strong, lightweight airtight material. From there, just follow @Green's answer. 
It's pretty much what I was going to say. – Rob Watts
If I were designing it, I would have a large rectangular block (think two or three Borg Cubes squeezed together) as the main body of the station. This block is supported by several "balloons" around the upper rim. The weight is below the buoyancy source, so it will be more or less stable. To add some extra stability, underneath the block have a large pool of water with a massive floating weight in it. This weight will act as a counter-balance; should the station start to tilt, the buoyancy of the water will cause it to move "up" the tilt, adding weight to bring the station back to level. Note that this will add lots of weight, so you will need extra balloons to compensate. But if there's one thing you learn in engineering, it's that redundancy is better than failure. – John Robinson
If it was on Saturn: rockets, blimps, spaceships, hell, even one of da Vinci's flying machines. Since the air is mostly composed of hydrogen, the air wouldn't combust, but it all depends on what you would like specifically, and what type of future you are thinking of (steam punk, cyber punk, etc.). – Fox-Chan
What would make airships viable economically? Tidally locked gas giant moon - brightness of the gas giant Could a Neptune like Gas Giant support life? Reality Check: Habitable moon around earth-like planet Hiding a Gas Giant What are the energy requirements of moving some of Venus' atmosphere to Mars? Extremely huge dirigible/airplane hybrid feasibility Is this planet's sulphur dioxide atmosphere feasible? How long would an array of mass drivers take to terraform Mars by transporting CO2 from Venus? Is it possible to place a permanent probe on Uranus? Habitable world orbiting a gas giant orbiting a gas giant
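A quick numerical companion to the vacuum-balloon estimate worked through in Rob Watts' answer above: the shell-to-lift mass ratio can be checked in a few lines. This is a minimal sketch only; the material figures are rough illustrative values rather than design data, the ambient density of 0.19 kg/m^3 is the number quoted in that answer, and buckling is ignored (a real evacuated shell would need to be heavier still).

# Quick numerical check of the shell-mass argument above:
# ratio = (mass of a thin spherical shell sized for 1 bar of external pressure)
#       / (mass of Saturn atmosphere it displaces)  ~  3*rho / (4*sigma).
import math

AMBIENT_DENSITY = 0.19   # kg/m^3 at ~1 bar, the value quoted in the answer
PRESSURE_MPA = 0.1       # ~1 bar of external pressure, expressed in MPa

def shell_to_lift_ratio(compressive_strength_mpa, density_kg_m3, radius_m=10.0):
    # Thin-wall sphere: required thickness t = p*r / (2*sigma). The radius cancels
    # out of the final ratio; it is kept here only to make that cancellation explicit.
    t = PRESSURE_MPA * radius_m / (2.0 * compressive_strength_mpa)
    shell_mass = t * 4.0 * math.pi * radius_m**2 * density_kg_m3
    displaced_mass = AMBIENT_DENSITY * (4.0 / 3.0) * math.pi * radius_m**3
    return shell_mass / displaced_mass

materials = {                                   # (compressive strength MPa, density kg/m^3), illustrative
    "steel (illustrative)": (300.0, 8000.0),
    "aluminium alloy (illustrative)": (280.0, 2700.0),
    "hypothetical 1 MPa / 1 kg/m^3 material": (1.0, 1.0),
}
for name, (sigma, rho) in materials.items():
    r = shell_to_lift_ratio(sigma, rho)
    print(f"{name:40s} ratio = {r:5.2f}  -> {'floats' if r < 1.0 else 'sinks'}")

With these inputs the script reproduces the ratio of about 20 for steel and roughly 3/4 for the hypothetical 1 MPa, 1 kg/m^3 material, which is why the discussion above moves on to tension-only heated envelopes.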
Communications on Pure & Applied Analysis (CPAA), March 2017, 16(2): 699-718. doi: 10.3934/cpaa.2017034
A sustainability condition for stochastic forest model
Tôn Việt Tạ 1, Linh Thi Hoai Nguyen 2 and Atsushi Yagi 3
1. Promotive Center for International Education and Research of Agriculture, Faculty of Agriculture, Kyushu University, 6-10-1 Hakozaki, Nishi-ku, Fukuoka 812-8581, Japan
2. Department of Information and Physical Sciences, Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
3. Department of Applied Physics, Graduate School of Engineering, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
Received May 2016; Revised November 2016; Published January 2017
Fund Project: This work was supported by JSPS KAKENHI Grant Number 20140047 and by JSPS Grant-in-Aid for Scientific Research (No. 26400166).
A stochastic forest model of young and old age class trees is studied. First, we prove existence, uniqueness and boundedness of global nonnegative solutions. Second, we investigate asymptotic behavior of solutions by giving a sufficient condition for sustainability of the forest. Under this condition, we show existence of a Borel invariant measure. Third, we present several sufficient conditions for decline of the forest. Finally, we give some numerical examples.
Keywords: Forest model, sustainability, stochastic differential equations, Markov process.
Mathematics Subject Classification: Primary: 37H10; Secondary: 47D07.
Citation: Tôn Việt Tạ, Linh Thi Hoai Nguyen, Atsushi Yagi. A sustainability condition for stochastic forest model. Communications on Pure & Applied Analysis, 2017, 16 (2): 699-718. doi: 10.3934/cpaa.2017034
Figure 1. Sample trajectories of $u_t$ and $v_t$ of (2) with parameters: $a=2, b=1, c=2.5, f=4, h=1, \rho=5, \sigma=0.5$ and initial value $(u_0, v_0)=(2, 1).$ The left figure illustrates a sample trajectory of $(u_t, v_t)$ in the phase space; the right figure illustrates sample trajectories of $u_t$ and $v_t$ along $t\in [0,100]$
Mathematical Control & Related Fields, 2015, 5 (3) : 401-434. doi: 10.3934/mcrf.2015.5.401 Tomás Caraballo, Carlos Ogouyandjou, Fulbert Kuessi Allognissode, Mamadou Abdoul Diop. Existence and exponential stability for neutral stochastic integro–differential equations with impulses driven by a Rosenblatt process. Discrete & Continuous Dynamical Systems - B, 2020, 25 (2) : 507-528. doi: 10.3934/dcdsb.2019251 Vladimir Kazakov. Sampling - reconstruction procedure with jitter of markov continuous processes formed by stochastic differential equations of the first order. Conference Publications, 2009, 2009 (Special) : 433-441. doi: 10.3934/proc.2009.2009.433 Yan Wang, Lei Wang, Yanxiang Zhao, Aimin Song, Yanping Ma. A stochastic model for microbial fermentation process under Gaussian white noise environment. Numerical Algebra, Control & Optimization, 2015, 5 (4) : 381-392. doi: 10.3934/naco.2015.5.381 Ludwig Arnold, Igor Chueshov. Cooperative random and stochastic differential equations. Discrete & Continuous Dynamical Systems - A, 2001, 7 (1) : 1-33. doi: 10.3934/dcds.2001.7.1 Can Huang, Zhimin Zhang. The spectral collocation method for stochastic differential equations. Discrete & Continuous Dynamical Systems - B, 2013, 18 (3) : 667-679. doi: 10.3934/dcdsb.2013.18.667 Jasmina Djordjević, Svetlana Janković. Reflected backward stochastic differential equations with perturbations. Discrete & Continuous Dynamical Systems - A, 2018, 38 (4) : 1833-1848. doi: 10.3934/dcds.2018075 Arnulf Jentzen. Taylor expansions of solutions of stochastic partial differential equations. Discrete & Continuous Dynamical Systems - B, 2010, 14 (2) : 515-557. doi: 10.3934/dcdsb.2010.14.515 Jan A. Van Casteren. On backward stochastic differential equations in infinite dimensions. Discrete & Continuous Dynamical Systems - S, 2013, 6 (3) : 803-824. doi: 10.3934/dcdss.2013.6.803 Igor Chueshov, Michael Scheutzow. Invariance and monotonicity for stochastic delay differential equations. Discrete & Continuous Dynamical Systems - B, 2013, 18 (6) : 1533-1554. doi: 10.3934/dcdsb.2013.18.1533 A. Alamo, J. M. Sanz-Serna. Word combinatorics for stochastic differential equations: Splitting integrators. Communications on Pure & Applied Analysis, 2019, 18 (4) : 2163-2195. doi: 10.3934/cpaa.2019097 Pingping Niu, Shuai Lu, Jin Cheng. On periodic parameter identification in stochastic differential equations. Inverse Problems & Imaging, 2019, 13 (3) : 513-543. doi: 10.3934/ipi.2019025 Yaozhong Hu, David Nualart, Xiaobin Sun, Yingchao Xie. Smoothness of density for stochastic differential equations with Markovian switching. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 3615-3631. doi: 10.3934/dcdsb.2018307 Yongqiang Suo, Chenggui Yuan. Large deviations for neutral stochastic functional differential equations. Communications on Pure & Applied Analysis, 2020, 19 (4) : 2369-2384. doi: 10.3934/cpaa.2020103 Xin Chen, Ana Bela Cruzeiro. Stochastic geodesics and forward-backward stochastic differential equations on Lie groups. Conference Publications, 2013, 2013 (special) : 115-121. doi: 10.3934/proc.2013.2013.115 Ellina Grigorieva, Evgenii Khailov. A nonlinear controlled system of differential equations describing the process of production and sales of a consumer good. Conference Publications, 2003, 2003 (Special) : 359-364. doi: 10.3934/proc.2003.2003.359 Ying Hu, Shanjian Tang. Switching game of backward stochastic differential equations and associated system of obliquely reflected backward stochastic differential equations. 
Discrete & Continuous Dynamical Systems - A, 2015, 35 (11) : 5447-5465. doi: 10.3934/dcds.2015.35.5447 Yayun Zheng, Xu Sun. Governing equations for Probability densities of stochastic differential equations with discrete time delays. Discrete & Continuous Dynamical Systems - B, 2017, 22 (9) : 3615-3628. doi: 10.3934/dcdsb.2017182 Deena Schmidt, Janet Best, Mark S. Blumberg. Random graph and stochastic process contributions to network dynamics. Conference Publications, 2011, 2011 (Special) : 1279-1288. doi: 10.3934/proc.2011.2011.1279 TÔn Vı$\underset{.}{\overset{\hat{\ }}{\mathop{\text{E}}}}\, $T T$\mathop {\text{A}}\limits_. $ Linhthi hoai Nguyen Atsushi Yagi
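The figures listed above show sample trajectories of the stochastic system (2), whose equations are not reproduced on this page. Purely as an illustration of how such sample paths are usually generated, here is a minimal Euler–Maruyama sketch for a generic two-component SDE; the drift and noise functions below are placeholder assumptions for illustration, not the forest model studied in the paper.

# Minimal Euler-Maruyama sketch for a two-component SDE
#   dX = drift(X) dt + noise(X) dW,
# of the kind used to produce sample trajectories like those in the figures above.
# The drift/noise below are illustrative placeholders, NOT the forest model (2).
import math
import random

def euler_maruyama(drift, noise, x0, t_end, dt=1e-3, seed=0):
    rng = random.Random(seed)
    x, t, sqrt_dt = list(x0), 0.0, math.sqrt(dt)
    path = [(t, tuple(x))]
    while t < t_end:
        a, b = drift(x), noise(x)
        dw = [rng.gauss(0.0, 1.0) * sqrt_dt for _ in x]                             # independent Brownian increments
        x = [max(x[i] + a[i] * dt + b[i] * dw[i], 0.0) for i in range(len(x))]      # keep densities nonnegative
        t += dt
        path.append((t, tuple(x)))
    return path

# Placeholder dynamics: linear interaction with multiplicative noise (assumed for illustration only).
drift = lambda x: (1.0 - 0.5 * x[0] + 0.1 * x[1], 0.4 * x[0] - 0.3 * x[1])
noise = lambda x: (0.5 * x[0], 0.5 * x[1])

path = euler_maruyama(drift, noise, x0=(2.0, 1.0), t_end=100.0)
print(path[-1])   # final state of one sample trajectory

Quantities such as the time averages in Figure 4 or the empirical probabilities in Figure 5 would then be estimated by averaging over many such sample paths.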
Higher rank motivic Donaldson–Thomas invariants of $\mathbb A^3$ via wall-crossing, and asymptotics
ALBERTO CAZZANIGA, DIMBINAINA RALAIVAOSAONA, ANDREA T. RICOLFI
Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 174 / Issue 1 / January 2023
Published online by Cambridge University Press: 05 May 2022, pp. 97-122. Print publication: January 2023
We compute, via motivic wall-crossing, the generating function of virtual motives of the Quot scheme of points on ${\mathbb{A}}^3$, generalising to higher rank a result of Behrend–Bryan–Szendrői. We show that this motivic partition function converges to a Gaussian distribution, extending a result of Morrison.
(Tissue) P systems with cell polarity
DANIELA BESOZZI, NADIA BUSI, PAOLO CAZZANIGA, CLAUDIO FERRETTI, ALBERTO LEPORATI, GIANCARLO MAURI, DARIO PESCINI, CLAUDIO ZANDRON
Journal: Mathematical Structures in Computer Science / Volume 19 / Issue 6 / December 2009
Published online by Cambridge University Press: 04 December 2009, pp. 1141-1160
We consider the structure of the intestinal epithelial tissue and of cell–cell junctions as the biological model inspiring a new class of P systems. First we define the concept of cell polarity, a formal property derived from epithelial cells, which present morphologically and functionally distinct regions of the plasma membrane. Then we show two preliminary results for this new model of computation: on the theoretical side, we show that P systems with cell polarity are computationally (Turing) complete; on the modelling side, we show that the transepithelial movement of glucose from the intestinal lumen into the blood can be described by such a formal system. Finally, we define tissue P systems with cell polarity, where each cell has fixed connections to the neighbouring cells and to the environment, according to both the cell polarity and specific cell–cell junctions.
What use is the Yoneda lemma? Although I know very little category theory, I really do find it a pretty branch of mathematics and consider it quite useful, especially when it comes to laying down definitions and unifying diverse concepts. Many of the tools of category theory seem quite useful to me, such as Mitchell's embedding theorem, which allows one to prove theorems in any abelian category using diagram chasing. It lets me the ability to treat lots of objects I would not otherwise feel comfortable with as if they were modules over some ring; in essence, I feel like I've gained some tools from it. However, I simply cannot see where to apply the Yoneda lemma to some useful end. This is not to say that I don't think it is a very pretty lemma, which I do, or that I do not appreciate the aesthetic of being able to study an object in a category by looking at the morphisms from that object, which I also do. And I do find it useful to consider the modules over a ring rather than the ring itself when studying that ring, or to treat groups as subgroups of permutation groups, which are the two applications I've heard of the Yoneda lemma. The problem is that I already knew these things could be done. Essentially, I don't feel like I've gained any tools from the Yoneda lemma. My question is this: how can the Yoneda lemma be applied to make problems more approachable, other than in cases like those I have listed above which can easily be treated without a general result like the Yoneda lemma? Basically, what new tools does it give us? algebraic-geometry category-theory yoneda-lemma Alex BeckerAlex Becker $\begingroup$ I might try to write an answer later, but: the first time I really needed Yoneda and this functor of points stuff was while learning algebraic groups. There should be some good examples in Milne's notes. $\endgroup$ – Dylan Moreland $\begingroup$ @Alex: According to the practicing categorists I've spoken to, the Yoneda lemma is one of those things you internalise very quickly and forget about. For example, universal objects being unique up to unique isomorphism can be thought of as an application of the Yoneda lemma. If nothing else, the Yoneda lemma gives us the Yoneda embedding, which eventually leads to the functor of points which Dylan alludes to above. There are also some explicit calculations in topos theory which are done using the Yoneda lemma. $\endgroup$ – Zhen Lin $\begingroup$ Also, see this MO question. $\endgroup$ $\begingroup$ To provide a few links to threads on this site: Can someone explain the Yoneda Lemma to an applied mathematician?, Yoneda Lemma as generalization of Cayley's theorem, Yoneda Lemma - $\operatorname{Hom}{(\mathbb{Z},G)} \simeq G$ and What is the origin of the expression "Yoneda Lemma"? See also the links in these threads. $\endgroup$ – t.b. Some elaboration on Dylan Moreland's comment is in order. Consider the gadget $\text{GL}_n(-)$. What sort of gadget is this, exactly? To every commutative ring $R$, it assigns a group $\text{GL}_n(R)$ of $n \times n$ invertible matrices over $R$. But there's more: to every morphism $R \to S$ of commutative rings, it assigns a morphism $\text{GL}_n(R) \to \text{GL}_n(S)$ in the obvious way, and this assignment satisfies the obvious compatibility conditions. 
That is, $\text{GL}_n(-)$ defines a functor $$\text{GL}_n(-) : \text{CRing} \to \text{Grp}.$$ Composing this functor with the forgetful functor $\text{Grp} \to \text{Set}$ gives a functor which turns out to be representable by the ring $$\mathbb{Z}[x_{ij} : 1 \le i, j \le n, y]/(y \det_{1 \le i, j \le n} x_{ij} - 1).$$ Now, this ring itself only defines a functor $\text{CRing} \to \text{Set}$. What extra structure do we need to recover the fact that we actually have a functor into $\text{Grp}$? Well, for every ring $R$ we have maps $$e : 1 \to \text{GL}_n(R)$$ $$m : \text{GL}_n(R) \times \text{GL}_n(R) \to \text{GL}_n(R)$$ $$i : \text{GL}_n(R) \to \text{GL}_n(R)$$ satisfying various axioms coming from the ordinary group operations on $\text{GL}_n(R)$. These maps are all natural transformations of the corresponding functors, all of which are representable, so by the Yoneda lemma they come from morphisms in $\text{CRing}$ itself. These morphisms endow the ring above with the extra structure of a commutative Hopf algebra, which is equivalent to endowing its spectrum with the extra structure of a group object in the category of schemes, or an affine group scheme. In other words, in a category with finite products, saying that an object $G$ has the property that $\text{Hom}(-, G)$ is endowed with a natural group structure in the ordinary set-theoretic sense is equivalent to saying that $G$ itself is endowed with a group structure in a category-theoretic sense. I discuss these ideas in some more detail, using a simpler group scheme, in this blog post. Qiaochu YuanQiaochu Yuan $\begingroup$ Purely out of curiosity, is "gadget" a technical term? $\endgroup$ – Alex Becker $\begingroup$ @Alex: nope, I just didn't want to say "functor" yet. $\endgroup$ – Qiaochu Yuan I think one of the classical examples of the Yoneda lemma in action was Serre's observation that "cohomology operations" (i.e. natural transformations $H^n(X; G) \to H^m(X; H)$ for some $n, m, G, H$) are entirely classified by elements in the cohomology $H^m( K(G, n); H)$ (in view of the fact that Eilenberg-MacLane spaces represent cohomology on the homotopy category). This is not a deep observation once you believe the Yoneda lemma, but it means that one can determine what all possible cohomology operations are by computing the cohomology rings of these spaces $K(G, n)$ (which Serre and others did using the spectral sequence). This led to the development of the Steenrod algebra, which is the algebra $\mathcal{A}{}{}{}{}$ of all "stable" cohomology operations in mod 2 cohomology; as a result of Serre's observation and some computation, $\mathcal{A}$ can be explicitly written down via generators and relations. The use of algebraic methods with $\mathcal{A}$ to solve geometric problems is huge, a lot of which has to do with the Adams spectral sequence; this essentially lets one compute (in some favorable cases) stable homotopy classes of maps in terms of cohomology. Serre's observation is literally a consequence of the statement of the Yoneda lemma, but I think the "philosophy" of the Yoneda lemma is also very important; namely, one can characterize an object by how other objects map into it. In algebraic geometry, for instance, many complicated schemes (such as the Hilbert scheme) are literally defined in terms of the functor they represent. 
This has the benefit that one can reason about an object without necessarily knowing much about its "internal" structure (which may be prohibitively complex); for instance, one can talk about the tangent space to the Hilbert scheme (or any scheme defined by a moduli problem) in a very concrete way, in terms of that moduli problem evaluated on $\mathrm{Spec} k[\epsilon]/\epsilon^2$. Tom Church Akhil MathewAkhil Mathew $\begingroup$ There's a minor typo where you've written $H^m(K(G,n),m)$ for $H^m(K(G,n),H)$. I tried to edit, but my edit was rejected for being involving fewer than six (more precisely, exactly one) characters. $\endgroup$ – WillO This answer is much more naive than the others, but I'll post it anway. The most standard application of Yoneda's Lemma seems to be the uniqueness, up to unique isomorphism, of the limit of an inductive or projective system. See for instance this pdf file by Pierre Schapira, or, more classically, the very beginning of: Grothendieck, Alexander, Éléments de Géométrie Algébrique (rédigés avec la collaboration de Jean Dieudonné) : III. Étude cohomologique des faisceaux cohérents, Première partie. Publications Mathématiques de l'IHÉS, 11 (1961), p. 5-167. I don't see how to prove this uniqueness without using (explicitly or implicitly) Yoneda's Lemma. EDIT A. Let me try to explain more precisely what I have in mind by taking the example of final objects. (This example is given by Grothendieck in Section $(8.1.10)$ page $8$ of the linked text.) First, let $S$ be the category of sets, and let $C$ be any category. Denote by $C(x,y)$ the set of $C$-morphisms from $x$ to $y$. Let $F:C\to S$ be a contravariant functor. Say that a representation of $F$ consists of an object $a$ of $C$ together with an isomomorphism $F\simeq C(?,a)$. (By an enormous abuse of language, one usually expresses this by saying that $a$ represents $F$.) Yoneda's Lemma tells us that, to each pair $F\simeq C(?,a)$ and $F\simeq C(?,b)$ of representations of $F$ is attached a perfectly defined isomorphism $a\simeq b$, and that the isomorphisms attached to any three representations of $F$ form a commuting triangle. So, we won't get in trouble if, among all representations of $F$, we pick one of them, call the object involved the representative of $F$, and denote it by a symbol like $a_F$. This is something we all do almost always. (This is explained by Grothendieck in Section $(8.1.8)$ page $7$.) Going back to our final object, let $s$ be a set with a single element, and $F:C\to S$ the contravariant functor attaching $s$ to any object of $C$. If $F\simeq C(?,\omega)$ is an isomorphism, we say, that $\omega$ is the final object of $C$. This of course applies (as pointed out by Grothendieck in Section $(8.1.9)$ page $7$) to any projective system. EDIT B. This is just a complement to t.b.'s interesting comment, which mentions this answer of ineff's about adjoint functors. The interested reader might take a look at Section 1.5 pages 38 to 41 about adjoint functors in this pdf scan taken from the Springer edition of EGA I. More precisely, the scan contains pages 19 to 41 of Éléments de Géométrie Algébrique I, Volume 166 of Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete, A. Grothendieck, Jean Alexandre Dieudonné, Springer-Verlag, 1971. Apart from this Section on adjoint functors, these pages are essentially contained into the part of the IHES edition of EGA referred to in the previous edit. 
This scan (I think) can be read with almost no prerequisite, and I don't know any other part of Grothendieck's work for which this can be said. Pierre-Yves GaillardPierre-Yves Gaillard $\begingroup$ Unfortunately, I cannot read french. Is it possible to give a quick summary of Grothendieck's argument (which I assume assets the uniqueness of schemes)? $\endgroup$ $\begingroup$ Dear @Alex: For the point I'm trying to make (and which is a very soft one), Grothendieck's and Schapira's texts are interchangeable. Grothendieck, immediately after having stated Yoneda's Lemma, gives this uniqueness of limits as the first example. I just wanted to say that, beside the deep applications of YL (which I don't understand), there are elementary applications. The few pages of the EGA I'm referring to are (I think) genuinely elementary. $\endgroup$ – Pierre-Yves Gaillard $\begingroup$ I took a look in the intervening time at Schapira's text and found the implicit use of Yoneda's you were talking about. Not the kind of answer I was looking for, but very interesting. +1'd. $\endgroup$ $\begingroup$ Dear Pierre-Yves I'm not entirely sure about the point you're making. Do you mean to say that by definition limit and colimit are adjoint to the diagonal functor and verifying the universal property amounts to that statement by the "adjointness via universals" corollary of Yoneda's lemma? $\endgroup$ $\begingroup$ Dear @t.b.: Thank you very much for your comment. I edited the answer. By all means, let me know if there is anything wrong with what I wrote. $\endgroup$ Not the answer you're looking for? Browse other questions tagged algebraic-geometry category-theory yoneda-lemma or ask your own question. Can someone explain the Yoneda Lemma to an applied mathematician? Yoneda-Lemma as generalization of Cayley`s theorem? What is the origin of the expression "Yoneda Lemma"? Yoneda Lemma Exercises Philosophy or meaning of adjoint functors What is the use of representable functor Yoneda Lemma - $\mbox{Hom}(\mathbb{Z},G) \simeq G$ corollaries and applications of Yoneda's famous lemma "The Yoneda embedding reflects exactness" is a direct consequence of Yoneda? direct proof of the dual statement of the Yoneda lemma Applying the Yoneda-Lemma to prove the existence of Tensor-products Additive Yoneda Lemma Not getting example to fit Yoneda Is iteratively applying the Yoneda embedding interesting? Two questions on Yoneda's lemma Confusion about the Yoneda lemma Understanding the maps between $F(A)$ and the natural transformations used in the proof of the Yoneda lemma.
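For reference, the statement that the answers above keep invoking, in the form most often used (recorded here only for convenience): for a locally small category $\mathcal C$, an object $A$ of $\mathcal C$ and a functor $F : \mathcal C^{\mathrm{op}} \to \mathbf{Set}$, evaluation at the identity gives a bijection $$\mathrm{Nat}\big(\mathrm{Hom}_{\mathcal C}(-,A),\,F\big)\;\cong\;F(A),$$ natural in both $A$ and $F$. Taking $F = \mathrm{Hom}_{\mathcal C}(-,B)$ specializes this to $$\mathrm{Nat}\big(\mathrm{Hom}_{\mathcal C}(-,A),\,\mathrm{Hom}_{\mathcal C}(-,B)\big)\;\cong\;\mathrm{Hom}_{\mathcal C}(A,B),$$ i.e. the Yoneda embedding $A \mapsto \mathrm{Hom}_{\mathcal C}(-,A)$ is fully faithful. This is the fact behind both the "uniqueness of limits up to unique isomorphism" argument and the functor-of-points constructions discussed above: two objects representing the same functor are isomorphic by a unique isomorphism compatible with the representations.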
eISSN: Mathematical Biosciences & Engineering February 2018 , Volume 15 , Issue 1 Special issue on Erice 'MathCompEpi 2015' Proceedings Select all articles Export/Reference: Alberto D'Onofrio, Paola Cerrai and Piero Manfredi 2018, 14(1): i-iv doi: 10.3934/mbe.201801i +[Abstract](4967) +[HTML](137) +[PDF](214.6KB) Alberto D\'Onofrio, Paola Cerrai, Piero Manfredi. Special issue on Erice \u2018MathCompEpi 2015\u2019 Proceedings. Mathematical Biosciences & Engineering, 2018, 14(1): i-iv. doi: 10.3934/mbe.201801i. The interplay between models and public health policies: Regional control for a class of spatially structured epidemics (think globally, act locally) Vincenzo Capasso and Sebastian AniȚa 2018, 15(1): 1-20 doi: 10.3934/mbe.2018001 +[Abstract](2356) +[HTML](152) +[PDF](540.0KB) A review is presented here of the research carried out, by a group including the authors, on the mathematical analysis of epidemic systems. Particular attention is paid to recent analysis of optimal control problems related to spatially structured epidemics driven by environmental pollution. A relevant problem, related to the possible eradication of the epidemic, is the so called zero stabilization. In a series of papers, necessary conditions, and sufficient conditions of stabilizability have been obtained. It has been proved that it is possible to diminish exponentially the epidemic process, in the whole habitat, just by reducing the concentration of the pollutant in a nonempty and sufficiently large subset of the spatial domain. The stabilizability with a feedback control of harvesting type is related to the magnitude of the principal eigenvalue of a certain operator. The problem of finding the optimal position (by translation) of the support of the feedback stabilizing control is faced, in order to minimize both the infected population and the pollutant at a certain finite time. Vincenzo Capasso, Sebastian Ani\u021Aa. The interplay between models and public health policies: Regional control for a class of spatially structured epidemics <i>(think globally, act locally)<\/i>. Mathematical Biosciences & Engineering, 2018, 15(1): 1-20. doi: 10.3934/mbe.2018001. Modeling Ebola Virus Disease transmissions with reservoir in a complex virus life ecology Tsanou Berge, Samuel Bowong, Jean Lubuma and Martin Luther Mann Manyombe 2018, 15(1): 21-56 doi: 10.3934/mbe.2018002 +[Abstract](4011) +[HTML](203) +[PDF](1273.2KB) We propose a new deterministic mathematical model for the transmission dynamics of Ebola Virus Disease (EVD) in a complex Ebola virus life ecology. Our model captures as much as possible the features and patterns of the disease evolution as a three cycle transmission process in the two ways below. Firstly it involves the synergy between the epizootic phase (during which the disease circulates periodically amongst non-human primates populations and decimates them), the enzootic phase (during which the disease always remains in fruit bats population) and the epidemic phase (during which the EVD threatens and decimates human populations). Secondly it takes into account the well-known, the probable/suspected and the hypothetical transmission mechanisms (including direct and indirect routes of contamination) between and within the three different types of populations consisting of humans, animals and fruit bats. 
The reproduction number $\mathcal R_0$ for the full model with the environmental contamination is derived and the global asymptotic stability of the disease free equilibrium is established when $\mathcal R_0 < 1$. It is conjectured that there exists a unique globally asymptotically stable endemic equilibrium for the full model when $\mathcal R_0>1$. The role of a contaminated environment is assessed by comparing the human infected component for the sub-model without the environment with that of the full model. Similarly, the sub-model without animals on the one hand and the sub-model without bats on the other hand are studied. It is shown that bats influence more the dynamics of EVD than the animals. Global sensitivity analysis shows that the effective contact rate between humans and fruit bats and the mortality rate for bats are the most influential parameters on the latent and infected human individuals. Numerical simulations, apart from supporting the theoretical results and the existence of a unique globally asymptotically stable endemic equilibrium for the full model, suggest further that: (1) fruit bats are more important in the transmission processes and the endemicity level of EVD than animals. This is in line with biological findings which identified bats as reservoir of Ebola viruses; (2) the indirect environmental contamination is detrimental to human beings, while it is almost insignificant for the transmission in bats. Tsanou Berge, Samuel Bowong, Jean Lubuma, Martin Luther Mann Manyombe. Modeling Ebola Virus Disease transmissions with reservoir in a complex virus life ecology. Mathematical Biosciences & Engineering, 2018, 15(1): 21-56. doi: 10.3934/mbe.2018002. Mathematical analysis of a weather-driven model for the population ecology of mosquitoes Kamaldeen Okuneye, Ahmed Abdelrazec and Abba B. Gumel 2018, 15(1): 57-93 doi: 10.3934/mbe.2018003 +[Abstract](4852) +[HTML](2245) +[PDF](905.5KB) A new deterministic model for the population biology of immature and mature mosquitoes is designed and used to assess the impact of temperature and rainfall on the abundance of mosquitoes in a community. The trivial equilibrium of the model is globally-asymptotically stable when the associated vectorial reproduction number $({\mathcal R}_0)$ is less than unity. In the absence of density-dependence mortality in the larval stage, the autonomous version of the model has a unique and globally-asymptotically stable non-trivial equilibrium whenever $1 < {\mathcal R}_0 < {\mathcal R}_0^C$ (this equilibrium bifurcates into a limit cycle, via a Hopf bifurcation at ${\mathcal R}_0={\mathcal R}_0^C$). Numerical simulations of the weather-driven model, using temperature and rainfall data from three cities in Sub-Saharan Africa (Kwazulu Natal, South Africa; Lagos, Nigeria; and Nairobi, Kenya), show peak mosquito abundance occurring in the cities when the mean monthly temperature and rainfall values lie in the ranges $[22 -25]^{0}$C, $[98 -121]$ mm; $[24 -27]^{0}$C, $[113 -255]$ mm and $[20.5 -21.5]^{0}$C, $[70 -120]$ mm, respectively (thus, mosquito control efforts should be intensified in these cities during the periods when the respective suitable weather ranges are recorded). Kamaldeen Okuneye, Ahmed Abdelrazec, Abba B. Gumel. Mathematical analysis of a weather-driven model for the population ecology of mosquitoes. Mathematical Biosciences & Engineering, 2018, 15(1): 57-93. doi: 10.3934/mbe.2018003. 
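Several of the abstracts above hinge on a reproduction number $\mathcal R_0$ (or a vectorial reproduction number) crossing unity. As a generic illustration of how such a threshold is computed in practice, namely as the spectral radius of the next-generation matrix $FV^{-1}$, here is a minimal sketch for a plain SEIR structure; the compartments and parameter values are illustrative stand-ins and are not taken from any paper in this issue.

# Generic next-generation-matrix computation of R0 for a plain SEIR model.
# Infected compartments ordered (E, I); parameter values are illustrative only.
import math

beta, sigma, gamma, mu = 0.3, 0.2, 0.1, 1e-4   # transmission, 1/latency, 1/infectious period, mortality

# New-infection matrix F and transition matrix V, linearised at the disease-free equilibrium (S = N):
F = [[0.0, beta],
     [0.0, 0.0]]
V = [[sigma + mu, 0.0],
     [-sigma, gamma + mu]]

# Invert the 2x2 matrix V and form the next-generation matrix K = F V^{-1}.
det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
V_inv = [[ V[1][1] / det, -V[0][1] / det],
         [-V[1][0] / det,  V[0][0] / det]]
K = [[sum(F[i][k] * V_inv[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# R0 is the spectral radius of K; for this K the eigenvalues are real and nonnegative,
# so the larger root of the 2x2 characteristic polynomial suffices.
tr, dk = K[0][0] + K[1][1], K[0][0] * K[1][1] - K[0][1] * K[1][0]
R0 = max(abs((tr + s * math.sqrt(tr * tr - 4 * dk)) / 2) for s in (+1, -1))

print(R0)                                             # ~ 3.0 for these illustrative numbers
print(beta * sigma / ((sigma + mu) * (gamma + mu)))   # closed form for this SEIR structure, as a check

For the stage-structured or weather-driven models in this issue the matrices are larger and parameter- or time-dependent, but the recipe is the same: linearize the new-infection and transition terms at the disease-free equilibrium and take the spectral radius.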
Numerical solution of a spatio-temporal gender-structured model for hantavirus infection in rodents Raimund BÜrger, Gerardo Chowell, Elvis GavilÁn, Pep Mulet and Luis M. Villada 2018, 15(1): 95-123 doi: 10.3934/mbe.2018004 +[Abstract](5751) +[HTML](180) +[PDF](2149.0KB) In this article we describe the transmission dynamics of hantavirus in rodents using a spatio-temporal susceptible-exposed-infective-recovered (SEIR) compartmental model that distinguishes between male and female subpopulations [L.J.S. Allen, R.K. McCormack and C.B. Jonsson, Bull. Math. Biol. 68 (2006), 511-524]. Both subpopulations are assumed to differ in their movement with respect to local variations in the densities of their own and the opposite gender group. Three alternative models for the movement of the male individuals are examined. In some cases the movement is not only directed by the gradient of a density (as in the standard diffusive case), but also by a non-local convolution of density values as proposed, in another context, in [R.M. Colombo and E. Rossi, Commun. Math. Sci., 13 (2015), 369-400]. An efficient numerical method for the resulting convection-diffusion-reaction system of partial differential equations is proposed. This method involves techniques of weighted essentially non-oscillatory (WENO) reconstructions in combination with implicit-explicit Runge-Kutta (IMEX-RK) methods for time stepping. The numerical results demonstrate significant differences in the spatio-temporal behavior predicted by the different models, which suggest future research directions. Raimund B\u00DCrger, Gerardo Chowell, Elvis Gavil\u00C1n, Pep Mulet, Luis M. Villada. Numerical solution of a spatio-temporal gender-structured model for hantavirus infection in rodents. Mathematical Biosciences & Engineering, 2018, 15(1): 95-123. doi: 10.3934/mbe.2018004. Sex-biased prevalence in infections with heterosexual, direct, and vector-mediated transmission: a theoretical analysis Andrea Pugliese, Abba B. Gumel, Fabio A. Milner and Jorge X. Velasco-Hernandez 2018, 15(1): 125-140 doi: 10.3934/mbe.2018005 +[Abstract](6075) +[HTML](818) +[PDF](642.3KB) Three deterministic Kermack-McKendrick-type models for studying the transmission dynamics of an infection in a two-sex closed population are analyzed here. In each model it is assumed that infection can be transmitted through heterosexual contacts, and that there is a higher probability of transmission from one sex to the other than vice versa. The study is focused on understanding whether and how this bias in transmission reflects in sex differences in final attack ratios (i.e. the fraction of individuals of each sex that eventually gets infected). In the first model, where the other two transmission modes are not considered, the attack ratios (fractions of the population of each sex that will eventually be infected) can be obtained as solutions of a system of two nonlinear equations, that has a unique solution if the net reproduction number exceeds unity. It is also shown that the ratio of attack ratios depends solely on the ratio of gender-specific susceptibilities and on the basic reproductive number of the epidemic \begin{document}$ \mathcal{R}_0 $\end{document}, and that the gender-specific final attack-ratio is biased in the same direction as the gender-specific susceptibilities. The second model allows also for infection transmission through direct, non-sexual, contacts. In this case too, an analytical expression is derived from which the attack ratios can be obtained. 
The qualitative results are similar to those obtained for the previous model, but another important parameter for determining the value of the ratio between the attack ratios in the two sexes is obtained, the relative weight of direct vs. heterosexual transmission (namely, ρ). Quantitatively, the ratio of final attack ratios generally will not exceed 1.5, if non-sexual transmission accounts for most transmission events (ρ ≥ 0.6) and the ratio of gender-specific susceptibilities is not too large (say, 5 at most). The third model considers vector-borne, instead of direct transmission. In this case, we were not able to find an analytical expression for the final attack ratios, but used instead numerical simulations. The results on final attack ratios are actually quite similar to those obtained with the second model. It is interesting to note that transient patterns can differ from final attack ratios, as new cases will tend to occur more often in the more susceptible sex, while later depletion of susceptibles may bias the ratio in the opposite direction. The analysis of these simple models, despite their lack of realism, can help in providing insight into, and assessment of, the potential role of gender-specific transmission in infections with multiple modes of transmission, such as Zika virus (ZIKV), by gauging what can be expected to be seen from epidemiological reports of new cases, disease incidence and seroprevalence surveys. Andrea Pugliese, Abba B. Gumel, Fabio A. Milner, Jorge X. Velasco-Hernandez. Sex-biased prevalence in infections with heterosexual, direct, and vector-mediated transmission: a theoretical analysis. Mathematical Biosciences & Engineering, 2018, 15(1): 125-140. doi: 10.3934/mbe.2018005. On the usefulness of set-membership estimation in the epidemiology of infectious diseases Andreas Widder We present a method, known in control theory, to give set-membership estimates for the states of a population in which an infectious disease is spreading. An estimation is reasonable due to the fact that the parameters of the equations describing the dynamics of the disease are not known with certainty. We discuss the properties of the resulting estimations. These include the possibility to determine best-or worst-case-scenarios and identify under which circumstances they occur, as well as a method to calculate confidence intervals for disease trajectories under sparse data. We give numerical examples of the technique using data from the 2014 outbreak of the Ebola virus in Africa. We conclude that the method presented here can be used to extract additional information from epidemiological data. Andreas Widder. On the usefulness of set-membership estimation in the epidemiology of infectious diseases. Mathematical Biosciences & Engineering, 2018, 15(1): 141-152. doi: 10.3934/mbe.2018006. An exact approach to calibrating infectious disease models to surveillance data: The case of HIV and HSV-2 David J. Gerberry 2018, 15(1): 153-179 doi: 10.3934/mbe.2018007 +[Abstract](6154) +[HTML](170) +[PDF](1462.5KB) When mathematical models of infectious diseases are used to inform health policy, an important first step is often to calibrate a model to disease surveillance data for a specific setting (or multiple settings). It is increasingly common to also perform sensitivity analyses to demonstrate the robustness, or lack thereof, of the modeling results. 
Doing so requires the modeler to find multiple parameter sets for which the model produces behavior that is consistent with the surveillance data. While frequently overlooked, the calibration process is nontrivial at best and can be inefficient, poorly communicated and a major hurdle to the overall reproducibility of modeling results. In this work, we describe a general approach to calibrating infectious disease models to surveillance data. The technique is able to match surveillance data to high accuracy in a very efficient manner as it is based on the Newton-Raphson method for solving nonlinear systems. To demonstrate its robustness, we use the calibration technique on multiple models for the interacting dynamics of HIV and HSV-2. David J. Gerberry. An exact approach to calibrating infectious disease models to surveillance data: The case of HIV and HSV-2. Mathematical Biosciences & Engineering, 2018, 15(1): 153-179. doi: 10.3934/mbe.2018007. A simple model of HIV epidemic in Italy: The role of the antiretroviral treatment Federico Papa, Francesca Binda, Giovanni Felici, Marco Franzetti, Alberto Gandolfi, Carmela Sinisgalli and Claudia Balotta In the present paper we propose a simple time-varying ODE model to describe the evolution of HIV epidemic in Italy. The model considers a single population of susceptibles, without distinction of high-risk groups within the general population, and accounts for the presence of immigration and emigration, modelling their effects on both the general demography and the dynamics of the infected subpopulations. To represent the intra-host disease progression, the untreated infected population is distributed over four compartments in cascade according to the CD4 counts. A further compartment is added to represent infected people under antiretroviral therapy. The per capita exit rate from treatment, due to voluntary interruption or failure of therapy, is assumed variable with time. The values of the model parameters not reported in the literature are assessed by fitting available epidemiological data over the decade \begin{document}$2003 \div 2012$\end{document}. Predictions until year 2025 are computed, enlightening the impact on the public health of the early initiation of the antiretroviral therapy. The benefits of this change in the treatment eligibility consist in reducing the HIV incidence rate, the rate of new AIDS cases, and the rate of death from AIDS. Analytical results about properties of the model in its time-invariant form are provided, in particular the global stability of the equilibrium points is established either in the absence and in the presence of infected among immigrants. Federico Papa, Francesca Binda, Giovanni Felici, Marco Franzetti, Alberto Gandolfi, Carmela Sinisgalli, Claudia Balotta. A simple model of HIV epidemic in Italy: The role of the antiretroviral treatment. Mathematical Biosciences & Engineering, 2018, 15(1): 181-207. doi: 10.3934/mbe.2018008. Prediction of influenza peaks in Russian cities: Comparing the accuracy of two SEIR models Vasiliy N. Leonenko and Sergey V. Ivanov This paper is dedicated to the application of two types of SEIR models to the influenza outbreak peak prediction in Russian cities. The first one is a continuous SEIR model described by a system of ordinary differential equations. The second one is a discrete model formulated as a set of difference equations, which was used in the Baroyan-Rvachev modeling framework for the influenza outbreak prediction in the Soviet Union. 
The outbreak peak day and height predictions were performed by calibrating both models to varied-size samples of long-term data on ARI incidence in Moscow, Saint Petersburg, and Novosibirsk. The accuracy of the modeling predictions on incomplete data was compared with a number of other peak forecasting methods tested on the same dataset. The drawbacks of the described prediction approach and possible ways to overcome them are discussed. Vasiliy N. Leonenko, Sergey V. Ivanov. Prediction of influenza peaks in Russian cities: Comparing the accuracy of two SEIR models. Mathematical Biosciences & Engineering, 2018, 15(1): 209-232. doi: 10.3934/mbe.2018009. A TB model: Is disease eradication possible in India? Surabhi Pandey and Ezio Venturino Tuberculosis (TB) is returning to be a worldwide global public health threat. It is estimated that 9.6 million cases occurred in 2014, of which just two-thirds notified to public health authorities. The "missing cases" constitute a severe challenge for TB transmission control. TB is a severe disease in India, while, worldwide, the WHO estimates that one third of the entire world population is infected. Nowadays, incidence estimation relies increasingly more on notifications of new cases from routine surveillance. There is an urgent need for better estimates of the load of TB, in high-burden settings. We developed a simple model of TB transmission dynamics, using a dynamical system model, consisting of six classes of individuals. It contains the current medical epidemiologists' understanding of the spread of the Mycobacterium tuberculosis in humans, which is substantiated by field observations at the district level in India. The model incorporates the treatment options provided by the public and private sectors in India. Mathematically, an interesting feature of the system is that it exhibits a backward, or subcritical, bifurcation. One of the results of the investigation shows that the discrepancy between the diagnosis rates of the public and private sector does not seem to be the cause of the endemicity of the disease, and, unfortunately, even if they reached 100% of correct diagnosis, this would not be enough to achieve disease eradication. Several other approaches have been attempted on the basis of this model to indicate possible strategies that may lead to disease eradication, but the rather sad conclusion is that they unfortunately do not appear viable in practice. Surabhi Pandey, Ezio Venturino. A TB model: Is disease eradication possible in India?. Mathematical Biosciences & Engineering, 2018, 15(1): 233-254. doi: 10.3934/mbe.2018010. Three-level global resource allocation model for HIV control: A hierarchical decision system approach Semu Mitiku Kassa Funds from various global organizations, such as, The Global Fund, The World Bank, etc. are not directly distributed to the targeted risk groups. Especially in the so-called third-world-countries, the major part of the fund in HIV prevention programs comes from these global funding organizations. The allocations of these funds usually pass through several levels of decision making bodies that have their own specific parameters to control and specific objectives to achieve. However, these decisions are made mostly in a heuristic manner and this may lead to a non-optimal allocation of the scarce resources. In this paper, a hierarchical mathematical optimization model is proposed to solve such a problem. 
Combining existing epidemiological models with the kind of interventions being on practice, a 3-level hierarchical decision making model in optimally allocating such resources has been developed and analyzed. When the impact of antiretroviral therapy (ART) is included in the model, it has been shown that the objective function of the lower level decision making structure is a non-convex minimization problem in the allocation variables even if all the production functions for the intervention programs are assumed to be linear. Semu Mitiku Kassa. Three-level global resource allocation model for HIV control: <i>A hierarchical decision system approach<\/i>. Mathematical Biosciences & Engineering, 2018, 15(1): 255-273. doi: 10.3934/mbe.2018011. A frailty model for intervention effectiveness against disease transmission when implemented with unobservable heterogeneity Ping Yan For an intervention against the spread of communicable diseases, the idealized situation is when individuals fully comply with the intervention and the exposure to the infectious agent is comparable across all individuals. Some level of non-compliance is likely where the intervention is widely implemented. The focus is on a more accurate view of its effects population-wide. A frailty model is applied. Qualitative analysis, in mathematical terms, reveals how large variability in compliance renders the intervention less effective. This finding sharpens our vague, intuitive and empirical notions. An effective reproduction number in the presence of frailty is defined and is shown to be invariant with respect to the time-scale of disease progression. This makes the results in this paper valid for a wide spectrum of acute and chronic infectious diseases. Quantitative analysis by comparing numerical results shows that they are also robust with respect to assumptions on disease progression structure and distributions, such as with or without the latent period and the assumed distributions of latent and infectious periods. Ping Yan. A frailty model for intervention effectiveness against disease transmission when implemented with unobservable heterogeneity. Mathematical Biosciences & Engineering, 2018, 15(1): 275-298. doi: 10.3934/mbe.2018012. Effect of seasonality on the dynamics of an imitation-based vaccination model with public health intervention Bruno Buonomo, Giuseppe Carbone and Alberto d'Onofrio We extend here the game-theoretic investigation made by d'Onofrio et al (2012) on the interplay between private vaccination choices and actions of the public health system (PHS) to favor vaccine propensity in SIR-type diseases. We focus here on three important features. First, we consider a SEIR-type disease. Second, we focus on the role of seasonal fluctuations of the transmission rate. Third, by a simple population-biology approach we derive -with a didactic aim -the game theoretic equation ruling the dynamics of vaccine propensity, without employing 'economy-related' concepts such as the payoff. By means of analytical and analytical-approximate methods, we investigate the global stability of the of disease-free equilibria. We show that in the general case the stability critically depends on the 'shape' of the periodically varying transmission rate. In other words, the knowledge of the average transmission rate (ATR) is not enough to make inferences on the stability of the elimination equilibria, due to the presence of the class of latent subjects. 
In particular, we obtain that the amplitude of the oscillations favors the possible elimination of the disease by the action of the PHS, through a threshold condition. Indeed, for a given average value of the transmission rate, in absence of oscillations as well as for moderate oscillations, there is no disease elimination. On the contrary, if the amplitude exceeds a threshold value, the elimination of the disease is induced. We heuristically explain this apparently paradoxical phenomenon as a beneficial effect of the phase when the transmission rate is under its average value: the reduction of transmission rate (for example during holidays) under its annual average over-compensates its increase during periods of intense contacts. We also investigate the conditions for the persistence of the disease. Numerical simulations support the theoretical predictions. Finally, we briefly investigate the qualitative behavior of the non-autonomous system for SIR-type disease, by showing that the stability of the elimination equilibria are, in such a case, determined by the ATR. Bruno Buonomo, Giuseppe Carbone, Alberto d\'Onofrio. Effect of seasonality on the dynamics of an imitation-based vaccination model with public health intervention. Mathematical Biosciences & Engineering, 2018, 15(1): 299-321. doi: 10.3934/mbe.2018013. Optimal time to intervene: The case of measles child immunization Zuzana Chladná The recent measles outbreaks in US and Germany emphasize the importance of sustaining and increasing vaccination rates. In Slovakia, despite mandatory vaccination scheme, decrease in the vaccination rates against measles has been observed in recent years. Different kinds of intervention at the state level, like a law making vaccination a requirement for school entry or education and advertising seem to be the only strategies to improve vaccination coverage. This study aims to analyze the economic effectiveness of intervention in Slovakia. Using real options techniques we determine the level of vaccination rate at which it is optimal to perform intervention. We represent immunization rate of newborns as a stochastic process and intervention as a one-period jump of this process. Sensitivity analysis shows the importance of early intervention in the population with high initial average vaccination coverage. Furthermore, our numerical results demonstrate that the less certain we are about the future development of the immunization rate of newborns, the more valuable is the option to intervene. Zuzana Chladn\u00E1. Optimal time to intervene: The case of measles child immunization. Mathematical Biosciences & Engineering, 2018, 15(1): 323-335. doi: 10.3934/mbe.2018014. RSS this journal Tex file preparation Abstracted in Add your name and e-mail address to receive news of forthcoming issues of this journal: Select the journal Select Journals
CommonCrawl
Pullback exponential attractors for differential equations with variable delays
Mohamed Ali Hammami a, Lassaad Mchiri a,b, Sana Netchaoui a, and Stefanie Sonner c,*
a Department of Mathematics, University of Sfax, Route de la Soukra km 4, Sfax 3038, Tunisia
b Department of Statistics and Operations Research, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
c IMAPP Mathematics, Radboud Universiteit Nijmegen, PO Box 9010, 6500GL Nijmegen, The Netherlands
* Corresponding author: Stefanie Sonner
Discrete & Continuous Dynamical Systems - B, January 2020, 25(1): 301-319. doi: 10.3934/dcdsb.2019183
Received February 2019; Published July 2019
We show how recent existence results for pullback exponential attractors can be applied to non-autonomous delay differential equations with time-varying delays. Moreover, we derive explicit estimates for the fractal dimension of the attractors. As a special case, autonomous delay differential equations are also discussed, where our results improve previously obtained bounds for the fractal dimension of exponential attractors.
Keywords: Non-autonomous delay differential equations, variable delays, pullback exponential attractors, fractal dimension, non-autonomous dynamical systems.
Mathematics Subject Classification: Primary: 37L30, 37B55; Secondary: 34D45, 37L25.
Citation: Mohamed Ali Hammami, Lassaad Mchiri, Sana Netchaoui, Stefanie Sonner. Pullback exponential attractors for differential equations with variable delays. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1): 301-319. doi: 10.3934/dcdsb.2019183
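For orientation, the following generic form (the notation and bounds are assumptions of this sketch, not reproduced from the paper) illustrates what a non-autonomous delay differential equation with a time-varying delay looks like and why its solutions generate a two-parameter evolution process rather than a semigroup.

```latex
% A generic non-autonomous delay differential equation with a time-varying
% (variable) delay of the type the abstract refers to; the concrete equations
% and hypotheses of the paper are not reproduced here.
\begin{equation}
  \dot{x}(t) = f\bigl(t,\, x(t),\, x(t-\tau(t))\bigr), \qquad
  0 \le \tau(t) \le h, \quad t \ge s,
\end{equation}
% with initial history x(s+\theta) = \phi(\theta) for \theta \in [-h, 0],
% so that the natural phase space is C([-h,0];\mathbb{R}^n) and the solutions
% define an evolution process U(t,s)\phi on that space.
```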
CommonCrawl
Calculus (3rd Edition), by Rogawski, Jon; Adams, Colin. Published by W. H. Freeman.
Chapter 9 - Further Applications of the Integral and Taylor Polynomials - 9.1 Arc Length and Surface Area - Exercises - Page 468: 14
Answer: $$2.28$$
Work (in the Simpson's rule sum below, $f(x)$ denotes the arc-length integrand $\sqrt{1+4x^{2}e^{-2x^{2}}}$): Since \begin{aligned} s&=\int_{a}^{b} \sqrt{1+\left[f'(x)\right]^{2}}\, dx \\ &=\int_{0}^{2} \sqrt{1+\left[-2 x e^{-x^{2}}\right]^{2}}\, dx \\ &=\int_{0}^{2} \sqrt{1+4 x^{2} e^{-2 x^{2}}}\, dx \end{aligned} and $$\Delta x= \frac{2-0}{8}=\frac{1}{4}$$ Then \begin{aligned} S_{8}=& \frac{1}{3} \Delta x[f(a)+4 f(a+\Delta x)+2 f(a+2 \Delta x)+4 f(a+3 \Delta x)+2 f(a+4 \Delta x)+4 f(a+5 \Delta x)\\ &+2 f(a+6 \Delta x)+4 f(a+7 \Delta x)+f(b)] \\ =& \frac{1}{3} \cdot \frac{1}{4}[f(0)+4 f(1/4)+2 f(2/4)+4 f(3/4)+2 f(4/4)+4 f(5/4)+2 f(6/4)+4 f(7/4)+f(2)] \\ =& \frac{1}{12}\Big[\sqrt{1+0}+4 \sqrt{1+4\left(\tfrac{1}{4}\right)^{2} e^{-2(1/4)^{2}}}+2 \sqrt{1+4\left(\tfrac{2}{4}\right)^{2} e^{-2(2/4)^{2}}}+4 \sqrt{1+4\left(\tfrac{3}{4}\right)^{2} e^{-2(3/4)^{2}}}+2 \sqrt{1+4 e^{-2}}\\ &+4 \sqrt{1+4\left(\tfrac{5}{4}\right)^{2} e^{-2(5/4)^{2}}}+2 \sqrt{1+4\left(\tfrac{6}{4}\right)^{2} e^{-2(6/4)^{2}}}+4 \sqrt{1+4\left(\tfrac{7}{4}\right)^{2} e^{-2(7/4)^{2}}}+\sqrt{1+4(2)^{2} e^{-2(2)^{2}}}\Big]\\ &\approx 2.28 \end{aligned}
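As a quick numerical check of the computation above, the short script below evaluates the same Simpson's rule approximation with n = 8 subintervals. It assumes the underlying curve is y = e^{-x^2} on [0, 2] (consistent with the derivative -2x e^{-x^2} used in the integrand); this is an inference from the worked solution rather than a restatement of the exercise.

```python
# Numerical check of the arc-length computation above: the integrand comes from
# y = e^{-x^2} on [0, 2] (so y' = -2x e^{-x^2}), and Simpson's rule with n = 8
# subintervals should reproduce the value 2.28.
import math

def integrand(x):
    return math.sqrt(1.0 + 4.0 * x**2 * math.exp(-2.0 * x**2))

def simpson(f, a, b, n):          # n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h / 3.0 * total

print(round(simpson(integrand, 0.0, 2.0, 8), 2))   # -> 2.28
```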
CommonCrawl
Effective attention-based network for syndrome differentiation of AIDS Huaxin Pang1, Shikui Wei1, Yufeng Zhao ORCID: orcid.org/0000-0003-1157-9479 2, Liyun He2, Jian Wang3, Baoyan Liu3 & Yao Zhao1 Syndrome differentiation aims at dividing patients into several types according to their clinical symptoms and signs, which is essential for traditional Chinese medicine (TCM). Several previous works were devoted to employing classical algorithms to classify syndromes and achieved encouraging results. However, the presence of ambiguous symptoms substantially disturbs the performance of syndrome differentiation; this disturbance is largely due to the diversity and complexity of patients' symptoms. To alleviate this issue, we proposed an algorithm based on the multilayer perceptron model with an attention mechanism (ATT-MLP). In particular, we first introduced an attention mechanism to assign different weights to different symptoms among the symptomatic features. In this manner, the symptoms of major significance were highlighted and ambiguous symptoms were restrained. Subsequently, those weighted features were further fed into an MLP to predict the syndrome type of AIDS. Experimental results for a real-world AIDS dataset show that our framework achieves significant and consistent improvements compared to other methods. In addition, our model can capture the key symptoms corresponding to each type of syndrome. In conclusion, our proposed method can learn the intrinsic correlations between symptoms and types of syndromes. Our model is able to learn the core cluster of symptoms for each type of syndrome from limited data, while assisting medical doctors to diagnose patients efficiently. Syndrome differentiation in Traditional Chinese Medicine (TCM) is a method of classifying the whole functional status, summarized by clinical symptoms, of different individuals during a period of illness. In TCM, it is one of the crucial aspects of studying syndromes and plays a guiding role in clinically individualized diagnosis and dialectical treatment. In recent years, researchers have intensively studied the efficacy of TCM in the treatment of AIDS [1–3]. Extensive clinical practice and data have shown that TCM has made surprising progress in reducing the HIV viral load in the blood, relieving patients' clinical symptoms, and improving quality of life [4–11]. These advances are mainly attributed to the fact that TCM practitioners classified AIDS patients by syndrome and treated them with Chinese medicine. It can be seen that syndrome classification is of great significance in the field of TCM. Differentiation is at the core of TCM and sets the precondition that ensures efficacy. The approaches used to classify syndromes in TCM, which include multivariate statistical methods, machine learning, neural networks, and other methods, have been applied in an extensive set of scenarios. Among the multivariate statistical methods, cluster analysis is one of the most fundamental; it is widely used in syndrome differentiation because it avoids the negative impact of individual subjectivity. Researchers such as Martis and Chakraborty tried to classify arrhythmia and explore the principles underlying its cause [12]. As a representative machine learning algorithm, the support vector machine (SVM) is one of the most commonly used diagnostic classification models; it has been used by researchers such as Ekiz et al. to diagnose heart disease [13], by Chen et al. to diagnose hepatitis [14], and by Zeng et al. to diagnose Alzheimer's disease [15]. Pang and Zhang tried to use a naive Bayesian network to reveal the connection between abnormal tongue appearances and diseases in a particular population [16]. In recent research, deep learning models have been widely used to diagnose diseases. Models such as noisy deep dictionary learning [17], deep belief networks (DBN) [18], and long short-term memory networks (LSTM) [19] have achieved better results. Although these methods have achieved significant improvement in syndrome classification, they remain far from being satisfactory. First, when all symptoms are used equally for classification, uncorrelated symptoms can have an excessive disturbing influence. In such cases, most of these algorithms cannot identify the representative symptoms of one type of syndrome across diverse diseases. Moreover, because of the obvious differences between diseases, there is no unified classification model for all illnesses. Due to the particularities of AIDS, most patients suffer from multiple diseases at the same time, and the clinical symptoms are various and complex [7, 9]. These factors make it relatively difficult to judge a patient's syndrome type and to define suitable treatment protocols. To tackle the above-mentioned limitations, here we propose an attention-based MLP framework (ATT-MLP). Our model relies on an attention mechanism that directly draws the global dependencies of the inputs [20]. We build feature-level attention with a hidden layer over multiple symptoms, which is expected to dynamically reduce the weights of noisy symptoms. Finally, the sequence in which all symptom features are weighted by feature-level attention is fed into the MLP to discriminate syndrome types. In contrast to other algorithms, the attention mechanism is trained to capture the dependencies that make significant contributions to the task, regardless of the distance between elements in the sequence. Another major advantage of attention is that computing the attention only requires matrix multiplication, which is highly parallelizable and easily realized [21]. With the proposed simple yet effective ATT-MLP, we evaluated our model on a real-world AIDS dataset that integrates data from multiple clinical units to provide a comprehensive view of syndrome differentiation and medication patterns of AIDS [22]. In terms of accuracy, our experimental results outperformed traditional algorithms by 13.4%. At the same time, our model selected symptoms associated with each syndrome type that are in line with the actual clinical situation. The rapid adoption and availability of disease treatment records using TCM have enabled new investigations into data-driven clinical support. The broad goal of these studies is to learn from datasets of patient records and provide personalized treatment. Here, we provide a brief overview of work specifically in the diagnosis and treatment of diseases, as well as applications of attention-based models in different fields. Diagnosis and treatment of diseases Currently, some innovative classification techniques apply to quantitative syndrome analysis. Li and Yan et al. used the k-Nearest Neighbor (k-NN) model for classification of hypertension [23]. Bayesian networks have been used to make quantitative TCM diagnosis of diseases [24].
Some studies [25–27] have also used SVM to classify disease data with certain low feature dimensions. These algorithms have achieved good results in both disease diagnosis and re-classification tasks. However, the degree of syndrome correlation with symptoms in the data set of AIDS is different. The traditional k-NN algorithm [22] that assigns the same weight to each dimension or symptom does not fit the task well. The dataset has a high data dimension and the symptoms of patients are sparse. An SVM based on distance metric classification also fails to achieve ideal results. The Bayes classification algorithm based on prior probability can reveal its inadequacies when handling high-dimensional features. Artificial neural networks (ANN) boasts large-scale distributed parallel processing, nonlinear, self-organizing, self-learning, associative memory, and other excellent features. ANN has been used to achieve many gratifying results, and some researchers [28, 29] have begun to use neural networks to conduct an exploratory study on the classification of diseases and syndromes according to TCM. Thanks to these efforts, the effectiveness of the diagnosis and classification of diseases has been significantly improved. Attention applications Attention-based models have attracted much interest from researchers recently. The selectivity of attention-based models allows them to learn alignments between different modalities. Such models have been used successfully in several tasks. Minh et al. [30] used recurrent models and attention to facilitate the task of image classification. Ling et al. [31] proposed an attention-based model for word embedding, which calculates an attention weight for each word at each possible position in the context window. Parikh et al. [32] utilized attention for the task of natural language inference. Lin et al. [33] proposed sentence-level attention for sentence embedding and applied them to author profiling, sentiment analysis, and textual entailment. Paulus, Xiong, and Socher [34] combined reinforcement learning and self-attention to capture the long-distance-dependencies nature of abstractive summarization. Vaswani et al. [35] carried out the first study to construct an end-to-end model with attention alone and reported state-of-the-art performance in machine translation tasks. Our work follows this line and applies attention to learning long-distance dependencies. To the best of our knowledge, this is the first effort to adopt an attention-based model for TCM syndrome differentiation. Frame structure Three main functional modules are included in the proposed method. They are setting the initial symptom feature vectors and syndrome label vectors, calculating attention weights for different symptom characteristics under different labels, and iterative training of the classification model. The framework of the proposed method is illustrated in Fig. 1. Architecture of the ATT-MLP model used for syndrome classification. The original symptoms sequence is taken as the only input for ATT-MLP Attention-based framework The definition of the symbols is given here to explain the attention-based model. The patient set is P={pnm,n=1,...,N,m=1,...,M},where N is the total number of patients and M is the total number of the patients' symptoms. The syndrome type of AIDS is S={ci,i=1,...,K},where K is the total number of syndrome types. The purpose of the attention-based model is to obtain different symptom weights based on the patient's own symptoms and syndrome characteristics. 
Then, the attention weight is realized by optimizing the function, which is shown in Eq. (1). $$ \mathbf{E_{ns}}=\tanh\left(A \cdot \boldsymbol{p}_{\boldsymbol{n}}\right) $$ Where pn is the query vector of a patient's symptoms, A is a weighted matrix. Ensis the attentional vector that scores how well all symptoms and the syndrome type match. We selected the non-linear form tanh, which achieves the best performance when considering different alternatives. Then, we chose the softmax function to normalize the attentional vector, which is shown in Eqs. (2)and (3). $$\begin{array}{*{20}l} \mathbf{W_{ns}}=\mit{softmax}\left(\mathbf{E_{ns}}\right) \end{array} $$ $$\begin{array}{*{20}l} w_{i} = \frac{\exp (e_{i})}{{\sum\nolimits}_{i=1}^{M} \exp(e_{i})} \end{array} $$ Where Wns is the normalized weight vector, ei is the i-th weight probability corresponding to the i-th symptom, and wi is the normalized weight value. The final output is the patient's feature vector based on attention weight as in Eq. (4): $$ \mathbf{P^{*}_{n}}=\mathbf{P_{n}} \cdot \mathbf{W_{ns}} $$ Multilayer perceptron We used a multilayer perceptron(MLP) as the last syndrome classifier. The MLP is a type of ANN composed of multiple hidden layers, where every neuron in layer i is fully connected to every other neuron in layer i+1. Typically, these networks are limited to a few hidden layers, and the data flows only in one direction, unlike recurrent or undirected models. The input is a one-dimensional vector that is consistent with the format of our data set. In our proposed method, the structure of MLP consists of three layers, which are input, hidden and output. Each hidden unit computes a weighted sum of the outputs from the input layer, followed by a nonlinear activation σ of the calculated sum as in Eq. (5). $$ h_{i} = \sigma\left(\sum\limits_{j=1}^{d} p_{rj}w_{ij}+b_{ij}\right) $$ Here, d is the number of units in the input layer, which is also the number of symptom features. prj is the weighted value of the j-th symptom of the n-th patient. And wij and bij are the weight and bias terms associated with each prj. Traditionally, sigmoid or tanh are chosen as the nonlinear activation functions, but we utilized rectified linear units (ReLU) to get good results in our model. It is worth noting that the attention-based model and MLP were trained at the same time. We used the backpropagation algorithm to update the parameters in these models. To verify the performance of the model, the desired significance level is set to 0.05. Dataset of AIDS in TCM In this study, we used data from the TCM pilot project for treating AIDS, which contains over 12,000 patients' records from over 17 provinces covering the years from 2004 to 2013. The ethics committees of the Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, granted exempt status for this study and also waived the need for informed consent. The dataset recorded the personal and disease history, symptoms, syndromes, and treatment strategies provided by certified senior medical doctors. The symptom group contains a total of 93 symptoms. 
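The following is a minimal NumPy sketch of the forward pass defined by Eqs. (1)–(5). The 93-dimensional symptom vector follows the dataset description above; the hidden width, the random initialization, the binary output layer, and the softmax output activation are illustrative assumptions (the paper specifies a three-layer MLP with ReLU but not these details), and the joint training of the attention matrix and the MLP by backpropagation is omitted.

```python
# Minimal NumPy sketch of the ATT-MLP forward pass (Eqs. 1-5). The 93-dimensional
# symptom vector matches the dataset description; the hidden width (64), the
# random initialization, and the softmax output are illustrative assumptions,
# and the training code is omitted.
import numpy as np

rng = np.random.default_rng(0)
n_symptoms, n_hidden, n_classes = 93, 64, 2   # binary model: one syndrome vs. rest

A  = rng.normal(scale=0.1, size=(n_symptoms, n_symptoms))  # attention matrix (Eq. 1)
W1 = rng.normal(scale=0.1, size=(n_hidden, n_symptoms))    # input -> hidden
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_classes, n_hidden))     # hidden -> output
b2 = np.zeros(n_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def att_mlp_forward(p):
    e = np.tanh(A @ p)                        # attention scores E_ns (Eq. 1)
    w = softmax(e)                            # normalized symptom weights W_ns (Eqs. 2-3)
    p_star = p * w                            # re-weighted symptom features P*_n (Eq. 4)
    h = np.maximum(0.0, W1 @ p_star + b1)     # hidden layer with ReLU (Eq. 5)
    return softmax(W2 @ h + b2), w            # class probabilities + attention weights

p = rng.integers(0, 2, size=n_symptoms).astype(float)  # toy binary symptom vector
probs, weights = att_mlp_forward(p)
print(probs, weights.argsort()[-5:])   # predicted probabilities, top-5 weighted symptoms
```

In the actual model, A, W1, b1, W2, and b2 are learned jointly by backpropagation, and the learned weight vector w is what the per-syndrome symptom tables and heat maps discussed below summarize.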
There are seven types of syndrome for these patients in the dataset according to the AIDS syndrome diagnostic criteria, which include (S1) phlegm-heat obstructing the lung and accumulation of heat toxin; (S2) deficiency of both qi and yin in the lung and kidney; (S3) stasis of blood and toxin due to qi deficiency; (S4) heat in the liver with accumulation of damp toxin; (S5) stagnation of qi, phlegm and blood; (S6) deficiency of spleen and stomach with retention of damp; and (S7) qi deficiency with kidney yin deficiency. From the entire AIDS dataset, we selected 10,910 cases based on the inclusion criteria. The criteria for inclusion of patients in this study were (1) age over 18; (2) a complete TCM syndrome diagnosis record; (3) explicit symptoms; (4) presentation with at least two symptoms; and (5) informed consent. Of the 10,910 patients, 6,248 were male (57.27%, with a mean age of 41.53 ± 9.47) and 4,662 were female (42.73%, with a mean age of 36.39 ± 8.54). Experimental and evaluation metric In practice, we find that certain symptoms differ in their relevance and importance for these syndromes. Hence, we build a model for each type of syndrome. In each model, we used all positive samples and randomly selected samples with other syndromes as negative samples from the whole dataset, ensuring that the ratio of positive to negative was about 1:2. All models were trained and tested through a 10-fold cross-validation method, and the average of the multiple test scores is recorded in the tables below. To check the robustness of our model, we report the prediction results with five indicators: Accuracy, Sensitivity (Recall), Specificity, Matthews correlation coefficient (MCC), and Area Under the ROC curve (AUC). Selected symptoms based on attention We used the proposed model to select representative symptoms to characterize the features of each AIDS syndrome. The selected representative symptoms for the seven AIDS syndromes are shown in Table 1. Table 1 Selected representative symptoms for the seven AIDS syndrome types. Here, we have selected symptoms whose attention weight is greater than 0.8 to characterize the features of each AIDS syndrome. According to Table 1, we find that ATT-MLP performs well in the task of selecting representative symptoms to characterize each syndrome, and the accuracy for all syndromes is more than 80%. On closer inspection, the accuracy for the S4 and S7 syndromes exceeds 87.0%, with S7 reaching the highest accuracy of 87.6%. In addition, the worst performance (S3) is still 83.5%. Overall, our model automatically diagnosed the syndrome type of AIDS patients by selecting symptoms, and its performance was close to the level of most clinicians. Secondly, from Table 1 we can also see that some symptoms appeared in the symptom groups of multiple syndromes at the same time. For example, fever occurs in S1, S3, and S6; cough appears in both S1 and S2; and slippery pulse appears in both S4 and S6. White tongue coating was included in S3 and S6. There are two main reasons for these overlaps. On the one hand, problems in different parts of the body may show the same symptoms. On the other hand, the same symptom plays different roles in different syndrome types. Combining these observations with Table 1, the symptom fever plays a more primary role in syndrome S1 than in S3. Diagnosing a syndrome accurately requires integrating information from several symptoms.
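A sketch of the evaluation protocol described above is given below: one binary classifier per syndrome, stratified 10-fold cross-validation, and the five reported indicators. A scikit-learn MLP is used here as a stand-in classifier, and X and y are placeholders for the symptom matrix and the binary syndrome labels (1 = this syndrome, 0 = sampled negatives); none of this reproduces the authors' actual training code.

```python
# Sketch of the evaluation protocol described above: per-syndrome binary task,
# 10-fold cross-validation, and the five reported indicators. A scikit-learn MLP
# stands in for ATT-MLP, and X, y are placeholders for the symptom matrix and
# the binary syndrome labels.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (accuracy_score, recall_score,
                             matthews_corrcoef, roc_auc_score)

def evaluate(X, y, n_splits=10, seed=0):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = {"acc": [], "sens": [], "spec": [], "mcc": [], "auc": []}
    for train, test in skf.split(X, y):
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                            random_state=seed).fit(X[train], y[train])
        pred = clf.predict(X[test])
        prob = clf.predict_proba(X[test])[:, 1]
        scores["acc"].append(accuracy_score(y[test], pred))
        scores["sens"].append(recall_score(y[test], pred, pos_label=1))
        scores["spec"].append(recall_score(y[test], pred, pos_label=0))  # specificity
        scores["mcc"].append(matthews_corrcoef(y[test], pred))
        scores["auc"].append(roc_auc_score(y[test], prob))
    return {k: float(np.mean(v)) for k, v in scores.items()}
```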
In order to explore the contribution of each symptom to syndrome type and disease diagnosis, more details and further discussions about symptoms' importance are discussed in the next section-Symptom Weights for Different Syndrome types. Symptom weights for different syndromes For each syndrome type, we collected the attention weight vectors for all patients within them and subjected them to normalization to obtain Fig. 2. The rows of the matrix in the figure represent seven syndrome types of the AIDS dataset and each column represents the attention weight values for one symptom. Heat maps of symptoms with attention weights for seven syndromes plotted by symptoms on the horizontal axis and syndromes on the vertical. Each cell shows the relevance percentage of symptoms for each syndrome We can see that for different syndromes, the type and number of symptoms on which the model focused were significantly different from Fig. 2. For instance, the number of symptoms of concern for S3 and S5 is more than for S7. Focusing on the local details of the matrix, some symptoms occupy a higher weight in certain syndrome types, We can find that the importance of red tongue(2) is different for seven syndrome type: the higher importance for S2 and S6, compared with S1, S4, and S5. The representative symptom white tongue coating(29) also is closely related to S2 and S5, and selected as their key symptom in Table 1. Nevertheless, only the syndrome S1 has a greater dependence on the symptom-scanty coating(36), and the weight between S1 and scanty coating is over 0.9. there another noteworthy phenomenon is that though the weights of two symptoms-cough(72) and fatigue(73) are high in all of the syndromes, they just are regarded as generalized features, not typical symptoms. These cases are in line with the actual situation. On other hand, the current model can focus on different dimensional symptoms, and the distances of these symptoms are relatively distant from the global perspective of the map. This implies that our model has prominent advantages in dealing with high-dimensional information in medical diagnosis. The heat map in Fig. 2 also shows that our model not only makes use of local symptom information for diagnosis and prediction but also combines global information to diagnose the patient's syndrome category. It is similar to the disease diagnosis method used by clinical experts, indicating that the internal mechanism of ATT-MLP is to realize the goal of predicting syndrome by studying and applying some real human experience. Tables 2 and 3 recorded the performance scores of 5 algorithms, measured with indicators including: Accuracy, Sensitivity(Recall), Specificity, MCC and AUC. The best result for each column was highlighted. Overall, our framework based on the attention mechanism significantly improved performance over the other four models in most indicators on each of the AIDS syndromes. The average of the Accuracy of syndrome differentiation based on ATT-MLP was 85.7% and the average of sensitivity and specificity reaches 74.6% and 91.3%, which were more than 10% higher than other models. However, the SVM had worse performance in the predicted task, the main reason was that the SVM classifier used all the symptoms of patients, but without considering and selecting some key symptoms and without using a fusion of different symptom information to classify syndrome type. 
Through analyzing all the experimental results by syndrome type, we find that the classification performance of the five models for S4 and S7 was significantly better than for the other syndromes, which illustrates that the symptomatic composition of these two syndromes was relatively fixed and less affected by other symptoms. The proposed model also achieved the best performance, with accuracy and sensitivity of 87.6% and 83.7% for S7. These results further indicate that the key symptoms of S7 were obvious and easy to find and capture. Table 2 Performance comparison of our proposed model and traditional methods on the dataset. A, Se, and Sp denote Accuracy, Sensitivity, and Specificity. Table 3 Performance comparison of the robustness and generalization of multiple models, together with the independent test results of ATT-MLP. MCC and AUC denote the Matthews correlation coefficient and the Area Under the ROC Curve. Because the number of positive and negative samples is unbalanced, we employed the MCC and AUC indicators to measure the robustness of our model. Meanwhile, we performed an independent test of ATT-MLP to measure the reliability of the cross-validation results. Judging from Table 3, the Random Forest (RF) and ATT-MLP models have comparable results. The MCC scores were above 0.5, and the average AUC reached 0.77 and 0.79 for RF and ATT-MLP, respectively. This shows that, when facing unbalanced samples, our model can cope by learning the internal mapping between symptoms and syndromes. The P-values of the independent test of our model were all less than 0.001, which verified the reliability of the experimental results and indicated that the model could effectively extract the representative symptom group of each syndrome. It is worth pointing out that the MLP model is our baseline model. Viewing Table 3, we see that the MLP model equipped with the attention mechanism shows a significant improvement over the performance of the original model. There is enough evidence to show that the attention mechanism plays an important role in the task of capturing key symptoms. Without changing the structure of the data itself, the attention mechanism can help the MLP to optimize and classify in the right direction by scoring the symptoms. All models had poor classification performance for S3 and S5, especially in the Sensitivity score, compared with the other five syndromes. This means that, relying only on the dataset, these models cannot find and learn the key symptom groups of S3 and S5. For our model (ATT-MLP), the difficulty of learning and mining the pivotal symptom groups of the S3 and S5 syndromes objectively exists. Our initial conjecture is that the symptoms of patients labelled with these two syndromes (S3 and S5) are complex and diverse, and that the interactions between symptoms are complicated. In order to test this hypothesis, we conducted the following experiment: an evaluation and measurement of the number of symptoms related to each syndrome. Number of syndromes & metric In order to verify whether the symptoms selected by the proposed attention-based model are the key symptoms of a given syndrome, we verified the results as follows. Firstly, we used the attention framework to score all symptoms and removed some symptoms according to different score thresholds. Then the remaining symptoms were re-scored to classify syndromes.
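The threshold experiment just described can be sketched as follows; the attention_weights matrix and the evaluate helper from the earlier sketch are assumed to be available, and the threshold grid is illustrative.

```python
# Sketch of the threshold experiment just described: rank symptoms by their mean
# attention weight, drop those below a cut-off, and re-evaluate the classifier on
# the reduced symptom set. `attention_weights` (patients x symptoms) and the
# `evaluate` helper from the earlier sketch are assumed placeholders.
import numpy as np

def select_and_reevaluate(X, y, attention_weights, thresholds=(0.2, 0.4, 0.6, 0.8)):
    mean_w = attention_weights.mean(axis=0)          # average weight per symptom
    results = {}
    for thr in thresholds:
        keep = np.where(mean_w >= thr)[0]            # symptoms above the cut-off
        if keep.size == 0:
            continue
        results[thr] = (keep.size, evaluate(X[:, keep], y))
    return results
```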
Recording model classification performance indicators with different thresholds. The changes in these indicators are shown in Fig. 3. Performance comparison in the case of different numbers of representative symptoms for seven syndromes By observing these figures, we can clearly see that as the number of selected symptoms decreases, there is no significant decline in the classification performance of our model. As to syndromes-S4 and S7, with the number of symptoms decreasing, our model keeps high-level score in the accuracy rate of diagnosis, which suggests that their main symptom groups remain stable and our model can efficiently extract their core symptoms from some limited samples. However, through observation of Fig. 3, the classification effects of S1 and S3 syndromes were greatly affected by the change in the number of symptoms, and the change of the index was more than 15%. One convictive explanation for these phenomena is that the combinations of primary symptom for different syndromes are diverse and the association between the symptoms is complex. We further randomly selected 100 samples labelled S1 and S3 separately, shown in Fig. 4. For syndrome-S1, main symptoms such as the red tongue(2), yellow coating(30), string-taut pulse(70), and fever(71) are relatively obvious to be seen, but other symptoms are difficult to summarize. As for syndrome-S3, white coating(29) and thready pulse(63) is easily detected. In this case, a single framework does not fit well into these clinical diagnostic methods. We need to combine other methods to do more comprehensive researches. The sample cases hot map of syndromes S1-(a) and S3-(b). the symptoms and sample index severally are shown on the horizontal and the vertical axis Secondly, we can also find another fact from these figures. As the selected symptoms decrease, all the indicator values decrease slightly. This illustrates that while our model eliminates the effects of irrelevant symptoms, it is also possible to delete certain key symptoms unintentionally or to sever the relationships between certain features. Since these results to have a small decrease in model performance, we will conduct further exploration and research of these matters in follow-up work. In this paper, we proposed an ATT-MLP-based syndrome classification model. Our model can diagnose the syndrome of AIDS patients effectively, and select representative symptoms of each syndrome. This provides new ideas and methods for the establishment of a TCM syndrome differentiation model for patients with AIDS. Our proposed model shows good performance in the classification of the seven syndromes in the dataset and gives an average accuracy of 85.7%, an average sensitivity (recall) rate of 74.6%, and an average specificity of 91.3%. The results of these classification experiments are superior to what has been obtained using more conventional classifiers in previous studies. There are two advantages to the proposed model. First, compared with other syndrome differentiation models, our model can accurately select representative features to characterize explicitly the characteristics of the AIDS syndrome. This provides an objective basis for TCM syndrome differentiation, which often relied on empirical medicine. Second, our model can also assign reasonable weights to the selected symptoms. For the same symptom, the weight is different for different syndromes, which means that only the more heavily weighted symptoms play a key role in the diagnosis of a given syndrome. 
This can help doctors to develop treatments that are appropriate for their patients. Due to the complexity of AIDS, some patients may have several AIDS syndromes. However, current attention-based models are not good at handling the multi-label task classification. In the future, we will consider abandoning this model, which scores the symptoms individually. Instead, we will explore the effect of the degree of association between the symptoms on the diagnosis of the syndrome. Then we could use the efficient deep learning framework to build a more complete syndrome differentiation model. In conclusion, our proposed method can learn these intrinsic correlations between symptoms and syndromes. As a matter of fact, the relevant information was summarized by experts with rich clinical experience. These experiments demonstrate that our model can use the attention mechanism to select representative symptoms for each syndrome and improve the diagnosis accuracy of patients' syndromes. This study was based in part on data from the TCM pilot project for treating AIDS, provided by the Institute of Basic Research in Clinical Medicine, and was managed by China Academy of Chinese Medical Sciences. The datasets are available from the corresponding author on reasonable request. TCM: Tradition chinese medicine ATT-MLP: Multilayer perceptron model with the attention mechanism AIDS: HIV: SVM: DBN: Deep belief networks LSTM: Long-short term memory networks ANN: Zhou X, Chen S, Liu B, Zhang R, Wang Y, Zhang X. Extraction of hierarchical core structures from traditional chinese medicine herb combination network. In: Proceedings of 2008 International Conference on Advanced Intelligence. Beijing, China: Posts & Telecom Press: 2008. p. 262–7. Zhou X, Chen S, Liu B, Zhang R, Wang Y, Li P, Guo Y, Zhang H, Gao Z, Yan X. Development of traditional chinese medicine clinical data warehouse for medical knowledge discovery and decision support. Artif Intell Med. 2010; 48:139–52. Chan K. Progress in traditional chinese medicine. Trends Pharmacol Sci. 1995; 16(6):182–7. He L, Liu B, Wang J, Zha D, Liu W. Establishment of an index system to evaluate the efficacy of traditional chinese medicines in treating aiv/aids. Chin J AIDS STD. 2010; 16(3):288–91. Zhang W-F. Investigation on tcm syndrome and quality of life among acquired immune deficiency syndrome. Guangzhou Univ Chin Med. 2010; 13:266–7. Liu Y, Wang J. Research thoughts on tcm symptomatology of aids. J He nan Univ Chin Med. 2011; 26(6):641–3. Liu Y, Wang J. Discussion and analysis on break through point and therapeutic effect evaluation system: Aids treated with tcm. China J Tradit Chin Med Pharm. 2010; 25(8):159–61. Xie R, Zhang C, Li S, et al. Clinical observation of 51 cases of aids based on integrative medicine. Yunnan J Tradit Chin Med Materia Med. 2008; 29(12):21–22. Feng L. Clinical observation of 104 cases of tcm treating aids. Yunnan J Tradit Chin Med Materia Med. 2011; 32(8):20–21. Peng B, Wang D. A symptomatic clinical observation of 65 cases of hiv infection for righting detox treatment tablets. Chin Arch Tradit Chin Med. 2006; 24(10):1781–2. Zhang M, Fu L. A symptomatic clinical observation of 65 cases of hiv infection for righting detox treatment tablets. Chin Arch Tradit Chin Med. 2006; 24(10):1781–2. Roshan JM, ChanDan C. Arrhythmia disease diagnosis using neural network, svm, and genetic algorithm-optimized k-means clustering. J Mech Med Biol. 2011; 11(4):897–915. Ekız S, Erdoğmuş P. Comparative study of heart disease classification. 
In: 2017 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT). IEEE: 2017. p. 1–4. https://doi.org/10.1109/EBBT.2017.7956761. Chen HL, et al. A new hybrid method based on local fisher discriminant analysis and support vector machines for hepatitis disease diagnosis. Expert Syst Appl. 2011; 38(9):11796–803. Thelaidjia T, S C. A new approach of preprocessing with svm optimization based on pso for bearing fault diagnosis. In: 13th International Conference on Hybrid Intelligent Systems (HIS 2013). Gammarth: 2013. p. 319–24. https://doi.org/10.1109/HIS.2013.6920452. Pang B, et al. Computerized tongue diagnosis based on bayesian networks. IEEE Trans Biomed Eng. 2004; 51(10):1803–10. Majumdar A, Singhal V. Noisy deep dictionary learning: Application to alzheimer's disease classification. In: 2017 International Joint Conference on Neural Networks (IJCNN). Anchorage: IEEE: 2017. p. 2679–83. https://doi.org/10.1109/IJCNN.2017.7966184. Ying J, Yang C, Li Q, Xue W, Li T, Cao W. Severity classification of chronic obstructive pulmonary disease based on deep learning. Sheng wu yi xue gong cheng xue za zhi= J Biomed Eng = Shengwu yixue gongchengxue zazhi. 2017; 34(6):842–9. Chae S, Kwon S, Lee D. Predicting infectious disease using deep learning and big data. Int J Environ Res Public Health. 2018; 15(8):1596. Tan Z, Wang M, Xie J, Chen Y, Shi X. Deep semantic role labeling with self-attention. In: 32th the Association for the Advance of Artificial Intelligence (AAAI) Conference on Artificial Intelligence (AAAI 2018). New Orleans: 2018. Shen T, Zhou T, Long G, Jiang J, Pan S, Zhang C. Disan: Directional self-attention network for rnn/cnn-free language understanding. In: 32th the Association for the Advance of Artificial Intelligence (AAAI) Conference on Artificial Intelligence (AAAI 2018). New Orleans: 2018. Zhao Y, et al.Tcm syndrome differentiation of aids using subspace clustering algorithm. In: 2014 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). Belfast: 2014. p. 219–24. https://doi.org/10.1109/BIBM.2014.6999363. Li G-Z, et al.Intelligent ZHENG Classification of Hypertension Depending on ML-kNN and Information Fusion. Evid Based Complement Alternat Med. 2012; 2012(3):837245. Wang H, Wang J. A quantitative diagnostic method based on bayesian networks in traditional chinese medicine. In: International Conference on Neural Information Processing. Springer: 2006. p. 176–83. Chang Y, et al.A support vector machine classifier reduces interscanner variation in the HRCT classification of regional disease pattern in diffuse lung disease: Comparison to a Bayesian classifier. Med Phys. 2013; 40(5):n/a. Wang X, Pardalos PM. A survey of support vector machines with uncertainties. Ann Data Sci. 2014; 1:293–309. Raikwal JS, Saxena K. Performance Evaluation of SVM and K-Nearest Neighbor Algorithm over Medical Data set. Int J Comput Appl. 2012; 50(14):35–9. Tang ACY, Chung JWY, Wong TKS. Validation of a Novel Traditional Chinese Medicine Pulse Diagnostic Model Using an Artificial Neural Network. Evid-Based Complement Alternat Med. 2012; 2012:685094. Tang AC, Chung JW, Wong TK. Digitalizing traditional chinese medicine pulse diagnosis with artificial neural network. Telemed e-health. 2012; 18(6):446–53. Mnih V, Heess N, Graves A, et al. Recurrent models of visual attention. In: 28th Conference on Neural Information Processing Systems (NeurIPS 2014). Montreal: 2014. p. 2204–12. Ling W, Tsvetkov Y, Amir S, Fermandez R, Dyer C, Black AW, Trancoso I, Lin C-C. 
We thank Professor Mozheng Wu (International Training Center, China Academy of Chinese Medical Sciences) for guidance and for help with the language editing of our manuscript. This work was supported in part by the National Key Research and Development Program of China (No. 2017YFC1703503) and the National Natural Science Foundation of China (Research on Dynamic Target Relationship of AIDS Syndrome Based on Multiple Examples and Multiple Marks, No. 81674101), awarded to Yufeng Zhao, and in part by the National Natural Science Foundation of China (No. 61532005, No. 61572065), the Program of China Scholarships Council (No. 201807095006), and the Fundamental Research Funds for the Central Universities (No. 2018JBZ001), awarded to Shikui Wei. Author affiliations: Beijing Jiaotong University, No. 3 Shangyuancun, Haidian District, Beijing, 100044, China (Huaxin Pang, Shikui Wei & Yao Zhao); Institute of Basic Research in Clinical Medicine/National Data Center of Traditional Chinese Medicine, China Academy of Chinese Medical Sciences, No. 16 South Street, Dongzhimen, Dongcheng District, Beijing, 100700, China (Yufeng Zhao & Liyun He); China Academy of Chinese Medical Sciences, No. 16 South Street, Dongzhimen, Dongcheng District, Beijing, 100700, China (Jian Wang & Baoyan Liu). Liyun He, Jian Wang and Baoyan Liu provided the data and materials. Huaxin Pang, Shikui Wei and Yufeng Zhao designed the study and participated in data analysis and interpretation. Huaxin Pang and Yufeng Zhao finalized the experimental work, interpreted the results and prepared the figures. Huaxin Pang wrote the paper; Shikui Wei, Yao Zhao and Yufeng Zhao edited and revised it. All authors read and approved the final version of the manuscript. Correspondence to Yufeng Zhao. The ethics committee of the Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, granted exempt status for this study and waived the need for informed consent; patient consent was exempted because all research data in this study were fully anonymized. The interpretation and conclusions contained herein do not represent the China Academy of Chinese Medical Sciences. The authors declare that they have no competing interests. Pang, H., Wei, S., Zhao, Y. et al. Effective attention-based network for syndrome differentiation of AIDS. BMC Med Inform Decis Mak 20, 264 (2020). https://doi.org/10.1186/s12911-020-01249-0. Keywords: syndrome differentiation; explainable AI in medical informatics and decision support.
Game theory is the study of mathematical models of strategic interaction among rational decision-makers.[1] It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. In the 21st century, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. Game theory was developed extensively in the 1950s by many scholars. It was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. As of 2014, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole, eleven game theorists have won the economics Nobel Prize. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory. Discussions of two-person games began long before the rise of modern, mathematical game theory. In 1713, a letter attributed to Charles Waldegrave analyzed a game called "le her". He was an active Jacobite and uncle to James Waldegrave, a British diplomat.[2] The true identity of the original correspondent is somewhat elusive given the limited details and evidence available and the subjective nature of its interpretation. One theory postulates Francis Waldegrave as the true correspondent, but this has yet to be proven.[3] In this letter, Waldegrave provides a minimax mixed strategy solution to a two-person version of the card game le Her, and the problem is now known as Waldegrave problem.
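A minimax mixed strategy of the kind Waldegrave described can be computed mechanically for any small two-person zero-sum game by linear programming. The following is a minimal sketch, assuming NumPy and SciPy are available; the example matrix is matching pennies, used here as a stand-in rather than the actual payoffs of le Her.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_mixed_strategy(A):
    """Return (strategy, value) for the row player of a zero-sum game.

    A[i, j] is the payoff to the row player when row i meets column j.
    The row player maximizes the game value v subject to every column
    response yielding at least v in expectation.
    """
    A = np.asarray(A, dtype=float)
    n_rows, n_cols = A.shape
    # Decision variables: x_1..x_n (row probabilities) and v (game value).
    # linprog minimizes, so minimize -v to maximize v.
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0
    # For every column j:  v - sum_i A[i, j] * x_i <= 0
    A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    # Probabilities sum to one.
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:-1], res.x[-1]

# Matching pennies: the minimax strategy mixes 50/50 and the game value is 0.
strategy, value = minimax_mixed_strategy([[1, -1], [-1, 1]])
print(strategy, value)  # approximately [0.5, 0.5], 0.0
```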
In his 1838 Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth), Antoine Augustin Cournot considered a duopoly and presents a solution that is the Nash equilibrium of the game. In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems.[4] In 1938, the Danish mathematical economist Frederik Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem.[5] In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix was symmetric and provides a solution to a non-trivial infinite game (known in English as Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite two-person zero-sum games, a conjecture that was proved false by von Neumann. Game theory did not really exist as a unique field until John von Neumann published the paper On the Theory of Games of Strategy in 1928.[6][7] Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior co-authored with Oskar Morgenstern.[8] The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility (of money) as an independent discipline. Von Neumann's work in game theory culminated in this 1944 book. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. Subsequent work focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.[9] In 1950, the first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by notable mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy.[10] Around this same time, John Nash developed a criterion for mutual consistency of players' strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies. Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. The 1950s also saw the first applications of game theory to philosophy and political science. In 1979 Robert Axelrod tried setting up computer programs as players and found that in tournaments between them the winner was often a simple "tit-for-tat" program—submitted by Anatol Rapoport—that cooperates on the first step, then, on subsequent steps, does whatever its opponent did on the previous step. 
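A toy version of such a tournament is easy to sketch. This is an illustrative simulation only: the strategy set is deliberately tiny, and the payoff numbers are the conventional prisoner's dilemma values (temptation 5, reward 3, punishment 1, sucker 0), not the exact settings Axelrod used.

```python
# Iterated prisoner's dilemma: tit-for-tat against a few simple strategies.
# PAYOFF[(my_move, their_move)] -> (my_points, their_points)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def always_cooperate(my_history, their_history):
    return 'C'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = [tit_for_tat, always_defect, always_cooperate]
totals = {s.__name__: 0 for s in strategies}
for a in strategies:
    for b in strategies:
        if a is not b:
            sa, sb = play(a, b)
            totals[a.__name__] += sa
            totals[b.__name__] += sb
# Which strategy tops this round-robin depends heavily on the mix of entrants;
# Axelrod's tournaments used a much larger and more varied field.
print(totals)
```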
The same winner was also often obtained by natural selection; a fact that is widely taken to explain cooperation phenomena in evolutionary biology and the social sciences.[11] Prize-winning achievements In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. Later he would introduce trembling hand perfection as well. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory. In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection, and common knowledge[lower-alpha 1] were introduced and analyzed. In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing equilibrium coarsening and correlated equilibria, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences. In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory". Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict.[1] Hurwicz introduced and formalized the concept of incentive compatibility. In 2012, Alvin E. Roth and Lloyd S. Shapley were awarded the Nobel Prize in Economics "for the theory of stable allocations and the practice of market design". In 2014, the Nobel went to game theorist Jean Tirole. See also: List of games in game theory Cooperative / non-cooperative Main articles: Cooperative game and Non-cooperative game A game is cooperative if the players are able to form binding commitments externally enforced (e.g. through contract law). A game is non-cooperative if players cannot form alliances or if all agreements need to be self-enforcing (e.g. through credible threats).[12] Cooperative games are often analyzed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It is opposed to the traditional non-cooperative game theory which focuses on predicting individual players' actions and payoffs and analyzing Nash equilibria.[13][14] Cooperative game theory provides a high-level approach as it describes only the structure, strategies, and payoffs of coalitions, whereas non-cooperative game theory also looks at how bargaining procedures will affect the distribution of payoffs within each coalition. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold) provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation. While it would thus be optimal to have all games expressed under a non-cooperative framework, in many instances insufficient information is available to accurately model the formal procedures available during the strategic bargaining process, or the resulting model would be too complex to offer a practical tool in the real world. 
In such cases, cooperative game theory provides a simplified approach that allows analysis of the game at large without having to make any assumption about bargaining powers. Symmetric / asymmetric Main article: Symmetric game [Table: an asymmetric game. Both players choose E or F; payoffs (row, column): E/E = 1, 2; E/F = 0, 0; F/E = 0, 0; F/F = 1, 2.] A symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. That is, if the identities of the players can be changed without changing the payoff to the strategies, then a game is symmetric. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games. Some[who?] scholars would consider certain asymmetric games as examples of these games as well. However, the most common payoffs for each of these games are symmetric. The most commonly studied asymmetric games are games where there are not identical strategy sets for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game shown in the table above is asymmetric despite having identical strategy sets for both players. Zero-sum / non-zero-sum Main article: Zero-sum game [Table: a zero-sum game. The row player chooses A or B; payoffs (row, column) in the first row are –1, 1 and 3, –3, and in the second row 0, 0 and –2, 2.] Zero-sum games are a special case of constant-sum games in which choices by players can neither increase nor decrease the available resources. In zero-sum games, the total benefit to all players in the game, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others).[15] Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess. Many games studied by game theorists (including the famed prisoner's dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another. Constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings. Simultaneous / sequential Main articles: Simultaneous game and Sequential game Simultaneous games are games where both players move simultaneously, or if they do not move simultaneously, the later players are unaware of the earlier players' actions (making them effectively simultaneous). Sequential games (or dynamic games) are games where later players have some knowledge about earlier actions. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while they do not know which of the other available actions the first player actually performed. The difference between simultaneous and sequential games is captured in the different representations discussed above.
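Pure-strategy equilibria of small bimatrix games such as the asymmetric example above can be found by directly enumerating best responses. The following minimal sketch (plain Python, brute force over pure strategies only, so mixed-strategy equilibria are not searched) encodes the asymmetric 2×2 game from this section.

```python
# Brute-force pure-strategy Nash equilibria of a bimatrix game.
# row_payoff[i][j], col_payoff[i][j]: payoffs when row plays i and column plays j.
# The numbers below are the asymmetric 2x2 game from this section (strategies E, F).
row_payoff = [[1, 0],
              [0, 1]]
col_payoff = [[2, 0],
              [0, 2]]
labels = ['E', 'F']

def pure_nash(row_payoff, col_payoff):
    equilibria = []
    n_rows, n_cols = len(row_payoff), len(row_payoff[0])
    for i in range(n_rows):
        for j in range(n_cols):
            # Neither player can gain by unilaterally switching strategies.
            row_best = all(row_payoff[i][j] >= row_payoff[k][j] for k in range(n_rows))
            col_best = all(col_payoff[i][j] >= col_payoff[i][k] for k in range(n_cols))
            if row_best and col_best:
                equilibria.append((labels[i], labels[j]))
    return equilibria

print(pure_nash(row_payoff, col_payoff))  # [('E', 'E'), ('F', 'F')]
```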
Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation of extensive to normal form is one-way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection. In short, the differences between sequential and simultaneous games are as follows: sequential games are normally denoted by decision trees, players have prior knowledge of opponents' moves, a time axis is present, and they are also known as extensive-form games or extensive games; simultaneous games are normally denoted by payoff matrices, players have no prior knowledge of opponents' moves, there is no time axis, and they are also known as strategic games. Perfect information and imperfect information Main article: Perfect information [Figure: a game of imperfect information; the dotted line represents ignorance on the part of player 2, formally called an information set.] An important subset of sequential games consists of games of perfect information. A game is one of perfect information if all players know the moves previously made by all other players. Most games studied in game theory are imperfect-information games.[citation needed] Examples of perfect-information games include tic-tac-toe, checkers, infinite chess, and Go.[16][17][18][19] Many card games are games of imperfect information, such as poker and bridge.[20] Perfect information is often confused with complete information, which is a similar concept.[citation needed] Complete information requires that every player know the strategies and payoffs available to the other players but not necessarily the actions taken. Games of incomplete information can be reduced, however, to games of imperfect information by introducing "moves by nature".[21] Combinatorial games Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and go. Games that involve imperfect information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve particular problems and answer general questions.[22] Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory.[23][24] A typical game that has been solved this way is Hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.[25] Research in artificial intelligence has addressed both perfect and imperfect information games that have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha–beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.[22][26] Infinitely long games Main article: Determinacy Games, as studied by economists and real-world game players, are generally finished in finitely many moves.
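For finite games of perfect information like those just described, an optimal strategy can be computed by exhaustively searching the game tree. The sketch below solves a tiny subtraction game (players alternately take 1–3 objects from a pile; whoever takes the last object wins); the specific game is chosen only as a convenient, self-contained stand-in for the class.

```python
from functools import lru_cache

# Subtraction game: players alternately remove 1-3 objects from a single pile;
# the player who takes the last object wins.  Finite, perfect information, no chance.

@lru_cache(maxsize=None)
def wins(pile):
    """True if the player to move can force a win from this pile size."""
    if pile == 0:
        return False  # no move left: the previous player took the last object and won
    # A position is winning if some move leads to a losing position for the opponent.
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    for take in (1, 2, 3):
        if take <= pile and not wins(pile - take):
            return take
    return 1  # every move loses, so any move will do

for pile in range(1, 11):
    print(pile, 'win' if wins(pile) else 'lose', best_move(pile))
# Pile sizes that are multiples of 4 are losing for the player to move.
```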
Pure mathematicians are not so constrained, and set theorists in particular study games that last for infinitely many moves, with the winner (or other payoff) not known until after all those moves are completed. The focus of attention is usually not so much on the best way to play such a game, but whether one player has a winning strategy. (It can be proven, using the axiom of choice, that there are games – even with perfect information and where the only outcomes are "win" or "lose" – for which neither player has a winning strategy.) The existence of such strategies, for cleverly designed games, has important consequences in descriptive set theory. Discrete and continuous games Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities. Differential games Differential games such as the continuous pursuit and evasion game are continuous games where the evolution of the players' state variables is governed by differential equations. The problem of finding an optimal strategy in a differential game is closely related to the optimal control theory. In particular, there are two types of strategies: the open-loop strategies are found using the Pontryagin maximum principle while the closed-loop strategies are found using Bellman's Dynamic Programming method. A particular case of differential games are the games with a random time horizon.[27] In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval. Evolutionary game theory studies players who adjust their strategies over time according to rules that are not necessarily rational or farsighted.[28] In general, the evolution of strategies over time according to such rules is modeled as a Markov chain with a state variable such as the current strategy profile or how the game has been played in the recent past. Such rules may feature imitation, optimization, or survival of the fittest. In biology, such models can represent evolution, in which offspring adopt their parents' strategies and parents who play more successful strategies (i.e. corresponding to higher payoffs) have a greater number of offspring. In the social sciences, such models typically represent strategic adjustment by players who play a game many times within their lifetime and, consciously or unconsciously, occasionally adjust their strategies.[29] Stochastic outcomes (and relation to other fields) Individual decision problems with stochastic outcomes are sometimes considered "one-player games". These situations are not considered game theoretical by some authors.[by whom?] They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent system. Although these fields may have different motivators, the mathematics involved are substantially the same, e.g. 
using Markov decision processes (MDP).[30] Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves" ("moves by nature").[31] This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game. For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst-case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also be overestimating extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen.[32] (See Black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.) General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.[32] These are games the play of which is the development of the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory. The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard.[33] whereby a situation is framed as a strategic game in which stakeholders try to realize their objectives by means of the options available to them. Subsequent developments have led to the formulation of confrontation analysis. Pooling games These are games prevailing over all forms of society. Pooling games are repeated plays with changing payoff table in general over an experienced path, and their equilibrium strategies usually take a form of evolutionary social convention and economic convention. Pooling game theory emerges to formally recognize the interaction between optimal choice in one play and the emergence of forthcoming payoff table update path, identify the invariance existence and robustness, and predict variance over time. The theory is based upon topological transformation classification of payoff table update over time to predict variance and invariance, and is also within the jurisdiction of the computational law of reachable optimality for ordered system.[34] Mean field game theory Mean field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines, and by mathematician Pierre-Louis Lions and Jean-Michel Lasry. Representation of games The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome. 
(Eric Rasmusen refers to these four "essential elements" by the acronym "PAPI".)[35][36][37][38] A game theorist typically uses these elements, along with a solution concept of their choosing, to deduce a set of equilibrium strategies for each player such that, when these strategies are employed, no player can profit by unilaterally deviating from their strategy. These equilibrium strategies determine an equilibrium to the game—a stable state in which either one outcome occurs or a set of outcomes occur with known probability. Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games. Extensive form Main article: Extensive form game The extensive form can be used to formalize games with a time sequencing of moves. Games here are played on trees: each vertex (or node) represents a point of choice for a player, the player is specified by a number listed by the vertex, the lines out of the vertex represent the possible actions for that player, and the payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree.[39] To solve any extensive form game, backward induction must be used. It involves working backward up the game tree to determine what a rational player would do at the last vertex of the tree, what the player with the previous move would do given that the player with the last move is rational, and so on until the first vertex of the tree is reached.[40] Consider an example game with two players. The way this particular game is structured (i.e., with sequential decision making and perfect information), Player 1 "moves" first by choosing either F or U (fair or unfair). Next in the sequence, Player 2, who has now seen Player 1's move, chooses to play either A or R. Once Player 2 has made their choice, the game is considered finished and each player gets their respective payoff. Suppose that Player 1 chooses U and then Player 2 chooses A: Player 1 then gets a payoff of "eight" (which in real-world terms can be interpreted in many ways, the simplest of which is in terms of money but could mean things such as eight days of vacation or eight countries conquered or even eight more opportunities to play the same game against other players) and Player 2 gets a payoff of "two". The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e. the players do not know at which point they are), or a closed line is drawn around them. (See the example in the imperfect information section.) Normal form Main article: Normal-form game [Table: normal form, or payoff matrix, of a 2-player, 2-strategy game. Player 1 chooses Up or Down, Player 2 chooses Left or Right; payoffs (Player 1, Player 2): Up/Left = 4, 3; Up/Right = –1, –1; Down/Left = 0, 0; Down/Right = 3, 4.] The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example above). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior.
The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3. When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form. Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical.[41] Characteristic function form Main article: Cooperative game In games that possess removable utility, separate rewards are not given; rather, the characteristic function decides the payoff of each unity. The idea is that the unity that is 'empty', so to speak, does not receive a reward at all. The origin of this form is to be found in John von Neumann and Oskar Morgenstern's book; when looking at these instances, they guessed that when a union C appears, it works against the fraction (N/C) as if two individuals were playing a normal game. The balanced payoff of C is a basic function. Although there are differing examples that help determine coalitional amounts from normal games, not all appear that in their function form can be derived from such. Formally, a characteristic function is seen as (N, v), where N represents the group of people and v : 2^N → R is a normal utility. Such characteristic functions have expanded to describe games where there is no removable utility. Alternative game representations See also: Succinct game Alternative game representation forms exist and are used for some subclasses of games or adjusted to the needs of interdisciplinary research.[42] In addition to classical game representations, some of the alternative representations also encode time-related aspects. Examples include: congestion games[43] (1973; represented by functions; a subset of n-person games with simultaneous moves; no time aspect); sequential form[44] (1994; matrices; 2-person games of imperfect information; no time aspect); timed games[45][46] (1994; functions; 2-person games; encodes time); Gala[47] (1997; logic; n-person games of imperfect information; no time aspect); local effect games[48] (2003; functions; a subset of n-person games with simultaneous moves; no time aspect); GDL[49] (2005; logic; deterministic n-person games with simultaneous moves; no time aspect); game Petri-nets[50] (2006; Petri nets; deterministic n-person games with simultaneous moves; no time aspect); continuous games[51] (2007; functions; a subset of 2-person games of imperfect information; encodes time); PNSI[52][53] (2008; Petri nets; n-person games of imperfect information; encodes time); action graph games[54] (2012; graphs and functions; n-person games with simultaneous moves; no time aspect); and graphical games[55] (2015; graphs and functions; n-person games with simultaneous moves; no time aspect). General and applied uses As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The first use of game-theoretic analysis was by Antoine Augustin Cournot in 1838 with his solution of the Cournot duopoly.
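The Cournot duopoly just mentioned can be made concrete with a short numerical sketch. It assumes a linear inverse demand P = a − b(q1 + q2) and a common constant marginal cost c; the parameter values are arbitrary, chosen only to show best-response iteration converging to the Cournot–Nash quantities q* = (a − c)/(3b).

```python
# Cournot duopoly with inverse demand P = a - b*(q1 + q2) and marginal cost c.
# Iterating best responses converges to the Cournot-Nash equilibrium quantity
# q* = (a - c) / (3 * b) for each firm.
a, b, c = 100.0, 1.0, 10.0   # illustrative parameters only

def best_response(q_other):
    """Profit-maximizing quantity given the rival's quantity (never negative)."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

q1, q2 = 0.0, 0.0
for _ in range(50):
    q1, q2 = best_response(q2), best_response(q1)

analytic = (a - c) / (3 * b)
print(round(q1, 3), round(q2, 3), round(analytic, 3))  # all three approach 30.0
```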
The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well. Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book Evolution and the Theory of Games.[56] In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior.[57] In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic arguments of this type can be found as far back as Plato.[58] An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules".[59] Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions. Description and modeling [Figure: a four-stage centipede game.] The primary use of game theory is to describe and model how human populations behave.[citation needed] Some[who?] scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has been criticized. It is argued that the assumptions made by game theorists are often violated when applied to real-world situations. Game theorists usually assume players act rationally, but in practice, human behavior often deviates from this model. Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, empirical work has shown that in some classic games, such as the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments and whether the analysis of the experiments fully captures all aspects of the relevant situation.[lower-alpha 2] Some game theorists, following the work of John Maynard Smith and George R. Price, have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics). Prescriptive or normative analysis [Table: the prisoner's dilemma. Each player chooses Cooperate or Defect; payoffs (row, column): Cooperate/Cooperate = −1, −1; Cooperate/Defect = −10, 0; Defect/Cooperate = 0, −10; Defect/Defect = −5, −5.] Some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave.
Since a strategy, corresponding to a Nash equilibrium of a game constitutes one's best response to the actions of the other players – provided they are in (the same) Nash equilibrium – playing a strategy that is part of a Nash equilibrium seems appropriate. This normative use of game theory has also come under criticism.[citation needed] Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents.[lower-alpha 3][61][62][63] Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers and acquisitions pricing,[64] fair division, duopolies, oligopolies, social network formation, agent-based computational economics,[65][66] general equilibrium, mechanism design,[67][68][69][70][71] and voting systems;[72] and across such broad areas as experimental economics,[73][74][75][76][77] behavioral economics,[78][79][80][81][82][83] information economics,[35][36][37][38] industrial organization,[84][85][86][87] and political economy.[88][89][90][91] This research usually focuses on particular sets of strategies known as "solution concepts" or "equilibria". A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.[92][93] The payoffs of the game are generally taken to represent the utility of individual players. A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of a particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Economists and business professors suggest two primary uses (noted above): descriptive and prescriptive.[57] Sensible decision-making is critical for the success of projects. In project management, game theory is used to model the decision-making process of players, such as investors, project managers, contractors, sub-contractors, governments and customers. Quite often, these players have competing interests, and sometimes their interests are directly detrimental to other players, making project management scenarios well-suited to be modeled by game theory. Piraveenan (2019)[94] in his review provides several examples where game theory is used to model project management scenarios. For instance, an investor typically has several investment options, and each option will likely result in a different project, and thus one of the investment options has to be chosen before the project charter can be produced. Similarly, any large project involving subcontractors, for instance, a construction project, has a complex interplay between the main contractor (the project manager) and subcontractors, or among the subcontractors themselves, which typically has several decision points. For example, if there is an ambiguity in the contract between the contractor and subcontractor, each must decide how hard to push their case without jeopardizing the whole project, and thus their own stake in it. 
Similarly, when projects from competing organizations are launched, the marketing personnel have to decide what is the best timing and strategy to market the project, or its resultant product or service, so that it can gain maximum traction in the face of competition. In each of these scenarios, the required decisions depend on the decisions of other players who, in some way, have competing interests to the interests of the decision-maker, and thus can ideally be modeled using game theory. Piraveenan[94] summarises that two-player games are predominantly used to model project management scenarios, and based on the identity of these players, five distinct types of games are used in project management. Government-sector–private-sector games (games that model public–private partnerships) Contractor–contractor games Contractor–subcontractor games Subcontractor–subcontractor games Games involving other players In terms of types of games, both cooperative as well as non-cooperative, normal-form as well as extensive-form, and zero-sum as well as non-zero-sum are used to model various project management scenarios. The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians. Early examples of game theory applied to political science are provided by Anthony Downs. In his 1957 book An Economic Theory of Democracy,[95] he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant which allows for candidate divergence. Game theory was applied in 1962 to the Cuban Missile Crisis during the presidency of John F. Kennedy.[96] It has also been proposed that game theory explains the stability of any form of political government. Taking the simplest case of a monarchy, for example, the king, being only one person, does not and cannot maintain his authority by personally exercising physical control over all or even any significant number of his subjects. Sovereign control is instead explained by the recognition by each citizen that all other citizens expect each other to view the king (or other established government) as the person whose orders will be followed. Coordinating communication among citizens to replace the sovereign is effectively barred, since conspiracy to replace the sovereign is generally punishable as a crime. Thus, in a process that can be modeled by variants of the prisoner's dilemma, during periods of stability no citizen will find it rational to move to replace the sovereign, even if all the citizens know they would be better off if they were all to act collectively.[97] A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. 
Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy.[98] However, game theory predicts that two countries may still go to war even if their leaders are cognizant of the costs of fighting. War may result from asymmetric information; two countries may have incentives to mis-represent the amount of military resources they have on hand, rendering them unable to settle disputes agreeably without resorting to fighting. Moreover, war may arise because of commitment problems: if two countries wish to settle a dispute via peaceful means, but each wishes to go back on the terms of that settlement, they may have no choice but to resort to warfare. Finally, war may result from issue indivisibilities.[99] Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reduce greenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations.[100] Main article: Evolutionary game theory [Table: the hawk-dove game. Each animal plays Hawk or Dove; payoffs (row, column): Hawk/Hawk = 20, 20; Hawk/Dove = 80, 40; Dove/Hawk = 40, 80; Dove/Dove = 60, 60.] Unlike those in economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best-known equilibrium in biology is known as the evolutionarily stable strategy (ESS), first introduced in (Maynard Smith & Price 1973). Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium. In biology, game theory has been used as a model to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratios. Fisher (1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren. Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication.[101] The analysis of signaling games and other communication games has provided insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion (see Paul Ormerod's Butterfly Economics). Biologists have used the game of chicken to analyze fighting behavior and territoriality.[102] According to Maynard Smith, in the preface to Evolution and the Theory of Games, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.[103] One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself.
This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival.[104] All of these actions increase the overall fitness of a group, but occur at a cost to the individual. Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary rationale behind this selection with the equation c < b × r, where the cost c to the altruist must be less than the benefit b to the recipient multiplied by the coefficient of relatedness r. The more closely related two organisms are, the higher the incidence of altruism, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through survival of its offspring, can forgo the option of having offspring itself because the same number of alleles are passed on. For example, helping a sibling (in diploid animals) has a coefficient of 1/2, because (on average) an individual shares half of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring.[104] The coefficient values depend heavily on the scope of the playing field; for example, if the choice of whom to favor includes all genetic living things rather than just all relatives, and the discrepancy between all humans is assumed to account for only approximately 1% of the diversity in the playing field, then a coefficient that was 1/2 in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g., epigenetics, religion, science, etc.) is considered to persist through time, the playing field becomes larger still, and the discrepancies smaller. Computer science and logic Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.[105] Separately, game theory has played a role in online algorithms; in particular, the k-server problem, which has in the past been referred to as games with moving costs and request-answer games.[106] Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, especially online algorithms. The emergence of the Internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory[107] and within it algorithmic mechanism design[108] combine computational algorithm design and analysis of complex systems with economic theory.[109][110][111] [Table: the stag hunt. Each player hunts Stag or Hare; payoffs (row, column): Stag/Stag = 3, 3; Stag/Hare = 0, 2; Hare/Stag = 2, 0; Hare/Hare = 2, 2.] Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O.
Quine (1960, 1967), Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis.[112][113] Following Lewis's (1969) game-theoretic account of conventions, Edna Ullmann-Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.[114][115] Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from the interactions of agents. Philosophers who have worked in this area include Bicchieri (1989, 1993),[116][117] Skyrms (1990),[118] and Stalnaker (1999).[119] In ethics, some authors (most notably David Gauthier, Gregory Kavka, and Jean Hampton) have attempted to pursue Thomas Hobbes' project of deriving morality from self-interest. Since games like the prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986) and Kavka (1986)).[lower-alpha 4] Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the prisoner's dilemma, stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1998)). Retail and consumer product pricing Game theory applications are used heavily in the pricing strategies of retail and consumer markets, particularly for the sale of inelastic goods. With retailers constantly competing against one another for consumer market share, it has become a fairly common practice for retailers to discount certain goods, intermittently, in the hopes of increasing foot traffic in brick-and-mortar locations (website visits for e-commerce retailers) or increasing sales of ancillary or complementary products.[120] Black Friday, a popular shopping holiday in the US, is when many retailers focus on optimal pricing strategies to capture the holiday shopping market. In the Black Friday scenario, retailers using game theory applications typically ask "what is the dominant competitor's reaction to me?"[121] In such a scenario, the game has two players: the retailer and the consumer. The retailer is focused on an optimal pricing strategy, while the consumer is focused on the best deal. In this closed system, there often is no dominant strategy as both players have alternative options. That is, retailers can find a different customer, and consumers can shop at a different retailer.[121] Given the market competition that day, however, the dominant strategy for retailers lies in outperforming competitors.
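The Black Friday discussion above can be caricatured as a 2×2 game between two competing retailers who each decide whether to discount. The payoff numbers in the sketch below are entirely hypothetical, chosen only to encode the qualitative story that discounting wins traffic when the rival holds price, while mutual discounting erodes margins.

```python
# Hypothetical two-retailer Black Friday game; the payoffs are invented for illustration.
# Each entry maps (A's move, B's move) -> (A's profit, B's profit) in arbitrary units.
actions = ['discount', 'hold price']
payoffs = {
    ('discount', 'discount'):     (4, 4),
    ('discount', 'hold price'):   (9, 2),
    ('hold price', 'discount'):   (2, 9),
    ('hold price', 'hold price'): (6, 6),
}

def dominant_strategy_for_A():
    """Return A's strictly dominant move, if one exists."""
    for move in actions:
        others = [m for m in actions if m != move]
        if all(payoffs[(move, b)][0] > payoffs[(alt, b)][0]
               for b in actions for alt in others):
            return move
    return None

print(dominant_strategy_for_A())  # 'discount': it beats holding price whatever B does
```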
In popular culture

Based on the 1998 book by Sylvia Nasar,[123] the life story of game theorist and mathematician John Nash was turned into the 2001 biopic A Beautiful Mind, starring Russell Crowe as Nash.[124] The 1959 military science fiction novel Starship Troopers by Robert A. Heinlein mentioned "games theory" and "theory of games".[125] In the 1997 film of the same name, the character Carl Jenkins referred to his military intelligence assignment as being assigned to "games and theory". The 1964 film Dr. Strangelove satirizes game theoretic ideas about deterrence theory. For example, nuclear deterrence depends on the threat to retaliate catastrophically if a nuclear attack is detected. A game theorist might argue that such threats can fail to be credible, in the sense that they can lead to subgame imperfect equilibria. The movie takes this idea one step further, with the Soviet Union irrevocably committing to a catastrophic nuclear response without making the threat public.[126] The 1980s power pop band Game Theory was founded by singer/songwriter Scott Miller, who described the band's name as alluding to "the study of calculating the most appropriate action given an adversary... to give yourself the minimum amount of failure."[127] Liar Game, a 2005 Japanese manga and 2007 television series, presents the main characters in each episode with a game or problem that is typically drawn from game theory, as demonstrated by the strategies applied by the characters. The 1974 novel Spy Story by Len Deighton explores elements of game theory in regard to Cold War army exercises. The 2008 novel The Dark Forest by Liu Cixin explores the relationship between extraterrestrial life, humanity, and game theory.

See also

Chainstore paradox
Glossary of game theory
Intra-household bargaining
Kingmaker scenario
Outline of artificial intelligence
Parrondo's paradox
Quantum refereed game
Self-confirming equilibrium
List of cognitive biases
List of emerging technologies
List of games in game theory

Notes

↑ Although common knowledge was first discussed by the philosopher David Lewis in his dissertation (and later book) Convention in the late 1960s, it was not widely considered by economists until Robert Aumann's work in the 1970s.
↑ Experimental work in game theory goes by many names; experimental economics, behavioral economics, and behavioural game theory are several.[60]
↑ At JEL:C7 of the Journal of Economic Literature classification codes.
↑ For a more detailed discussion of the use of game theory in ethics, see the Stanford Encyclopedia of Philosophy's entry game theory and ethics. 1 2 Myerson, Roger B. (1991). Game Theory: Analysis of Conflict, Harvard University Press, p. 1. Chapter-preview links, pp. vii–xi. ↑ Bellhouse, David R. (2007), "The Problem of Waldegrave" (PDF), Journal Électronique d'Histoire des Probabilités et de la Statistique [Electronic Journal of Probability History and Statistics], 3 (2) ↑ Bellhouse, David R. (2015). "Le Her and Other Problems in Probability Discussed by Bernoulli, Montmort and Waldegrave". Statistical Science. Institute of Mathematical Statistics. 30 (1): 26–39. arXiv:1504.01950. Bibcode:2015arXiv150401950B. doi:10.1214/14-STS469. S2CID 59066805. ↑ Zermelo, Ernst (1913). Hobson, E. W.; Love, A. E. H. (eds.). Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels [On an Application of Set Theory to the Theory of the Game of Chess] (PDF). Proceedings of the Fifth International Congress of Mathematicians (1912) (in German). Cambridge: Cambridge University Press. pp. 501–504. Archived from the original (PDF) on 23 October 2015. Retrieved 29 August 2019. ↑ Kim, Sungwook, ed. (2014). Game theory applications in network design. IGI Global. p. 3. ISBN 9781466660519. ↑ Neumann, John von (1928). "Zur Theorie der Gesellschaftsspiele" [On the Theory of Games of Strategy]. Mathematische Annalen [Mathematical Annals] (in German). 100 (1): 295–320. doi:10.1007/BF01448847. S2CID 122961988. ↑ Neumann, John von (1959). "On the Theory of Games of Strategy". In Tucker, A. W.; Luce, R. D. (eds.). Contributions to the Theory of Games. 4. pp. 13–42. ISBN 0691079374. ↑ Mirowski, Philip (1992). "What Were von Neumann and Morgenstern Trying to Accomplish?". In Weintraub, E. Roy (ed.). Toward a History of Game Theory. Durham: Duke University Press. pp. 113–147. ISBN 978-0-8223-1253-6. ↑ Leonard, Robert (2010), Von Neumann, Morgenstern, and the Creation of Game Theory, New York: Cambridge University Press, doi:10.1017/CBO9780511778278, ISBN 9780521562669 ↑ Kuhn, Steven (4 September 1997). Zalta, Edward N. (ed.). "Prisoner's Dilemma". Stanford Encyclopedia of Philosophy. Stanford University. Retrieved 3 January 2013. ↑ Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media. p. 1104. ISBN 978-1-57955-008-0. ↑ Shor, Mike. "Non-Cooperative Game". GameTheory.net. Retrieved 15 September 2016. ↑ Chandrasekaran, Ramaswamy. "Cooperative Game Theory" (PDF). University of Texas at Dallas. ↑ Brandenburger, Adam. "Cooperative Game Theory: Characteristic Functions, Allocations, Marginal Contribution" (PDF). Archived from the original (PDF) on 27 May 2016. ↑ Owen, Guillermo (1995). Game Theory: Third Edition. Bingley: Emerald Group Publishing. p. 11. ISBN 978-0-12-531151-9. ↑ Ferguson, Thomas S. "Game Theory" (PDF). UCLA Department of Mathematics. pp. 56–57. ↑ "Complete vs Perfect information in Combinatorial Game Theory". Stack Exchange. 24 June 2014. ↑ Mycielski, Jan (1992). "Games with Perfect Information". Handbook of Game Theory with Economic Applications. 1. pp. 41–70. doi:10.1016/S1574-0005(05)80006-2. ISBN 978-0-4448-8098-7. ↑ "Infinite Chess". PBS Infinite Series. 2 March 2017. Perfect information defined at 0:25, with academic sources arXiv:1302.4377 and arXiv:1510.08155. ↑ Owen, Guillermo (1995). Game Theory: Third Edition. Bingley: Emerald Group Publishing. p. 4. ISBN 978-0-12-531151-9. ↑ Shoham & Leyton-Brown (2008), p. 60. 1 2 Jörg Bewersdorff (2005). "31". 
Luck, logic, and white lies: the mathematics of games. A K Peters, Ltd. pp. ix–xii. ISBN 978-1-56881-210-6. ↑ Albert, Michael H.; Nowakowski, Richard J.; Wolfe, David (2007), Lessons in Play: In Introduction to Combinatorial Game Theory, A K Peters Ltd, pp. 3–4, ISBN 978-1-56881-277-9 ↑ Beck, József (2008). Combinatorial Games: Tic-Tac-Toe Theory. Cambridge University Press. pp. 1–3. ISBN 978-0-521-46100-9. ↑ Hearn, Robert A.; Demaine, Erik D. (2009), Games, Puzzles, and Computation, A K Peters, Ltd., ISBN 978-1-56881-322-6 ↑ Jones, M. Tim (2008). Artificial Intelligence: A Systems Approach. Jones & Bartlett Learning. pp. 106–118. ISBN 978-0-7637-7337-3. ↑ Petrosjan, L. A.; Murzov, N. V. (1966). "Game-theoretic problems of mechanics". Litovsk. Mat. Sb. (in Russian). 6: 423–433. ↑ Newton, Jonathan (2018). "Evolutionary Game Theory: A Renaissance". Games. 9 (2): 31. doi:10.3390/g9020031. ↑ Webb (2007). ↑ Lozovanu, D; Pickl, S (2015). A Game-Theoretical Approach to Markov Decision Processes, Stochastic Positional Games and Multicriteria Control Models. Springer, Cham. ISBN 978-3-319-11832-1. ↑ Osborne & Rubinstein (1994). 1 2 McMahan, Hugh Brendan (2006). "Robust Planning in Domains with Stochastic Outcomes, Adversaries, and Partial Observability" (PDF). Cmu-Cs-06-166: 3–4. ↑ Howard (1971). ↑ Wang, Wenliang (2015). Pooling Game Theory and Public Pension Plan. ISBN 978-1507658246. 1 2 Rasmusen, Eric (2007). Games and Information (4th ed.). ISBN 9781405136662. 1 2 Kreps, David M. (1990). Game Theory and Economic Modelling. 1 2 Aumann, Robert; Hart, Sergiu, eds. (1992). Handbook of Game Theory with Economic Applications. 1. pp. 1–733. 1 2 Aumann, Robert J.; Heifetz, Aviad (2002). "Chapter 43 Incomplete information". Handbook of Game Theory with Economic Applications Volume 3. Handbook of Game Theory with Economic Applications. 3. pp. 1665–1686. doi:10.1016/S1574-0005(02)03006-0. ISBN 9780444894281. ↑ Fudenberg & Tirole (1991), p. 67. sfnp error: no target: CITEREFFudenbergTirole1991 (help) ↑ Williams, Paul D. (2013). Security Studies: an Introduction (second ed.). Abingdon: Routledge. pp. 55–56. ↑ Tagiew, Rustam (3 May 2011). "If more than Analytical Modeling is Needed to Predict Real Agents' Strategic Interaction". arXiv:1105.0558 [cs.GT]. ↑ Rosenthal, Robert W. (December 1973). "A class of games possessing pure-strategy Nash equilibria". International Journal of Game Theory. 2 (1): 65–67. doi:10.1007/BF01737559. S2CID 121904640. ↑ Koller, Daphne; Megiddo, Nimrod; von Stengel, Bernhard (1994). "Fast algorithms for finding randomized strategies in game trees". STOC '94: Proceedings of the Twenty-sixth Annual ACM Symposium on Theory of Computing: 750–759. doi:10.1145/195058.195451. ISBN 0897916638. S2CID 1893272. ↑ Alur, Rajeev; Dill, David L. (April 1994). "A theory of timed automata". Theoretical Computer Science. 126 (2): 183–235. doi:10.1016/0304-3975(94)90010-8. ↑ Tomlin, C.J.; Lygeros, J.; Shankar Sastry, S. (July 2000). "A game theoretic approach to controller design for hybrid systems". Proceedings of the IEEE. 88 (7): 949–970. doi:10.1109/5.871303. S2CID 1844682. ↑ Koller, Daphne; Pfeffer, Avi (1997). "Representations and solutions for game-theoretic problems" (PDF). Artificial Intelligence. 94 (1–2): 167–215. doi:10.1016/S0004-3702(97)00023-4. ↑ Leyton-Brown, Kevin; Tennenholtz, Moshe (2003). "Local-effect games". IJCAI'03: Proceedings of the 18th International Joint Conference on Artificial Intelligence. ↑ Genesereth, Michael; Love, Nathaniel; Pell, Barney (15 June 2005). 
"General Game Playing: Overview of the AAAI Competition". AI Magazine. 26 (2): 62. doi:10.1609/aimag.v26i2.1813. ISSN 2371-9621. ↑ Clempner, Julio (2006). "Modeling shortest path games with Petri nets: a Lyapunov based theory". International Journal of Applied Mathematics and Computer Science. 16 (3): 387–397. ISSN 1641-876X. ↑ Sannikov, Yuliy (September 2007). "Games with Imperfectly Observable Actions in Continuous Time" (PDF). Econometrica. 75 (5): 1285–1329. doi:10.1111/j.1468-0262.2007.00795.x. ↑ Tagiew, Rustam (December 2008). "Multi-Agent Petri-Games". 2008 International Conference on Computational Intelligence for Modelling Control Automation: 130–135. doi:10.1109/CIMCA.2008.15. ISBN 978-0-7695-3514-2. S2CID 16679934. ↑ Tagiew, Rustam (2009). "On Multi-agent Petri Net Models for Computing Extensive Finite Games". New Challenges in Computational Collective Intelligence. Studies in Computational Intelligence. Springer. 244: 243–254. doi:10.1007/978-3-642-03958-4_21. ISBN 978-3-642-03957-7. ↑ Bhat, Navin; Leyton-Brown, Kevin (11 July 2012). "Computing Nash Equilibria of Action-Graph Games". arXiv:1207.4128 [cs.GT]. ↑ Kearns, Michael; Littman, Michael L.; Singh, Satinder (7 March 2015). "Graphical Models for Game Theory". arXiv:1301.2281 [cs.GT]. ↑ Friedman, Daniel (1998). "On economic applications of evolutionary game theory" (PDF). Journal of Evolutionary Economics. 8: 14–53. 1 2 Camerer, Colin F. (2003). "1.1 What Is Game Theory Good For?". Behavioral Game Theory: Experiments in Strategic Interaction. pp. 5–7. Archived from the original on 14 May 2011. ↑ Ross, Don (10 March 2006). "Game Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Stanford University. Retrieved 21 August 2008. ↑ Velegol, Darrell; Suhey, Paul; Connolly, John; Morrissey, Natalie; Cook, Laura (14 September 2018). "Chemical Game Theory". Industrial & Engineering Chemistry Research. 57 (41): 13593–13607. doi:10.1021/acs.iecr.8b03835. ISSN 0888-5885. Camerer, Colin F. (2003). "Introduction". Behavioral Game Theory: Experiments in Strategic Interaction. pp. 1–25. Archived from the original on 14 May 2011. ↑ Aumann, Robert J. (2008). "game theory". The New Palgrave Dictionary of Economics (2nd ed.). Archived from the original on 15 May 2011. Retrieved 22 August 2011. ↑ Shubik, Martin (1981). Arrow, Kenneth; Intriligator, Michael (eds.). Game Theory Models and Methods in Political Economy. Handbook of Mathematical Economics, v. 1. 1. pp. 285–330. doi:10.1016/S1573-4382(81)01011-4. ↑ Carl Shapiro (1989). "The Theory of Business Strategy," RAND Journal of Economics, 20(1), pp. 125–137 JSTOR 2555656. ↑ N. Agarwal and P. Zeephongsekul. Psychological Pricing in Mergers & Acquisitions using Game Theory, School of Mathematics and Geospatial Sciences, RMIT University, Melbourne ↑ Leigh Tesfatsion (2006). "Agent-Based Computational Economics: A Constructive Approach to Economic Theory," ch. 16, Handbook of Computational Economics, v. 2, pp. 831–880 doi:10.1016/S1574-0021(05)02016-2. ↑ Joseph Y. Halpern (2008). "computer science and game theory". The New Palgrave Dictionary of Economics. ↑ Myerson, Roger B. (2008). "mechanism design". The New Palgrave Dictionary of Economics. Archived from the original on 23 November 2011. Retrieved 4 August 2011. ↑ Myerson, Roger B. (2008). "revelation principle". The New Palgrave Dictionary of Economics. ↑ Sandholm, Tuomas (2008). "computing in mechanism design". The New Palgrave Dictionary of Economics. Archived from the original on 23 November 2011. 
Retrieved 5 December 2011. ↑ Nisan, Noam; Ronen, Amir (2001). "Algorithmic Mechanism Design" (PDF). Games and Economic Behavior. 35 (1–2): 166–196. doi:10.1006/game.1999.0790. ↑ Nisan, Noam; et al., eds. (2007). Algorithmic Game Theory. Cambridge University Press. Archived from the original on 5 May 2012. ↑ Brams, Steven J. (1994). Chapter 30 Voting procedures. Handbook of Game Theory with Economic Applications. 2. pp. 1055–1089. doi:10.1016/S1574-0005(05)80062-1. ISBN 9780444894274. and Moulin, Hervé (1994). Chapter 31 Social choice. Handbook of Game Theory with Economic Applications. 2. pp. 1091–1125. doi:10.1016/S1574-0005(05)80063-3. ISBN 9780444894274. ↑ Vernon L. Smith, 1992. "Game Theory and Experimental Economics: Beginnings and Early Influences," in E. R. Weintraub, ed., Towards a History of Game Theory, pp. 241–282 ↑ Smith, V.L. (2001). "Experimental Economics". International Encyclopedia of the Social & Behavioral Sciences. pp. 5100–5108. doi:10.1016/B0-08-043076-7/02232-4. ISBN 9780080430768. ↑ Handbook of Experimental Economics Results. ↑ Vincent P. Crawford (1997). "Theory and Experiment in the Analysis of Strategic Interaction," in Advances in Economics and Econometrics: Theory and Applications, pp. 206–242. Cambridge. Reprinted in Colin F. Camerer et al., ed. (2003). Advances in Behavioral Economics, Princeton. 1986–2003 papers. Description, preview, Princeton, ch. 12 ↑ Shubik, Martin (2002). "Chapter 62 Game theory and experimental gaming". Handbook of Game Theory with Economic Applications Volume 3. Handbook of Game Theory with Economic Applications. 3. pp. 2327–2351. doi:10.1016/S1574-0005(02)03025-4. ISBN 9780444894281. ↑ The New Palgrave Dictionary of Economics. 2008. Faruk Gul. "behavioural economics and game theory." Abstract. ↑ Camerer, Colin F. (2008). "behavioral game theory". The New Palgrave Dictionary of Economics. Archived from the original on 23 November 2011. Retrieved 4 August 2011. ↑ Camerer, Colin F. (1997). "Progress in Behavioral Game Theory" (PDF). Journal of Economic Perspectives. 11 (4): 172. doi:10.1257/jep.11.4.167. ↑ Camerer, Colin F. (2003). Behavioral Game Theory. Princeton. Description Archived 14 May 2011 at the Wayback Machine, preview ([ctrl]+), and ch. 1 link. ↑ Camerer, Colin F. (2003). Loewenstein, George; Rabin, Matthew (eds.). "Advances in Behavioral Economics". 1986–2003 Papers. Princeton. ISBN 1400829119. ↑ Fudenberg, Drew (2006). "Advancing Beyond Advances in Behavioral Economics". Journal of Economic Literature. 44 (3): 694–711. doi:10.1257/jel.44.3.694. JSTOR 30032349. ↑ Tirole, Jean (1988). The Theory of Industrial Organization. MIT Press. Description and chapter-preview links, pp. vii–ix, "General Organization," pp. 5–6, and "Non-Cooperative Game Theory: A User's Guide Manual,' " ch. 11, pp. 423–59. ↑ Kyle Bagwell and Asher Wolinsky (2002). "Game theory and Industrial Organization," ch. 49, Handbook of Game Theory with Economic Applications, v. 3, pp. 1851–1895. ↑ Martin Shubik (1959). Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games, Wiley. Description and review extract. ↑ Martin Shubik with Richard Levitan (1980). Market Structure and Behavior, Harvard University Press. Review extract. Archived 15 March 2010 at the Wayback Machine ↑ Martin Shubik (1981). "Game Theory Models and Methods in Political Economy," in Handbook of Mathematical Economics, v. 1, pp. 285–330 doi:10.1016/S1573-4382(81)01011-4. ↑ Martin Shubik (1987). A Game-Theoretic Approach to Political Economy. MIT Press. Description. 
Archived 29 June 2011 at the Wayback Machine ↑ Martin Shubik (1978). "Game Theory: Economic Applications," in W. Kruskal and J.M. Tanur, ed., International Encyclopedia of Statistics, v. 2, pp. 372–78. ↑ Robert Aumann and Sergiu Hart, ed. Handbook of Game Theory with Economic Applications (scrollable to chapter-outline or abstract links): :1992. v. 1; 1994. v. 2; 2002. v. 3. ↑ Christen, Markus (1 July 1998). "Game-theoretic model to examine the two tradeoffs in the acquisition of information for a careful balancing act". INSEAD. Archived from the original on 24 May 2013. Retrieved 1 July 2012. ↑ Chevalier-Roignant, Benoît; Trigeorgis, Lenos (15 February 2012). "Options Games: Balancing the trade-off between flexibility and commitment". The European Financial Review. Archived from the original on 20 June 2013. Retrieved 3 January 2013. 1 2 Piraveenan, Mahendra (2019). "Applications of Game Theory in Project Management: A Structured Review and Analysis". Mathematics. 7 (9): 858. doi:10.3390/math7090858. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License. ↑ Downs (1957). ↑ Brams, Steven J. (1 January 2001). "Game theory and the Cuban missile crisis". Plus Magazine. Retrieved 31 January 2016. ↑ Morrison, Andrew Stumpff (January 2013). "Yes, Law is the Command of the Sovereign". SSRN. doi:10.2139/ssrn.2371076. ↑ Levy, G.; Razin, R. (2004). "It Takes Two: An Explanation for the Democratic Peace". Journal of the European Economic Association. 2 (1): 1–29. doi:10.1162/154247604323015463. JSTOR 40004867. S2CID 12114936. ↑ Fearon, James D. (1 January 1995). "Rationalist Explanations for War". International Organization. 49 (3): 379–414. doi:10.1017/s0020818300033324. JSTOR 2706903. ↑ Wood, Peter John (2011). "Climate change and game theory" (PDF). Ecological Economics Review. 1219 (1): 153–70. Bibcode:2011NYASA1219..153W. doi:10.1111/j.1749-6632.2010.05891.x. hdl:1885/67270. PMID 21332497. S2CID 21381945. ↑ Harper & Maynard Smith (2003). ↑ Maynard Smith, John (1974). "The theory of games and the evolution of animal conflicts" (PDF). Journal of Theoretical Biology. 47 (1): 209–221. doi:10.1016/0022-5193(74)90110-6. PMID 4459582. ↑ Alexander, J. McKenzie (19 July 2009). "Evolutionary Game Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Stanford University. Retrieved 3 January 2013. 1 2 Okasha, Samir (3 June 2003). "Biological Altruism". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Stanford University. Retrieved 3 January 2013. ↑ Shoham, Yoav; Leyton-Brown, Kevin (15 December 2008). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press. ISBN 978-1-139-47524-2. ↑ Ben David et al. (1994). ↑ Nisan, Noam; Ronen, Amir (2001). "Algorithmic Mechanism Design" (PDF). Games and Economic Behavior. 35 (1–2): 166–196. CiteSeerX 10.1.1.21.1731. doi:10.1006/game.1999.0790. ↑ Halpern, Joseph Y. (2008). "Computer science and game theory". The New Palgrave Dictionary of Economics (2nd ed.). ↑ Shoham, Yoav (2008). "Computer Science and Game Theory" (PDF). Communications of the ACM. 51 (8): 75–79. CiteSeerX 10.1.1.314.2936. doi:10.1145/1378704.1378721. S2CID 2057889. Archived from the original (PDF) on 26 April 2012. Retrieved 28 November 2011. ↑ Littman, Amy; Littman, Michael L. (2007). "Introduction to the Special Issue on Learning and Computational Game Theory". Machine Learning. 67 (1–2): 3–6. doi:10.1007/s10994-007-0770-1. S2CID 22635389. 
↑ Skyrms (1996) ↑ Grim et al. (2004). ↑ Ullmann-Margalit, E. (1977), The Emergence of Norms, Oxford University Press, ISBN 978-0198244110 ↑ Bicchieri, Cristina (2006), The Grammar of Society: the Nature and Dynamics of Social Norms, Cambridge University Press, ISBN 978-0521573726 ↑ Bicchieri, Cristina (1989). "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge". Erkenntnis. 30 (1–2): 69–85. doi:10.1007/BF00184816. S2CID 120848181. ↑ Bicchieri, Cristina (1993), Rationality and Coordination, Cambridge University Press, ISBN 978-0-521-57444-0 ↑ Skyrms, Brian (1990), The Dynamics of Rational Deliberation, Harvard University Press, ISBN 978-0674218857 ↑ Bicchieri, Cristina; Jeffrey, Richard; Skyrms, Brian, eds. (1999), "Knowledge, Belief, and Counterfactual Reasoning in Games", The Logic of Strategy, New York: Oxford University Press, ISBN 978-0195117158 ↑ Kopalle; Shumsky. "Game Theory Models of Pricing" (PDF). Retrieved 10 January 2020. 1 2 3 "How e-Commerce Uses Game Theory to Capture Consumer Dollars : Networks Course blog for INFO 2040/CS 2850/Econ 2040/SOC 2090". Retrieved 11 January 2020. ↑ "Black Friday Games: Concurrent pricing wars for a competitive advantage". SFK Inc. | SKK Marine | SFK SecCon. 27 November 2018. Retrieved 11 January 2020. ↑ Nasar, Sylvia (1998) A Beautiful Mind, Simon & Schuster. ISBN 0-684-81906-6. ↑ Singh, Simon (14 June 1998) "Between Genius and Madness", New York Times. ↑ Heinlein, Robert A. (1959), Starship Troopers ↑ Dr. Strangelove Or How I Learned to Stop Worrying and Love the Bomb. 29 January 1964. 51 minutes in. ... is that the whole point of the doomsday machine is lost, if you keep it a secret! ↑ Guzman, Rafer (6 March 1996). "Star on hold: Faithful following, meager sales". Pacific Sun. Archived from the original on 6 November 2013. Retrieved 25 July 2018. . References and further reading Wikiquote has quotations related to: Game theory Wikimedia Commons has media related to Game theory. Textbooks and general references Aumann, Robert J (1987), "game theory", The New Palgrave: A Dictionary of Economics, 2, pp. 460–82 . Camerer, Colin (2003), "Introduction", Behavioral Game Theory: Experiments in Strategic Interaction, Russell Sage Foundation, pp. 1–25, ISBN 978-0-691-09039-9 , Description. Dutta, Prajit K. (1999), Strategies and games: theory and practice, MIT Press, ISBN 978-0-262-04169-0 . Suitable for undergraduate and business students. https://b-ok.org/book/2640653/e56341. Fernandez, L F.; Bierman, H S. (1998), Game theory with economic applications, Addison-Wesley, ISBN 978-0-201-84758-1 . Suitable for upper-level undergraduates. Gibbons, Robert D. (1992), Game theory for applied economists, Princeton University Press, ISBN 978-0-691-00395-5 . Suitable for advanced undergraduates. Published in Europe as Gibbons, Robert (2001), A Primer in Game Theory, London: Harvester Wheatsheaf, ISBN 978-0-7450-1159-2 . Gintis, Herbert (2000), Game theory evolving: a problem-centered introduction to modeling strategic behavior, Princeton University Press, ISBN 978-0-691-00943-8 Green, Jerry R.; Mas-Colell, Andreu; Whinston, Michael D. (1995), Microeconomic theory, Oxford University Press, ISBN 978-0-19-507340-9 . Presents game theory in formal way suitable for graduate level. Joseph E. Harrington (2008) Games, strategies, and decision making, Worth, ISBN 0-7167-6630-2. Textbook suitable for undergraduates in applied fields; numerous examples, fewer formalisms in concept presentation. 
Howard, Nigel (1971), Paradoxes of Rationality: Games, Metagames, and Political Behavior, Cambridge, MA: The MIT Press, ISBN 978-0-262-58237-7 Isaacs, Rufus (1999), Differential Games: A Mathematical Theory With Applications to Warfare and Pursuit, Control and Optimization, New York: Dover Publications, ISBN 978-0-486-40682-4 Maschler, Michael; Solan, Eilon; Zamir, Shmuel (2013), Game Theory, Cambridge University Press, ISBN 978-1108493451. Undergraduate textbook. Miller, James H. (2003), Game theory at work: how to use game theory to outthink and outmaneuver your competition, New York: McGraw-Hill, ISBN 978-0-07-140020-6 . Suitable for a general audience. Osborne, Martin J. (2004), An introduction to game theory, Oxford University Press, ISBN 978-0-19-512895-6 . Undergraduate textbook. Osborne, Martin J.; Rubinstein, Ariel (1994), A course in game theory, MIT Press, ISBN 978-0-262-65040-3 . A modern introduction at the graduate level. Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, New York: Cambridge University Press, ISBN 978-0-521-89943-7, retrieved 8 March 2016 Watson, Joel (2013), Strategy: An Introduction to Game Theory (3rd edition), New York: W.W. Norton and Co., ISBN 978-0-393-91838-0 . A leading textbook at the advanced undergraduate level. McCain, Roger A. (2010), Roger McCain's Game Theory: A Nontechnical Introduction to the Analysis of Strategy (Revised ed.), ISBN 9789814289658 Webb, James N. (2007), Game theory: decisions, interaction and evolution, Undergraduate mathematics, Springer, ISBN 978-1-84628-423-6 Consistent treatment of game types usually claimed by different applied fields, e.g. Markov decision processes. Historically important texts Aumann, R. J.; Shapley, L. S. (1974), Values of Non-Atomic Games, Princeton University Press Cournot, A. Augustin (1838), "Recherches sur les principles mathematiques de la théorie des richesses", Libraire des Sciences Politiques et Sociales Edgeworth, Francis Y. (1881), Mathematical Psychics, London: Kegan Paul Farquharson, Robin (1969), Theory of Voting, Blackwell (Yale U.P. in the U.S.), ISBN 978-0-631-12460-3 Luce, R. Duncan; Raiffa, Howard (1957), Games and decisions: introduction and critical survey, New York: Wiley reprinted edition: R. Duncan Luce; Howard Raiffa (1989), Games and decisions: introduction and critical survey, New York: Dover Publications, ISBN 978-0-486-65943-5 CS1 maint: multiple names: authors list (link) Maynard Smith, John (1982), Evolution and the theory of games, Cambridge University Press, ISBN 978-0-521-28884-2 Maynard Smith, John; Price, George R. (1973), "The logic of animal conflict", Nature, 246 (5427): 15–18, Bibcode:1973Natur.246...15S, doi:10.1038/246015a0, S2CID 4224989 Nash, John (1950), "Equilibrium points in n-person games", Proceedings of the National Academy of Sciences of the United States of America, 36 (1): 48–49, Bibcode:1950PNAS...36...48N, doi:10.1073/pnas.36.1.48, PMC 1063129, PMID 16588946 Shapley, L.S. (1953), A Value for n-person Games, In: Contributions to the Theory of Games volume II, H. W. Kuhn and A. W. Tucker (eds.) Shapley, L.S. (1953), Stochastic Games, Proceedings of National Academy of Science Vol. 39, pp. 1095–1100. von Neumann, John (1928), "Zur Theorie der Gesellschaftsspiele", Mathematische Annalen, 100 (1): 295–320, doi:10.1007/bf01448847, S2CID 122961988 English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 
4, p. 42. Princeton University Press. von Neumann, John; Morgenstern, Oskar (1944), "Theory of games and economic behavior", Nature, Princeton University Press, 157 (3981): 172, Bibcode:1946Natur.157..172R, doi:10.1038/157172a0, S2CID 29754824 Zermelo, Ernst (1913), "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels", Proceedings of the Fifth International Congress of Mathematicians, 2: 501–4 Other print references Ben David, S.; Borodin, Allan; Karp, Richard; Tardos, G.; Wigderson, A. (1994), "On the Power of Randomization in On-line Algorithms" (PDF), Algorithmica, 11 (1): 2–14, doi:10.1007/BF01294260, S2CID 26771869 Downs, Anthony (1957), An Economic theory of Democracy, New York: Harper Gauthier, David (1986), Morals by agreement, Oxford University Press, ISBN 978-0-19-824992-4 Allan Gibbard, "Manipulation of voting schemes: a general result", Econometrica, Vol. 41, No. 4 (1973), pp. 587–601. Grim, Patrick; Kokalis, Trina; Alai-Tafti, Ali; Kilb, Nicholas; St Denis, Paul (2004), "Making meaning happen", Journal of Experimental & Theoretical Artificial Intelligence, 16 (4): 209–243, doi:10.1080/09528130412331294715, S2CID 5737352 Harper, David; Maynard Smith, John (2003), Animal signals, Oxford University Press, ISBN 978-0-19-852685-8 Lewis, David (1969), Convention: A Philosophical Study , ISBN 978-0-631-23257-5 (2002 edition) McDonald, John (1950–1996), Strategy in Poker, Business & War, W. W. Norton, ISBN 978-0-393-31457-1 . A layman's introduction. Papayoanou, Paul (2010), Game Theory for Business: A Primer in Strategic Gaming, Probabilistic, ISBN 978-0964793873 . Quine, W.v.O (1967), "Truth by Convention", Philosophica Essays for A.N. Whitehead, Russel and Russel Publishers, ISBN 978-0-8462-0970-6 Quine, W.v.O (1960), "Carnap and Logical Truth", Synthese, 12 (4): 350–374, doi:10.1007/BF00485423, S2CID 46979744 Satterthwaite, Mark A. (April 1975), "Strategy-proofness and Arrow's Conditions: Existence and Correspondence Theorems for Voting Procedures and Social Welfare Functions" (PDF), Journal of Economic Theory, 10 (2): 187–217, doi:10.1016/0022-0531(75)90050-2 Siegfried, Tom (2006), A Beautiful Math, Joseph Henry Press, ISBN 978-0-309-10192-9 Skyrms, Brian (1990), The Dynamics of Rational Deliberation, Harvard University Press, ISBN 978-0-674-21885-7 Skyrms, Brian (1996), Evolution of the social contract, Cambridge University Press, ISBN 978-0-521-55583-8 Skyrms, Brian (2004), The stag hunt and the evolution of social structure, Cambridge University Press, ISBN 978-0-521-53392-8 Sober, Elliott; Wilson, David Sloan (1998), Unto others: the evolution and psychology of unselfish behavior, Harvard University Press, ISBN 978-0-674-93047-6 Thrall, Robert M.; Lucas, William F. (1963), " n {\displaystyle n} -person games in partition function form", Naval Research Logistics Quarterly, 10 (4): 281–298, doi:10.1002/nav.3800100126 Dolev, Shlomi; Panagopoulou, Panagiota; Rabie, Mikael; Schiller, Elad Michael; Spirakis, Paul (2011), "Rationality authority for provable rational behavior", Proceedings of the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computing, pp. 289–290, doi:10.1145/1993806.1993858, ISBN 9781450307192, S2CID 8974307 Chastain, E. (2014), "Algorithms, games, and evolution", Proceedings of the National Academy of Sciences, 111 (29): 10620–10623, Bibcode:2014PNAS..11110620C, doi:10.1073/pnas.1406556111, PMC 4115542, PMID 24979793 Look up game theory in Wiktionary, the free dictionary. 
External links

James Miller (2015): Introductory Game Theory Videos.
"Games, theory of", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Paul Walker: History of Game Theory Page.
David Levine: Game Theory. Papers, Lecture Notes and much more stuff.
Alvin Roth: "Game Theory and Experimental Economics page". Archived from the original on 15 August 2000. Retrieved 13 September 2003. Comprehensive list of links to game theory information on the Web.
Adam Kalai: Game Theory and Computer Science. Lecture notes on Game Theory and Computer Science.
Mike Shor: GameTheory.net. Lecture notes, interactive illustrations and other information.
Jim Ratliff's Graduate Course in Game Theory (lecture notes).
Don Ross: Review of Game Theory in the Stanford Encyclopedia of Philosophy.
Bruno Verbeek and Christopher Morris: Game Theory and Ethics.
Elmer G. Wiens: Game Theory. Introduction, worked examples, play online two-person zero-sum games.
Marek M. Kaminski: Game Theory and Politics. Syllabuses and lecture notes for game theory and political science.
Websites on game theory and social interactions
Kesten Green's Conflict Forecasting at the Wayback Machine (archived 11 April 2011). See Papers for evidence on the accuracy of forecasts from game theory and other methods.
McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2007) Gambit: Software Tools for Game Theory.
Benjamin Polak: Open Course on Game Theory at Yale (videos of the course).
Benjamin Moritz, Bernhard Könsgen, Danny Bures, Ronni Wiersch (2007) Spieltheorie-Software.de: An application for Game Theory implemented in JAVA.
Antonin Kucera: Stochastic Two-Player Games.
Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4), Many person game theory; What is Mathematical Game Theory (#5), Finale, summing up, and my own view.
CommonCrawl
July 2018, 8:119

Kinetic study of lead (Pb2+) removal from battery manufacturing wastewater using bagasse biochar as biosorbent

Sushil Kumar Bharti, Narendra Kumar

Bagasse, an agricultural waste, was employed to investigate its lead (Pb2+) removal potential from wastewater of a battery manufacturing industry. To maximize the removal efficacy of the bagasse, it was thermally modified into biochar. Adsorption kinetics and mechanism, including the effects of various parameters (contact time, dose and pH), were studied employing the biochar prepared from bagasse waste. The optimum adsorption occurred at pH 5 with 140 min of contact time, utilizing 5 g of adsorbent dosage at room temperature (25 ± 3 °C). The maximum adsorption capacity was recorded as 12.741 mg g−1, corresponding to 75.376% removal relative to the initial concentration in the effluent, at the optimum pH of 5. The results showed that the Langmuir isotherm gave the most suitable fit, indicating monolayer and homogeneous adsorption of Pb2+. The kinetics involved in the process was observed to be pseudo-second-order, which indicates chemisorption as the major phenomenon involved. The adsorbent biochar was characterized by SEM, EDX and FTIR analyses, which showed a porous and rough surface and identified the organic functional groups present, both favoring the adsorption process. The functional groups identified by the FTIR analysis demonstrated involvement of carboxyl groups in Pb2+ binding. Post-adsorption elution of the metal-loaded bagasse was executed with 0.1 M HNO3, achieving about 90% regeneration.

Keywords: Biosorption, Biochar, Isotherms, Kinetics, Desorption, Water pollution, Heavy metals

Introduction

Industrialization and urbanization have pushed the limits of sustaining a healthy and peaceful life on the Earth (Li et al. 2017). Among their consequences, aquatic pollution caused by the release of various heavy metals (HMs) from different anthropogenic activities is one of the major environmental concerns (Amuda et al. 2007; Ajenifuja et al. 2017). These HMs tend to bioaccumulate, are non-biodegradable and may contaminate the food chain, threatening the existence of living beings (Rangabhashiyam and Selvaraju 2015; Poonam and Kumar 2018). Lead (Pb2+) is one of the most toxic HMs, as even low concentrations in drinking water may cause serious problems to the nervous and reproductive systems, kidney, liver, brain and bony tissues (Renner 2010; Andrade et al. 2015). The concentration levels recommended by the Environmental Protection Agency (EPA) are 0.015 and 0.05 mg l−1 in drinking water and wastewater, respectively (Salmani et al. 2017; Gil et al. 2018). The main sources of Pb2+ pollution are battery manufacturing industries, shooting ranges, paint industries, mining processes, chemical manufacturers, surfactants, preservatives, etc. (Xu et al. 2013; Liu et al. 2016; Laidlaw et al. 2017; Mehmood et al. 2017). The removal of Pb2+ from wastewater is accomplished by various conventional techniques like precipitation with hydroxide ion or lime, ion exchange, coagulation, electrochemical processes, reverse osmosis and ion flotation (Wang and Chen 2009; Kong et al. 2014; Ehrampoush et al. 2015). However, these methods are expensive and produce secondary wastes such as persistent organic pollutants and volatile organic compounds (Salmani et al. 2017; Banerjee et al. 2012). Therefore, one of the most effective and low-cost techniques, adsorption, has attracted researchers for the removal of HMs from water bodies (Trans et al. 2010; Uçar et al. 2015).
However, modification of adsorbents for better performance may be a restricting factor in view of the consumption of energy and chemicals (Gil et al. 2018). Available studies to date suggest the use of readily available and cheaper materials, for example agricultural wastes, industrial wastes, household wastes, algae and fungi, for the removal of Pb2+ from water (Saka et al. 2011; Taha et al. 2011; Ibrahim et al. 2012; Reddy et al. 2014; Cheraghi et al. 2015). Various researchers have used the by-product of sugarcane, i.e., bagasse, for the adsorption of different pollutants including heavy metals and dyes (Joseph et al. 2009; Saad et al. 2010; Said et al. 2013; Gardare et al. 2015; Abdelhafez and Li 2016; Tahir et al. 2016). It is an easily available waste which can be collected from different sugar mills and juice shops. In the present study, bagasse was used as an adsorbent for the removal of Pb2+ from battery manufacturing industry effluent. The adsorbent was thermally modified into biochar to enhance its efficiency, as thermal treatment increases the surface area and porosity of the adsorbent (Doke and Khan 2017). Adsorption is a surface phenomenon which depends on a number of parameters like pH, contact time, adsorbent dose, initial metal concentration, pore size, temperature and surface area (Buasri et al. 2012; Nguyen et al. 2013; Salmani et al. 2017). The goal of the present study is to assess the effectiveness and performance of bagasse biochar for the removal of Pb2+ from battery manufacturing industry effluent. Therefore, the effects of contact time and adsorbent dose at room temperature (25 ± 3 °C) and at pH 5 (optimized) were investigated, and the kinetics and isotherms involved in the process were determined. The experiments were performed using a batch technique, and the adsorption capacity was calculated at each step for the optimization of the parameters.

Materials and methods

Wastewater was collected from the effluent outlet of a battery manufacturing industry situated near Aishbagh Park, Lucknow, UP. The samples were collected in 10-l sampling containers during the winter season (January 2016) to minimize the influence of microbial activity on the physicochemical properties of the wastewater, which were examined following the methods of APHA (2005).

Adsorbent preparation

Bagasse was collected from sugar juice shops of Rajnikhand, Lucknow. After collection, it was brought to the laboratory and washed thoroughly, first with tap water and then with deionized water, to remove dust and unwanted objects. Lastly, the bagasse was air-dried for 2 weeks to remove the moisture content. It was then subjected to pyrolysis at a temperature of about 300 ± 10 °C for 2.5 h. After cooling overnight, the biochar was washed thoroughly with deionized water to remove unwanted ash contents. Thereafter, it was dried in an oven and stored under desiccated conditions in airtight containers.

Characterization of the adsorbent

All of the chemical reagents used in the present study were of analytical reagent (AR) grade from E. Merck, Darmstadt, Germany. The adsorbent was characterized by atomic absorption spectrometer (AAS), Fourier transform infrared spectroscopy (FTIR), scanning electron microscope (SEM) and energy-dispersive X-ray analysis (EDX) studies. The concentration of Pb2+ was determined by AAS (Varian AA240FS). The functional groups present in the biochar before and after treating the wastewater were determined by FTIR (NicoletTM 6700).
Surface morphology was observed by SEM (JSM-6490LV, manufactured by JEOL, Japan) micrographs. Further, elemental composition was analyzed by EDX (model no. JSM-6490LV, designed by JEOL, Japan), and elemental composition (C, H, N and S %) was analyzed by a CHNS analyzer (model no. Flash EA112 Series, manufactured by Thermo Finnegan). Surface area and pore size of the bagasse biochar (before and after treatment) were characterized with a Quanta Chrome Nova-1000 surface analyzer under liquid N2 temperature. Further, adsorption–desorption studies were performed in order to determine the evolution of porosity and textural properties, and the surface area was obtained by the Brunauer–Emmett–Teller (BET) method. The Barrett–Joyner–Halenda (BJH) method was used to evaluate pore diameter and volume, and the de Boer t-method for the newly generated micropore volume measurement (Venkatesha et al. 2016). The structural integrity of the sample was observed by powder X-ray diffraction (XRD). The data were recorded by step scanning at 2θ = 0.02°/s from 3° to 80° on a PANalytical X'Pert PRO MPD X-ray diffractometer with graphite-monochromatized Cu Kα radiation (λ = 0.15406 nm).

Quantitative evaluation of surface acidic functional groups on the adsorbent

The acidic functional groups present on the surface of the biochar were determined following the Boehm titration method (Boehm et al. 1964; Oickle et al. 2010). 0.05 M NaHCO3, Na2CO3 and NaOH were used as bases in this method. 0.5 g of adsorbent was added to 50 ml of each of the three base solutions. After that, the samples, along with blanks, were shaken for 24 h at 120 rpm and then filtered to remove suspended particles. Thereafter, 20 ml of filtrate from each was titrated with 0.05 M HCl to neutralize the base completely, followed by back-titration of the blank solution with 0.05 M NaOH. Phenolphthalein was used as the indicator to determine the end point of the reaction. All titrations were carried out at room temperature (25 ± 3 °C), and the solutions were prepared in Millipore water. The difference between molar NaOH and Na2CO3 was assumed to be the phenolic group content, as described by Oickle (2010) and Abdelhafez and Li (2016). The following steps were used to calculate the different surface acidic groups.

Calculation of the amount of surface acidic groups (Ax):

$$A_{x} = \frac{(V_{bx} - V_{x}) \cdot M_{\mathrm{HCl}} \cdot 2.5}{m_{x}}\ \mathrm{mol\ g^{-1}}$$

where mx = mass of biochar (g), Vbx = volume (ml) of HCl used for the titration of the blank, Vx = volume (ml) used for titration of the sample in the respective base solution after biochar addition, MHCl = molar concentration of HCl (mol l−1), and 2.5 is a coefficient accounting for the titrated aliquot being smaller than the reaction sample volume (50 ml/20 ml).

Calculation of the amounts of the different kinds of surface groups:

Amount of carboxylic groups, Aca = A1
Amount of carboxylic groups from lactone hydrolysis, Ala = A2 − A1
Amount of phenolic groups, Aph = A3 − A2

where A1, A2 and A3 are the amounts of surface acidic groups determined with NaHCO3, Na2CO3 and NaOH, respectively.
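As a worked illustration of the Boehm arithmetic above, the following Python sketch applies the Ax formula and the three group relations; the titration volumes used are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the Boehm titration arithmetic described above.
# All titration volumes below are hypothetical placeholders, not measured data.

def surface_groups(v_blank_l, v_sample_l, m_hcl=0.05, mass_g=0.5, factor=2.5):
    """A_x = (V_blank - V_sample) * M_HCl * 2.5 / mass.
    With volumes in litres and M_HCl in mol/l, A_x comes out in mol per g."""
    return (v_blank_l - v_sample_l) * m_hcl * factor / mass_g

# A1: NaHCO3 (carboxylic), A2: Na2CO3 (carboxylic + lactonic), A3: NaOH (all acidic)
a1 = surface_groups(v_blank_l=0.0200, v_sample_l=0.0180)  # hypothetical volumes
a2 = surface_groups(v_blank_l=0.0200, v_sample_l=0.0172)
a3 = surface_groups(v_blank_l=0.0200, v_sample_l=0.0165)

carboxylic = a1            # Aca = A1
lactonic = a2 - a1         # Ala = A2 - A1
phenolic = a3 - a2         # Aph = A3 - A2 (NaOH minus Na2CO3)
total = carboxylic + lactonic + phenolic
print(f"carboxylic={carboxylic:.2e}, lactonic={lactonic:.2e}, "
      f"phenolic={phenolic:.2e}, total={total:.2e} mol/g")
```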
Moisture and ash content

Moisture content: 1 g of sample was dried for 24 h in an oven at a temperature of 100 ± 5 °C until constant weight was attained. Moisture content was calculated using the following formula:

$$\mathrm{Moisture\ content}\ (\%) = \frac{w_{\mathrm{i}} - w_{\mathrm{f}}}{w_{\mathrm{i}}} \times 100$$

where wi = initial weight of the adsorbent (g) and wf = final weight of the adsorbent after drying (g).

Ash content: It was determined with the help of a muffle furnace by weighing 1 g of the sample and placing it into a porcelain crucible. The crucible was heated up to 500 ± 5 °C for 5 h. The material was then allowed to cool in a desiccator for 15 min. The ash content was calculated using the following formula:

$$\mathrm{Ash\ content}\ (\%) = \frac{W_{2} - W_{0}}{W_{1} - W_{0}} \times 100$$

where W0 = weight of the empty crucible (g), W1 = weight of the crucible (g) + weight of the adsorbent (g), and W2 = weight of the crucible (g) + weight of the ashed sample (g) (Basu et al. 2017; Poonam and Kumar 2018).

Experimental setup

The adsorption experiments were performed by varying the adsorbent dosage from 2.0 to 5.0 g l−1 at an interval of 0.5 g, for 100 ml of effluent in 250-ml Erlenmeyer flasks at a rotating speed of 120 rpm. The pH was optimized for maximum adsorption by shifting it from 2 to 5 with 0.1 N NaOH and 0.1 N HCl. The contact time was also optimized by varying it from 20 to 140 min at intervals of 20 min, until the maximum adsorption was achieved.

Adsorption rate kinetics

The adsorption kinetics involved is an important parameter describing the basic traits of a good adsorbent (Wang et al. 2012). It identifies the process which controls the sorbate reactions at the solid–solution interface over time. In the present study, pseudo-first- and second-order models were used to describe the adsorption mechanism involved in the Pb2+ removal by bagasse biochar (Lagergren 1898; Ho et al. 2000). The pseudo-first-order kinetic model (Eq. 1) is expressed as

$$\ln(q_{1} - q_{t}) = \ln q_{1} - k_{1} t$$

where q1 and qt are the amounts of Pb2+ (mg g−1) adsorbed at equilibrium and at time t, respectively, and k1 is the first-order rate constant (min−1). According to Mckay and Ho (1999), the pseudo-second-order kinetic model (Eq. 2) is expressed as

$$\frac{t}{q_{t}} = \frac{1}{k_{2} q_{2}^{2}} + \frac{1}{q_{2}}t$$

$$\frac{1}{q_{t}} = \frac{1}{K_{2} q_{2} t} + \frac{1}{q_{2}}$$

Equation (3) is the modified Ritchie second-order kinetic model; the initial sorption rate h (mg g−1 min−1) is given by Eq. (4):

$$h = K_{2} q_{2}^{2}$$

where q2 is the maximum adsorption capacity (mg g−1) for pseudo-second-order adsorption, K2 is the equilibrium rate constant for pseudo-second-order adsorption (g mg−1 min−1), and h is the initial sorption rate (mg g−1 min−1).

Adsorption isotherm studies

The experimental uptake (qe) values obtained from the batch assay were analyzed using adsorption isotherm models (Langmuir and Freundlich) and the separation factor (RL) at room temperature (25 ± 3 °C) and the other optimized conditions, i.e., pH 5, dose 5.0 g and 140 min of contact time. The linear form of the Freundlich adsorption isotherm is given as

$$\log q_{\mathrm{e}} = \log K_{\mathrm{f}} + \frac{1}{n}\log C_{\mathrm{e}}$$

where Kf and n are the Freundlich constants for distribution coefficient and intensity, respectively. The Langmuir equation is given as

$$\frac{C_{\mathrm{e}}}{q_{\mathrm{e}}} = \frac{1}{q_{\max} K_{L}} + \frac{C_{\mathrm{e}}}{q_{\max}}$$

where qe is the amount of metal ion adsorbed at equilibrium (mg g−1), Ce is the equilibrium metal ion concentration in the solution (mg l−1), qmax is the monolayer adsorption capacity of the bagasse biochar (mg g−1), and KL is the Langmuir adsorption constant (l mg−1).
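To make the linearised kinetic fit concrete, the sketch below regresses t/qt against t for the pseudo-second-order model (Eq. 2) and recovers q2, K2 and h; it is illustrative only, with hypothetical uptake data rather than values from this study.

```python
# Illustrative fit of the linearised pseudo-second-order model (Eq. 2):
# t/qt = 1/(K2*q2^2) + t/q2. The data below are hypothetical, not measured.
import numpy as np

t = np.array([20, 40, 60, 80, 100, 120, 140], dtype=float)   # contact time (min)
qt = np.array([3.10, 3.35, 3.48, 3.56, 3.61, 3.65, 3.68])    # uptake (mg/g), hypothetical

# Linear regression of t/qt on t: slope = 1/q2, intercept = 1/(K2*q2^2)
slope, intercept = np.polyfit(t, t / qt, 1)
q2 = 1.0 / slope                     # equilibrium capacity (mg/g)
K2 = 1.0 / (intercept * q2 ** 2)     # rate constant (g mg^-1 min^-1)
h = K2 * q2 ** 2                     # initial sorption rate, Eq. 4 (mg g^-1 min^-1)

# r^2 of the linear fit, the statistic used to compare kinetic models
pred = slope * t + intercept
ss_res = np.sum((t / qt - pred) ** 2)
ss_tot = np.sum((t / qt - np.mean(t / qt)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"q2 = {q2:.3f} mg/g, K2 = {K2:.4f} g/(mg min), h = {h:.3f} mg/(g min), r2 = {r2:.4f}")
```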
Analytical method

The amount of Pb2+ adsorbed was calculated by the following mass balance relationship:

$$q_{\mathrm{e}} = \frac{C_{0} - C_{\mathrm{e}}}{w} \times v$$

and the percent removal at equilibrium was calculated as

$$\mathrm{Removal}\ (\%) = \frac{C_{0} - C_{\mathrm{e}}}{C_{0}} \times 100$$

where C0 and Ce (mg l−1) are the metal concentrations initially and at equilibrium, respectively, v is the volume of the effluent (ml), and w is the weight of the adsorbent (g).

Desorption study

Desorption experiments were performed with the metal-loaded bagasse biochar to check the reusability of the adsorbent. 0.1 M HCl, HNO3, H2SO4 and NaOH were used as desorbing eluants. One gram of the Pb2+-loaded adsorbent was added to 100 ml of each eluant and incubated for 3 h at 30 °C and 150 rpm.

Results and discussion

Physicochemical properties of battery manufacturing industry effluent

The physicochemical properties of the battery manufacturing industry effluent are presented in Table 1. Most of the parameters were found to be beyond the limits prescribed by the Bureau of Indian Standards (BIS 10500: 2012) for on-land irrigation. pH, temperature and electrical conductivity (EC) affect the ionic concentration of the effluent and influence the chemical reactions in the aquatic environment (Akpomie and Dawodua 2015). Different salts used in the processes may be the reason for the higher values of pH as well as EC. Chemical oxygen demand (COD), biological oxygen demand (BOD), dissolved oxygen (DO), total dissolved solids (TDS), total suspended solids (TSS) and total solids (TS) define the amount of pollution caused by different organic and inorganic pollutants affecting the water quality. Besides hardness and alkalinity, the higher concentrations of nitrate, phosphate and sulfate may be attributed to the usage of different chemicals and salts during the battery manufacturing processes (Ahmed et al. 2012).

[Table 1: Physicochemical properties of the battery manufacturing industry effluent (mean ± SD) compared with BIS (10500:2012) limits for on-land irrigation. Reported entries include color (light vinegar), odor (non-objectionable), sulfate (67.503 ± 1.402), total hardness as CaCO3 (621.00 ± 263.236) and lead (Pb2+), among other parameters. All values in mg l−1 except color, odor, pH, temperature (°C) and EC (Siemens m−1); nd = not detected.]

The concentration of Pb2+ in the effluent was found to be 2.393 mg l−1, which exceeded the limit of 0.1 mg l−1 prescribed by BIS (10500: 2012). Higher concentrations of Pb2+ in water bodies may be harmful to the health of flora and fauna (Babel and Kurniawan 2003; Alluri et al. 2007).

Screening of biosorbents

Three thermally modified agrowastes (biochars of bagasse, orange and coir) were screened for their suitability for adsorbing Pb2+ (Fig. 1). Among them, bagasse biochar showed the maximum adsorptive removal of Pb2+ (75.38%) in comparison with orange (70.36%) and coir (61.98%) at the same adsorbent dosage of 5.0 g l−1. The variation in the adsorption efficiencies may be attributed to variations in the morphology and binding sites of the adsorbents (Petrović et al. 2017).

[Fig. 1: Adsorption efficiency of different agrowastes at room temperature (25 ± 3 °C); SD shown by error bars.]

Characteristics of adsorbent

The proximate and ultimate analyses of the adsorbent are summarized in Table 2. The ash content was found to be low, with an average value of 7.337%, because of the biochar-producing process, which eliminates the oxygen content.
The moisture content was also found to be very low, with an average value of 1.968%, which facilitated the adsorption process. The bagasse biochar was alkaline in nature, with a pH of 8.967, which could be attributed to the release of alkaline earth metal, i.e., Ca2+, whose concentration was found to be moderately high, as shown in the EDX spectrum of the biochar (Fig. 2e). The surface of the biochar (surface area 12.378 m2 g−1) appears rough and irregular, which was confirmed by the SEM image of the bagasse biochar (Fig. 2b). Further, the biochar showed comparatively higher C/H and C/N ratios and lower C % (Table 2), which suggests that the charring process was not fully accomplished and some decomposed matter remained. This may also be a reason for the comparatively low specific surface area of the biochar as analyzed by BET. In addition, the higher values of H and O % indicate the availability of binding sites responsible for the successful adsorption of Pb2+ (Table 2). The results were found to be in good agreement with Fernandez et al. (2014), Abdelhafez and Li (2016) and Basu et al. (2017).

[Table 2: Characteristics of bagasse biochar: moisture content (%), ash content (%), pore volume (cc/g), pore diameter (nm), surface area (12.628 ± 0.30 m2/g), and C, H, O, N and S (%) together with C/H and C/N ratios. Moisture and ash are given as dry-weight percent; pore and surface-area data from BET analysis; elemental data from the CHNS analyzer; O % determined by difference.]

[Fig. 2: SEM images of a bagasse, b bagasse biochar and c lead (Pb2+)-loaded bagasse biochar; EDX spectra of d bagasse biochar and e lead (Pb2+)-loaded bagasse biochar.]

SEM and EDX

SEM and EDX images showed the change in the morphology of the adsorbent before and after the adsorption process (Fig. 2). The differences in the SEM images of (a) bagasse, (b) its biochar and (c) Pb2+-loaded biochar could be clearly visualized. Before charring the surface was uniform and very smooth; after thermal treatment, it became comparatively rough and irregular, increasing the surface area. After adsorption, the space became filled with irregular, crystalline structures, which supported the adsorption of Pb2+ from the effluent. In addition to this, the presence of Pb2+ in the EDX spectrum may also be considered to support the successful adsorption of the metal (Fig. 2d, e).

FTIR

The adsorption process is affected by the groups and bonds present in the adsorbent. In order to investigate the chemical structure and the major functional groups present, which may be responsible for the adsorption process, FTIR analysis was carried out (Solum et al. 1995). The spectra of (a) bagasse, (b) its biochar and (c) Pb2+-loaded biochar are presented in Fig. 3. The broad peaks between 3846 and 3438 cm−1 show the presence of –OH stretching vibrations of cellulose, pectin and lignin in all three spectra (Guo et al. 2008; Reddy et al. 2014). After thermal treatment, most of the major peaks disappeared except the –CH stretching at 2925.3 cm−1, the C=O stretch in ketones represented by a broad peak at 1711 cm−1, and the C=C stretch in alkenes and C=O stretch in secondary amides represented by a sharp peak at 1631 cm−1. The stretching of the chemical bonds is marked by the higher carbon content as well as the organic matter (Abdelhafez and Li 2016). After the adsorption of Pb2+, the peaks shifted to 1629.7, 1441.67 and 874.97 cm−1, representing C=O stretching, C–N stretch in primary amides, and C–H and CH2 deformations, respectively.
Conversion of secondary amide to primary amide and –CH to C=C is evident of formation and dissociation of bonds and groups through ion exchange or some other mechanisms (Kim et al. 2015). The shift in the functional groups between non-treated and treated adsorbent suggests that the adsorption of Pb2+ might be facilitated by the process of chemisorption resulting in binding of the metal by the nucleophilic functional groups resulting in the formation of metal complex. This process also supported the adsorption of Pb2+ onto the bagasse biochar. FTIR spectra of: a bagasse, bagasse biochar before b and after c Pb2+ adsorption Surface acidic groups The surface acidic group concentrations of bagasse biochar are represented in Fig. 4. The total concentration of surface acidic groups was 7.033 mol g−1. The carboxylic acidic functional groups occupied maximum concentration of 5.15 mol g−1 in comparison with other phenolic (1.283 mol g−1) and lactonic (0.601 mol g−1) acidic groups. The results of the Boehm titration indicated majority of carboxylic acidic groups occupying 72.963% of total surface acidic groups, followed by lactonic (18.182%) and phenolic (8.855%) groups. These results were in good agreement with those of FTIR analysis, explained earlier. Surface acidic groups concentration (mol g−1) of bagasse biochar Effect of pH The effect of pH on concentration of Pb2+ is presented in Fig. 5. pH is an important controlling parameter in adsorption process as it determines the surface charge of adsorbent, degree of ionization of the adsorbate and extent of dissociation of functional groups present on active sites (Yang and Cui. 2013; Salam et al. 2011). Thus, the effect of H+ ions concentration in the battery manufacturing industry effluent was studied at different pHs at optimized dosage (5.0 g l−1) and contact time (140 min). Above pH 7, precipitation of oxides of Pb2+ was formed; therefore, the study was confined to pH range of 2.0–6.0 (Basu et al. 2017). The optimum pH for the removal of Pb2+ was found to be 5. As the pH of the effluent was increased from 2 to 5, the adsorption capacity of the adsorbent also increased from 3.34 to 3.68 mg g−1. In acidic medium, availability of binding sites increases and removal of H+ ions in the solution facilitates it being replaced by Pb2+ which is termed as chelation. Since Pb has +2 charges, two carbon atoms are involved in chelating/binding of the metal resulting in the formation of metal complex. This may also be the probable reason for the shift in the IR spectra too. At lower pH, there was Coulombic repulsion between positively charged binding sites and cations which suppressed the removal rate, further (Liu and Zhang 2009). The result is in agreement with previous studies for the removal of Pb2+ by bamboo charcoal (Wang et al. 2012), dried water hyacinth (Ibrahim et al. 2012), rapeseed oil cake (Uçar et al. 2015) and strain of Alcaligenes sp. (Jin et al. 2017). Effect of different pHs on the concentration of Pb2+ in battery manufacturing industry effluent after adsorption at room temperature (25 ± 3 °C) and optimized pH 5, adsorbent dose 5.0 g l−1 and contact time of 140 min (SD shown by error bars) Effect of adsorbent dosage The effect of adsorbent dosages on the concentration of Pb2+ in battery manufacturing industry effluent is presented in Fig. 5. The experiments were carried out at different dosages ranging from 2.5 to 5 g l−1. 
For convenience in presenting the data, the initial dose was chosen as 2.5 g l−1 since the removal rate was found to decrease from 66.696 to 64.345% and so on, when the dosage was reduced from 2.5 to lesser. But, when the amount of adsorbent dosage was increased further from 2.5 to 5.0 g l−1, the removal percentage increased from 66.696 to 75.376%. It can be justified by the fact that increasing dose upsurges the number of binding sites for the metal (Gil et al. 2018). The optimum dose for maximum removal of Pb2+ from effluent was recorded 5.0 g l−1. Moreover, when dosage was increased further, removal rate decreased due to overloading and intracellular dissociation phenomenon, which affected the binding capacity of the surface groups (Zhao et al. 2017). These results were in agreement with previous studies performed on many other adsorbents for different metals including Pb2+ (Oluyemi et al. 2012; Kılıç et al. 2013; Reddy et al. 2014). Effect of contact time The contact time also plays a major role in the adsorption of metals from effluent. The effect of contact time on the concentration of Pb2+ in the effluent is shown in Fig. 6. The contact time was varied from 20 to 140 min. In initial 20 min about 64.79% of Pb2+ was adsorbed, after which the process slowed down and till 140 min the binding sites of the adsorbents became saturated with 75.376% removal of Pb2+. Thus, the optimum time for the removal of Pb2+ was 140 min. No further change in the concentration of Pb2+ was observed due to exhaustion of the active binding sites. Similar outcomes were also accomplished when adsorbent dosage and pH were optimized. Moreover, the results are in agreement with Uçar et al. (2015). Effect of time adsorbent dose and contact time lead (Pb2+) concentration, at constant temperature (25 ± 3 °C) Adsorption kinetics The plots of pseudo-first order and pseudo-second order for the adsorption of Pb2+ onto bagasse are presented in Fig. 7. The linear plots of t versus log(qe − qt) and t versus t/qt offer the pseudo-first- and second-order rate constants K1 and K2, respectively, which are calculated by the slope and intercept. The corresponding values of K1, K2, qe and r2 are represented in Table 3. Role of time a pseudo-first-order kinetic plots and b pseudo-second-order kinetic plots Pseudo-first-order and pseudo-second-order kinetic constants for sorption of lead (Pb2+) onto bagasse biochar Dosage (g) qe (exp.) (mg g−1) Pseudo-first-order model constants q1 (mg g−1) K1 (min−1) Pseudo-second-order model constants q2 (mg g−1)* K2 (g−1 mg−1 min−1) h (mg g−1 min−1) *Calculated from graph 1/t versus 1/qt According to correlation coefficients (r2), the correlation coefficients for pseudo-second-order kinetic model were higher than those of the pseudo-first-order model for Pb2+. This suggests that the adsorption process followed the pseudo-second-order kinetics, indicating adsorption takes place by chemisorptions (Reddy et al. 2014). The results of the present study are in good agreement with previous studies, which also reported that the adsorption process follows the pseudo-second-order kinetics (Uçar et al. 2015; Ibrahim et al. 2012; Wang et al. 2012). Adsorption isotherm Adsorption isotherms are used to describe the adsorption equilibrium for wastewater treatments by providing useful information regarding adsorbent surface (Uçar et al. 2015; Naiya et al. 2009). 
At equilibrium, battery manufacturing industry effluent was allowed to contact varying dosages of bagasse biochar to examine the maximum loading capacity of the adsorbent used. The adsorption process was evaluated by applying the linear forms of the Langmuir and Freundlich isotherms (Langmuir 1918; Freundlich 1906). The plots of the Langmuir isotherm (1/qe versus 1/Ce) and the Freundlich isotherm (log qe versus log Ce) are presented in Fig. 8. Moreover, the values of the different parameters for both isotherms and the separation factor (RL) are given in Table 4. a Langmuir and b Freundlich isotherm models for adsorption of Pb2+ on bagasse biochar Langmuir along with Freundlich isotherm constants and separation factor for the adsorption of Pb2+ on bagasse biochar Langmuir constants Freundlich constants Separation factor qmax (mg g−1) b (L mg−1) KF (mg g−1) RL* *Value is given for optimum dose (5.0 g l−1) and conc. of effluent (2.39 mg l−1) Although there is very little difference between the two isotherms, the results indicated that, in comparison with the Freundlich isotherm (r2 = 0.927), the Langmuir isotherm (r2 = 0.9298) fitted the adsorption of Pb2+ from battery manufacturing industry effluent better. The Langmuir model assumes that there are no interactions between adsorbed solute particles and that the solute is distributed as a monolayer on the carbon surface (Uçar et al. 2015). The monolayer adsorption capacity (qmax) of bagasse biochar for the removal of Pb2+ from battery manufacturing industry effluent was found to be 12.74 mg g−1. The value of qmax was found to be lower than those previously reported in the literature; the reason may be that a real effluent with a relatively low Pb2+ concentration (2.39 mg l−1) was treated here, whereas the reported studies used prepared metal solutions with higher concentrations (Reddy et al. 2014; Naiya et al. 2009; Wang et al. 2012). Further, the favorability of the Langmuir isotherm can also be assessed by a dimensionless constant, viz. the separation factor or equilibrium parameter (RL) (Hall et al. 1966). It is represented as follows $$R_{\text{L}} = \frac{1}{1 + K_{\text{L}} C_{0}}$$ where KL is the Langmuir constant and C0 is the initial metal concentration. If the value of RL is between 0 and 1, then the adsorption is favorable, and if it is higher than 1, then adsorption is unfavorable. If RL = 1, then adsorption is linear, and RL = 0 indicates irreversible adsorption. In the present study, the RL value was between 0 and 1 (0.603), showing favorable adsorption of Pb2+ onto bagasse biochar. Desorption and reuse of adsorbent Desorption is the reverse of adsorption and is important for understanding the recovery capacity and reusability of the adsorbent for commercial application (Zhang et al. 2018). Three acids (HNO3, HCl and H2SO4) and one base (NaOH), each at 0.1 M, were used as eluants to assess the desorption process. HNO3 was found to be the best eluant, desorbing about 90.05% of Pb2+ from the adsorbent (Fig. 9). The result is in agreement with previous studies done by Naiya et al. (2009) and Mondal (2009). Desorption efficiency of different eluants for desorption of Pb2+ from bagasse biochar ± SD shown by error bar From among the different agricultural wastes, bagasse was screened as a potential adsorbent for removing Pb2+ from battery manufacturing industry effluent. The adsorption process was found to depend upon the pH of the medium as well as the contact time and dosage.
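As a brief illustrative sketch (not part of the original study), the mass-balance relations from the "Analytical method" section and the linearized Langmuir/Freundlich fits described above can be reproduced along the following lines; the equilibrium data below are hypothetical placeholders, and only the effluent concentration of 2.39 mg l−1 is taken from the text.

```python
import numpy as np

def q_e(C0, Ce, v_l, w_g):
    """Equilibrium uptake (mg g-1) from the mass balance q_e = (C0 - Ce)*v/w (v in litres here)."""
    return (C0 - Ce) * v_l / w_g

def removal_pct(C0, Ce):
    """Percent removal, (C0 - Ce)/C0 * 100."""
    return (C0 - Ce) / C0 * 100.0

# Hypothetical equilibrium data (mg l-1, mg g-1) for illustration only;
# the study's fitted values sit in Table 4 / Fig. 8.
Ce = np.array([0.35, 0.59, 0.90, 1.30, 1.75])
qe = np.array([4.1, 6.2, 8.0, 9.8, 11.2])

# Langmuir (linear form): 1/qe = 1/(qmax*b) * (1/Ce) + 1/qmax
sL, iL = np.polyfit(1.0 / Ce, 1.0 / qe, 1)
qmax, b = 1.0 / iL, iL / sL

# Freundlich (linear form): log qe = log KF + (1/n) * log Ce
sF, iF = np.polyfit(np.log10(Ce), np.log10(qe), 1)
KF, n = 10 ** iF, 1.0 / sF

# Separation factor for the effluent concentration quoted in the paper
C0_effluent = 2.39                      # mg l-1
RL = 1.0 / (1.0 + b * C0_effluent)

print(f"removal at Ce = 0.59 mg/l: {removal_pct(C0_effluent, 0.59):.1f} %")
print(f"Langmuir: qmax = {qmax:.2f} mg/g, b = {b:.3f} L/mg, RL = {RL:.3f}")
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}")
```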
The maximum adsorption capacity was recorded as 12.741 mg g−1 which is good for removing trace amount of metal from wastewater before disposal to the water bodies. The adsorption capacity of bagasse biochar may be increased by modifying it with various physicochemical methods like charring, addition of chemicals. The results concluded that the bagasse may be used as cost-effective adsorbent for the successful removal of different wastewater generating industries. It may also be utilized in the production of commercial bioadsorbents for removing various heavy metals from industrial effluents. The authors would like to thank Mr. Shamshad Ahmad, Department of Environmental Science, and Dr. Vertika Shukla, Department of Applied Geology, BBA University, Lucknow, for their help and support. Abdelhafez AA, Li J (2016) Removal of Pb(II) from aqueous solution by using biochars derived from sugarcane bagasse and orange peel. J Taiwan Inst Chem Eng 61:367–375CrossRefGoogle Scholar Ahmed TF, Sushil M, Krishna M (2012) Impact of dye industrial effluent on physicochemical characteristics of Kshipa River, Ujjain City, India. Int Res J Environ Sci 1:41–45Google Scholar Ajenifuja E, Ajao JA, Ajayi EOB (2017) Adsorption isotherm studies of Cu(II) and Co(II) in high concentration aqueous solutions on photocatalytically modified diatomaceous ceramic adsorbents. Appl Water Sci 7:3793–3801CrossRefGoogle Scholar Akpomie KG, Dawodua FA (2015) Physicochemical analysis of automobile effluent before and after treatment with an alkaline-activated montmorillonite. J Taibah Univ Sci 9(4):465–476CrossRefGoogle Scholar Alluri HK, Ronda SR, Settalluri VS, Bondili VS, Suryanarayana V, Venkateshwar P (2007) Biosorption: an eco-friendly alternative for heavy metal removal. Afr J Biotechnol 6(11):2924–2931Google Scholar Amuda OS, Giwa AA, Bello IA (2007) Removal of heavy metal from industrial wastewater using modified activated coconut shell carbon. Biochem Eng J 36:174–181CrossRefGoogle Scholar Andrade V, Mateus M, Batoréu M, Aschner M, dos Santos AM (2015) Lead, arsenic, and manganese metal mixture exposures: focus on biomarkers of effect. Biol Trace Elem Res 166(1):13–23CrossRefGoogle Scholar APHA (2005) Standard methods for the examination of water and waste water, 21st edn. American Public Health Association, Washington, DCGoogle Scholar Babel S, Kurniawan TA (2003) Low-cost adsorbents for heavy metals uptake from contaminated water: a review. J Hazard Mater B 97:219–243CrossRefGoogle Scholar Banerjee K, Ramesh ST, Nidheesh PV, Bharathi KS (2012) A novel agricultural waste adsorbent, watermelon shell for the removal of copper from aqueous solutions. Iran J Energy Environ 3:143–156Google Scholar Basu M, Guha AK, Ray L (2017) Adsorption of lead on Cucumber peel. J Clean Prod 151:603–615CrossRefGoogle Scholar BIS (2012) Indian standards specifications for drinking water. IS:10500, Bureau of Indian Standards, New Delhi. http://cgwb.gov.in/Documents/WQ-standards.pdf Boehm HP, Diehl E, Heck W, Sappok R (1964) Surface oxides of carbon. Angewandte Chem Int Ed 3:669–677CrossRefGoogle Scholar Buasri A, Nattawut C, Tapang K, Jaroensin S, Panphrom S (2012) Equilibrium and kinetic studies of biosorption of Zn(II) ions from wastewater using modified corn cob. APCBEE Procedia 3:60–64CrossRefGoogle Scholar Cheraghi M, Sobhanardakani S, Zandipak R, Lorestani B, Merrikhpour H (2015) Removal of Pb(II) from aqueous solutions using waste tea leaves. 
Iran J Toxicol 9(28):1247–1253Google Scholar Doke KM, Khan EM (2017) Equilibrium, kinetic and diffusion mechanism of Cr(VI) adsorption onto activated carbon derived from wood apple shell. Arab J Chem 10(Supplement 1):S252–S260. https://doi.org/10.1016/j.arabjc.2012.07.031 CrossRefGoogle Scholar Ehrampoush MH, Miria M, Salmani MH, Mahvi AH (2015) Cadmium removal from aqueous solution by green synthesis iron oxide nanoparticles with tangerine peel extract. J Environ Health Science Eng 13(1):1CrossRefGoogle Scholar Fernandez ME, Nunell GV, Bonelli PR, Cukiermen AL (2014) Activated carbon developed from orange peels: batch and dynamic competitive adsorption of basic dyes. Ind Crops Prod 62:437–445CrossRefGoogle Scholar Freundlich H (1906) Adsorption in solutions (57). Z Phys Chem Germany 385-470Google Scholar Gardare VN, Yadav S, Avhad DN, Rathod VK (2015) Preparation of adsorbent using sugarcane bagasse by chemical treatment for the adsorption of methylene blue. Desalin Water Treat. https://doi.org/10.1080/19443994.2014.967727 Google Scholar Gil A, Amiri MJ, Javad M, Abedi-Koupai J (2018) Adsorption/reduction of Hg(II) and Pb(II) from aqueous solutions by using bone ash/nZVI composite: effects of aging time, Fe loading quantity and co-existing ions. Environ Sci Pollut Res 25:2814–2829CrossRefGoogle Scholar Guo X, Zhang S, Shan X (2008) Adsorption of metal ions on lignin. J Hazard Mater 151:134–142CrossRefGoogle Scholar Hall KR, Eagleton LC, Acrivos A, Vermeulen T (1966) Pore- and solid-diffusion kinetics in fixed-bed adsorption under constant pattern conditions. Ind Eng Chem Fundam 5:212–223CrossRefGoogle Scholar Ho YS, Mckay G, Wase DJ, Foster CF (2000) Study of the sorption of divalent metal ions on to peat. Adv Sci Technol 18:639–650Google Scholar Ibrahim HS, Ammar NS, Ibrahim M (2012) Removal of Cd(II) and Pb(II) from aqueous solution using dried water hyacinth as a biosorbent. Spectrochim Acta A Mol Biomol Spectrosc 96:413–420CrossRefGoogle Scholar Jin Y, Yu S, Teng C, Song T, Dong L, Liang J, Bai X, Xu X, Qu J (2017) Biosorption characteristic of Alcaligenes sp. BAPb.1 for removal of lead(II) from aqueous solution. 3. Biotech 7:123Google Scholar Joseph O, Rouez M, Métivier-Pignon H, Bayard R, Emmaneul E, Gourdon R (2009) Adsorption of heavy metals on to sugar cane bagasse: improvement of adsorption capacities due to anaerobic degradation of the biosorbent. Environ Technol. https://doi.org/10.1080/09593330903139520 Google Scholar Kılıç M, Kırbıyık Ç, Çepelioğullar Ö, Pütün AE (2013) Adsorption of heavy metal ions from aqueous solutions by bio-char, a by-product of pyrolysis. Appl Surf Sci 283:856–862CrossRefGoogle Scholar Kim N, Park M, Park D (2015) A new efficient forest biowaste as biosorbent for removal of cationic heavy metals. Bioresour Technol 175:629–632CrossRefGoogle Scholar Kong Z, Li X, Tian J, Yang J, Sun S (2014) Comparative study on the adsorption capacity of raw and modified litchi pericarp for removing Cu(II) from solutions. J Environ Manag 134:109–116CrossRefGoogle Scholar Lagergren S (1898) Kungl. Svenska Vetenskapasakad Handl 24:1–39Google Scholar Laidlaw MAS, Filoppelli G, Mielke H, Gulson B, Ball AS (2017) Lead exposure at firing ranges—a review. Environ Law. https://doi.org/10.1186/s12940-017-0246-0 Google Scholar Langmuir I (1918) The adsorption of gases on plane surfaces of glass, mica and platinum. J Am Chem Soc 4(9):1361–1403CrossRefGoogle Scholar Li J, Zheng L, Liu H (2017) A novel carbon aerogel prepared for adsorption of copper(II) ion in water. 
J Porous Mater 24:1575–1580CrossRefGoogle Scholar Liu Z, Zhang FS (2009) Removal of lead from water using biochars prepared from hydrothermal liquefaction of biomass. J Hazard Mater 167:933–939CrossRefGoogle Scholar Liu B, Chen W, Peng X, Cao Q, Wang Q, Wang D, Meng X, Yu G (2016) Biosorption of lead from aqueous solutions by ion-imprinted tetra ethylene pentamine modified chitosan beads. Int J Biol Macromol 86:562–569CrossRefGoogle Scholar Mckay G, Ho YS (1999) Pseudo-second-order model for sorption processes. Process Biochem 34:451CrossRefGoogle Scholar Mehmood S, Rizwan M, Bashir S, Ditta A, Aziz O, Yong LZ, Dai Z, Akmal M, Ahmed W, Adeel M, Imtiaz M, Tu S (2017) Comparative effects of biochar, slag and Ferrous–Mn ore on lead and cadmium immobilization in soil. Bull Environ Contam Toxicol. https://doi.org/10.1007/s00128-017-2222-3 Google Scholar Mondal MK (2009) Removal of Pb(II) ions from aqueous solution using activated tea waste: adsorption on a fixed-bed column. J Environ Manag 90:3266–3271CrossRefGoogle Scholar Naiya TK, Bhattacharya AK, Das SK (2009) Adsorption of Cd(II) and Pb(II) from aqueous solutions on activated alumina. J Colloid Interface Sci 333:14–26CrossRefGoogle Scholar Nguyen TAH, Ngo HH, Guo WS, Zhang J, Liang S, Yue QY, Li Q, Nguyen TV (2013) Applicability of agricultural waste and by-products for adsorptive removal of heavy metals from wastewater: review. Bioresour Technol 148:574–585CrossRefGoogle Scholar Oickle AM, Goertzen SL, Hopper KR, Abdalla YO, Andreas HA (2010) Standardization of the Boehm titration: part II. Method of agitation, effect of filtering and dilute titrant. Carbon 48:3313–3322CrossRefGoogle Scholar Oluyemi EA, Adeyemi AF, Olabanji IO (2012) Removal of Pb2+ and Cd2+ ions from wastewaters using palm kernel shell charcoal (pksc). Res J Eng Appl Sci 1(5):308–313Google Scholar Petrović M, Šoštarić T, Stojanović M, Petrović J, Mihajlović M, Ćosović M, Stanković (2017) Mechanism of adsorption of Cu2+ and Zn2+ on the corn silk (Zea mays L.). Ecol Eng 99:83–90CrossRefGoogle Scholar Poonam, Kumar M (2018) Efficiency of sweet lemon (Citrus limetta) biochar adsorbent for removal of chromium from tannery effluent. Indian J Environ Prot 38(3):246–256Google Scholar Rangabhashiyam S, Selvaraju N (2015) Adsorptive remediation of hexavalent chromium from synthetic wastewater by a natural and ZnCl2 activated Sterculia guttata shell. J Mol Liq 207:39–49CrossRefGoogle Scholar Reddy NA, Lakshmipathy R, Sarada NC (2014) Application of Citrullus lanatus rind as biosorbent for removal of trivalent chromium from aqueous solution. Alex Eng J 53:969–975CrossRefGoogle Scholar Renner R (2010) Exposure on tap: drinking water as an overlooked source of lead. Environ Health Perspect 118:A68–A72CrossRefGoogle Scholar Saad SA, Isa KM, Bahari R (2010) Chemically modified sugarcane bagasse as a potentially low-cost biosorbent for dye removal. Desalination 264(1–2):123–128CrossRefGoogle Scholar Said AE-AA, Soliman AS, El-Hafez AAA, Goda MN (2013) Application of modified bagasse as a biosorbent for reactive dyes removal from industrial wastewater. J Water Resoue Prot 5:10–17CrossRefGoogle Scholar Saka C, Şahin Ö, Demir H, Kahyaoğlu M (2011) Removal of lead(II) from aqueous solutions using pre-boiled and formaldehyde-treated onion skins as a new adsorbent. Sep Sci Technol 46(3):507–517CrossRefGoogle Scholar Salam OEA, Reiad NA, ElShafei MM (2011) A study of the removal characteristics of heavy metals from wastewater by low-cost adsorbents. 
J Adv Res 2:297–303CrossRefGoogle Scholar Salmani MH, Abedi M, Mozaffari SA, Sadeghian HA (2017) Modification of pomegranate waste with iron ions a green composite for removal of Pb from aqueous solution: equilibrium, thermodynamic and kinetic studies. AMB Express 7:225. https://doi.org/10.1186/s13568-017-0520-0 CrossRefGoogle Scholar Solum MS, Pugmire RJ, Jagtoyen M, Derbyshire F (1995) Evolution of carbon structure in chemically activated wood. Carbon 33:1247–1254CrossRefGoogle Scholar Taha G, Arifien A, El-Nahas S (2011) Removal efficiency of potato peels as a new biosorbent material for uptake of Pb(II) Cd(II) and Zn(II) from their aqueous solutions. J Solid Waste Technol Manag 37(2):128–140CrossRefGoogle Scholar Tahir H, Sultan M, Akhtar N, Hameed U, Abid T (2016) Application of natural and modified sugar cane bagasse for the removal of dye from aqueous solution. J Saudi Chem Soc 20:S115–S121CrossRefGoogle Scholar Trans VH, Tran LD, Nguyen TN (2010) Preparation of chitosan/magnetite composite beads and their application for removal of Pb(II) and Ni(II) from aqueous solution. Mater Sci Eng C 30:304–310CrossRefGoogle Scholar Uçar S, Erdem M, Tay T, Karagö S (2015) Removal of lead(II) and nickel(II) ions from aqueous solution using activated carbon prepared from rapeseed oil cake by Na2CO3 activation. Clean Technol Environ Policy 17:747–756. https://doi.org/10.1007/s10098-014-0830-8 CrossRefGoogle Scholar Venkatesha NJ, Bhat YS, Prakash BSJ (2016) Volume accessibility of acid sites in modified montmorillonite and triacetin selectivity in acetylation of glycerol. Appl Catal A Gen 496:51–57CrossRefGoogle Scholar Wang J, Chen C (2009) Biosorbents for heavy metals removal and their future. Biotechnol Adv 27:195–226CrossRefGoogle Scholar Wang Y, Wang X, Wang X, Liu M, Yang L, Wu Z, Xia S, Zhao J (2012) Adsorption of Pb(II) in aqueous solutions by bamboo charcoal modified with KMnO4 via microwave irradiation. Colloids Surf A 414:1–8CrossRefGoogle Scholar Xu X, Cao X, Zhao L, Wang H, Yu H, Gao B (2013) Removal of Cu, Zn, and Cd from aqueous solutions by the dairy manure-derived biochar. Environ Sci Pollut Res 20:358–368CrossRefGoogle Scholar Yang X, Cui X (2013) Adsorption characteristics of Pb(II) on alkali treated tea residue. Water Resourc Ind 3:1–10CrossRefGoogle Scholar Zhang X, Tong J, Hu XB, Wei W (2018) Adsorption and desorption for dynamics transport of hexavalent chromium Cr(VI) in soil column. Environ Sci Pollut Res 25:459–468CrossRefGoogle Scholar Zhao W, Zhou T, Zhu J, Sun X, Xu Y (2017) Adsorption of cadmium ions using the bioadsorbent of Pichia kudriavzevii YB5 immobilized by polyurethane foam and alginate gels. Environ Sci Pollut Res. https://doi.org/10.1007/s11356-017-0785-5 Google Scholar 1.Department of Environmental ScienceBabasaheb Bhimrao Ambedkar UniversityLucknowIndia Poonam, Bharti, S.K. & Kumar, N. Appl Water Sci (2018) 8: 119. https://doi.org/10.1007/s13201-018-0765-z Accepted 02 July 2018
CommonCrawl
Mathematics > Combinatorics Title:On $(k,l,H)$-kernels by walks and the H-class digraph Authors:Hortensia Galeana-Sánchez, Miguel Tecpa-Galván Abstract: Let $H$ be a digraph possibly with loops and $D$ a loopless digraph whose arcs are colored with the vertices of $H$ ($D$ is said to be an $H-$colored digraph). If $W=(x_{0},\ldots,x_{n})$ is an open walk in $D$ and $i\in \{1,\ldots,n-1\}$, we say that there is an obstruction on $x_{i}$ whenever $(color(x_{i-1},x_{i}),color(x_{i},x_{i+1}))\notin A(H)$. A $(k,l,H)$-kernel by walks in an $H$-colored digraph $D$ ($k\geq 2$, $l\geq 1$) is a subset $S$ of vertices of $D$, such that, for every pair of different vertices in $S$, every walk between them has at least $k-1$ obstructions, and for every $x\in V(D)\setminus S$ there exists an $xS$-walk with at most $l-1$ obstructions. This concept generalizes the concepts of kernel, $(k,l)$-kernel, kernel by monochromatic paths, and kernel by $H$-walks. If $D$ is an $H$-colored digraph, an $H$-class partition is a partition $\mathscr{F}$ of $A(D)$ such that, for every $\{(u,v),(v,w)\}\subseteq A(D)$, $(color(u,v),color(v,w))\in A(H)$ iff there exists $F\in \mathscr{F}$ such that $\{(u,v),(v,w)\}\subseteq F$. The $H$-class digraph relative to $\mathscr{F}$, denoted by $C_{\mathscr{F}}(D)$, is the digraph such that $V(C_{\mathscr{F}}(D))=\mathscr{F}$, and $(F,G)\in A(C_{\mathscr{F}}(D))$ iff there exist $(u,v)\in F$ and $(v,w)\in G$ with $\{u,v,w\}\subseteq V(D)$. We will show sufficient conditions on $\mathscr{F}$ and $C_{\mathscr{F}}(D)$ to guarantee the existence of $(k,l,H)$-kernels by walks in $H$-colored digraphs, and we will show that some conditions are tight. For instance, we will show that if an $H$-colored digraph $D$ has an $H$-class partition in which every class induces a strongly connected digraph, and has an obstruction-free vertex, then for every $k\geq 2$, $D$ has a $(k,k-1,H)$-kernel by walks. Although finding $(k,l)$-kernels is an $NP$-complete problem, some hypotheses presented in this paper can be verified in polynomial time. Subjects: Combinatorics (math.CO) MSC classes: 05C15, 05C20, 05C69 Cite as: arXiv:2105.00044 [math.CO] (or arXiv:2105.00044v1 [math.CO] for this version) From: Miguel Tecpa-Galván [view email] [v1] Fri, 30 Apr 2021 19:00:48 UTC (165 KB) math.CO
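As an illustrative aside (not part of the arXiv listing), the obstruction count defined in the abstract can be computed directly from a walk, an arc coloring, and the arc set of $H$; the digraph, coloring, and $H$ below are toy examples.

```python
# Minimal sketch of the "obstruction" count defined above: the colors of two
# consecutive arcs along a walk must form an arc of H, otherwise the middle
# vertex x_i is an obstruction. Arcs of D are colored by vertices of H (strings).

def obstructions(walk, color, H_arcs):
    """walk: list of vertices [x0, ..., xn]; color: dict mapping arcs (u, v) of D
    to a vertex of H; H_arcs: set of allowed ordered color pairs (arcs of H).
    Returns the number of obstructions along the walk."""
    count = 0
    for i in range(1, len(walk) - 1):
        c_in = color[(walk[i - 1], walk[i])]
        c_out = color[(walk[i], walk[i + 1])]
        if (c_in, c_out) not in H_arcs:
            count += 1
    return count

# Toy example: H has vertices {a, b} with arcs (a,a) and (a,b), so an arc
# colored 'b' followed by any arc creates an obstruction.
H_arcs = {("a", "a"), ("a", "b")}
color = {(0, 1): "a", (1, 2): "b", (2, 3): "a"}
print(obstructions([0, 1, 2, 3], color, H_arcs))  # -> 1 (obstruction at x2)
```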
CommonCrawl
Oscillatory Solutions to the Heat Equation 2019-10-03 Last updated on 2019-10-07 4 min read mathematical diversions As we all learned in a first course in PDEs, for "reasonable" initial data $u_0$, the solution to the linear heat equation \begin{equation} \partial_t u = \triangle u \end{equation} exists classically and converge to zero uniformly. The proof can be given using the explicit Gaussian kernel \begin{equation} K_t(x) = \frac{1}{(4\pi t)^{d/2}} \exp( -|x|^2 / 4t ) \end{equation} for which \[ u(t,x) = K_t \star u_0 \] solves the heat equation. The fact that $K_t$ is a Schwartz function for every $t > 0$, and that it depends smoothly on $t$, means that as long as $u_0$ is locally integrable and has no more than polynomial growth (for example), then $u(t,x)$ is well-defined everywhere and smooth. Furthermore, by H\"older's inequality we see \begin{equation} |u(t,x)| \leq |K_t|_{L^p} |u_0|_{L^q} \end{equation} where $p$ and $q$ are conjugate exponents. Observe that by explicit computation, $K_t$ always have $L^1$ norm $1$, but has decreasing $L^p$ norm for every $p > 1$. This shows that, in particular, if $u_0 \in L^q$ for any $q\in [1,\infty)$, then the classical solution converges uniformly to zero. Notice that the theory leaves out $L^\infty$. In particular, we see easily that if $u_0 \equiv C$, then the convolution defines the classical solution $u(t,x) \equiv C$ as the solution to the heat equation, and this does not converge to $0$. An obvious question is: is it the case that for bounded, locally integrable functions $u_0$, that convergence to a constant is always the case for solutions to the linear heat equation? On compact manifolds Our case is bolstered by the situation on compact manifolds. When solving the heat equation on compact manifolds, if the initial data is bounded, it is also in $L^2$. So we can decompose the initial data using the eigen-expansion relative to the Laplace-Beltrami operator $\triangle$. The system then diagonalizes into decoupled ordinary differential equations which, except for the kernel of $\triangle$,have solutions that are exponentially decaying. In fact, this is sufficient to tell us that on compact manifolds, solutions to the heat equation converge exponentially to its mean. We immediately however see a difference with the case on $\mathbb{R}^d$. On Euclidean space, the solution frequently only decay polynomially to zero, when the data is in $L^p$. One can check this by letting $u_0$ be a Gaussian function, for which the heat equation solution decay at rate $t^{-d/2}$. Solutions are asymptotically locally constant On the positive side, we do have that, for every $u_0$ that is bounded, the solutions are asymptotically locally constant at every time. This is because if $u_0$ is bounded (say by the number $b$), we have that \[ u(t,x) - u(t,y) = \int (K_t(x-z) - K_t(y-z))u_0(z) ~\mathrm{d}z \] so \begin{equation} |u(t,x) - u(t,y)| \leq b \int |K_t(x-z) - K_t(y-z) | ~\mathrm{d}z \end{equation} Performing an explicit change of variables we get \begin{equation} |u(t,x) - u(t,y)| \leq \frac{b}{\pi^{d/2}} \int | \exp(- |z|^2) - \exp(- |z - (y-x)/ (2\sqrt{t})|^2) | ~\mathrm{d}z. \end{equation} For $(y-x)/\sqrt{t}$ sufficiently small, the integral can be very roughly bounded by $O(|y-x| / \sqrt{t})$. This shows that on every compact set $K$, \begin{equation} \sup_K u(t,\cdot) - \inf_K u(t,\cdot) \lesssim t^{-1/2} \end{equation} giving that the solutions are asymptotically locally constant. 
This argument, however, says nothing about whether the "constant" value is the same between different times. Oscillatory solutions In fact, it is fairly easy to construct bounded initial data for which the solution, evaluated at $x = 0$, fails to converge as $t \to \infty$. Performing the explicit change of variable we get \begin{equation} u(t,0) = \int \frac{1}{(4\pi)^{d/2}} \exp( - |z|^2/4) u_0( - \sqrt{t} z) \mathrm{d}z. \end{equation} Denoting by $\mu_G$ the Gaussian measure $(4\pi)^{-d/2} \exp( - |z|^2/4) \mathrm{d}z$ (which we observe has total mass 1), we see that we can write $u(t,0)$ as the integral of a rescaling of $u_0$ against $\mu_G$. First choose $\epsilon \ll 1$. We can choose $0 < \lambda < \Lambda < \infty$ such that \[ \int_{ |z| \leq \lambda} \mu_G < \epsilon, \quad \int_{|z| \geq \Lambda} \mu_G < \epsilon. \] Let $\kappa = \Lambda / \lambda$. Let $u_0$ be defined by \begin{equation} u_0(x) = \begin{cases} 0 & |x| = \frac12 \newline 1 & |x| \in [\lambda \kappa^n, \lambda \kappa^{n+1}), n \text{ is even}\newline 0 & |x| \in [\lambda \kappa^m, \lambda \kappa^{m+1}), m \text{ is odd} \end{cases} \end{equation} Observe that this function satisfies \[ u_0(\kappa^{2n} x) = u_0(x) \] and \[ u_0(x) + u_0(\kappa x) \equiv 1.\] By construction, there exists a value $\gamma \in [1 - 2\epsilon, 1]$ such that \[ \int u_0(x) \mu_G = \gamma, \quad \int u_0(\kappa x) \mu_G = 1 - \gamma.\] Returning to the formula for $u(t,0)$, we see that \begin{equation} u(\kappa^{2n},0) = \begin{cases} \gamma & n \text{ is even} \newline 1 - \gamma & n \text{ is odd} \end{cases} \end{equation} This shows an oscillatory behavior at $x = 0$, showing that the limit of $u(t,0)$ as $t\to \infty$ does not exist. On the other hand, this particular solution has the nice property that, on any compact set $K$, and for any fixed $t_0 > 0$, the sequence of functions $v_n(x) = u(t_0 \kappa^{4n}, x)$ converges uniformly to $u(t_0,0)$ as $n \to \infty$.
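A numerical sanity check of this construction (not from the original post) can be run in dimension $d = 1$ by evaluating the integral of the rescaled $u_0$ against $\mu_G$ by quadrature; the particular choices of $\lambda$ and $\Lambda$ below are illustrative.

```python
import numpy as np

# d = 1. Gaussian measure mu_G = (4*pi)^(-1/2) * exp(-z^2/4) dz, total mass 1.
# lam and Lam are chosen so that the mass of mu_G inside |z| <= lam and outside
# |z| >= Lam is small; these particular numbers are illustrative.
lam, Lam = 0.05, 12.0
kappa = Lam / lam

def u0(x):
    """Radial step: 1 on annuli [lam*kappa^n, lam*kappa^(n+1)) for n even, else 0."""
    r = np.maximum(np.abs(x), 1e-300)           # avoid log(0); the single point x = 0 is irrelevant
    n = np.floor(np.log(r / lam) / np.log(kappa))
    return (n % 2 == 0).astype(float)

z = np.linspace(-60.0, 60.0, 400001)
mu = np.exp(-z**2 / 4.0) / np.sqrt(4.0 * np.pi)

def u_at_origin(t):
    # u(t,0) = integral of u0(sqrt(t) * z) against mu_G (u0 is radial, so the sign is irrelevant)
    return np.trapz(u0(np.sqrt(t) * z) * mu, z)

for n in range(4):
    print(f"t = kappa^{2*n}: u(t,0) = {u_at_origin(kappa**(2*n)):.3f}")
# the printed values should alternate between roughly gamma and 1 - gamma
```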
CommonCrawl
Marginal Rate of Transformation Definition Reviewed by Marshall Hargrave What Is Marginal Rate of Transformation (MRT)? The marginal rate of transformation (MRT) is the number of units or amount of a good that must be forgone in order to create or attain one unit of another good. In particular, it's defined as the number of units of good X that will be foregone in order to produce an extra unit of good Y, while keeping constant the use of production factors and the technology being used. MRT is the number of units that must be forgone in order to create or attain a unit of another good, considered the opportunity cost to produce one extra unit of something. MRT is also considered the absolute value of the slope of the production possibilities frontier. The marginal rate of substitution focuses on demand, while MRT focuses on supply. The Formula for Marginal Rate of Transformation Is $$\text{MRT} = \frac{MC_x}{MC_y}$$ where MCx is the money needed to produce another unit of X, and MCy is the rate of increase by cutting production of Y. So the ratio tells you how much Y you need to cut in order to produce another X. How to Calculate the Marginal Rate of Transformation (MRT) The marginal rate of transformation (MRT) is calculated as the marginal cost of producing another unit of a good divided by the resources freed up by cutting production of another unit. What Does MRT Tell You? The marginal rate of transformation (MRT) allows economists to analyze the opportunity costs to produce one extra unit of something. In this case, the opportunity cost is represented in the lost production of another specific good. The marginal rate of transformation is tied to the production possibility frontier (PPF), which displays the output potential for two goods using the same resources. MRT is the absolute value of the slope of the production possibilities frontier. For each point on the frontier, which is displayed as a curved line, there is a different marginal rate of transformation, based on the economics of producing each product individually. To produce more of one good means producing less of the other because the resources are efficiently allocated. In other words, resources used to produce one good are diverted from other goods, which means less of the other goods will be produced. This tradeoff is measured by the marginal rate of transformation. Generally speaking, the opportunity cost rises (as does the absolute value of the MRT) as one moves along (down) the PPF. As more of one good is produced, the opportunity cost (in units) of the other good increases. Example of How to Use the Marginal Rate of Transformation (MRT) The MRT is the rate at which a small amount of X can be foregone for a small amount of Y. The rate is the opportunity cost of a unit of each good in terms of another. As the number of units of X relative to Y changes, the rate of transformation may also change. For perfect substitute goods, the MRT will equal 1 and remain constant. As an example, if baking one less cake frees up enough resources to bake three more loaves of bread, the rate of transformation is 3 to 1 at the margin. Or consider that it costs $3 to make a cake. Meanwhile, $1 can be saved by not making a loaf of bread.
Thus, the MRT is 3 ($3 divided by $1). As another example, consider a student who faces a trade-off that involves giving up some free time to get better grades in a particular class by studying more. The MRT is the rate at which the student's grade increases as free time is given up for studying, which is given by the absolute value of the slope of the production possibility frontier curve. The Difference Between MRT and the Marginal Rate of Substitution (MRS) While the marginal rate of transformation (MRT) is similar to the marginal rate of substitution (MRS), these two concepts are not the same. The marginal rate of substitution focuses on demand, while MRT focuses on supply. The marginal rate of substitution highlights how many units of one good a given consumer group would consider to be compensation for one less unit of another good. For example, a consumer who prefers oranges to apples may only find equal satisfaction if she receives three apples instead of one orange. Limitations of Using MRT The marginal rate of transformation (MRT) is generally not constant and may need to be recalculated frequently. As well, if MRT doesn't equal MRS, then goods will not be distributed efficiently. Learn More About the MRT To better understand the marginal rate of transformation (MRT), see exactly how the production possibility frontier works. Inside the Marginal Rate of Substitution The marginal rate of substitution is defined as the amount of a good that a consumer is willing to give up for another good, as long as it is equally satisfying. Understanding the Marginal Rate of Technical Substitution The marginal rate of technical substitution is the rate at which a factor must decrease and another must increase to retain the same level of productivity. Maximizing Product Efficiency with the Production Possibility Frontier In business analysis, the production possibility frontier (PPF) is a curve illustrating the different possible amounts of two separate goods that may be produced when there is a fixed availability of a certain resource that both items require for their manufacture. How the High-Low Method Works In cost accounting, the high-low method is a way of attempting to separate out fixed and variable costs given a limited amount of data. Ceteris Paribus Ceteris paribus is a Latin phrase usually rendered as "all other things being equal." What Is the Isoquant Curve? The isoquant curve is a graph, used in the study of microeconomics, that charts all inputs that produce a specified level of output.
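For a small numeric version of the cake-and-bread example above (illustrative only), the ratio can be computed directly:

```python
# Marginal cost of one more cake is $3; cutting one loaf of bread frees up $1
# of resources, so the MRT of cakes in terms of bread is 3 loaves per cake.
mc_cake, mc_bread = 3.0, 1.0
mrt_cake_in_bread = mc_cake / mc_bread
print(mrt_cake_in_bread)  # 3.0
```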
CommonCrawl
Band Offset Measurements in Atomic-Layer-Deposited Al2O3/Zn0.8Al0.2O Heterojunction Studied by X-ray Photoelectron Spectroscopy Baojun Yan1, Shulin Liu1, Yuekun Heng1, Yuzhen Yang1,2, Yang Yu1,3 & Kaile Wen1,4 Pure aluminum oxide (Al2O3) and zinc aluminum oxide (Zn x Al1-x O) thin films were deposited by atomic layer deposition (ALD). The microstructure and optical band gaps (E g ) of the Zn x Al1-x O (0.2 ≤ x ≤ 1) films were studied by X-ray diffractometer and Tauc method. The band offsets and alignment of atomic-layer-deposited Al2O3/Zn0.8Al0.2O heterojunction were investigated in detail using charge-corrected X-ray photoelectron spectroscopy. In this work, different methodologies were adopted to recover the actual position of the core levels in insulator materials which were easily affected by differential charging phenomena. Valence band offset (ΔE V) and conduction band offset (ΔE C) for the interface of the Al2O3/Zn0.8Al0.2O heterojunction have been constructed. An accurate value of ΔE V = 0.82 ± 0.12 eV was obtained from various combinations of core levels of heterojunction with varied Al2O3 thickness. Given the experimental E g of 6.8 eV for Al2O3 and 5.29 eV for Zn0.8Al0.2O, a type-I heterojunction with a ΔE C of 0.69 ± 0.12 eV was found. The precise determination of the band alignment of Al2O3/Zn0.8Al0.2O heterojunction is of particular importance for gaining insight to the design of various electronic devices based on such heterointerface. Nano-thick oxide films with high resistance have attracted much attention as the most promising conductive layer for the applications of microchannel plate (MCP) as electron multiplier [1, 2], resistive memories [3], and electron-optical micro-electro mechanical systems (MEMS) [4]. A large research effort has been devoted to the novel idea of adjusting resistivity of such thin films due to the abovementioned large potential applications in a special environment. MCP is a thin glass plate with thickness of about 500 μm consisting of several millions pores of a cylinder geometry with a 4–25-μm diameter and with a bias angle usually 5°–13° to the normal of the plate surface, and the high aspect ratio in each pore is about 20:1–100:1 [5, 6]. For recent MCP fabrication, two kinds of nano-thick layers are deposited on the MCP pore surfaces to conduct an electron multiplication function [1, 2]. The first layer is a conductive layer for supplying electrons, and the second layer is a secondary electron emission (SEE) layer for generating electrons. The three-dimensional surfaces and high aspect ratio of MCP should be firstly taken into consideration for depositing uniform thickness and composition of thin films. So far, the only effective approach growing high-quality thin films is the atomic layer deposition (ALD) technique based on sequential self-terminating gas-solid reactions [7]. ZnO is an n-type semiconductor with a direct bandgap of around 3.37 eV and a large exciton binding energy of 60 meV at room temperature [8, 9]. A lot of elements such as Mg [10, 11], Cd [12], Ga [13], W [14], and Mo [15] were used to doping in ZnO in order to tune its optical and electrical properties for special applications. In electron multiplier application, such as MCP, zinc aluminum oxide (Zn x Al1-x O) films have been investigated because of their thermal stability in a special application environment and low cost of industrialization. 
The properties of Zn x Al1-x O films can be controlled by changing the Al content, paving a way to design optoelectronic and photonic devices based on this material. Usually, high-resistivity Zn x Al1-x O thin films as a conductive layer with x at the range of 0.7–0.85 have been applied in the field of electron multiplier [16]. For SEE layers, boron-doped diamond with hydrogen-terminated material has higher SEE coefficient than that of other traditional insulators. This provides a strong impetus for the development of electron multipliers. However, in the presence of degradation due to electron beam-induced contamination, these must be seriously regarded as preliminary [17]. From a practical point of view, two kinds of traditional insulators used as SEE layers in MCP are magnesium oxide (MgO) and Al2O3 thin films [18]. Although pure MgO has higher SEE coefficient than that of Al2O3, it is limited in the application on MCP because it is highly deliquescent and its surface is rather reactive with atmospheric moisture and carbon dioxide as demonstrated by our previous work [19], which probably results in degraded SEE performance. However, the physical and chemical properties of Al2O3 are very stable even after long-term exposure to the atmosphere. Therefore, Al2O3 is one of the most commonly used SEE materials in MCP application. According to the structure of MCP, the Al2O3 and Zn x Al1-x O thin films have different band gaps (E g) resulting in band offsets in the heterointerface. Therefore, the determination of the band offsets at Al2O3/Zn x Al1-x O interface is of importance because valence band offset (ΔE V) and conduction band offset (ΔE C) can deteriorate or promote SEE performance and also have a great influence on the performance of electron multiplier. Generally, Kraut's method is widely used to calculate the valence band maximum (VBM) and the conduction band minimum (CBM) of semiconductor/semiconductor heterojunctions [20]. However, in the case of insulator/semiconductor or, in more serious cases, insulator/insulator heterojunctions, the positive charges generated during X-ray bombardment accumulate in the insulators and induce a strong modification of the kinetic energy of the emitted photoelectrons which is the so-called differential charging effect [21]. Although it is probably dealt with using a neutralizing electron gun [22], the use of C 1s peak recalibration [23], and zero charging method [24,25,26], a careful evaluation of the experimental result is necessary due to the differential charging effect during X-ray irradiation [19]. In this work, we will study the structure and optical E g of Zn x Al1-x O (0.2 ≤ x ≤ 1) thin films firstly, and then, we especially determine the ΔE V and ΔE C of the Al2O3/Zn0.8Al0.2O heterojunction by using high-resolution X-ray photoelectron spectroscopy (XPS). Several samples were used in this study: nine 80-nm-thick Zn x Al1-x O samples (0.2 ≤ x ≤ 1) individually grown on n-Si (1 1 1) and quartz substrates, a 30-nm-thick Al2O3 grown on n-Si (1 1 1) substrate, and 3, 4, 5, 8 nm of Al2O3 on 80 nm of Zn0.8Al0.2O grown on n-Si (1 1 1). The quartz substrates were ultrasonically cleaned in an ethanol/acetone solution and then rinsed in deionized water. The polished Si substrates were dipped in hydrofluoric acid for 30 s and then placed in an ALD chamber waiting for deposition. 
For Zn x Al1-x O layer deposition, ZnO:Al2O3 ALD was carried out using diethylzinc (DEZ), trimethylaluminum (TMA), and deionized water as Zn, Al, and oxidant precursor, respectively. The Al2O3 ALD was performed using separate TMA and H2O exposures with sequence TMA/N2/H2O/N2 (150 ms/4 s/150 ms/4 s). The ZnO ALD was performed using separate DEZ and H2O exposures following the sequence DEZ/N2/H2O/N2 (150 ms/4 s/150 ms/4 s). The doping was carried out by substituting TMA exposure for DEZ. The Zn contents in the Zn x Al1-x O layers were controlled by adjusting the ratio of the pulse cycles of DEZ and TMA, where the Zn content x was varied from 0.2 to 1 (pure ZnO) atom %. For Zn0.8Al0.2O layer, the DEZ and H2O pulses were alternated, and every fifth DEZ pulse was substituted with a TMA pulse. Ultrahigh purity nitrogen was used as a carrier and purge gas. The reaction temperatures were 200 °C. The detailed parameters are listed in Table 1. Table 1 Detailed parameters for Zn0.8Al0.2O and Al2O3 layers Optical transmittance spectra in a wavelength range from 185 to 700 nm were carried out by using a double-beam UV-Vis-IR spectrophotometer (Agilent Cary 5000) at room temperature in air. The crystal structure of the films were characterized by X-ray diffraction (XRD, Bruker D8) using Cu K α radiation (40 kV, 40 mA, λ = 1.54056 Å). The film thickness was measured by Spectroscopic Ellipsometry (Sopra GES5E) where the incident angle was fixed at 75°, and the wavelength region from 230 to 900 nm was scanned with 5-nm steps. And the ellipsometric thicknesses of samples ALD03, ALD04, ALD05, and ALD06 were 3.01, 4.02, 5.01, and 8.01 nm, respectively. The XPS (PHI Quantera SXM) is used to analyze both the core levels (CLs) and valence band spectra of the samples. Charge neutralization was performed with an electron flood gun, and all XPS spectra were calibrated by the C 1s peak at 284.6 eV. In order to avoid differential charging effect, during the measurement, the spectra were taken after a few minutes of X-ray irradiation. All the samples are measured under the same conditions in order to acquire reliable data. The ΔE V of the Al2O3/Zn0.8Al0.2O heterojunction can be calculated from Kraut's formula $$ \varDelta {E}_{\mathrm{V}}=\left({E}_{\mathrm{CL}}^{{\mathrm{Zn}}_{0.8}{\mathrm{Al}}_{0.2}\mathrm{O}}(y)-{E}_{\mathrm{V}\mathrm{BM}}^{{\mathrm{Zn}}_{0.8}{\mathrm{Al}}_{0.2}\mathrm{O}}\right)\hbox{-} \left({E}_{\mathrm{CL}}^{{\mathrm{Al}}_2{\mathrm{O}}_3}(x)-{E}_{\mathrm{V}\mathrm{BM}}^{{\mathrm{Al}}_2{\mathrm{O}}_3}\right)\hbox{-} \varDelta {E}_{\mathrm{CL}} $$ where \( \varDelta {E}_{\mathrm{CL}}=\left({E}_{\mathrm{CL}}^{{\mathrm{Zn}}_{0.8}{\mathrm{Al}}_{0.2}\mathrm{O}}(y)-{E}_{\mathrm{CL}}^{{\mathrm{Al}}_2{\mathrm{O}}_3}(x)\right) \) was the energy difference between feature y and feature x CLs, which were measured by XPS measurement in the heterojunction sample, and \( \left({E}_{\mathrm{CL}}^{{\mathrm{Al}}_2{\mathrm{O}}_3}(x)-{E}_{\mathrm{VBM}}^{{\mathrm{Al}}_2{\mathrm{O}}_3}\right) \) and \( \left({E}_{\mathrm{CL}}^{{\mathrm{Zn}}_{0.8}{\mathrm{Al}}_{0.2}\mathrm{O}}(y)-{E}_{\mathrm{VBM}}^{{\mathrm{Zn}}_{0.8}{\mathrm{Al}}_{0.2}\mathrm{O}}\right) \) were the Al2O3 and Zn0.8Al0.2O bulk constants, which were obtained on the respective thick films. The VBM values were determined by linear extrapolation of the leading edge to the baseline of the valence band spectra. 
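As a sketch of the VBM determination described above (not code from the study), the leading edge of a valence-band spectrum can be fitted linearly and extrapolated to its intersection with the background; the spectrum below is synthetic rather than measured data.

```python
import numpy as np

# Synthetic valence-band region: flat background plus a linear leading edge
# starting at the "true" VBM. Real input would be the measured XPS counts.
be = np.linspace(0.0, 6.0, 301)             # binding energy, eV
vbm_true, bg = 2.26, 50.0
counts = bg + np.where(be > vbm_true, 800.0 * (be - vbm_true), 0.0)

# Fit only the steep part of the leading edge, then extrapolate to the background
edge = (counts > bg + 100.0) & (counts < bg + 1500.0)
slope, intercept = np.polyfit(be[edge], counts[edge], 1)
vbm = (bg - intercept) / slope              # intersection of the fit with the background
print(f"VBM = {vbm:.2f} eV")                # ~2.26 eV for this synthetic edge
```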
A root sum square relationship is used to combine the uncertainties in the different binding energies to determine the uncertainty of calculated results [26]. Structure and Band Gaps of Zn x Al1-x O Samples The XRD patterns of the as-deposited 80-nm-thick Zn x Al1-x O (x = 0.2, 0.6, 0.8, 0.9, 1) thin films grown on quartz and Si substrates are shown in Fig. 1a, b, respectively. For the pure ZnO grown on quartz substrates in Fig. 1a, the strong peaks at 32.4° and 34.8° and the relatively weak peaks at 36.5° and 57.2° come from hexagonal ZnO phase, indicating the polycrystalline nature of the ZnO layer. And strong (0 0 2) peak shows the preferential orientation growth of ALD ZnO. However, the above characteristic peaks become weak for Zn0.9Al0.1O sample and disappear for Zn x Al1-x O (x ≤ 0.8) samples, which suggests that ZnO crystallization is suppressed with Al concentration increase. Besides, the broad peak ranging from 20° to 30° is the typical pattern of the quartz substrate. For Si substrate, the strong peaks around 28.4° and 58.9° are easily detected (data not shown). These peaks are corresponding to the diffractions originated from Si (1 1 1) and Si (2 2 2) crystal planes. In addition, the relatively weak peaks in Fig. 1b at 2θ = 32.6°, 33.2°, 35.4°, 35.9°, 38.8°, 39.2°, and 42.8° in the diffractograms that arise from the Si substrate itself are also observed. These unknown peaks may be related to the process conditions for producing crystalline silicon and are observed in previous work [27, 28]. Except for diffraction peaks from the Si substrate, no other diffraction peaks from the Zn x Al1-x O (x ≤ 0.9) samples are detected. Only (0 0 2) and weak (1 1 0) peaks appear in the pure ZnO sample. From the above results, the crystal quality of the Zn x Al1-x O film is a serious decline with the increasing concentration of Al content. It is well known that the particle size of Al ions is less than that of Zn ions. Zn is easily substituted by Al when doping concentration of Al increases. This results in weakened ZnO crystallinity, so the structure of Zn x Al1-x O (x ≤ 0.8) samples is amorphous, in good agreement with previous results [29]. Taken into consideration, Zn x Al1-x O layer growth appears to be substrate sensitive and Al doping concentration has an influence on the crystallization of the films. XRD patterns of Zn x Al1-x O samples deposited on a quartz substrate and b Si substrate Figure 2a shows transmission spectra of the Zn x Al1-x O samples deposited on quartz substrate. The average transmittance is above 80% in the visible wavelength for all samples. It is found that ZnO film exhibits abrupt absorption edge which appears at ~390 nm corresponding to the fundamental E g of ZnO. A blue shift of the absorption edge is apparently observed when Al concentration increases. The E g of Zn x Al1-x O thin films can be obtained by fitting the sharp absorption edges. The relationship between absorption coefficient (α) and E g of direct band gap semiconductor is given by Tauc equation [30], (αhv)2 = B(hv−E g), where hν is the photon energy and B is a constant. The dependence of (αhν)2 on photon energy is shown in Fig. 2b. The E g is obtained by the extrapolations of the liner regions of the optical absorption edges. The E g of pure ZnO thin film deposited by ALD is 3.26 eV, which is consistent with the previous reports [31, 32]. With the Zn concentration x decreases from 0.9 to 0.2, the E g of Zn x Al1-x O thin films increases from 4.11 to 6.51 eV. 
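A minimal sketch of the Tauc extrapolation used above (illustrative, with a synthetic absorption edge standing in for the measured data):

```python
import numpy as np

# Tauc method for a direct-gap material: fit the linear part of (alpha*h*nu)^2
# versus photon energy and extrapolate to zero to estimate Eg.
hv = np.linspace(3.0, 4.0, 50)                               # photon energy, eV
Eg_true, B = 3.26, 1.0e11                                    # synthetic edge parameters
alpha_hv_sq = np.where(hv > Eg_true, B * (hv - Eg_true), 0.0)  # (alpha*h*nu)^2

# Use only the steep absorption-edge region for the linear fit
edge = alpha_hv_sq > 0.2 * alpha_hv_sq.max()
slope, intercept = np.polyfit(hv[edge], alpha_hv_sq[edge], 1)
Eg_fit = -intercept / slope                                  # x-intercept of the fit
print(f"estimated Eg = {Eg_fit:.2f} eV")                     # ~3.26 eV for this synthetic edge
```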
It is directly demonstrated that the E g of Zn x Al1-x O thin films can be adjusted in a large range by controlling the Al doping concentration, which makes it a suitable candidate for application in many scientific research fields [33, 34]. For the new type of MCP, the properties of Zn0.8Al0.2O thin film are suitable for conductive layer proved by previous study [2]. Therefore, the E g of atomic-layer-deposited Zn0.8Al0.2O thin film is 5.29 eV, which is sufficient to make a band gap discontinuity in Al2O3/Zn0.8Al0.2O heterojunction and is used for calculating the ΔE C value later. Transmittance spectra (a) and the plots of (αhν)2 vs. photon energy (b) of Zn x Al1-x O samples Valence and Conduction Band Offset Measurements of Al2O3/Zn0.8Al0.2O Heterojunction The XPS spectra of survey scan, CLs, and VBM region for Zn0.8Al0.2O and Al2O3 samples are shown in Fig. 3. In this study, we find that the CLs positions of the Zn0.8Al0.2O and Al2O3 thin films do not change as a function of X-ray irradiation time for 15 min (data not shown), because of operating a low energy electron flood gun. Figure 3a, e shows the whole scanning spectrum of the thick Zn0.8Al0.2O and Al2O3 thin films, respectively. The C 1s peak at 284.6 eV appeared due to some surface contamination, and the Ar 2p peak at 242.1 eV appeared because of residual inert gas composition in the ultrahigh vacuum chamber. The peaks in Fig. 3a located 660, 652, 582, 573, 559, 495, and 472 eV are Auger lines of Zn element. The stoichiometry of the thick films are checked by the ratio of the integrated area of Zn 2p peak to Al 2p peak for the Zn0.8Al0.2O sample and Al 2p peak to O 1 s peak for the Al2O3 sample. Both are corrected by corresponding atomic sensitivity factors S [35], taking into account their corresponding photoionization cross-sections of CLs calculated by Scofield [36], and the mean free path of the photoelectrons calculated by Tanuma et al [37]. Here, the S values are calculated to be 0.256, 2.768, and 0.733 for Al 2p, Zn 2p 3/2, and O 1s. The atomic ratios Zn:Al = 3.97:1.01 for Zn0.8Al0.2O and Al:O = 1.99:3.01 for Al2O3 compare well with that of designed ratio of atomic layer deposition, which indicate good stoichiometry of the Zn0.8Al0.2O and Al2O3 layers. The high-resolution scans of Zn 2p 3/2 and Zn 2p 1/2 CLs of Zn0.8Al0.2O are shown in Fig. 3b, c. The peaks fitted using Shirley backgrounds and Voigt (mixed Lorentzian-Gaussian) functions located 1021.41 and 1044.51 eV in Fig. 3b, c correspond to the electronic states of Zn 2p 3/2 and Zn 2p 1/2, respectively, and both are fitted by a single contribution, attributed to the bonding configuration Zn-O. The Al 2p peak of Al2O3 located 74.35 eV and O 1s peak located 531.1 eV are shown in Fig. 3f, g. The Al 2p spectrum as fitted by a single contribution, attributed to the bonding configuration Al-O. However, for the O 1s spectrum, an additional peak low-intensity higher binding energy component is also observed. This extra component is attributed to both O-Al and O-H bonds [38]. The VBM positions are determined by a linear extrapolation of the leading edge of the valence band spectrum and the background [39], as shown in Fig. 3d,h. This linear method has already been widely used to determine the VBM of semiconductors with high accuracy. The VBM values of atomic-layer-deposited thick Zn0.8Al0.2O and Al2O3 samples are 2.26 and 3.19 eV, respectively. The scatter of the data relative to the fit are estimated as an uncertainty in VBM positions of less than 0.03 eV. 
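The sensitivity-factor normalization described above can be sketched as follows; the S values are those quoted in the text, while the integrated peak areas are hypothetical placeholders.

```python
# Atomic ratios from integrated XPS peak areas A, using n_i proportional to A_i / S_i,
# with the atomic sensitivity factors S quoted in the text.
S = {"Al 2p": 0.256, "Zn 2p3/2": 2.768, "O 1s": 0.733}

def atomic_ratio(area_a, s_a, area_b, s_b):
    """Return n_a : n_b from peak areas corrected by sensitivity factors."""
    return (area_a / s_a) / (area_b / s_b)

# Hypothetical integrated areas (arbitrary units) for the Zn0.8Al0.2O film
A_zn, A_al = 1100.0, 25.6
print(f"Zn:Al = {atomic_ratio(A_zn, S['Zn 2p3/2'], A_al, S['Al 2p']):.2f} : 1")
```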
The parameters deduced from Fig. 3 are summarized in Table 2 for clarity. XPS spectra for a survey scan, b Zn 2p 3/2, c Zn 2p 1/2, and d VBM of Zn0.8Al0.2O and e survey scan, f Al 2p, g O 1s, and h VBM of Al2O3, with application of a low-energy electron flood gun Table 2 Peak positions of CLs and VBM positions used to calculate the ΔE V of the Al2O3/Zn0.8Al0.2O heterojunction Four CLs of Al2O3/Zn0.8Al0.2O heterojunction with different Al2O3 thickness are shown in Fig. 4. The Al 2p, Zn 2p 1/2, and Zn 2p 3/2 XPS spectra in Fig. 4(a, e, i), (b, f, j), and (c, g, k), respectively, are fitted by a single contribution, attributed to the bonding configurations Al-O and Zn-O. For the O 1s XPS spectrum in Fig. 4d, h, l, an additional low-intensity higher binding energy component is observed. The extra component is attributed to metal (Al, Zn)-O bonding at the interface and/or inelastic losses to free carries in the Al2O3 layer, similar results obtained by previous study [19]. With the increase of the Al2O3 thickness, the intensity of Zn 2p 1/2 peak is weakened while the energy resolution is deteriorated shown in Fig. 4f. It is difficult to observe and fit for Al2O3 thickness of 5 nm as shown in Fig. 4j. So, the peak position of Zn 2p 1/2 in 5-nm Al2O3 sample listed by a bold number in Table 2 is a large deviation as a result of the big error of fitting. The CLs of Al2O3/Zn0.8Al0.2O samples are summarized in Table 2. CLs of Al2O3/Zn0.8Al0.2O samples with varied Al2O3 thickness a–d 3 nm, e–h 4 nm, and i–l 5 nm, with application of a low-energy electron flood gun The ΔE V of the Al2O3/Zn0.8Al0.2O heterojunction is determined from the energy separation between the CLs in the Al2O3/Zn0.8Al0.2O sample and the VBM to CLs separations in the thick Al2O3 and Zn0.8Al0.2O samples, respectively. Table 3 lists the ΔE V values for all Al2O3 samples with thickness of 3–5 nm, and the error in each value is ± 0.07 eV. Therefore, the averaged ΔE V value is 0.87 ± 0.22 eV. It is important to note that the calculation does not include the italic numbers in Table 3 because of the big error fitting of CLs of Zn 2p 1/2 in the 5-nm Al2O3/Zn0.8Al0.2O sample. Table 3 The ΔE V values of the Al2O3/Zn0.8Al0.2O heterojunction with Al2O3 thickness of 3–5 nm However, there are obvious considerable CL shifts up to 0.6 eV sensitive to the thicknesses of the Al2O3 and Zn0.8Al0.2O layers from the given experimental data in Table 2, and different ΔE V values are obtained in the various combinations of XPS CLs in Table 3. It is directly proved that the charging phenomenon generated by the X-ray irradiation results in adverse effects on the ΔE V determination when taking XPS measurement on insulator/semiconductor heterojunction in spite of operating neutralizing electron gun. As has been widely reported, the influences of differential charging on the band offsets determination cannot be neglected even in very thin oxides. Therefore, zero charging method is adopted in this study in order to eliminate charging-induced errors and recover the accurate ΔE V value. The error in the ΔE V measurement is resulting from the differential charging effect that prevents the correct determination of the energy differences, such as between the Al 2p and Zn 2p 3/2 signals even in very thin Al2O3 films in heterojunction. In Fig. 5, the binding energies of the Al 2p, Zn 2p 3/2, and Zn 2p 1/2 CLs for the 3, 4, 5, and 8 nm Al2O3 films are plotted as a function of X-ray irradiation time. 
The binding energies of the Al 2p, Zn 2p 3/2, and Zn 2p 1/2 CLs of the 3-nm Al2O3 sample in Fig. 5a increase slowly until they stabilize at steady-state values of 74.63 ± 0.01, 1021.77 ± 0.01, and 1044.83 ± 0.02 eV, respectively. Similar time dependencies are observed in the 4-, 5-, and 8-nm Al2O3 films, as shown in Fig. 5b–d. The results show that steady-state CL spectra are obtained after the signals in the heterojunction stabilize, and the sample can be considered charge-saturated when the X-ray irradiation time exceeds 25 min. Therefore, X-ray irradiation time is one of the most important parameters in determining insulator/semiconductor band offsets. The layer thickness dependence of the peak positions results mainly from the differential charging effects. True peak positions can be acquired by extrapolating the measured binding energies to zero oxide thickness and, ideally, to zero charge; similar results are reported for the SiO2/Si [25], HfO2/Si [26], and MgO/Zn0.8Al0.2O [19] systems. Time-resolved plots showing the respective binding energies vs. X-ray irradiation time for a 3 nm, b 4 nm, c 5 nm, and d 8 nm Al2O3 films on Zn0.8Al0.2O on Si substrates, with application of a low-energy electron flood gun The CL positions of Al 2p, Zn 2p 1/2, and Zn 2p 3/2 are plotted as a function of the Al2O3 film thickness, as shown in Fig. 6. By linear fitting of the experimental data, the zero-thickness CL positions of the Al 2p, Zn 2p 1/2, and Zn 2p 3/2 peaks are determined to be 74.51 ± 0.03, 1044.77 ± 0.06, and 1021.7 ± 0.04 eV, respectively. In order to correct the ΔE V of the Al2O3/Zn0.8Al0.2O heterojunction, we calculate the energy differences between the extrapolated (Al 2p, Zn 2p 1/2) and (Al 2p, Zn 2p 3/2) values at zero thickness; these are 970.26 ± 0.07 and 947.19 ± 0.05 eV, respectively. Inserting these values in Eq. (1), the ΔE V is calculated to be 0.83 ± 0.09 and 0.8 ± 0.08 eV for the two CL combinations of the Al2O3/Zn0.8Al0.2O heterojunction, which are in good agreement. Therefore, the averaged ΔE V value is 0.82 ± 0.12 eV. Al 2p (a), Zn 2p 1/2 (b), and Zn 2p 3/2 (c), CL binding energies as a function of the Al2O3 thin film thickness There are three possible factors that could affect the ΔE V values in addition to the XPS method itself. Firstly, the oxide stoichiometry of the Al2O3 thin films measured by XPS is almost the same in the different Al2O3 samples with thicknesses of 3–8 nm; therefore, the composition of the Al2O3 film is independent of thickness, and the binding energy shifts in Fig. 5 are related to the differential charging effect occurring in the Al2O3/Zn0.8Al0.2O heterojunction during X-ray irradiation. Secondly, band bending at the heterointerface could induce a systematic error in the determination of ΔE V; we checked that this error is much smaller than the average standard deviation of ± 0.03 eV given above. Finally, strain in the Al2O3 overlayer of the heterojunction would induce a piezoelectric field that could affect the measured ΔE V value, a phenomenon similar to that explained by Martin et al. [40]. However, the Zn0.8Al0.2O underlayer of the heterojunction is thick enough, and both materials are amorphous; therefore, a strain-induced piezoelectric field is not taken into consideration in this study. To infer the ΔE C based on the value of ΔE V, we need to know the E g of the ultrathin Al2O3 layer, which can be estimated from the O 1s core-level binding energy spectrum of the atomic-layer-deposited Al2O3 thin films with its energy loss structure. 
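The zero-charging correction described above can be illustrated with the short Python sketch below, which linearly extrapolates steady-state core-level positions to zero Al2O3 thickness and combines the result with the bulk (CL − VBM) separations from Table 2. Apart from the 3-nm steady-state values, the bulk reference values, and the quoted extrapolation targets, the thickness series used here is assumed for illustration only.

# Sketch of the zero-charging extrapolation; per-thickness values other than
# the 3-nm point are assumed placeholders chosen to reproduce the quoted fit.
import numpy as np

t = np.array([3.0, 4.0, 5.0, 8.0])                        # Al2O3 thickness (nm)
al2p   = np.array([74.63, 74.67, 74.71, 74.83])           # steady-state Al 2p (eV); 3-nm value from text
zn2p32 = np.array([1021.77, 1021.79, 1021.82, 1021.89])   # steady-state Zn 2p3/2 (eV); 3-nm value from text

# Linear fit and extrapolation to zero oxide thickness ("true" peak positions)
al2p_0   = np.polyfit(t, al2p, 1)[1]
zn2p32_0 = np.polyfit(t, zn2p32, 1)[1]
dE_cl = zn2p32_0 - al2p_0                                  # CL separation at zero thickness (~947.2 eV)

# Bulk (CL - VBM) separations from Table 2
zn_bulk = 1021.41 - 2.26    # thick Zn0.8Al0.2O: Zn 2p3/2 - VBM
al_bulk = 74.35 - 3.19      # thick Al2O3: Al 2p - VBM

dE_v = zn_bulk - al_bulk - dE_cl
print("Extrapolated Al 2p: %.2f eV, Zn 2p3/2: %.2f eV" % (al2p_0, zn2p32_0))
print("Valence band offset ~ %.2f eV" % dE_v)              # ~0.80 eV, as reported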
In this approach, photoelectrons that lose energy through inelastic collision processes within the sample appear at an apparently higher binding energy, given by the difference between the total photoelectron energy and their reduced kinetic energy. The minimum inelastic loss is equal to the band gap energy, and the most frequently cited E g value for Al2O3 obtained in this way is 6.8 eV [41,42,43]. Together with the Zn0.8Al0.2O E g of 5.29 eV at room temperature, the ΔE C can then be derived from the relation ΔE C = E g(Al2O3) − ΔE V − E g(Zn0.8Al0.2O), where E g(Al2O3) and E g(Zn0.8Al0.2O) are the band gaps of the Al2O3 and Zn0.8Al0.2O thin films, respectively. The ΔE C is calculated to be 0.69 ± 0.12 eV, which means that the barrier height for the transport of electrons is smaller than that for holes. The band alignment of the Al2O3/Zn0.8Al0.2O heterojunction obtained from the XPS measurements is shown in Fig. 7. The CBM of Al2O3 is higher than that of Zn0.8Al0.2O, whereas the VBM of Al2O3 is lower than that of Zn0.8Al0.2O; therefore, a nested type-I band alignment with a ratio ΔE C/ΔE V of about 1:1.2 is obtained. The schematic diagram of band offset at the Al2O3/Zn0.8Al0.2O heterojunction interface Usually, the MCP gain under direct current (DC) mode is limited by the space charge density, without consideration of ion feedback, and the recharge time constant or dead time (τ) is several milliseconds [44]. When a MCP is operated as a DC current amplifier, the gain is constant until the output current (I oc) exceeds about 10% of the strip current through the plate. Under photon-counting mode, however, the MCP works in a highly saturated state, and the electron avalanche multiplication is completed within several nanoseconds, about a million times faster than τ [44,45,46]. A peak output current in pulsed operation exceeding the I oc by several orders of magnitude is observed. Therefore, the anode signal charges probably come from electrons tunneling through the Al2O3/Zn x Al1-x O heterojunction at the inner wall of the MCP. For photon-counting mode, both ΔE V and ΔE C should be sufficiently large to prevent the thermal excitation of electrons generated in the SEE layer into the electron multiplier system, which would otherwise produce high electronic dark noise and result in a reduced signal-to-noise ratio. The present result has no effect on MCPs operating under DC mode, which is governed by space charge saturation, but has negative implications for the photon-counting mode, which needs a type-II heterojunction to improve the tunneling probability for excellent performance. The relationship between the Al2O3/Zn x Al1-x O heterojunction and the charge replenishment mechanism under photon-counting mode needs further study. Therefore, the band alignment of the Al2O3/Zn x Al1-x O heterojunction should be constructed and adjusted by appropriately changing the ratio of the Al and Zn elements, under the premise of meeting the requirements of the electron multiplier. In summary, the structure and optical band gaps of Zn x Al1-x O (0.2 ≤ x ≤ 1) films deposited by atomic layer deposition were investigated, and the band offsets of the Al2O3/Zn0.8Al0.2O heterojunction were determined by XPS with the zero charging method. The results show that X-ray irradiation time is one of the most important parameters in determining the band offsets. The layer thickness dependence of the peak positions derives mainly from the differential charging effects, and true peak positions are obtained by extrapolating the measured binding energies to zero oxide thickness and, ideally, to zero charge. 
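As a quick check of the value reported below, the conduction band offset follows directly from the numbers quoted above:

$$\Delta E_C = E_g(\mathrm{Al_2O_3}) - \Delta E_V - E_g(\mathrm{Zn_{0.8}Al_{0.2}O}) = 6.8\ \mathrm{eV} - 0.82\ \mathrm{eV} - 5.29\ \mathrm{eV} \approx 0.69\ \mathrm{eV}.$$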
The ΔE V value is obtained to be 0.82 ± 0.12 eV, and the corresponding ΔE C is calculated to be 0.69 ± 0.12 eV. Therefore, a nested type-I band alignment is obtained. Understanding of the band alignment parameters of the Al2O3/Zn0.8Al0.2O interface will facilitate the knowledge of their carrier transport mechanism and design of corresponding hybrid devices, especially in the research process of electron multipliers. Al2O3 : ALD: CBM: Conduction band minimum CLs: Core levels DC: Direct current DEZ: Diethylzinc E g : Band gap I oc : MCP: Microchannel plate MgO: Secondary electron emission TMA: Trimethylaluminum VBM: Valence band maximum XPS: X-ray photoelectron spectroscopy XRD: X-ray diffractometer Zn x Al1-x O: Zinc aluminum oxide ΔE C : Conduction band offset ΔE V : Valence band offset Mane AU, Peng Q, Elam JW et al (2012) An atomic layer deposition method to fabricate economical and robust large area microchannel plates for photodetectors. Physics Procedia 37:722–732 Yan BJ, Liu SL, Heng YK (2015) Nano-oxide thin films deposited via atomic layer deposition on microchannel plates. Nanoscale Res Lett 10:11 Jana D, Maikap S, Tien TC et al (2012) Formation-polarity-dependent improved resistive switching memory performance using IrO x /GdO x /WO x /W structure. Jpn J Appl Phys 51:6 Petric P, Bevis C et al (2010) Reflective electron beam lithography: a maskless ebeam direct write lithography approach using the reflective electron beam lithography concept. J Vac Sci Technol B 28:C6C6 Siegmund OHW, McPhate JB, Tremsin AS et al (2012) Atomic layer deposited borosilicate glass microchannel plates for large area event counting detectors. Nucl Instrum Methods Phys Res Sect A 695:168–171 Yang YZ, Liu SL, Zhao TC, YAN BJ et al (2016) Single electron counting using a dual MCP assembly. Nucl Instrum Methods Phys Res Sect A 830:438–443 Makino T, Segawa Y, Kawasaki M (2001) Band gap engineering based on Mg x Zn1-x O and Cd y Zn1-y O ternary alloy films. Appl Phys Lett 78:1237–1239 Hummer K (1973) Interband magnetoreflection of ZnO. Phys Status Solidi B-Basic Res 56:249–260 Ke Y, Lany S, Berry JJ et al (2014) Enhanced electron mobility due to dopant-defect pairing in conductive ZnMgO. Adv Funct Mater 24:2875–2882 Kumar P, Malik HK, Ghosh A et al (2013) Bandgap tuning in highly c-axis oriented Zn1-x Mg x O thin films. Appl Phys Lett 102:5 Yao G, Tang YQ, Fu YJ et al (2015) Fabrication of high-quality ZnCdO epilayers and ZnO/ZnCdO heterojunction on sapphire substrates by pulsed laser deposition. Appl Surf Sci 326:271–275 Lee CS, Cuong HB et al (2015) Comparative study of group-II alloying effects on physical property of ZnGaO transparent conductive films prepared by RF magnetron sputtering. J Alloy Compd 645:322–327 Mane AU, Elam JW (2013) Atomic layer deposition of W:Al2O3 nanocomposite films with tunable resistivity. Chem Vapor Depos 19:186–193 Tong WM, Brodie AD, Mane AU et al (2013) Nanoclusters of MoO3-x embedded in an Al2O3 matrix engineered for customizable mesoscale resistivity and high dielectric strength. Appl Phys Lett 102:5 Elam JW, Routkevitch D, George SM (2003) Properties of ZnO/Al2O3 alloy films grown using atomic layer deposition techniques. J Electrochem Soc 150:G339–G347 Lapington JS, Thompson DP, May PW et al (2009) Investigation of the secondary emission characteristics of CVD diamond films for electron amplification. 
Nucl Instrum Methods Phys Res Sect A 610:253–257 Jokela SJ, Veryovkin IV, Zinovev AV et al (2012) Secondary electron yield of emissive materials for large-area micro-channel plate detectors: surface composition and film thickness dependencies. In: Liu T (ed) Proceedings of the 2nd International Conference on Technology and Instrumentation in Particle Physics. Elsevier Science Bv, Amsterdam, pp 740–747 Yan BJ, Liu SL, Yang YZ, Heng YK (2016) Band alignment of atomic layer deposited MgO/Zn0.8Al0.2O heterointerface determined by charge corrected X-ray photoelectron spectroscopy. Appl Surf Sci 371:118–128 Kraut EA, Grant RW et al (1983) Semiconductor core-level to valence-band maximum binding-energy differences: precise determination by X-ray photoelectron-spectroscopy. Phys Rev B 28:1965–1977 Alay JL, Hirose M (1997) The valence band alignment at ultrathin SiO2/Si interfaces. J Appl Phys 81:1606 Grunthaner FJ, Grunthaner PJ (1986) Chemical and electronic structure of the SiO2/Si interface. Materials Science Reports 1:65–160 Seguini G, Perego M, Spiga S et al (2007) Conduction band offset of HfO2 on GaAs. Appl Phys Lett 91:3 Tanimura T, Toyoda S, Kamada H et al (2010) Photoinduced charge-trapping phenomena in metal/high-k gate stack structures studied by synchrotron radiation photoemission spectroscopy. Appl Phys Lett 96:3 Iwata S, Ishizaka A (1996) Electron spectroscopic analysis of the SiO2/Si system and correlation with metal-oxide-semiconductor device characteristics. J Appl Phys 79:6653–6713 Perego M, Seguini S (2011) Charging phenomena in dielectric/semiconductor heterostructures during X-ray photoelectron spectroscopy measurements. J Appl Phys 110:11 Hilmi I, Thelander E, Schumacher P et al (2016) Epitaxial Ge2Sb2Te5 films on Si(111) prepared by pulsed laser deposition. Thin Solid Films 619:81–85 Pandikunta M, Ledyaev O, Kuryatkov V et al (2014) Structural analysis of N-polar AlN layers grown on Si (111) substrates by high resolution X-ray diffraction. Phys Status Solidi C 11:487–490 Tynell T, Yamauchi H, Karppinen M et al (2013) Atomic layer deposition of Al-doped ZnO thin films. J Vac Sci Technol A 31:4 Tauc J (1966) The Optical Properties of Solids. Academic, Waltham Dar TA, Agrawal A, Misra P et al (2014) Valence and conduction band offset measurements in Ni0.07Zn0.93O/ZnO heterostructure. Curr Appl Phys 14:171–175 Dhakal T, Vanhart D, Christian R et al (2012) Growth morphology and electrical/optical properties of Al-doped ZnO thin films grown by atomic layer deposition. J Vac Sci Technol A 30:10 Dhakal TP, Peng CY, Tobias RR et al (2014) Characterization of a CZTS thin film solar cell grown by sputtering method. Sol Energy 100:23–30 Zhu BL, Lu K, Wang J et al (2013) Characteristics of Al-doped ZnO thin films prepared in Ar + H2 atmosphere and their vacuum annealing behavior. J Vac Sci Technol A 31:9 Moulder JF, Stickle WF (1992) Sobol PE et al Handbook of X-ray photoelectron spectroscopy. PerkinElmer Corp, Eden Priarie Scofield JH (1976) Hartree-slater subshell photoionization cross-sections at 1254 and 1487 eV. J Electron Spectrosc Relat Phenom 8:129–137 Tanuma S, Powell CJ, Penn DR (1994) Calculations of electron inelastic mean free paths. 5. Data for 14 organic compounds over the 50–2000 eV range. Surf Interface Anal 21:165–176 Wu Y, Hermkens PM, van de Loo BWH et al (2013) Electrical transport and Al doping efficiency in nanoscale ZnO films prepared by atomic layer deposition. 
J Appl Phys 114:024308 Chambers SA, Droubay T, Kaspar TC et al (2004) Experimental determination of valence band maxima for SrTiO3, TiO2, and SrO and the associated valence band offsets with Si(001). J Vac Sci Technol B 22:2205–2215 Martin G, Botchkarev A, Rockett A et al (1996) Valence-band discontinuities of wurtzite GaN. AlN, and InN heterojunctions measured by X-ray photoemission spectroscopy, Appl Phys Lett 68:2541–2543 Kamimura T, Sasaki K, Hoi Wong M et al (2014) Band alignment and electrical properties of Al2O3/β-Ga2O3 heterojunctions. Appl Phys Lett 104:192104 Huang ML, Chang YC, Chang YH et al (2009) Energy-band parameters of atomic layer deposited Al2O3 and HfO2 on In x Ga1-x As. Appl Phys Lett 94:3 Liu JS, Clavel M, Hudait MK (2015) Tailoring the valence band offset of Al2O3 on epitaxial GaAs1-y Sb y with tunable antimony composition. ACS Appl Mater Interfaces 7:28624–28631 Wiza JL (1979) Microchannel plate detectors. Nuclear Instruments and Methods 162:587–601 Adams B, Chollet M, Elagin A et al (2013) A test-facility for large-area microchannel plate detector assemblies using a pulsed sub-picosecond laser. Rev Sci Instrum 84:061301 Siegmund OHW, McPhate JB, Jelinsky SR et al (2013) Large area microchannel plate imaging event counting detectors with sub-nanosecond timing. IEEE Trans Nucl Sci 60:923–931 We express our sincere gratitude to the reviewer, who has given us the most valuable advice. We thank Dr. Lin Wang and Dr. Hailiang Nie for the meaningful discussions on XRD measurement and chemical formula of Zn0.8Al0.2O, respectively. We are especially grateful to Danjiao Wang for her careful reading and polishing the manuscript. This work was supported by the National Natural Science Foundation of China (Nos. 11675278 and 11535014), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA10010400), and the Beijing Municipal Science and Technology Project (No. Z171100002817004). The raw data used in this study is not available for the time being, because the data has not been fully analyzed, and the results of the analysis will be gradually introduced in the recent published articles. BY designed and conducted the experiments and drafted the manuscript. YY, YY and KW prepared the thin films and performed the XRD, UV-Vis-IR and XPS measurements. SL and YH provided the technical support and advices on the work. All authors read and approved the final manuscript. State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics of Chinese Academy of Sciences, Beijing, 100049, People's Republic of China Baojun Yan, Shulin Liu, Yuekun Heng, Yuzhen Yang, Yang Yu & Kaile Wen Department of Physics, Nanjing University, Nanjing, 210093, People's Republic of China Yuzhen Yang School of Science, Xi'an University of Technology, Xi'an, 710048, People's Republic of China Yang Yu University of Chinese Academy of Sciences, Beijing, 100049, People's Republic of China Kaile Wen Baojun Yan Shulin Liu Yuekun Heng Correspondence to Baojun Yan. Yan, B., Liu, S., Heng, Y. et al. Band Offset Measurements in Atomic-Layer-Deposited Al2O3/Zn0.8Al0.2O Heterojunction Studied by X-ray Photoelectron Spectroscopy. Nanoscale Res Lett 12, 363 (2017). https://doi.org/10.1186/s11671-017-2131-8 Heterojunction
Global existence for a two-phase flow model with cross-diffusion Complete dynamical analysis for a nonlinear HTLV-I infection model with distributed delay, CTL response and immune impairment March 2020, 25(3): 935-955. doi: 10.3934/dcdsb.2019197 The global attractor for a class of extensible beams with nonlocal weak damping Chunxiang Zhao , Chunyan Zhao and Chengkui Zhong , * Corresponding author: Chengkui Zhong Received January 2019 Published March 2020 Early access September 2019 Fund Project: This work is partly supported by the NSFC (11731005, 11801228) and Postgraduate Research and Practice Innovation Program of Jiangsu Province(KYCX19-0027). The goal of this paper is to study the long-time behavior of a class of extensible beams equation with the nonlocal weak damping $ \begin{eqnarray*} u_{tt}+\Delta^2 u-m(\|\nabla u\|^2)\Delta u +\| u_t\|^{p}u_t+f(u) = h, \rm{in}\; \Omega\times\mathbb{R^{+}}, p\geq0 \end{eqnarray*} $ on a bounded smooth domain $ \Omega\subset\mathbb{R}^{n} $ with hinged (clamped) boundary condition. Under some suitable conditions on the Kirchhoff coefficient $ m(\|\nabla u\|^2) $ and the nonlinear term $ f(u) $ , the well-posedness is established by means of the monotone operator theory and the existence of a global attractor is obtained in the subcritical case, where the asymptotic smooothness of the semigroup is verified by the energy reconstruction method. Keywords: Extensible beam equation, nonlocal weak damping, global attractor, subcritical growth exponent. Mathematics Subject Classification: Primary: 35B40, 35B41; Secondary: 37L30. Citation: Chunxiang Zhao, Chunyan Zhao, Chengkui Zhong. The global attractor for a class of extensible beams with nonlocal weak damping. Discrete & Continuous Dynamical Systems - B, 2020, 25 (3) : 935-955. doi: 10.3934/dcdsb.2019197 A. C. Biazutti and H. R. Crippa, Global attractor and inertial set for the beam equation, Appl. Anal., 55 (1994), 61-78. doi: 10.1080/00036819408840290. Google Scholar E. H. de Brito, The damped elastic stretched string equation generalized: Existence, uniqueness, regularity and stability, Applicable Anal., 13 (1982), 219-233. doi: 10.1080/00036818208839392. Google Scholar J. M. Ball, Initial-boundary value problems for an extensible beam, J. Math. Anal. Appl., 42 (1973), 61-90. doi: 10.1016/0022-247X(73)90121-2. Google Scholar J. M. Ball, Stability theory for an extensible beam, J. Differential Equations, 14 (1973), 399-418. doi: 10.1016/0022-0396(73)90056-9. Google Scholar J. M. Ball, Global attractors for damped semilinear wave equations, Discrete Contin. Dyn. Syst., 10 (2004), 31-52. doi: 10.3934/dcds.2004.10.31. Google Scholar P. Biler, Remark on the decay for damped string and beam equations, Nonlinear Anal., 10 (1986), 839-842. doi: 10.1016/0362-546X(86)90071-4. Google Scholar V. Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces., Noordhoff International Publishing, Leiden, 1976,352 pp. Google Scholar M. M. Cavalcanti, V. N. Domingos Cavalcanti and J. A. Soriano, Global existence and asymptotic stability for the nonlinear and generalized damped extensible plate equation, Commun. Contemp. Math., 6 (2004), 705-731. doi: 10.1142/S0219199704001483. Google Scholar I. D. Chueshov, Introduction to the Theory of Infinite-Dimensional Dissipative Systems, AKTA, Kharkiv, 1999. 436 pp. Google Scholar I. Chueshov and I. Lasiecka, Long-time behavior of second order evolution equations with nonlinear damping, Mem. Amer. Math.Soc., 195 (2008). doi: 10.1090/memo/0912. Google Scholar I. 
Chueshov and S. Kolbasin, Long-time dynamics in plate models with strong nonlinear damping, Commun. Pure Appl. Anal., 11 (2012), 659-674. doi: 10.3934/cpaa.2012.11.659. Google Scholar M. Coti Zelati, Global and exponential attractors for the singularly perturbed extensible beam, Discrete Contin. Dyn. Syst., 25 (2009), 1041-1060. doi: 10.3934/dcds.2009.25.1041. Google Scholar R. W. Dickey, Free vibrations and dynamic buckling of the extensible beam, J.Math. Anal.Appl., 29 (1970), 443-454. doi: 10.1016/0022-247X(70)90094-6. Google Scholar A. Eden and A. J. Milani, Exponential attractor for extensible beam equations, Nonlinearity, 6 (1993), 457-479. doi: 10.1088/0951-7715/6/3/007. Google Scholar J. G. Eisley, Nonlinear vibration of beams and rectangular plates, Z. Angew. Math. Phys., 15 (1964), 167-175. doi: 10.1007/BF01602658. Google Scholar J. K. Hale, Asymptotic Behavior of Dissipative Systems, Mathematical Surveys and Monographs, 25. AMS, Providence, RI, 1988. Google Scholar H. Lange and G. Perla Menzala, Rates of decay of a nonlocal beam equation, Differential Integral Equations, 10 (1997), 1075-1092. Google Scholar J.-L. Lions, On some questions in boundary value problems in mathematical physics, in Contemporary Developments in Continuum Mechanics and Partial Differential Equations, Rio de Janeiro, North-Holland, Amsterdam-New York, 30 (1978), 284–346. Google Scholar H. M. Berger, A new approach to the analysis of large deflections of plates, J. Appl. Mech., 22 (1955), 465-472. Google Scholar T. F. Ma and V. Narciso, Global attractor for a model of extensible beam with nonlinear damping and source terms, Nonlinear Anal., 73 (2010), 3402-3412. doi: 10.1016/j.na.2010.07.023. Google Scholar L. A. Medeiros, On a new class of nonlinear wave equations, J. Math. Anal. Appl., 69 (1979), 252-262. doi: 10.1016/0022-247X(79)90192-6. Google Scholar F. J. Meng, J. Wu and C. X. Zhao, Time-dependent global attractor for extensible Berger equation, J. Math. Anal. Appl., 469 (2019), 1045-1069. doi: 10.1016/j.jmaa.2018.09.050. Google Scholar F. J. Meng, M. H. Yang and C. K. Zhong, Attractors for wave equations with nonlinear damping on time-dependent space, Discrete. Contin. Dyn. Syst. Ser. B, 21 (2016), 205-225. doi: 10.3934/dcdsb.2016.21.205. Google Scholar S. Kouémou Patcheu, On a global solution and asymptotic behaviour for the generalized damped extensible beam equation, J. Differential Equations, 135 (1997), 299-314. doi: 10.1006/jdeq.1996.3231. Google Scholar V. Pata and S. Zelik, Smooth attractors for strongly damped wave equations, Nonlinearity, 19 (2006), 1495-1506. doi: 10.1088/0951-7715/19/7/001. Google Scholar I. Perai, Multiplicity of Solutions for the $p$-Laplacian, 1997. Google Scholar M. A. Jorge Silva and V. Narciso, Long-time behavior for a plate equation with nonlocal weak damping, Differential Integral Equations, 27 (2014), 931-948. Google Scholar M. A. Jorge Silva and V. Narciso, Attractors and their properties for a class of nonlocal extensible beams, Discrete Contin.Dyn. Syst., 35 (2015), 985-1008. doi: 10.3934/dcds.2015.35.985. Google Scholar M. A. J. da Silva and V. Narciso, Long-time dynamics for a class of extensible beams with nonlocal nonlinear damping, Evol. Equ. Control Theory, 6 (2017), 437-470. doi: 10.3934/eect.2017023. Google Scholar J. Simon, Régularité de la solution d'une équation non linéaire dans ${{\mathbf{R}}^{N}}$, Journées d'Analyse Non Linéaire, Lecture Notes in Math., Springer, Berlin, 665 (1978), 205–227. Google Scholar J. 
Simon, Compact sets in the space $L^{p}(0, T;B)$, Ann. Mat. Pure Appl., 146 (1987), 65-96. doi: 10.1007/BF01762360. Google Scholar R. E. Showalter, Monotone Operators in Banach Spaces and Nonlinear Partial Differential Equations, Mathematical Surveys and Monographs, 49. AMS, Providence, RI, 1997. Google Scholar C. Y. Sun, D. M. Cao and J. Q. Duan, Non-autonomous dynamics of wave equations with nonlinear damping and critical nonlinearity, Nonlinearity, 19 (2006), 2645-2665. doi: 10.1088/0951-7715/19/11/008. Google Scholar R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, 2$^{nd}$ edition, Applied Mathematical Sciences, 68, Springer-Verlag, New York, 1997. doi: 10.1007/978-1-4612-0645-3. Google Scholar C. F. Vasconcellos and L. M. Teixeira, Existence, uniqueness and stabilization for a nonlinear plate system with nonlinear damping, Ann. Fac. Sci. Toulouse Math., 8 (1999), 173-193. doi: 10.5802/afst.928. Google Scholar S. Woinowsky-Krieger, The efect of an axial force on the vibration of hinged bars, J. Appl. Mech., 17 (1950), 35-36. Google Scholar L. Yang and X. Wang, Existence of attractors for the non-autonomous Berger equation with nonlinear damping, Electron. J. Differential Equations, (2017), 14 pp. Google Scholar L. Yang, Uniform attractor for non-autonomous plate equation with a localized damping and a critical nonlinearity, J. Math. Anal. Appl., 338 (2008), 1243-1254. doi: 10.1016/j.jmaa.2007.06.011. Google Scholar Z. J. Yang, On an extensible beam equation with nonlinear damping and source terms, J.Dierential Equations, 254 (2013), 3903-3927. doi: 10.1016/j.jde.2013.02.008. Google Scholar Yanan Li, Zhijian Yang, Fang Da. Robust attractors for a perturbed non-autonomous extensible beam equation with nonlinear nonlocal damping. Discrete & Continuous Dynamical Systems, 2019, 39 (10) : 5975-6000. doi: 10.3934/dcds.2019261 Biyue Chen, Chunxiang Zhao, Chengkui Zhong. The global attractor for the wave equation with nonlocal strong damping. Discrete & Continuous Dynamical Systems - B, 2021, 26 (12) : 6207-6228. doi: 10.3934/dcdsb.2021015 Michele Coti Zelati. Global and exponential attractors for the singularly perturbed extensible beam. Discrete & Continuous Dynamical Systems, 2009, 25 (3) : 1041-1060. doi: 10.3934/dcds.2009.25.1041 Yue Sun, Zhijian Yang. Strong attractors and their robustness for an extensible beam model with energy damping. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021175 Takayuki Niimura. Attractors and their stability with respect to rotational inertia for nonlocal extensible beam equations. Discrete & Continuous Dynamical Systems, 2020, 40 (5) : 2561-2591. doi: 10.3934/dcds.2020141 Marcio Antonio Jorge da Silva, Vando Narciso. Long-time dynamics for a class of extensible beams with nonlocal nonlinear damping*. Evolution Equations & Control Theory, 2017, 6 (3) : 437-470. doi: 10.3934/eect.2017023 Azer Khanmamedov, Sema Simsek. Existence of the global attractor for the plate equation with nonlocal nonlinearity in $ \mathbb{R} ^{n}$. Discrete & Continuous Dynamical Systems - B, 2016, 21 (1) : 151-172. doi: 10.3934/dcdsb.2016.21.151 D. Hilhorst, L. A. Peletier, A. I. Rotariu, G. Sivashinsky. Global attractor and inertial sets for a nonlocal Kuramoto-Sivashinsky equation. Discrete & Continuous Dynamical Systems, 2004, 10 (1&2) : 557-580. doi: 10.3934/dcds.2004.10.557 Tomás Caraballo, Marta Herrera-Cobos, Pedro Marín-Rubio. Global attractor for a nonlocal p-Laplacian equation without uniqueness of solution. 
Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 1801-1816. doi: 10.3934/dcdsb.2017107 Fengjuan Meng, Chengkui Zhong. Multiple equilibrium points in global attractor for the weakly damped wave equation with critical exponent. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 217-230. doi: 10.3934/dcdsb.2014.19.217 Marcio A. Jorge Silva, Vando Narciso, André Vicente. On a beam model related to flight structures with nonlocal energy damping. Discrete & Continuous Dynamical Systems - B, 2019, 24 (7) : 3281-3298. doi: 10.3934/dcdsb.2018320 Dalibor Pražák. On the dimension of the attractor for the wave equation with nonlinear damping. Communications on Pure & Applied Analysis, 2005, 4 (1) : 165-174. doi: 10.3934/cpaa.2005.4.165 Jian-Guo Liu, Jinhuan Wang. Global existence for a thin film equation with subcritical mass. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4) : 1461-1492. doi: 10.3934/dcdsb.2017070 Alessia Berti, Maria Grazia Naso. Vibrations of a damped extensible beam between two stops. Evolution Equations & Control Theory, 2013, 2 (1) : 35-54. doi: 10.3934/eect.2013.2.35 Jiayun Lin, Kenji Nishihara, Jian Zhai. Critical exponent for the semilinear wave equation with time-dependent damping. Discrete & Continuous Dynamical Systems, 2012, 32 (12) : 4307-4320. doi: 10.3934/dcds.2012.32.4307 Wenru Huo, Aimin Huang. The global attractor of the 2d Boussinesq equations with fractional Laplacian in subcritical case. Discrete & Continuous Dynamical Systems - B, 2016, 21 (8) : 2531-2550. doi: 10.3934/dcdsb.2016059 Tomás Caraballo, David Cheban. On the structure of the global attractor for non-autonomous dynamical systems with weak convergence. Communications on Pure & Applied Analysis, 2012, 11 (2) : 809-828. doi: 10.3934/cpaa.2012.11.809 Marcio Antonio Jorge da Silva, Vando Narciso. Attractors and their properties for a class of nonlocal extensible beams. Discrete & Continuous Dynamical Systems, 2015, 35 (3) : 985-1008. doi: 10.3934/dcds.2015.35.985 Sergey Zelik. Asymptotic regularity of solutions of a nonautonomous damped wave equation with a critical growth exponent. Communications on Pure & Applied Analysis, 2004, 3 (4) : 921-934. doi: 10.3934/cpaa.2004.3.921 Gui-Dong Li, Chun-Lei Tang. Existence of positive ground state solutions for Choquard equation with variable exponent growth. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 2035-2050. doi: 10.3934/dcdss.2019131 Chunxiang Zhao Chunyan Zhao Chengkui Zhong
Homeomorphism A one-to-one correspondence between two topological spaces such that the two mutually-inverse mappings defined by this correspondence are continuous. These mappings are said to be homeomorphic, or topological, mappings, and also homeomorphisms, while the spaces are said to belong to the same topological type or are said to be homeomorphic or topologically equivalent. They are isomorphic objects in the category of topological spaces and continuous mappings. A homeomorphism must not be confused with a condensation (a bijective continuous mapping); however, a condensation of a compactum onto a Hausdorff space is a homeomorphism. Examples. 1) The function $1/(e^X+1)$ establishes a homeomorphism between the real line $\mathbb{R}$ and the interval $(0,1)$; 2) a closed circle is homeomorphic to any closed convex polygon; 3) three-dimensional projective space is homeomorphic to the group of rotations of the space $\mathbb{R}^3$ around the origin and also to the space of unit tangent vectors to the sphere $S^2$; 4) all compact zero-dimensional groups with a countable base are homeomorphic to the Cantor set; 5) all infinite-dimensional separable Banach spaces, and even all Fréchet spaces, are homeomorphic; 6) a sphere and a torus are not homeomorphic. The term "homeomorphism" was introduced in 1895 by H. Poincaré [3], who applied it to (piecewise-) differentiable mappings of domains and submanifolds in $\mathbb{R}^n$; however, the concept was known earlier, e.g. to F. Klein (1872) and, in a rudimentary form, to A. Möbius (as an elementary likeness, 1863). At the beginning of the 20th century homeomorphisms began to be studied without assuming differentiability, as a result of the development of set theory and the axiomatic method. This problem, which was explicitly stated for the first time by D. Hilbert [7], forms the content of Hilbert's fifth problem. Of special importance was the discovery by L.E.J. Brouwer that $\mathbb{R}^n$ and $\mathbb{R}^m$ are not homeomorphic if $n \neq m$. This discovery restored the faith put by mathematicians in geometric intuition. This faith had been shaken by G. Cantor's result stating that $\mathbb{R}^n$ and $\mathbb{R}^m$ have the same cardinality and by the result obtained by G. Peano on the possibility of a continuous mapping from $\mathbb{R}^n$ onto $\mathbb{R}^m$, $n < m$. The concepts of a metric (or, respectively, a topological) space, introduced by M. Fréchet and F. Hausdorff, laid a firm foundation for the concept of a homeomorphism and made it possible to formulate the concepts of a topological property (a property which remains unchanged under a homeomorphism), of topological invariance, etc., and to formulate the problem of classifying topological spaces of various types up to a homeomorphism. However, when presented in this manner, the problem becomes exceedingly complicated even for very narrow classes of spaces. In addition to the classical case of two-dimensional manifolds, such a classification was given only for certain types of graphs, for two-dimensional polyhedra and for certain classes of manifolds. The general problem of classification cannot be algorithmically solved at all, since it is impossible to obtain an algorithm for distinguishing, say, manifolds of dimension larger than three. Accordingly, the classification problem is usually posed in the framework of a weaker equivalence relation, e.g. in algebraic topology using homotopy type or, alternatively, to classify spaces having a certain specified structure. 
Even so, the homeomorphism problem remains highly important. In the topology of manifolds it was only in the late 1960s that methods for studying manifolds up to a homeomorphism were developed. These studies are carried out in close connection with homotopic, topological, piecewise-linear, and smooth structures. A second problem is the topological characterization of individual spaces and classes of spaces (i.e. a specification of their characteristic topological properties, formulated in the language of general topology, cf. Topology, general and Topological invariant). This has been solved, for example, for one-dimensional manifolds, two-dimensional manifolds, Cantor sets, the Sierpiński curve, the Menger curve, pseudo-arcs, Baire spaces, etc. Spectra furnish a universal tool for the topological characterization of spaces; Aleksandrov's homeomorphism theorem was obtained using spectra [4]. The sphere and, in general, the class of locally Euclidean spaces, has been characterized by a sequence of subdivisions gradually diminishing in size [5]. A description of locally compact Hausdorff groups by means of spectra has been given [6]. Another method is to consider various algebraic structures connected with the mappings. Thus, a compact Hausdorff space is homeomorphic to the space of maximal ideals of the algebra of real functions defined on it. Many spaces are characterized by the semi-group of continuous mappings into themselves (cf. Homeomorphism group). In general topology a topological description is given of numerous classes of topological spaces. The characterization of spaces inside a given class is also of interest. Thus, it is very useful to describe a sphere as a compact manifold covered by two open cells. The problem of algorithmic identification of spaces has not been studied much. At the time of writing (1977) it has not been solved for the sphere $S^n$ where $n \ge 3$. In general, the non-homeomorphism of two topological spaces is proved by specifying a topological property displayed by only one of them (compactness, connectedness, etc.; e.g., a segment differs from a circle in that it can be divided into two by one point); the method of invariants is especially significant in this connection. Invariants are either defined in an axiomatic manner for a whole class of spaces at the same time, or else algorithmically, according to a specific representation of the space, e.g. by triangulation, by the Heegaard diagram, by decomposition into handles (cf. Handle theory), etc. The problem in the former case is to compute the invariant, while in the latter it is to prove topological invariance. An intermediate case is also possible — e.g. characteristic classes (cf. Characteristic class) of smooth manifolds were at first defined as obstructions to the construction of vector and frame fields, and later as the image of the tangent bundle under mappings of the $KO$-functor into a cohomology functor, but in neither case can the respective problems be solved by definition. Historically the first example of proving topological invariance (of the linear dimension of $\mathbb{R}^n$) was given by Brouwer in 1912. The classical method, due to Poincaré, is to begin by giving both definitions — the "computable" and the "invariant" — and then to prove that they are identical. This method proved especially useful in the theory of homology of a polyhedron. Another method is to prove that an invariant remains unchanged under elementary transformations of a representation of the space (e.g. 
subdivision by triangulations). It is completed if it is known that it is possible to obtain all the representations of a given type in this manner. Thus, the so-called "Hauptvermutung" of combinatorial topology arose in the topology of polyhedra in this connection. This method (which was also proposed by Poincaré) proved highly useful in the topology of two and three dimensions, in particular in knot theory, but it is out of use now (except for the constructive direction) not so much because the "Hauptvermutung" proved to be untrue, as because the development of category theory made it possible to give more realistic definitions, more in accordance with the subject matter, with a more accurate presentation of the problem of computation and topological invariance. Thus, the invariance of homology groups, which are defined functorially for spaces but are defined in a computable manner for complexes, follows from the comparison of the category of complexes and homotopy classes of simplicial mappings with the category of homotopy classes of continuous mappings. In this way one does not have to give a separate definition for a large category and one can extend it to a smaller category as well. (The sources of this idea are found in Brouwer's theory of degree.) The superiority of the new method was seen to be particularly evident in connection with the second definition of characteristic classes, given above, as transformations of functors. Thus, the problem of topological invariance naturally turned out to be a part of the question of the relation between the $K$-functor and its topological generalization. If two spaces are homeomorphic, then the method of spectra (and of diminishing subdivisions) is the only one of general value for the establishment of homeomorphism. On the other hand, if a classification has already been constructed the problem is solved by comparison of invariants. In practice the establishment of homeomorphism often proves to be a very difficult geometrical problem, which must be solved by employing special tools. Thus, homeomorphism of Euclidean spaces and some of their quotient spaces is established using a pseudo-isotopy. [1] D. Hilbert, S.E. Cohn-Vossen, "Anschauliche Geometrie" , Springer (1932) [2] V.G. Boltyanskii, V.A. Efremovich, "Outline of new ideas in topology" Mat. Prosveshchenie , 2 (1957) pp. 3–34 (In Russian) [3] H. Poincaré, "Oeuvres" , 2 , Gauthier-Villars (1952) [4] P.S. Aleksandrov, "Topological duality theorems. Part 2. Non-closed sets" Trudy Mat. Inst. Steklov. , 54 (1959) (In Russian) [5] O.G. Harrold, "A characterization of locally Euclidean spaces" Trans. Amer. Math. Soc. , 118 (1965) pp. 1–16 [6] L.S. Pontryagin, "Topological groups" , Princeton Univ. Press (1958) (Translated from Russian) [7] "Hilbert problems" Bull. Amer. Math. Soc. , 8 (1902) pp. 437–479 (Translated from German) Homeomorphism. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Homeomorphism&oldid=33549 This article was adapted from an original article by A.V. Chernavskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article Retrieved from "https://encyclopediaofmath.org/index.php?title=Homeomorphism&oldid=33549"
Infinite time blow-up of many solutions to a general quasilinear parabolic-elliptic Keller-Segel system DCDS-S Home Improvement of conditions for asymptotic stability in a two-species chemotaxis-competition model with signal-dependent sensitivity doi: 10.3934/dcdss.2020014 Decay in chemotaxis systems with a logistic term Monica Marras , , Stella Vernier-Piro and Giuseppe Viglialoro Università di Cagliari, Dipartimento di Matematica e Informatica, Viale Merello 92, 09123 Cagliari, Italy * Corresponding author: Monica Marras Received May 2017 Revised May 2018 Published January 2019 Full Text(HTML) This paper is concerned with a general fully parabolic Keller-Segel system, defined in a convex bounded and smooth domain $Ω$ of $\mathbb{R}^N, $ for N∈{2, 3}, with coefficients depending on the chemical concentration, perturbed by a logistic source and endowed with homogeneous Neumann boundary conditions. For each space dimension, once a suitable energy function in terms of the solution is defined, we impose proper assumptions on the data and an exponential decay of such energies is established. Keywords: Nonlinear parabolic systems, asymptotic behavior, chemotaxis, PDEs in connection with biology and other natural sciences. Mathematics Subject Classification: Primary: 35K55, 35B40; Secondary: 92C17, 35Q92. Citation: Monica Marras, Stella Vernier-Piro, Giuseppe Viglialoro. Decay in chemotaxis systems with a logistic term. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020014 N. Bellomo, A. Belloquid, Y. Tao and M. Winkler, Toward a mathematical theory of Keller-Segel model of pattern formation in biological tissues, Math. Mod. Meth. Appl. Sci., 25 (2015), 1663-1763. doi: 10.1142/S021820251550044X. Google Scholar T. Cieàlak and C. Stinner, Finite-time blowup and global-in-time unbounded solutions to a parabolic-parabolic quasilinear Keller-Segel system in higher dimensions, J. Diff. Eq., 252 (2012), 5832-5851. doi: 10.1016/j.jde.2012.01.045. Google Scholar G. H. Hardy, J. E. Littlewood and G. Polya, Inequalities, Cambridge University Press, Cambridge, 1988. Google Scholar D. Horstmann and G. Wang, Blow-up in a chemotaxis model without symmetry assumptions, European J. Appl. Math., 12 (2001), 159-177. doi: 10.1017/S0956792501004363. Google Scholar R. Kaiser and L. X. Xu, Nonlinear stability of the rotating Bé énard problem, the case Pr=1, Nonlin. Dffer. Equ. Appl., 5 (1998), 283-307. doi: 10.1007/s000300050047. Google Scholar E. F. Keller and L. A. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theoret. Biol., 26 (1970), 399-415. doi: 10.1016/0022-5193(70)90092-5. Google Scholar E. F. Keller and L. A. Segel, Model for chemotaxis, J. Theoret. Biol., 30 (1971), 225-234. doi: 10.1016/0022-5193(71)90050-6. Google Scholar M. Marras and S. Vernier-Piro, Blow up and decay bounds in quasilinear parabolic problems, Dynamical System and Diff. Equ.(DCDS), supplement (2007), 704-712. Google Scholar M. Marras and S. Vernier Piro, Blow-up phenomena in reaction-diffusion systems, Discrete and Continuous Dynamical Systems, 32 (2012), 4001-4014. doi: 10.3934/dcds.2012.32.4001. Google Scholar M. Marras, S. Vernier-Piro and G. Viglialoro, Estimates from below of blow-up time in a parabolic system with gradient term, International Journal of Pure and Applied Mathematics, 93 (2014), 297-306. Google Scholar M. Marras, S. Vernier-Piro and G. Viglialoro, Blow-up phenomena in chemotaxis systems with a source term, Mathematical Methods in the Applied Sciences, 39 (2016), 2787-2798. 
doi: 10.1002/mma.3728. Google Scholar J. D. Murray, Mathematical Biology. I: An Introduction, Springer, New York, 2002. Google Scholar J. D. Murray, Mathematical Biology. II: Spatial Models and Biomedical Applications, Springer, New York, 2003. Google Scholar L. E. Payne and B. Straughan, Decay for a Keller-Segel chemotaxis model, Studies in Applied Mathematics, 123 (2009), 337-360. doi: 10.1111/j.1467-9590.2009.00457.x. Google Scholar Y. Tao and S. Vernier Piro, Explicit lower bound for blow-up time in a fully parabolic chemotaxis system with nonlinear cross-diffusion, Journal of Mathematical Analysis and Applications, 436 (2016), 16-28. doi: 10.1016/j.jmaa.2015.11.048. Google Scholar Y. Tao and M. Winkler, Boundedness in a quasilinear parabolic-parabolic Keller-Segel system with subcritical sensitivity, J. Differential Equations, 252 (2012), 692-715. doi: 10.1016/j.jde.2011.08.019. Google Scholar G. Viglialoro, Blow-up time of a Keller-Segel-type system with Neumann and Robin boundary conditions, Differential Integral Equations, 29 (2016), 359-376. Google Scholar G. Viglialoro, Very weak global solutions to a parabolic-parabolic chemotaxis-system with logistic source, J. Math. Anal. Appl., 439 (2016), 197-212. doi: 10.1016/j.jmaa.2016.02.069. Google Scholar G. Viglialoro, Boundedness properties of very weak solutions to a fully parabolic chemotaxissystem with logistic source, Nonlinear Anal. Real World Appl., 34 (2017), 520-535. doi: 10.1016/j.nonrwa.2016.10.001. Google Scholar G. Viglialoro and T. Woolley, Eventual smoothness and asymptotic behaviour of solutions to a chemotaxis system perturbed by a logistic growth, Discrete Contin. Dyn. Syst. Ser. B, 23 (2018), 3023-3045. doi: 10.3934/dcdsb.2017199. Google Scholar M. Winkler, Boundedness in the higher-dimensional parabolic-parabolic chemotaxis system with logistic source, Comm. Part. Diff. Equ., 35 (2010), 1516-1537. doi: 10.1080/03605300903473426. Google Scholar M. Winkler, Blow-up in a higher-dimensional chemotaxis system despite logistic growth restriction, J. Math. Anal. Appl., 384 (2011), 261-272. doi: 10.1016/j.jmaa.2011.05.057. Google Scholar M. Winkler, Finite-time blow-up in higher-dimensional parabolic-parabolic Keller-Segel system, J. Math. Pures Appl., 100 (2013), 748-767. doi: 10.1016/j.matpur.2013.01.020. Google Scholar Yuanyuan Liu, Youshan Tao. Asymptotic behavior in a chemotaxis-growth system with nonlinear production of signals. Discrete & Continuous Dynamical Systems - B, 2017, 22 (2) : 465-475. doi: 10.3934/dcdsb.2017021 Giuseppe Da Prato, Arnaud Debussche. Asymptotic behavior of stochastic PDEs with random coefficients. Discrete & Continuous Dynamical Systems - A, 2010, 27 (4) : 1553-1570. doi: 10.3934/dcds.2010.27.1553 P. R. Zingano. Asymptotic behavior of the $L^1$ norm of solutions to nonlinear parabolic equations. Communications on Pure & Applied Analysis, 2004, 3 (1) : 151-159. doi: 10.3934/cpaa.2004.3.151 Marco Di Francesco, Alexander Lorz, Peter A. Markowich. Chemotaxis-fluid coupled model for swimming bacteria with nonlinear diffusion: Global existence and asymptotic behavior. Discrete & Continuous Dynamical Systems - A, 2010, 28 (4) : 1437-1453. doi: 10.3934/dcds.2010.28.1437 Igor Chueshov, Björn Schmalfuss. Qualitative behavior of a class of stochastic parabolic PDEs with dynamical boundary conditions. Discrete & Continuous Dynamical Systems - A, 2007, 18 (2&3) : 315-338. doi: 10.3934/dcds.2007.18.315 Judith R. Miller, Huihui Zeng. 
Stability of traveling waves for systems of nonlinear integral recursions in spatial population biology. Discrete & Continuous Dynamical Systems - B, 2011, 16 (3) : 895-925. doi: 10.3934/dcdsb.2011.16.895 M. Grasselli, V. Pata. Asymptotic behavior of a parabolic-hyperbolic system. Communications on Pure & Applied Analysis, 2004, 3 (4) : 849-881. doi: 10.3934/cpaa.2004.3.849 Minkyu Kwak, Kyong Yu. The asymptotic behavior of solutions of a semilinear parabolic equation. Discrete & Continuous Dynamical Systems - A, 1996, 2 (4) : 483-496. doi: 10.3934/dcds.1996.2.483 Shota Sato, Eiji Yanagida. Asymptotic behavior of singular solutions for a semilinear parabolic equation. Discrete & Continuous Dynamical Systems - A, 2012, 32 (11) : 4027-4043. doi: 10.3934/dcds.2012.32.4027 Chunpeng Wang. Boundary behavior and asymptotic behavior of solutions to a class of parabolic equations with boundary degeneracy. Discrete & Continuous Dynamical Systems - A, 2016, 36 (2) : 1041-1060. doi: 10.3934/dcds.2016.36.1041 Eugene Kashdan, Dominique Duncan, Andrew Parnell, Heinz Schättler. Mathematical methods in systems biology. Mathematical Biosciences & Engineering, 2016, 13 (6) : i-ii. doi: 10.3934/mbe.201606i F. R. Guarguaglini, R. Natalini. Global existence and uniqueness of solutions for multidimensional weakly parabolic systems arising in chemistry and biology. Communications on Pure & Applied Analysis, 2007, 6 (1) : 287-309. doi: 10.3934/cpaa.2007.6.287 M. L. Bertotti, Sergey V. Bolotin. Chaotic trajectories for natural systems on a torus. Discrete & Continuous Dynamical Systems - A, 2003, 9 (5) : 1343-1357. doi: 10.3934/dcds.2003.9.1343 Lie Zheng. Asymptotic behavior of solutions to the nonlinear breakage equations. Communications on Pure & Applied Analysis, 2005, 4 (2) : 463-473. doi: 10.3934/cpaa.2005.4.463 Yongqin Liu. Asymptotic behavior of solutions to a nonlinear plate equation with memory. Communications on Pure & Applied Analysis, 2017, 16 (2) : 533-556. doi: 10.3934/cpaa.2017027 Irena Lasiecka, W. Heyman. Asymptotic behavior of solutions in nonlinear dynamic elasticity. Discrete & Continuous Dynamical Systems - A, 1995, 1 (2) : 237-252. doi: 10.3934/dcds.1995.1.237 Youshan Tao, Lihe Wang, Zhi-An Wang. Large-time behavior of a parabolic-parabolic chemotaxis model with logarithmic sensitivity in one dimension. Discrete & Continuous Dynamical Systems - B, 2013, 18 (3) : 821-845. doi: 10.3934/dcdsb.2013.18.821 Alexander Komech. Attractors of Hamilton nonlinear PDEs. Discrete & Continuous Dynamical Systems - A, 2016, 36 (11) : 6201-6256. doi: 10.3934/dcds.2016071 Alexandre Montaru. Wellposedness and regularity for a degenerate parabolic equation arising in a model of chemotaxis with nonlinear sensitivity. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 231-256. doi: 10.3934/dcdsb.2014.19.231 Tian Xiang. Dynamics in a parabolic-elliptic chemotaxis system with growth source and nonlinear secretion. Communications on Pure & Applied Analysis, 2019, 18 (1) : 255-284. doi: 10.3934/cpaa.2019014 HTML views (448) Monica Marras Stella Vernier-Piro Giuseppe Viglialoro Article outline
The protocol of a prospective, multicenter, randomized, controlled phase III study evaluating different cycles of oxaliplatin combined with S-1 (SOX) as neoadjuvant chemotherapy for patients with locally advanced gastric cancer: RESONANCE-II trial Xinxin Wang1, Shuo Li1, Yihong Sun2, Kai Li3, Xian Shen4, Yingwei Xue5, Pin Liang6, Guoli Li7, Luchuan Chen8, Qun Zhao9, Guoxin Li10, Weihua Fu11, Han Liang12, Hairong Xin13, Jian Suo14, Xuedong Fang15, Zhichao Zheng16, Zekuan Xu17, Huanqiu Chen18, Yanbing Zhou19, Yulong He20, Hua Huang21, Linghua Zhu22, Kun Yang23, Jiafu Ji24, Yingjiang Ye25, Zhongtao Zhang26, Fei Li27, Xin Wang28, Yantao Tian29, Sungsoo Park30 & Lin Chen1 Curing locally advanced gastric cancer through surgery alone is difficult. Adjuvant and neoadjuvant chemotherapy bring potential benefits to more patients with gastric cancer based on several clinical trials. According to phase II studies and guidelines, SOX regimen as neoadjuvant chemotherapy is efficient. However, the optimal duration of neoadjuvant chemotherapy has not been established. In this study, we will evaluate the efficacy and safety of different cycles of SOX as neoadjuvant chemotherapy for patients with locally advanced gastric cancer. RESONANCE-II trial is a prospective, multicenter, randomized, controlled phase III study which will enroll 524 patients in total. Eligible patients will be registered, pre-enrolled and receive three cycles of SOX, after which tumor response evaluations will be carried out. Those who show stable disease or progressive disease will be excluded. Patients showing complete response or partial response will be enrolled and assigned into either group A for another three cycles of SOX (six cycles in total) followed by D2 surgery; or group B for D2 surgery (three cycles in total). The primary endpoint is the rate of pathological complete response and the secondary endpoints are R0 resection rate, three-year disease-free survival, five-year overall survival, and safety. This study is the first phase III randomized trial to compare the cycles of neoadjuvant chemotherapy using SOX for resectable locally advanced cancer. Based on a total of six to eight cycles of perioperative chemotherapy usually applied in locally advanced gastric cancer, patients in group A can be considered to have completed all perioperative chemotherapy, the results of which may suggest the feasibility of using chemotherapy only before surgery in gastric cancer. Registered prospectively in the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) with registration number ChiCTR1900023293 on May 21st, 2019. Gastric cancer remains the third leading cause of malignant tumor death both in China and worldwide [1, 2]. The overall five-year survival rate is about 20% [3]. At present radical gastrectomy is regarded as the only approach to curing gastric cancer. However, about 30% of patients with gastric cancer have recurrence after receiving radical gastrectomy [4, 5]. Thus it is difficult to achieve a cure for gastric cancer through surgery alone. Several randomized controlled trials have evaluated the efficacy of adjuvant chemotherapy, that is, postoperative chemotherapy, for gastric cancer recently. The ACTS-GC phase III trial enrolled 1059 patients with stage II or III gastric cancer at 109 centers throughout Japan. They were assigned randomly to either the S-1 group for receiving S-1 as adjuvant chemotherapy for 1 year postoperatively or the surgery-only group for receiving surgery alone. 
The five-year overall survival rates in the S-1 group and surgery-only group were 71.7 and 61.1% respectively, and the five-year relapse-free survival rates were 65.4 and 53.1% respectively. S-1 reduced the risk of death by 33.1% and of relapse by 34.7% [6]. The CLASSIC trial enrolled 1035 patients at 35 cancer centers in South Korea, China Mainland and Taiwan. Patients with stage II-IIIB gastric cancer who underwent D2 radical gastrectomy were assigned randomly to adjuvant chemotherapy with eight cycles of capecitabine and oxaliplatin (XELOX) or observation alone. The results showed that 27% of patients in the adjuvant capecitabine and oxaliplatin group and 39% in the observation alone group had recurrence or died (P< 0.0001) and the estimated five-year disease-free survival rates were 68 and 53% respectively [7]. These two trials from Asia confirmed that adjuvant chemotherapy using S-1 or XELOX could improve survival in patients with gastric cancer who had received D2 gastrectomy. Unlike in Asia, in western countries, neoadjuvant chemotherapy, that is, perioperative chemotherapy, received widespread interest. The MAGIC trial was the first study to confirm the efficacy of neoadjuvant chemotherapy in gastric cancer treatment. A total of 503 patients with resectable adenocarcinoma of the stomach, esophagogastric junction or lower esophagus were enrolled and then assigned to either the perioperative-chemotherapy group or surgery group. Three cycles of preoperative and three cycles of postoperative chemotherapy using epirubicin, cisplatin and fluorouracil (ECF) were administrated to patients only in the perioperative-chemotherapy group, which resulted in a higher five-year survival rate than that of the surgery group (36% vs 23%, P=0.009) [8]. Afterwards, the FNCLCC/FFCD 9703 phase III trial was conducted in France to investigate the benefit of perioperative fluorouracil plus cisplatin (CF) in resectable gastroesophageal adenocarcinoma. Two hundred twenty-four patients were randomly assigned to either the perioperative chemotherapy and surgery group (CS group) medicated with two or three cycles of CF before surgery and three or four cycles after surgery, or the surgery alone group (S group) for receiving surgery only. The results showed that the CS group had a better five-year overall survival (38% vs 24%, P=0.02) and five-year disease-free survival (34% vs 19%, P=0.003) and that the perioperative chemotherapy significantly improved the curative resection rate (84% vs 73%, P=0.04) [9]. However, the completion rate of perioperative chemotherapy in these two trials was relatively low. Later, the FLOT4 trial altered the situation. It aimed to compare the safety and efficacy of FLOT (fluorouracil plus leucovorin, oxaliplatin and docetaxel) with that of ECF/ECX (epirubicin, cisplatin and fluorouracil or capecitabine). Seven hundred sixteen patients were enrolled and randomly assigned to treatment in 38 German hospitals. Results showed that the median overall survival and chemotherapy completion rate were increased in the FLOT group (50 months vs 35 months, P=0.012; 46% vs 37%), which indicated that FLOT could be preferred compared to ECF/ECX [10]. Although all trials mentioned above had positive findings, the EORTC 40954 study, which was conducted in Europe, showed no survival benefit for neoadjuvant chemotherapy. This trial was for the purpose of proving the superiority of preoperative chemotherapy using cisplatin, d-L-folinic acid and fluorouracil. 
Compared with the surgery-only group, the neoadjuvant group had a higher R0 resection rate (81.9% vs 66.7%, P=0.036) and fewer lymph node metastases (61.4% vs 76.5%, P=0.018). A survival benefit was not demonstrated after a median follow-up of 4.4 years and 67 deaths (HR 0.84, 95% CI 0.52 to 1.35, P=0.466). Notably, because of poor accrual, the trial was terminated early and only 144 patients were randomly assigned to the neoadjuvant arm or the surgery-alone arm (72:72), instead of the estimated total of 360 patients. The low statistical power, the high rate of proximal gastric cancer and better-than-expected outcomes attributable to the high quality of surgery were possible reasons for the absence of a survival benefit [11]. Recently, S-1 plus oxaliplatin (SOX) as neoadjuvant chemotherapy showed relatively high efficacy and safety in several studies [12,13,14]. A phase II trial conducted by our medical center demonstrated that neoadjuvant chemotherapy with the SOX regimen yielded an overall response rate of 68.8% and a disease control rate of 93.8%. The D2 lymph node dissection rate and the R0 resection rate in the neoadjuvant chemotherapy group were both higher than those in the surgery-only group [15]. Furthermore, fluoropyrimidine plus oxaliplatin is recommended in the National Comprehensive Cancer Network (NCCN) Guidelines Version 1.2020 Gastric Cancer and in the Chinese Society of Clinical Oncology (CSCO) guidelines for the diagnosis and treatment of gastric cancer (2019) [16, 17]. Therefore, SOX is considered to have good application prospects. It should be noted that the number of cycles of preoperative chemotherapy applied in these studies is not standardized, and two to four cycles seemed acceptable based on previous studies [8,9,10]. Thus, the optimal duration of neoadjuvant therapy remains to be established. A randomized phase II trial conducted in Japan enrolled 83 patients with stage III and IV resectable gastric cancer. They were randomly assigned to two courses of SC (S-1 plus cisplatin), four courses of SC, two courses of PC (paclitaxel plus cisplatin) or four courses of PC. The study found no significant differences between the two regimens (P=0.956) or between the two- and four-course treatments (P=0.723). However, the limitations of the study are the relatively small sample size and the inclusion of some stage IV patients, which could have led to the negative result [18]. Based on these findings, the randomized phase III RESONANCE-II trial was designed to evaluate the efficacy and safety of different cycles of SOX as neoadjuvant chemotherapy for patients with locally advanced gastric cancer, so as to provide a theoretical basis for setting an optimal duration of neoadjuvant chemotherapy. The RESONANCE-II trial is a prospective, multicenter, randomized, controlled phase III study which will be conducted in 29 medical centers in China and one medical center in Korea. Chinese PLA General Hospital is the lead center. Lin Chen is the trial principal investigator (PI). All screened patients should state that they feel well and have no discomfort, and then they will sign the informed consent form. After signing the informed consent, they will receive consultation from the investigator and undergo examinations. Eligible patients will be registered and pre-enrolled. The main possible reasons for screening failure include: (1) the results of physical examination or laboratory tests do not meet the inclusion criteria or meet the exclusion criteria; (2) patients withdraw informed consent or refuse to participate in the research.
Pre-enrolled patients will receive three cycles of SOX. Then, tumor response evaluation will be carried out according to the Response Evaluation Criteria for Solid Tumors (RECIST) 1.1 [19]. Those who demonstrate stable disease (SD) or progressive disease (PD) will be excluded. Patients achieving complete response (CR) or partial response (PR) will be enrolled and assigned to either group A (six cycles of neoadjuvant chemotherapy with SOX), which receives another three cycles of SOX followed by D2 surgery, or group B (three cycles of neoadjuvant chemotherapy with SOX), which proceeds to D2 surgery. Figure 1 shows the study flow chart. The trial has been approved by the ethics committee of the Chinese PLA General Hospital, Beijing, China, and registered prospectively in the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) with registration number ChiCTR1900023293 on May 21st, 2019. All patients are required to sign the written informed consent. Primary endpoint: pathological complete response rate (pCR%) based on Ryan's 0–3 pathological tumor regression grade (TRG) system [20]. Secondary endpoints: R0 resection rate, three-year disease-free survival (3-y DFS), five-year overall survival (5-y OS) and safety. The study started in October 2019. Local ethics committee approvals are expected in October 2020, enrollment of the first patient in November 2020, the last patient completing the research in November 2027, database lock in June 2027 and publication in June 2028. Patient eligibility Inclusion criteria: (1) non-bedridden, aged 18 to 70 years; (2) Eastern Cooperative Oncology Group (ECOG) score of 0 to 1; (3) histologically confirmed gastric adenocarcinoma; (4) evaluable lesions based on RECIST 1.1; (5) stage III (cT3-4a N1-3 M0, American Joint Committee on Cancer (AJCC) TNM staging system, 8th edition) gastric cancer confirmed by enhanced computed tomography (enhanced CT) and laparoscopic exploration (endoscopic ultrasonography (EUS) and magnetic resonance imaging (MRI) if necessary); (6) the research center and the surgeon are able to complete a standard D2 radical gastrectomy, and the gastrectomy can be tolerated by the patient; (7) laboratory test criteria: peripheral blood hemoglobin (Hb) ≥ 90 g/L, absolute neutrophil count ≥ 3 × 10⁹/L, platelet count (PLT) ≥ 100 × 10⁹/L, alanine aminotransferase (ALT) and aspartate aminotransferase (AST) ≤ 2.5 times the upper limit of normal (ULN), total bilirubin ≤ 1.5×ULN, serum creatinine (SCr) ≤ 1.5×ULN, and serum albumin (ALB) ≥ 30 g/L; (8) patients with heart disease are acceptable if echocardiography shows a left ventricular ejection fraction ≥ 50%, the electrocardiogram (ECG) obtained within 4 weeks before operation is essentially normal, and there are no obvious symptoms; (9) no serious underlying disease that would limit life expectancy to less than 5 years; (10) willing to sign the informed consent for participation and publication of results. Exclusion criteria: (1) pregnant or lactating women; (2) a positive pregnancy test in women of childbearing age
(menopausal women without menstruation for at least 12 months can be regarded as having no possibility of pregnancy); (3) refusal of birth control during the study; (4) prior chemotherapy, radiotherapy or immunotherapy; (5) history of other malignant diseases in the last 5 years (except for cervical carcinoma in situ); (6) history of uncontrolled central nervous system diseases, which could influence compliance; (7) history of severe liver disease (Child-Pugh class C), renal disease (endogenous creatinine clearance rate (Ccr) ≤ 50 ml/min or SCr > 1.5×ULN) or respiratory disease; (8) uncontrolled diabetes and hypertension; (9) clinically severe heart disease, such as symptomatic coronary heart disease, New York Heart Association (NYHA) class II or more severe congestive heart failure, uncontrolled arrhythmia requiring drug intervention, or a history of myocardial infarction in the last 6 months; (10) history of dysphagia, complete or partial gastrointestinal obstruction, active gastrointestinal bleeding or gastrointestinal perforation; (11) on steroid treatment after organ transplant; (12) uncontrolled severe infections; (13) known dihydropyrimidine dehydrogenase (DPD) deficiency; (14) anaphylaxis to any research drug ingredient; (15) known peripheral neuropathy (> NCI-CTC AE grade 1); patients with only loss of the deep tendon reflexes need not be excluded. All eligible patients will be registered, pre-enrolled and receive three cycles of SOX. After tumor response evaluation using RECIST 1.1, patients achieving CR or PR will be enrolled [19]. Randomization without stratification will then be carried out by computer-generated allocation using IBM SPSS Statistics 22, and enrolled patients will be randomly assigned (1:1) to Arm A or Arm B (an illustrative sketch of this screening, enrollment and allocation logic is given below). Arm A: patients will be pre-enrolled and receive three cycles of SOX. After randomization, patients in Arm A will receive three more cycles of SOX (six cycles of neoadjuvant chemotherapy with SOX in total) followed by D2 gastrectomy, after which postoperative chemotherapy will not be recommended. Arm B: patients will be pre-enrolled and receive three cycles of SOX. After randomization, patients in Arm B will receive D2 gastrectomy (three cycles of neoadjuvant chemotherapy with SOX in total), after which postoperative chemotherapy using SOX will be recommended. Laparoscopic exploration Laparoscopic exploration is performed to detect occult peritoneal metastases and to inspect the primary lesion, liver, diaphragm, pelvic organs, bowel and omentum according to the standard requirements reported previously [21]. Patients with any findings suggestive of distant metastasis (M1) will be excluded from the trial and referred to a multi-disciplinary team (MDT) for further treatment. A standard D2 radical open or laparoscopic gastrectomy will be planned 3–4 weeks after the last cycle of chemotherapy. Laparoscopic gastrectomy will be recommended. The extent of gastric resection and lymphadenectomy will be determined as per the treatment guidelines [22]. Reconstruction after gastrectomy will be decided by the surgeon. All operations will be performed by well-trained and experienced surgical teams to guarantee the quality of surgery, including the harvesting of more than 16 lymph nodes. The surgeon will determine whether the D2 lymphadenectomy is complete, and photographs of the surgical field after gastrectomy will be monitored and reviewed centrally.
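To make the patient flow described above easier to follow, the sketch below re-expresses the screening, enrollment and allocation rules in Python. It is illustrative only: the function and variable names are hypothetical, the laboratory thresholds are those quoted in the inclusion criteria, and the actual allocation list for the trial is generated with IBM SPSS Statistics 22 rather than with this code.

import random

def meets_lab_criteria(labs):
    """Laboratory inclusion thresholds quoted in the eligibility criteria.
    ALT, AST, total bilirubin and creatinine are expressed as multiples of
    the upper limit of normal (ULN)."""
    return (labs["hb_g_per_l"] >= 90
            and labs["anc_per_l"] >= 3e9        # >= 3 x 10^9/L
            and labs["plt_per_l"] >= 100e9      # >= 100 x 10^9/L
            and labs["alt_x_uln"] <= 2.5
            and labs["ast_x_uln"] <= 2.5
            and labs["tbil_x_uln"] <= 1.5
            and labs["scr_x_uln"] <= 1.5
            and labs["alb_g_per_l"] >= 30)

def enrollment_decision(recist_response):
    """Pre-enrolled patients are enrolled only if the RECIST 1.1 response after
    three cycles of SOX is CR or PR; SD or PD leads to exclusion."""
    return "enrol" if recist_response in ("CR", "PR") else "exclude"

def allocation_list(n_per_arm, seed=None):
    """Unstratified 1:1 allocation: a random permutation of equal numbers of
    Arm A and Arm B assignments."""
    rng = random.Random(seed)
    arms = ["A"] * n_per_arm + ["B"] * n_per_arm
    rng.shuffle(arms)
    return arms

print(enrollment_decision("PR"))             # -> enrol
print(allocation_list(262, seed=2019)[:10])  # first ten assignments of a 262-per-arm list

Run as a script, the last two lines print an example enrollment decision and the first ten assignments of a 1:1 list sized to the planned 262 patients per arm.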
Neoadjuvant chemotherapy The preoperative SOX chemotherapy consists of three-week cycles of intravenously administered oxaliplatin 130 mg/m² on day 1 and orally administered S-1 40–60 mg twice a day (BID) on days 1 to 14. The dose of S-1 depends on body surface area (BSA): 40 mg BID for BSA < 1.25 m²; 50 mg BID for 1.25 m² < BSA < 1.5 m²; 60 mg BID for BSA > 1.5 m². Days 15 to 21 are the rest period. Tumor response and toxicity criteria Lesions will be evaluated according to the RECIST 1.1 criteria after the third and the sixth cycle of SOX by enhanced CT, with EUS and MRI as needed [19]. Toxicities are graded according to the National Cancer Institute Common Toxicity Criteria for Adverse Events (NCI-CTC AE), version 4.0. AEs will be recorded in the AE report form. Serious adverse events (SAE) are defined according to the rules of good clinical practice (GCP) and will be reported to the lead center within one working day, after which the other centers will be notified promptly. After treatment, patients will be examined every 6 months for 5 years, then every 12 months for life. The follow-up will include enhanced CT/MRI, abdominal ultrasound, chest X-ray, and physical and laboratory examination, according to the schedule (Table 1). Table 1 Follow-up schedule Sample size calculation The primary endpoint of this trial is the pCR rate. Based on data from a previous study, the pCR rate after three cycles of SOX is about 7% (p0), and we estimate that with six cycles of SOX the pCR rate can be increased to 15% (p). A Z test is used, with null and alternative hypotheses H0: p=p0 and H1: p≠p0. With a statistical power of 0.8 (β=0.2) and a type I error rate of 0.05 (α=0.05), H0 will be rejected in favor of H1 when Z exceeds the critical value. Using the following formula, with a 1:1 allocation ratio and a drop-out rate of 10%, the sample size is set at 262 per arm [23] (a worked sketch of this formula and of the S-1 dose rule is given below). $$ n = p\left(1-p\right)\left(\frac{z_{1-\alpha/2}+z_{1-\beta}}{p-p_0}\right)^2 $$ Each participating center treats 400 to 1000 gastric cancer cases per year, of which about 2% can be enrolled; recruitment can therefore be completed in 2 years. The pCR rate is defined as the rate of patients achieving pCR. The R0 resection rate is defined as the rate of R0 resection. DFS is defined as the period from the time of surgery to recurrence or death. OS is defined as the period from the time of surgery to death or last follow-up. The chi-square test will be used to compare patients' characteristics, the pCR rate and the R0 resection rate between the two arms. Survival curves will be estimated by the Kaplan-Meier method. DFS and OS will each be compared between the two arms using a two-sided log-rank test. Both an intention-to-treat analysis (ITT, patients who receive any treatment after enrollment) and a per-protocol analysis (PP, patients who receive all preoperative chemotherapy and surgery) will be performed. Missing data will be processed using multiple imputation. For patients lost to follow-up, ITT and sensitivity analyses will be performed. Management and quality control Each participating surgical team performs over 200 gastrectomies with D2 lymphadenectomy. Each researcher has over 20 years of professional experience. The role of the clinical research organization (CRO) is filled by Beijing Sinocro PharmaScience Co., Ltd.
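The BSA-based S-1 dose rule and the printed sample-size formula can be evaluated with the short sketch below. The helper names are hypothetical; the handling of a BSA of exactly 1.25 or 1.5 m² is an assumption, since the protocol states strict inequalities; and the quoted figure of 262 patients per arm follows the sample-size conventions of the cited reference [23] together with the 10% drop-out allowance, so the raw value of the one-sample formula shown here should not, on its own, be read as the final enrollment target.

from math import ceil
from statistics import NormalDist

def s1_dose_mg_bid(bsa_m2):
    """S-1 dose (mg, twice daily) by body surface area, as specified above.
    Boundary values (exactly 1.25 or 1.5 m^2) are assigned to the higher dose
    band here; this boundary handling is an assumption."""
    if bsa_m2 < 1.25:
        return 40
    if bsa_m2 < 1.5:
        return 50
    return 60

def n_from_printed_formula(p0, p, alpha=0.05, beta=0.2):
    """Evaluates n = p(1-p) * ((z_{1-alpha/2} + z_{1-beta}) / (p - p0))^2,
    the formula printed in the protocol, before any drop-out allowance."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(1 - beta)
    return p * (1 - p) * ((z_a + z_b) / (p - p0)) ** 2

def inflate_for_dropout(n, dropout=0.10):
    """Apply the 10% drop-out allowance mentioned in the protocol."""
    return ceil(n / (1 - dropout))

print(s1_dose_mg_bid(1.4))                        # -> 50 mg BID
raw = n_from_printed_formula(p0=0.07, p=0.15)     # alpha = 0.05, power = 0.8
print(round(raw), inflate_for_dropout(raw))

Varying p0, p, α and β in this sketch shows how sensitive the required sample size is to the assumed improvement in the pCR rate.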
The lead center and the CRO will organize periodic training in the standard operating procedure (SOP) for gastroscopy, enhanced CT, EUS, MRI, pathological examination, staging and response evaluation. The local PI will be responsible for staging, observation of D2 completion and identification of eligible patients. To address interrater disparities, three researchers will perform the evaluation. Central review of these observations and evaluations will be conducted. The CRO will appoint and authorize clinical research associates (CRA) as monitors throughout the trial according to the SOP. CRAs are responsible for regularly monitoring case report forms (CRF), ensuring compliance with the protocol and GCP regulations, and checking the completeness, consistency and accuracy of the data. Researchers will work with the CRAs to ensure that any problems are resolved. Through the Electronic Data Capture system (EDC), the lead center will have the right to obtain all data and report the results. Other participating centers will only have the right to obtain the data from their own centers. The test drugs will be purchased by the patients at their own expense according to medical orders, and the pharmacies of the participating centers will be responsible for distributing them. The patients will take the medicine and record this in the patient diaries, which will be regularly monitored by the CRAs. The National Medical Products Administration of China and the Human Genetic Resource Administration of China will inspect the trial and its fulfillment of legal requirements. The RESONANCE-II study was designed to evaluate the efficacy and safety of different cycles of SOX as neoadjuvant chemotherapy for patients with locally advanced gastric cancer. The primary endpoint of the study is the rate of pathological complete response. A previous study evaluated histopathological tumor regression in 480 gastric cancer patients who received surgery after neoadjuvant chemotherapy. The results showed that complete or subtotal tumor regression was an independent prognostic factor for survival, suggesting that pathological complete regression can be regarded as an appropriate endpoint for neoadjuvant trials in gastric cancer [24]. Moreover, our study aims to evaluate the efficacy of preoperative chemotherapy. Using the pCR rate as the primary endpoint instead of survival reduces the influence of surgery and postoperative chemotherapy on the endpoint. In addition, less time will be required to obtain the primary results. Therefore, we chose the pCR rate as the primary endpoint. However, further research is still needed to thoroughly characterize the relation between tumor regression and survival. In our study, eligible patients will be pre-enrolled and receive 3 cycles of SOX. Those who achieve CR or PR will be enrolled and considered to have cancer that is sensitive to SOX chemotherapy. Patients who achieve SD or PD will be considered to have tumors that are not sensitive to SOX chemotherapy and will be excluded. The purpose of this design is to answer the question of whether more cycles of preoperative chemotherapy should be given when short-term treatment is effective. When preoperative chemotherapy shows little effectiveness, other treatments will be recommended instead of continuing chemotherapy. In addition, pre-enrollment can ensure the completion of chemotherapy after randomization and reduce early termination of chemotherapy due to poor efficacy. Therefore, in our opinion, the design is appropriate.
The total number of cycles of preoperative and postoperative chemotherapy in most trials is about six to eight [8,9,10]. In this study, patients in group A who receive 6 cycles of neoadjuvant chemotherapy can be considered to have completed all perioperative chemotherapy. It will be recommended that these patients do not receive adjuvant chemotherapy but instead undergo close monitoring after surgery if there is no evidence of disease progression. The results from group A may suggest the feasibility of applying only preoperative chemotherapy in gastric cancer. To avoid bias, uniform inclusion and exclusion criteria will be applied and randomization will be performed. To prevent loss to follow-up, investigators will keep timely and effective contact with patients and provide a user-friendly mobile phone application for patients to contact investigators and record medications. Radiologists, endoscopists and pathologists will not participate in any process related to grouping or intervention, and they will not have access to any chemotherapy-related or surgery-related records. SOPs will be defined and used with sufficient training. The CRO will monitor throughout the project. To the best of our knowledge, this study is the first phase III randomized trial to compare the number of cycles of neoadjuvant chemotherapy using SOX for resectable locally advanced gastric cancer. Due to the relatively high efficacy and safety of SOX neoadjuvant chemotherapy, it can be widely used for the treatment of resectable locally advanced gastric cancer. We hope that the results of this trial can provide a theoretical basis for setting an optimal duration of neoadjuvant chemotherapy. The data and materials of the study will be made available on request. Abbreviations SOX: S-1 combined with oxaliplatin; WHO ICTRP: World Health Organization International Clinical Trials Registry Platform; XELOX: Capecitabine and oxaliplatin; ECF: Epirubicin, cisplatin and fluorouracil; CF: Fluorouracil plus cisplatin; FLOT: Fluorouracil plus leucovorin, oxaliplatin and docetaxel; ECF/ECX: Epirubicin, cisplatin and fluorouracil or capecitabine; SC: S-1 plus cisplatin; PC: Paclitaxel plus cisplatin; RECIST: Response Evaluation Criteria for Solid Tumors; SD: Stable disease; PD: Progressive disease; CR: Complete response; PR: Partial response; pCR%: Rate of pathological complete response; 3-y DFS: Three-year disease-free survival; 5-y OS: Five-year overall survival; ECOG: Eastern Cooperative Oncology Group; CT: Computed tomography; EUS: Endoscopic ultrasonography; MRI: Magnetic resonance imaging; PLT: Platelet count; ALT: Alanine aminotransferase; AST: Aspartate aminotransferase; SCr: Serum creatinine; ALB: Serum albumin; ULN: Upper limit of normal; Ccr: Creatinine clearance rate; ECG: Electrocardiogram; DPD: Dihydropyrimidine dehydrogenase; BID: Bis in die (twice a day); BSA: Body surface area; NYHA: New York Heart Association; NCI-CTC AE: National Cancer Institute Common Toxicity Criteria for Adverse Events; GCP: Good clinical practice; ITT: Intention-to-treat; PP: Per-protocol; NCCN: National Comprehensive Cancer Network; CSCO: Chinese Society of Clinical Oncology; TRG: Tumor regression grade; CRO: Clinical research organization; SOP: Standard operating procedure; CRA: Clinical research associate; CRF: Case report form; EDC: Electronic data capture. References Bray F, Ferlay J, Soerjomataram I, et al. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68(6):394–424. Chen W, Sun K, Zheng R, et al. Cancer incidence and mortality in China, 2014. Chin J Cancer Res. 2018;30(1):1–12. Chun N, Ford JM. Genetic testing by cancer site: stomach. Cancer J. 2012;18(4):355–63. Nakagawa N, Kanda M, Ito S, et al.
Pathological tumor infiltrative pattern and sites of initial recurrence in stage II/III gastric cancer: propensity score matching analysis of a multi-institutional dataset. Cancer Med. 2018;7(12):6020–9. Kim JH, Lee HH, Seo HS, et al. Borrmann type 1 Cancer is associated with a high recurrence rate in locally advanced gastric Cancer. Ann Surg Oncol. 2018;25(7):2044–52. Sasako M, Sakuramoto S, Katai H, et al. Five-year outcomes of a randomized phase III trial comparing adjuvant chemotherapy with S-1 versus surgery alone in stage II or III gastric cancer. J Clin Oncol. 2011;29:4387–93. Bang YJ, Kim YW, Yang HK, et al. Adjuvant capecitabine and oxaliplatin for gastric cancer after D2 gastrectomy (CLASSIC): a phase 3 open-label, randomised controlled trial. Lancet. 2012;379:315–21. Cunningham D, Allum WH, Stenning SP, et al. Perioperative chemotherapy versus surgery alone for resectable gastroesophageal cancer. N Engl J Med. 2006;355:11–20. Ychou M, Boige V, Pignon JP, et al. Perioperative chemotherapy compared with surgery alone for resectable gastroesophageal adenocarcinoma: an FNCLCC and FFCD multicenter phase III trial. J Clin Oncol. 2011;29:1715–21. Al-Batran SE, Homann N, Pauligk C, et al. Perioperative chemotherapy with fluorouracil plus leucovorin, oxaliplatin, and docetaxel versus fluorouracil or capecitabine plus cisplatin and epirubicin for locally advanced, resectable gastric or gastro-oesophageal junction adenocarcinoma (FLOT4): a randomised, phase 2/3 trial. Lancet. 2019;393(10184):1948–57. Schuhmacher C, Gretschel S, Lordick F, et al. Neoadjuvant chemotherapy compared with surgery alone for locally advanced cancer of the stomach and cardia: European Organisation for Research and Treatment of Cancer randomized trial 40954. J Clin Oncol. 2010;28(35):5210–8. Oh SY, Kwon HC, Jeong SH, et al. A phase II study of S-1 and oxaliplatin (SOx) combination chemotherapy as a first-line therapy for patients with advanced gastric cancer. Investig New Drugs. 2012;30:350–6. Koizumi W, Takiuchi H, Yamada Y, et al. Phase II study of oxaliplatin plus S-1 as first-line treatment for advanced gastric cancer (G-SOX study). Ann Oncol. 2010;21:1001–5. Xiao C, Qian J, Zheng Y, et al. A phase II study of biweekly oxaliplatin plus S-1 combination chemotherapy as a first-line treatment for patients with metastatic or advanced gastric cancer in China. Medicine (Baltimore). 2019;98(20):e15696. Li T, Chen L. Efficacy and safety of SOX regimen as neoadjuvant chemotherapy for advanced gastric cancer. Zhonghua Wei Chang Wai Ke Za Zhi. 2011;14:104–6. NCCN Clinical Practice Guidelines in Oncology Gastric Cancer Version 1. 2020. https://www.nccn.org/professionals/physician_gls/pdf/gastric.pdf. Accessed 10th June 2020. Wang FH, Shen L, Li J, et al. The Chinese Society of Clinical Oncology (CSCO): clinical guidelines for the diagnosis and treatment of gastric cancer. Cancer Commun (Lond). 2019;39(1):10. Yoshikawa T, Morita S, Tanabe K, et al. Survival results of a randomised two-by-two factorial phase II trial comparing neoadjuvant chemotherapy with two and four courses of S-1 plus cisplatin (SC) and paclitaxel plus cisplatin (PC) followed by D2 gastrectomy for resectable advanced gastric cancer. Eur J Cancer. 2016;62:103–11. Eisenhauer EA, Therasse P, Bogaerts J, et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer. 2009;45:228–47. Ryan R, Gibbons D, Hyland JM, et al. 
Pathological response following long-course neoadjuvant chemoradiotherapy for locally advanced rectal cancer. Histopathology. 2005;47(2):141–6. Li H, Zhang Q, Chen L. Role of diagnostic laparoscopy in the treatment plan of gastric cancer. Zhonghua Wei Chang Wai Ke Za Zhi. 2017;20(2):195–9. Japanese Gastric Cancer Association. Japanese gastric cancer treatment guidelines 2018 (5th edition). Gastric Cancer. 2020. https://doi.org/10.1007/s10120-020-01042-y. Chow S, Shao J, Wang H. Sample size calculations in clinical research. 2nd ed. Boca Raton: Chapman & Hall/CRC; 2008. Becker K, Langer R, Reim D, et al. Significance of histopathological tumor regression after neoadjuvant chemotherapy in gastric adenocarcinomas: a summary of 480 cases. Ann Surg. 2011;253(5):934–9. All investigators will not receive any remuneration. CRO cooperates with all the medical centers and provide services for free. Therefore, this research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors. Department of General Surgery & Institute of General Surgery, Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing, 100853, China Xinxin Wang, Shuo Li & Lin Chen Department of General Surgery, Zhongshan Hospital, Fudan University, No.180 Fenglin Road, Xuhui District, Shanghai, 200032, China Yihong Sun Department of Surgical Oncology, The First Hospital of China Medical University, No.155 Nanjing Street North, Heping District, Shenyang, 110001, China Division of Gastrointestinal Surgery, The Second Affiliated Hospital of Wenzhou Medical University, No.109 West Xueyuan Road, Wenzhou, 325027, China Xian Shen Department of Gastroenterological Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Nangang District, Harbin, 150081, China Yingwei Xue Department of Gastrointestinal Surgery, The First Affiliated Hospital of Dalian Medical University, No.222 Zhongshan Road, Xigang District, Dalian, 116011, China Pin Liang Institute of General Surgery, General Hospital of Eastern Theater Command of Chinese PLA, No.305 East Zhongshan Road, Xuanwu District, Nanjing, 210002, China Guoli Li Department of Gastrointestinal Surgery, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, No.420 Fuma Road, Jinan District, Fuzhou, 350014, China Luchuan Chen Department of General Surgery, The Fourth Hospital of Hebei Medical University, No.12 Jiankang Road, Shijiazhuang, 050011, China Qun Zhao Department of General Surgery, Nanfang Hospital, Southern Medical University, No.1838 Guangzhoudadaobei Road, Guangzhou, 510515, China Guoxin Li Department of General Surgery, Tianjin Medical University General Hospital, No.154 Anshan Road, Heping District, Tianjin, 300052, China Weihua Fu Department of Gastric Cancer Surgery, Tianjin Medical University Cancer Hospital, West Huan-Hu Road, Ti Yuan Bei, Hexi District, Tianjin, 300060, China Han Liang Department of General Surgery, Shanxi Provincial Cancer Hospital, No.3 Zhigongxincun, Xinghualing District, Taiyuan, 030013, China Hairong Xin Department of General Surgery, The First Bethune Hospital of Jilin University, No.71 Xinmindajie Street, Changchun, 130021, China Jian Suo Department of General Surgery, China-Japan Union Hospital of Jilin University, No.126 Xi'antai Avenue, Changchun, 130033, China Xuedong Fang Department of Gastric Surgery, Liaoning Cancer Hospital and Institute, No.44 Xiaoheyan Road, Dadong District, Shenyang, 110042, China Zhichao Zheng Department of General Surgery, The First 
Affiliated Hospital of Nanjing Medical University, No.300 Guangzhou Road, Gulou District, Nanjing, 210029, China Zekuan Xu Department of General Surgery, Jiangsu Cancer Hospital (Jiangsu Institute of Cancer Research, Nanjing Medical University Affiliated Cancer Hospital), No.42 Baiziting, Nanjing, 210009, China Huanqiu Chen Department of General Surgery, The Affiliated Hospital of Qingdao University, No.16 Jiangsu Road, Shinan District, Qingdao, 266003, China Yanbing Zhou Department of Gastrointestinal Surgery, The First Affiliated Hospital, Sun Yat-sen University, No.58 Zhongshaner Road, Guangzhou, 510080, China Yulong He Department of Gastric Surgery, Fudan University Shanghai Cancer Center, No.270 Dongan Road, Xuhui District, Shanghai, 200032, China Hua Huang Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, No.3 East Qingchun Road, Jianggan District, Hangzhou, 310016, China Linghua Zhu Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, No.37 Guoxue Alley, Wuhou District, Chengdu, 610041, China Kun Yang Department of Gastrointestinal Surgery, Peking University Cancer Hospital, No.52 Fucheng Road, Haidian District, Beijing, 100142, China Jiafu Ji Department of Gastroenterological Surgery, Peking University People's Hospital, No.11 Xizhimen South Street, Xicheng District, Beijing, 100044, China Yingjiang Ye Department of General Surgery, Beijing Friendship Hospital, Capital Medical University, No.95 Yongan Road, Xicheng District, Beijing, 100050, China Zhongtao Zhang Department of General Surgery, Xuanwu Hospital, Capital Medical University, No.45 Changchun Street, Xicheng District, Beijing, 100053, China Fei Li Department of General Surgery, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China Xin Wang Department of Pancreatic and Gastric Surgery, Cancer Hospital, Chinese Academy of Medical Sciences, No.17 Panjiayuannanli, Chaoyang District, Beijing, 100021, China Yantao Tian Division of Upper GI Surgery, Department of Surgery, Korea University Anam Hospital, Korea University College of Medicine, 73 Goryeodae-ro Seongbuk-gu, Seoul, 02841, South Korea Sungsoo Park Xinxin Wang Shuo Li Lin Chen The concept of the trial was proposed and designed by Xinxin Wang and LC. SL, Xinxin Wang, YS, KL, XS, YX, PL, Guoli Li, LC, QZ, Guoxin Li prepared the manuscript. SL, WF, HL, HX, JS, XF, Zhichao Zheng, ZX contributed to the statistical analysis. HC, YZ, YH, HH, LZ, KY, JJ, YY, Zhongtao Zhang, FL, XW, SP, YT contributed to the background check. LC is the corresponding author and checked the manuscript. All authors read and finally approved the manuscript. Correspondence to Lin Chen. The trial has been approved by the Ethics Committee of Chinese PLA General Hospital. All patients are required to sign the written informed consent. Written informed consent for publication will be obtained from all participants before inclusion in the study. Wang, X., Li, S., Sun, Y. et al. The protocol of a prospective, multicenter, randomized, controlled phase III study evaluating different cycles of oxaliplatin combined with S-1 (SOX) as neoadjuvant chemotherapy for patients with locally advanced gastric cancer: RESONANCE-II trial. BMC Cancer 21, 20 (2021). https://doi.org/10.1186/s12885-020-07764-7 Locally advanced gastric cancer Duration of neoadjuvant chemotherapy
Login | Create Sort by: Relevance Date Users's collections Twitter Group by: Day Week Month Year All time Based on the idea and the provided source code of Andrej Karpathy (arxiv-sanity) Statistical and systematical errors in analyses of separate experimental data sets in high energy physics (1804.05201) R. Orava, O.V. Selyugin April 14, 2018 hep-ph, hep-ex, physics.data-an Different ways of extracting parameters of interest from combined data sets of separate experiments are investigated accounting for the systematic errors. It is shown, that the frequentist approach may yield larger $\chi^2$ values when compared to the Bayesian approach, where the systematic errors have a Gaussian distributed prior calculated in quadrature. The former leads to a better estimation of the parameters. A maximum-likelihood method, applied to different "gedanken" and real LHC data, is presented. The results allow to choose an optimal approach for obtaining the fit based model parameters. Search for magnetic monopoles with the MoEDAL forward trapping detector in 2.11 fb$^{-1}$ of 13 TeV proton-proton collisions at the LHC (1712.09849) MoEDAL Collaboration: B. Acharya, J. Alexandre, S. Baines, P. Benes, B. Bergmann, J. Bernabéu, A. Bevan, H. Branzas, M. Campbell, L. Caramete, S. Cecchini, M. de Montigny, A. De Roeck, J. R. Ellis, M. Fairbairn, D. Felea, M. Frank, D. Frekers, C. Garcia, J. Hays, A. M. Hirt, J. Janecek, D.-W. Kim, K. Kinoshita, A. Korzenev, D. H. Lacarrère, S. C. Lee, C. Leroy, G. Levi, A. Lionti, J. Mamuzic, A. Margiotta, N. Mauri, N. E. Mavromatos, P. Mermod, V. A. Mitsou, R. Orava, I. Ostrovskiy, B. Parker, L. Patrizii, G. E. Păvălaş, J. L. Pinfold, V. Popa, M. Pozzato, S. Pospisil, A. Rajantie, R. Ruiz de Austri, Z. Sahnoun, M. Sakellariadou, A. Santra, S. Sarkar, G. Semenoff, A. Shaa, G. Sirri, K. Sliwa, R. Soluk, M. Spurio, Y. N. Srivastava, M. Suk, J. Swain, M. Tenti, V. Togo, J. A. Tuszyński, V. Vento, O. Vives, Z. Vykydal, A. Widom, G. Willems, J. H. Yoon, I. S. Zgura April 10, 2018 hep-ph, hep-ex We update our previous search for trapped magnetic monopoles in LHC Run 2 using nearly six times more integrated luminosity and including additional models for the interpretation of the data. The MoEDAL forward trapping detector, comprising 222~kg of aluminium samples, was exposed to 2.11~fb$^{-1}$ of 13 TeV proton-proton collisions near the LHCb interaction point and analysed by searching for induced persistent currents after passage through a superconducting magnetometer. Magnetic charges equal to the Dirac charge or above are excluded in all samples. The results are interpreted in Drell-Yan production models for monopoles with spins 0, 1/2 and 1: in addition to standard point-like couplings, we also consider couplings with momentum-dependent form factors. The search provides the best current laboratory constraints for monopoles with magnetic charges ranging from two to five times the Dirac charge. A search for the exotic meson $X(5568)$ with the Collider Detector at Fermilab (1712.09620) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. 
Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, L. Marchese, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. 
Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfmeister, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Dec. 27, 2017 hep-ex A search for the exotic meson $X(5568)$ decaying into the $B^0_s \pi^{\pm}$ final state is performed using data corresponding to $9.6 \textrm{fb}^{-1}$ from $p{\bar p}$ collisions at $\sqrt{s} = 1960$ GeV recorded by the Collider Detector at Fermilab. No evidence for this state is found and an upper limit of 6.7\% at the 95\% confidence level is set on the fraction of $B^0_s$ produced through the $X(5568) \rightarrow B^0_s \, \pi^{\pm}$ process. Measurement of the inclusive-isolated prompt-photon cross section in $p\bar{p}$ collisions using the full CDF data set (1703.00599) CDF Collaboration: T. Aaltonen, M.G. Albrow, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. 
Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, L. Marchese, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, P. Sinervo, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli March 2, 2017 hep-ex A measurement of the inclusive production cross section of isolated prompt photons in proton-antiproton collisions at center-of-mass energy $\sqrt{s}$=1.96TeV is presented. The results are obtained using the full Run II data sample collected with the Collider Detector at the Fermilab Tevatron, which corresponds to an integrated luminosity of 9.5fb$^{-1}$. The cross section is measured as a function of photon transverse energy, $E_T^{\gamma}$, in the range 30$ < E_T^{\gamma} <$500GeV and in the pseudorapidity region $|\eta^{\gamma}|<$1.0. The results are compared with predictions from parton-shower Monte Carlo models at leading order in quantum chromodynamics (QCD) and from next-to-leading order perturbative QCD calculations. The latter show good agreement with the measured cross section. Search for magnetic monopoles with the MoEDAL forward trapping detector in 13 TeV proton-proton collisions at the LHC (1611.06817) MoEDAL Collaboration: B. Acharya, J. Alexandre, S. Baines, P. Benes, B. 
Bergmann, J. Bernabéu, H. Branzas, M. Campbell, L. Caramete, S. Cecchini, M. de Montigny, A. De Roeck, J. R. Ellis, M. Fairbairn, D. Felea, J. Flores, M. Frank, D. Frekers, C. Garcia, A. M. Hirt, J. Janecek, M. Kalliokoski, A. Katre, D.-W. Kim, K. Kinoshita, A. Korzenev, D. H. Lacarrère, S. C. Lee, C. Leroy, A. Lionti, J. Mamuzic, A. Margiotta, N. Mauri, N. E. Mavromatos, P. Mermod, V. A. Mitsou, R. Orava, B. Parker, L. Pasqualini, L. Patrizii, G. E. Păvălaş, J. L. Pinfold, V. Popa, M. Pozzato, S. Pospisil, A. Rajantie, R. Ruiz de Austri, Z. Sahnoun, M. Sakellariadou, S. Sarkar, G. Semenoff, A. Shaa, G. Sirri, K. Sliwa, R. Soluk, M. Spurio, Y. N. Srivastava, M. Suk, J. Swain, M. Tenti, V. Togo, J. A. Tuszyński, V. Vento, O. Vives, Z. Vykydal, T. Whyntie, A. Widom, G. Willems, J. H. Yoon, I. S. Zgura Jan. 12, 2017 hep-ph, hep-ex MoEDAL is designed to identify new physics in the form of long-lived highly-ionising particles produced in high-energy LHC collisions. Its arrays of plastic nuclear-track detectors and aluminium trapping volumes provide two independent passive detection techniques. We present here the results of a first search for magnetic monopole production in 13 TeV proton-proton collisions using the trapping technique, extending a previous publication with 8 TeV data during LHC run-1. A total of 222 kg of MoEDAL trapping detector samples was exposed in the forward region and analysed by searching for induced persistent currents after passage through a superconducting magnetometer. Magnetic charges exceeding half the Dirac charge are excluded in all samples and limits are placed for the first time on the production of magnetic monopoles in 13 TeV $pp$ collisions. The search probes mass ranges previously inaccessible to collider experiments for up to five times the Dirac charge. LHC Forward Physics (1611.05079) K. Akiba, M. Akbiyik, M. Albrow, M. Arneodo, V. Avati, J. Baechler, O. Villalobos Baillie, P. Bartalini, J. Bartels, S. Baur, C. Baus, W. Beaumont, U. Behrens, D. Berge, M. Berretti, E. Bossini, R. Boussarie, S. Brodsky, M. Broz, M. Bruschi, P. Bussey, W. Byczynski, J. C. Cabanillas Noris, E. Calvo Villar, A. Campbell, F. Caporale, W. Carvalho, G. Chachamis, E. Chapon, C. Cheshkov, J. Chwastowski, R. Ciesielski, D. Chinellato, A. Cisek, V. Coco, P. Collins, J. G. Contreras, B. Cox, D. de Jesus Damiao, P. Davis, M. Deile, D. D'Enterria, D. Druzhkin, B. Ducloué, R. Dumps, R. Dzhelyadin, P. Dziurdzia, M. Eliachevitch, P. Fassnacht, F. Ferro, S. Fichet, D. Figueiredo, B. Field, D. Finogeev, R. Fiore, J. Forshaw, A. Gago Medina, M. Gallinaro, A. Granik, G. von Gersdorff, S. Giani, K. Golec-Biernat, V. P. Goncalves, P. Göttlicher, K. Goulianos, J.-Y. Grosslord, L. A. Harland-Lang, H. Van Haevermaet, M. Hentschinski, R. Engel, G. Herrera Corral, J. Hollar, L. Huertas, D. Johnson, I. Katkov, O. Kepka, M. Khakzad, L. Kheyn, V. Khachatryan, V. A. Khoze, S. Klein, M. van Klundert, F. Krauss, A. Kurepin, N. Kurepin, K. Kutak, E. Kuznetsova, G. Latino, P. Lebiedowicz, B. Lenzi, E. Lewandowska, S. Liu, A. Luszczak, M. Luszczak, J. D. Madrigal, M. Mangano, Z. Marcone, C. Marquet, A. D. Martin, T. Martin, M. I. Martinez Hernandez, C. Martins, C. Mayer, R. Mc Nulty, P. Van Mechelen, R. Macula, E. Melo da Costa, T. Mertzimekis, C. Mesropian, M. Mieskolainen, N. Minafra, I. L. Monzon, L. Mundim, B. Murdaca, M. Murray, H. Niewiadowski, J. Nystrand, E. G. de Oliveira, R. Orava, S. Ostapchenko, K. Osterberg, A. Panagiotou, A. Papa, R. Pasechnik, T. Peitzmann, L. A. Perez Moreno, T. Pierog, J. 
Pinfold, M. Poghosyan, M. E. Pol, W. Prado, V. Popov, M. Rangel, A. Reshetin, J.-P. Revol, M. Rijssenbeek, M. Rodriguez, B. Roland, C. Royon, M. Ruspa, M. Ryskin, A. Sabio Vera, G. Safronov, T. Sako, H. Schindler, D. Salek, K. Safarik, M. Saimpert, A. Santoro, R. Schicker, J. Seger, S. Sen, A. Shabanov, W. Schafer, G. Gil Da Silveira, P. Skands, R. Soluk, A. van Spilbeeck, R. Staszewski, S. Stevenson, W.J. Stirling, M. Strikman, A. Szczurek, L. Szymanowski, J. D. Tapia Takaki, M. Tasevsky, K. Taesoo, C. Thomas, S. R. Torres, A. Tricomi, M. Trzebinski, D. Tsybychev, N. Turini, R. Ulrich, E. Usenko, J. Varela, M. Lo Vetere, A. Villatoro Tello, A. Vilela Pereira, D. Volyanskyy, S. Wallon, G. Wilkinson, H. Wöhrmann, K. C. Zapp, Y. Zoccarato Nov. 15, 2016 hep-ph, hep-ex The goal of this report is to give a comprehensive overview of the rich field of forward physics, with a special attention to the topics that can be studied at the LHC. The report starts presenting a selection of the Monte Carlo simulation tools currently available, chapter 2, then enters the rich phenomenology of QCD at low, chapter 3, and high, chapter 4, momentum transfer, while the unique scattering conditions of central exclusive production are analyzed in chapter 5. The last two experimental topics, Cosmic Ray and Heavy Ion physics are presented in the chapter 6 and 7 respectively. Chapter 8 is dedicated to the BFKL dynamics, multiparton interactions, and saturation. The report ends with an overview of the forward detectors at LHC. Each chapter is correlated with a comprehensive bibliography, attempting to provide to the interested reader with a wide opportunity for further studies. Measurement of the $D^+$-meson production cross section at low transverse momentum in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV (1610.08989) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. 
Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, L. Marchese, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Oct. 27, 2016 hep-ex We report on a measurement of the $D^{+}$-meson production cross section as a function of transverse momentum ($p_T$) in proton-antiproton ($p\bar{p}$) collisions at 1.96 TeV center-of-mass energy, using the full data set collected by the Collider Detector at Fermilab in Tevatron Run II and corresponding to 10 fb$^{-1}$ of integrated luminosity. 
We use $D^{+} \to K^-\pi^+\pi^+$ decays fully reconstructed in the central rapidity region $|y|<1$ with transverse momentum down to 1.5 GeV/$c$, a range previously unexplored in $p\bar{p}$ collisions. Inelastic $p\bar{p}$-scattering events are selected online using minimally-biasing requirements followed by an optimized offline selection. The $K^-\pi^+\pi^+$ mass distribution is used to identify the $D^+$ signal, and the $D^+$ transverse impact-parameter distribution is used to separate prompt production, occurring directly in the hard scattering process, from secondary production from $b$-hadron decays. We obtain a prompt $D^+$ signal of 2950 candidates corresponding to a total cross section $\sigma(D^+, 1.5 < p_T < 14.5~\mbox{GeV/}c, |y|<1) = 71.9 \pm 6.8 (\mbox{stat}) \pm 9.3 (\mbox{syst})~\mu$b. While the measured cross sections are consistent with theoretical estimates in each $p_T$ bin, the shape of the observed $p_T$ spectrum is softer than the expectation from quantum chromodynamics. The results are unique in $p\bar{p}$ collisions and can improve the shape and uncertainties of future predictions. Measurement of Elastic pp Scattering at $\sqrt{s}$ = 8 TeV in the Coulomb-Nuclear Interference Region - Determination of the $\rho$-Parameter and the Total Cross-Section (1610.00603) TOTEM Collaboration: G. Antchev, P. Aspell, I. Atanassov, V. Avati, J. Baechler, V. Berardi, M. Berretti, E. Bossini, U. Bottigli, M. Bozzo, P. Broulím, H. Burkhardt, A. Buzzo, F. S. Cafagna, C. E. Campanella, M. G. Catanesi, M. Csanád, T. Csörgő, M. Deile, F. De Leonardis, A. D'Orazio, M. Doubek, K. Eggert, V. Eremin, F. Ferro, A. Fiergolski, F. Garcia, V. Georgiev, S. Giani, L. Grzanka, C. Guaragnella, J. Hammerbauer, J. Heino, A. Karev, J. Kašpar, J. Kopal, V. Kundrát, S. Lami, G. Latino, R. Lauhakangas, R. Linhart, E. Lippmaa, J. Lippmaa, M. V. Lokajíček, L. Losurdo, M. Lo Vetere, F. Lucas Rodríguez, M. Macrí, A. Mercadante, N. Minafra, S. Minutoli, T. Naaranoja, F. Nemes, H. Niewiadomski, E. Oliveri, F. Oljemark, R. Orava, M. Oriunno, K. Österberg, P. Palazzi, L. Paločko, V. Passaro, Z. Peroutka, V. Petruzzelli, T. Politi, J. Procházka, F. Prudenzano, M. Quinto, E. Radermacher, E. Radicioni, F. Ravotti, S. Redaelli, E. Robutti, L. Ropelewski, G. Ruggiero, H. Saarikko, B. Salvachua, A. Scribano, J. Smajek, W. Snoeys, J. Sziklai, C. Taylor, N. Turini, V. Vacek, G. Valentino, J. Welti, J. Wenninger, P. Wyszkowski, K. Zielinski Oct. 3, 2016 hep-ph, hep-ex, nucl-ex The TOTEM experiment at the CERN LHC has measured elastic proton-proton scattering at the centre-of-mass energy $\sqrt{s}$ = 8 TeV and four-momentum transfers squared, |t|, from 6 x $10^{-4}$ GeV$^2$ to 0.2 GeV$^2$. Near the lower end of the |t|-interval the differential cross-section is sensitive to the interference between the hadronic and the electromagnetic scattering amplitudes. This article presents the elastic cross-section measurement and the constraints it imposes on the functional forms of the modulus and phase of the hadronic elastic amplitude. The data exclude the traditional Simplified West and Yennie interference formula that requires a constant phase and a purely exponential modulus of the hadronic amplitude. For parametrisations of the hadronic modulus with second- or third-order polynomials in the exponent, the data are compatible with hadronic phase functions giving either central or peripheral behaviour in the impact parameter picture of elastic scattering. In both cases, the $\rho$-parameter is found to be 0.12 $\pm$ 0.03. 
The results for the total hadronic cross-section are $\sigma_{tot}$ = (102.9 $\pm$ 2.3) mb and (103.0 $\pm$ 2.3) mb for central and peripheral phase formulations, respectively. Both are consistent with previous TOTEM measurements. Measurement of the $WW$ and $WZ$ production cross section using final states with a charged lepton and heavy-flavor jets in the full CDF Run II data set (1606.06823) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R. C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. 
Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli July 31, 2016 hep-ex We present a measurement of the total {\it WW} and {\it WZ} production cross sections in $p\bar{p}$ collision at $\sqrt{s}=1.96$ TeV, in a final state consistent with leptonic $W$ boson decay and jets originating from heavy-flavor quarks from either a $W$ or a $Z$ boson decay. This analysis uses the full data set collected with the CDF II detector during Run II of the Tevatron collider, corresponding to an integrated luminosity of 9.4 fb$^{-1}$. An analysis of the dijet mass spectrum provides $3.7\sigma$ evidence of the summed production processes of either {\it WW} or {\it WZ} bosons with a measured total cross section of $\sigma_{WW+WZ} = 13.7\pm 3.9$~pb. Independent measurements of the {\it WW} and {\it WZ} production cross sections are allowed by the different heavy-flavor decay-patterns of the $W$ and $Z$ bosons and by the analysis of secondary-decay vertices reconstructed within heavy-flavor jets. The productions of {\it WW} and of {\it WZ} dibosons are independently seen with significances of $2.9\sigma$ and $2.1\sigma$, respectively, with total cross sections of $\sigma_{WW}= 9.4\pm 4.2$~pb and $\sigma_{WZ}=3.7^{+2.5}_{-2.2}$~pb. The measurements are consistent with standard-model predictions. Search for magnetic monopoles with the MoEDAL prototype trapping detector in 8 TeV proton-proton collisions at the LHC (1604.06645) MoEDAL Collaboration: B. Acharya, J. Alexandre, K. Bendtz, P. Benes, J. Bernabéu, M. Campbell, S. Cecchini, J. Chwastowski, A. Chatterjee, M. de Montigny, D. Derendarz, A. De Roeck, J. R. Ellis, M. Fairbairn, D. Felea, M. Frank, D. Frekers, C. Garcia, G. Giacomelli, D. Haşegan, M. Kalliokoski, A. Katre, D.-W. Kim, M. G. L. King, K. Kinoshita, D. H. Lacarrère, S. C. Lee, C. Leroy, A. Lionti, A. Margiotta, N. Mauri, N. E. Mavromatos, P. Mermod, D. Milstead, V. A. Mitsou, R. Orava, B. Parker, L. Pasqualini, L. Patrizii, G. E. Păvălaş, J. L. Pinfold, M. 
Platkevič, V. Popa, M. Pozzato, S. Pospisil, A. Rajantie, Z. Sahnoun, M. Sakellariadou, S. Sarkar, G. Semenoff, G. Sirri, K. Sliwa, R. Soluk, M. Spurio, Y. N. Srivastava, R. Staszewski, M. Suk, J. Swain, M. Tenti, V. Togo, M. Trzebinski, J. A. Tuszyński, V. Vento, O. Vives, Z. Vykydal, T. Whyntie, A. Widom, J. H. Yoon July 11, 2016 hep-ex, physics.ins-det The MoEDAL experiment is designed to search for magnetic monopoles and other highly-ionising particles produced in high-energy collisions at the LHC. The largely passive MoEDAL detector, deployed at Interaction Point 8 on the LHC ring, relies on two dedicated direct detection techniques. The first technique is based on stacks of nuclear-track detectors with surface area $\sim$18 m$^2$, sensitive to particle ionisation exceeding a high threshold. These detectors are analysed offline by optical scanning microscopes. The second technique is based on the trapping of charged particles in an array of roughly 800 kg of aluminium samples. These samples are monitored offline for the presence of trapped magnetic charge at a remote superconducting magnetometer facility. We present here the results of a search for magnetic monopoles using a 160 kg prototype MoEDAL trapping detector exposed to 8 TeV proton-proton collisions at the LHC, for an integrated luminosity of 0.75 fb$^{-1}$. No magnetic charge exceeding $0.5g_{\rm D}$ (where $g_{\rm D}$ is the Dirac magnetic charge) is measured in any of the exposed samples, allowing limits to be placed on monopole production in the mass range 100 GeV$\leq m \leq$ 3500 GeV. Model-independent cross-section limits are presented in fiducial regions of monopole energy and direction for $1g_{\rm D}\leq|g|\leq 6g_{\rm D}$, and model-dependent cross-section limits are obtained for Drell-Yan pair production of spin-1/2 and spin-0 monopoles for $1g_{\rm D}\leq|g|\leq 4g_{\rm D}$. Under the assumption of Drell-Yan cross sections, mass limits are derived for $|g|=2g_{\rm D}$ and $|g|=3g_{\rm D}$ for the first time at the LHC, surpassing the results from previous collider experiments. Measurement of $\sin^2\theta^{\rm lept}_{\rm eff}$ using $e^+e^-$ pairs from $\gamma^*/Z$ bosons produced in $p\bar{p}$ collisions at a center-of-momentum energy of 1.96 TeV (1605.02719) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. 
Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. 
Zucchelli June 10, 2016 hep-ex At the Fermilab Tevatron proton-antiproton ($p\bar{p}$) collider, Drell-Yan lepton pairs are produced in the process $p \bar{p} \rightarrow e^+e^- + X$ through an intermediate $\gamma^*/Z$ boson. The forward-backward asymmetry in the polar-angle distribution of the $e^-$ as a function of the $e^+e^-$-pair mass is used to obtain $\sin^2\theta^{\rm lept}_{\rm eff}$, the effective leptonic determination of the electroweak-mixing parameter $\sin^2\theta_W$. The measurement sample, recorded by the Collider Detector at Fermilab (CDF), corresponds to 9.4~fb$^{-1}$ of integrated luminosity from $p\bar{p}$ collisions at a center-of-momentum energy of 1.96 TeV, and is the full CDF Run II data set. The value of $\sin^2\theta^{\rm lept}_{\rm eff}$ is found to be $0.23248 \pm 0.00053$. The combination with the previous CDF measurement based on $\mu^+\mu^-$ pairs yields $\sin^2\theta^{\rm lept}_{\rm eff} = 0.23221 \pm 0.00046$. This result, when interpreted within the specified context of the standard model assuming $\sin^2 \theta_W = 1 - M_W^2/M_Z^2$ and that the $W$- and $Z$-boson masses are on-shell, yields $\sin^2\theta_W = 0.22400 \pm 0.00045$, or equivalently a $W$-boson mass of $80.328 \pm 0.024 \;{\rm GeV}/c^2$. Measurement of the $B_c^{\pm}$ production cross section in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV (1601.03819) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, M. Hartz, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. 
Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli March 26, 2016 hep-ex We describe a measurement of the ratio of the cross sections times branching fractions of the $B_c^+$ meson in the decay mode $B_c^+ \rightarrow J/\psi \mu\nu$ to the $B^+$ meson in the decay mode $B^+ \rightarrow J/\psi K^+$ in proton-antiproton collisions at center-of-mass energy $\sqrt{s}=1.96$ TeV. The measurement is based on the complete CDF Run II data set, which comes from an integrated luminosity of $8.7\,{\rm fb}^{-1}$. The ratio of the production cross sections times branching fractions for $B_c^+$ and $B_c^+$ mesons with momentum transverse to the beam greater than $6~\textrm{GeV}/c$ and rapidity magnitude smaller than 0.6 is $0.211\pm 0.012~\mbox{(stat)}^{+0.021}_{-0.020}~\mbox{(syst)}$. 
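Schematically, writing the measured quantity as $R \equiv [\sigma(B_c^+)\,\mathcal{B}(B_c^+\to J/\psi\,\mu\nu)]/[\sigma(B^+)\,\mathcal{B}(B^+\to J/\psi K^+)] = 0.211$ within the fiducial region ($p_T > 6$ GeV/$c$, $|y| < 0.6$), the $B_c^+$ cross section follows by inversion as $\sigma(B_c^+) = R\,\sigma(B^+)\,\mathcal{B}(B^+\to J/\psi K^+)/\mathcal{B}(B_c^+\to J/\psi\,\mu\nu)$; the branching fractions and $\sigma(B^+)$ entering this relation are external inputs, as described next.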
Using the known $B^+ \rightarrow J/\psi K^+$ branching fraction, the known $B^+$ production cross section, and a selection of the predicted $B_c^+ \rightarrow J/\psi \mu\nu$ branching fractions, the range for the total $B_c^+$ production cross section is estimated. Measurement of vector boson plus $D^{*}(2010)^+$ meson production in $\bar{p}p$ collisions at $\sqrt{s}=1.96\, {\rm TeV}$ (1508.06980) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. 
Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli A measurement of vector boson ($V$) production in conjunction with a $D^{*}(2010)^+$ meson is presented. Using a data sample corresponding to $9.7\, {\rm fb}^{-1}$ of ^Mproton-antiproton collisions at center-of-mass energy $\sqrt{s}=1.96\rm~ TeV$ produced by the Fermilab Tevatron, we reconstruct $V+D^{*+}$ samples with the CDF~II detector. The $D^{*+}$ is fully reconstructed in the $D^{*}(2010)^+ \rightarrow D^{0}(\to K^-\pi^+)\pi^+$ decay mode. This technique is sensitive to the associated production of vector boson plus charm or bottom mesons. We measure the ratio of production cross sections $\sigma(W+D^{*})/\sigma(W)$ = $[1.75\pm 0.13 {\rm (stat)}\pm 0.09 {\rm (syst)}]\% $ and $\sigma(Z+D^{*})/\sigma(Z)$ = $[1.5\pm 0.4 {\rm (stat)} \pm 0.2 {\rm (syst)}]\% $ and perform a differential measurement of $d\sigma(W+D^{*})/dp_T(D^{*})$. Event properties are utilized to determine the fraction of $V+D^{*}(2010)^+$ events originating from different production processes. The results are in agreement with the predictions obtained with the {\sc pythia} program, limiting possible contribution from non-standard-model physics processes. Measurement of the forward-backward asymmetry of top-quark and antiquark pairs using the full CDF Run II data set (1602.09015) Feb. 29, 2016 hep-ex We measure the forward--backward asymmetry of the production of top quark and antiquark pairs in proton-antiproton collisions at center-of-mass energy $\sqrt{s} = 1.96~\mathrm{TeV}$ using the full data set collected by the Collider Detector at Fermilab (CDF) in Tevatron Run II corresponding to an integrated luminosity of $9.1~\rm{fb}^{-1}$. The asymmetry is characterized by the rapidity difference between top quarks and antiquarks ($\Delta y$), and measured in the final state with two charged leptons (electrons and muons). 
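In terms of event counts the asymmetry is the standard counting ratio $A_{\rm FB}^{t\bar t} = [N(\Delta y>0) - N(\Delta y<0)]/[N(\Delta y>0) + N(\Delta y<0)]$ with $\Delta y = y_t - y_{\bar t}$, subsequently corrected (unfolded) to the parton level. A minimal sketch of the raw counting step, using placeholder rapidity differences rather than CDF data:

    # Illustration only: raw counting asymmetry from rapidity differences
    # dy = y_t - y_tbar; the input list is a placeholder, not CDF data.
    def forward_backward_asymmetry(dy_values):
        n_fwd = sum(1 for dy in dy_values if dy > 0)
        n_bwd = sum(1 for dy in dy_values if dy < 0)
        return (n_fwd - n_bwd) / (n_fwd + n_bwd)

    print(forward_backward_asymmetry([0.4, -0.1, 0.7, 0.2, -0.5]))  # prints 0.2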
The inclusive asymmetry, corrected to the entire phase space at parton level, is measured to be $A_{\text{FB}}^{t\bar{t}} = 0.12 \pm 0.13$, consistent with the expectations from the standard-model (SM) and previous CDF results in the final state with a single charged lepton. The combination of the CDF measurements of the inclusive $A_{\text{FB}}^{t\bar{t}}$ in both final states yields $A_{\text{FB}}^{t\bar{t}}=0.160\pm0.045$, which is consistent with the SM predictions. We also measure the differential asymmetry as a function of $\Delta y$. A linear fit to $A_{\text{FB}}^{t\bar{t}}(|\Delta y|)$, assuming zero asymmetry at $\Delta y=0$, yields a slope of $\alpha=0.14\pm0.15$, consistent with the SM prediction and the previous CDF determination in the final state with a single charged lepton. The combined slope of $A_{\text{FB}}^{t\bar{t}}(|\Delta y|)$ in the two final states is $\alpha=0.227\pm0.057$, which is $2.0\sigma$ larger than the SM prediction. Measurement of the forward-backward asymmetry in low-mass bottom-quark pairs produced in proton-antiproton collisions (1601.06526) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. 
Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, O. Majersky, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Jan. 25, 2016 hep-ex We report a measurement of the forward-backward asymmetry, $A_{FB}$, in $b\bar{b}$ pairs produced in proton-antiproton collisions and identified by muons from semileptonic $b$-hadron decays. The event sample was collected at a center-of-mass energy of $\sqrt{s}=1.96$ TeV with the CDF II detector and corresponds to 6.9 fb$^{-1}$ of integrated luminosity. We obtain an integrated asymmetry of $A_{FB}(b\bar{b})=(1.2 \pm 0.7)$\% at the particle level for $b$-quark pairs with invariant mass, $m_{b\bar{b}}$, down to $40$ GeV/$c^2$ and measure the dependence of $A_{FB}(b\bar{b})$ on $m_{b\bar{b}}$. The results are compatible with expectations from the standard model. Search for a Low-Mass Neutral Higgs Boson with Suppressed Couplings to Fermions Using Events with Multiphoton Final States (1601.00401) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. 
Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. 
Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Jan. 4, 2016 hep-ex A search for a Higgs boson with suppressed couplings to fermions, $h_f$, assumed to be the neutral, lower-mass partner of the Higgs boson discovered at the Large Hadron Collider, is reported. Such a Higgs boson could exist in extensions of the standard model with two Higgs doublets, and could be produced via $p\bar{p} \to H^\pm h_f \to W^* h_f h_f \to 4\gamma + X$, where $H^\pm$ is a charged Higgs boson. This analysis uses all events with at least three photons in the final state from proton-antiproton collisions at a center-of-mass energy of 1.96~TeV collected by the Collider Detector at Fermilab, corresponding to an integrated luminosity of 9.2~${\rm fb}^{-1}$. No evidence of a signal is observed in the data. Values of Higgs-boson masses between 10 and 100 GeV/$c^2$ are excluded at 95\% Bayesian credibility. Evidence for non-exponential elastic proton-proton differential cross-section at low |t| and sqrt(s) = 8 TeV by TOTEM (1503.08111) TOTEM Collaboration: G. Antchev, P. Aspell, I. Atanassov, V. Avati, J. Baechler, V. Berardi, M. Berretti, E. Bossini, U. Bottigli, M. Bozzo, A. Buzzo, F. S. Cafagna, C. E. Campanella, M. G. Catanesi, M. Csanád, T. Csörgő, M. Deile, F. De Leonardis, A. D'Orazio, M. Doubek, K. Eggert, V. Eremin, F. Ferro, A. Fiergolski, F. Garcia, V. Georgiev, S. Giani, L. Grzanka, C. Guaragnella, J. Hammerbauer, J. Heino, A. Karev, J. Kašpar, J. Kopal, V. Kundrát, S. Lami, G. Latino, R. Lauhakangas, E. Lippmaa, J. Lippmaa, M. V. Lokajíček, L. Losurdo, M. Lo Vetere, F. Lucas Rodríguez, M. Macrí, A. Mercadante, N. Minafra, S. Minutoli, T. Naaranoja, F. Nemes, H. Niewiadomski, E. Oliveri, F. Oljemark, R. Orava, M. Oriunno, K. Österberg, P. Palazzi, V. Passaro, Z. Peroutka, V. Petruzzelli, T. Politi, J. Procházka, F. Prudenzano, M. Quinto, E. Radermacher, E. Radicioni, F. Ravotti, E. Robutti, L. Ropelewski, G. Ruggiero, H. Saarikko, A. Scribano, J. Smajek, W. Snoeys, T. Sodzawiczny, J. Sziklai, C. Taylor, N. Turini, V. Vacek, J. Welti, P. Wyszkowski, K. Zielinski Sept. 12, 2015 hep-ex The TOTEM experiment has made a precise measurement of the elastic proton-proton differential cross-section at the centre-of-mass energy sqrt(s) = 8 TeV based on a high-statistics data sample obtained with the beta* = 90 optics. Both the statistical and systematic uncertainties remain below 1%, except for the t-independent contribution from the overall normalisation. This unprecedented precision allows to exclude a purely exponential differential cross-section in the range of four-momentum transfer squared 0.027 < |t| < 0.2 GeV^2 with a significance greater than 7 sigma. Two extended parametrisations, with quadratic and cubic polynomials in the exponent, are shown to be well compatible with the data. 
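The parametrisations referred to are of the schematic form $d\sigma/dt = (d\sigma/dt)|_{t=0}\,\exp(b_1 t + b_2 t^2)$ and $(d\sigma/dt)|_{t=0}\,\exp(b_1 t + b_2 t^2 + b_3 t^3)$, the purely exponential case corresponding to $b_2 = b_3 = 0$. The optical-theorem step mentioned next converts the extrapolated intercept into a total cross-section through $\sigma_{\rm tot}^2 = [16\pi(\hbar c)^2/(1+\rho^2)]\,(d\sigma_{\rm el}/dt)|_{t=0}$, with $(\hbar c)^2 \approx 0.389$ GeV$^2$ mb.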
Using them for the differential cross-section extrapolation to t = 0, and further applying the optical theorem, yields total cross-section estimates of (101.5 +- 2.1) mb and (101.9 +- 2.1) mb, respectively, in agreement with previous TOTEM measurements. A Study of the Energy Dependence of the Underlying Event in Proton-Antiproton Collisions (1508.05340) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, M. Albrow, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. 
Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Aug. 27, 2015 hep-ex We study charged particle production in proton-antiproton collisions at 300 GeV, 900 GeV, and 1.96 TeV. We use the direction of the charged particle with the largest transverse momentum in each event to define three regions of eta-phi space; toward, away, and transverse. The average number and the average scalar pT sum of charged particles in the transverse region are sensitive to the modeling of the underlying event. The transverse region is divided into a MAX and MIN transverse region, which helps separate the hard component (initial and final-state radiation) from the beam-beam remnant and multiple parton interaction components of the scattering. The center-of-mass energy dependence of the various components of the event are studied in detail. The data presented here can be used to constrain and improve QCD Monte Carlo models, resulting in more precise predictions at the LHC energies of 13 and 14 TeV. Measurement of the production and differential cross sections of $W^{+}W^{-}$ bosons in association with jets in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV (1505.00801) We present a measurement of the $W$-boson-pair production cross section in $p\bar{p}$ collisions at 1.96 TeV center-of-mass energy and the first measurement of the differential cross section as a function of jet multiplicity and leading-jet energy. The $W^{+}W^{-}$ cross section is measured in the final state comprising two charged leptons and neutrinos, where either charged lepton can be an electron or a muon. Using data collected by the CDF experiment corresponding to $9.7~\rm{fb}^{-1}$ of integrated luminosity, a total of $3027$ collision events consistent with $W^{+}W^{-}$ production are observed with an estimated background contribution of $1790\pm190$ events. 
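As a rough illustrative cross-check only (the measurement itself does not use this arithmetic), the quoted counts can be related to the total cross section via $\sigma \approx (N_{\rm obs}-N_{\rm bkg})/(A\,\varepsilon\,\mathcal{B}^2\,\mathcal{L})$, where $\mathcal{B}$ is the branching fraction for $W\to e\nu$ or $\mu\nu$ per $W$ boson; with assumed, approximate branching fractions this implies, for the total cross section quoted in the next sentence, a combined acceptance times efficiency near 20% for the dilepton selection:

    # Back-of-envelope sketch, not the paper's procedure; every input except the
    # event counts and luminosity is an assumed round number for illustration.
    n_obs, n_bkg = 3027, 1790.0    # observed events and estimated background
    lumi_pb  = 9.7e3               # 9.7 fb^-1 expressed in pb^-1
    br_w_lep = 0.216               # assumed BR(W -> e nu or mu nu) per W boson
    sigma_ww = 14.0                # total WW cross section quoted below, in pb
    n_sig = n_obs - n_bkg          # about 1237 signal-like events
    acc_eff = n_sig / (sigma_ww * br_w_lep**2 * lumi_pb)
    print(f"N_sig ~ {n_sig:.0f}, implied A*eff ~ {acc_eff:.2f}")  # ~0.20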
The measured total cross section is $\sigma(p\bar{p} \rightarrow W^{+}W^{-}) = 14.0 \pm 0.6~(\rm{stat})^{+1.2}_{-1.0}~(\rm{syst})\pm0.8~(\rm{lumi})$ pb, consistent with the standard model prediction. Measurement of the top-quark mass in the ${t\bar{t}}$ dilepton channel using the full CDF Run II data set (1505.00500) We present a measurement of the top-quark mass in events containing two leptons (electrons or muons) with a large transverse momentum, two or more energetic jets, and a transverse-momentum imbalance. We use the full proton-antiproton collision data set collected by the CDF experiment during the Fermilab Tevatron Run~II at center-of-mass energy $\sqrt{s} = 1.96$ TeV, corresponding to an integrated luminosity of 9.1 fb$^{-1}$. A special observable is exploited for an optimal reduction of the dominant systematic uncertainty, associated with the knowledge of the absolute energy of the hadronic jets. The distribution of this observable in the selected events is compared to simulated distributions of ${t\bar{t}}$ dilepton signal and background.We measure a value for the top-quark mass of $171.5\pm 1.9~{\rm (stat)}\pm 2.5~{\rm (syst)}$ GeV/$c^2$. Measurement of central exclusive pi+pi- production in p-pbar collisions at sqrt(s) = 0.9 and 1.96 TeV at CDF (1502.01391) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. 
Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, D. Lontkovskyi, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, I. Makarenko, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli, M. Zurek We measure exclusive $\pi^+\pi^-$ production in proton-antiproton collisions at center-of-mass energies $\sqrt{s}$ = 0.9 and 1.96 TeV in the Collider Detector at Fermilab. We select events with two oppositely charged particles, assumed to be pions, with pseudorapidity $|\eta| < 1.3$ and with no other particles detected in $|\eta| < 5.9$. We require the $\pi^+\pi^-$ system to have rapidity $|y|<$ 1.0. The production mechanism of these events is expected to be dominated by double pomeron exchange, which constrains the quantum numbers of the central state. The data are potentially valuable for isoscalar meson spectroscopy and for understanding the pomeron in a region of transition between nonperturbative and perturbative quantum chromodynamics. The data extend up to dipion mass $M(\pi^+\pi^-)$ = 5000 MeV/$c^2$ and show resonance structures attributed to $f_0$ and $f_2(1270)$ mesons. 
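The dipion mass used here is the usual two-track invariant mass computed under the pion-mass hypothesis, $M(\pi^+\pi^-) = \sqrt{(E_1+E_2)^2 - |\vec p_1 + \vec p_2|^2}$ with $E_i = \sqrt{|\vec p_i|^2 + m_\pi^2}$; the $K^+K^-$ spectrum mentioned next is, presumably, obtained from the same tracks evaluated under the kaon-mass hypothesis.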
From the $\pi^+\pi^-$ and $K^+K^-$ spectra, we place upper limits on exclusive $\chi_{c0}(3415)$ production. First measurement of the forward-backward asymmetry in bottom-quark pair production at high mass (1504.06888) April 26, 2015 hep-ex We measure the particle-level forward-backward production asymmetry in $b\bar{b}$ pairs with masses $m(b\bar{b})$ larger than 150 GeV/$c^2$, using events with hadronic jets and employing jet charge to distinguish $b$ from $\bar{b}$. The measurement uses 9.5/fb of ppbar collisions at a center of mass energy of 1.96 TeV recorded by the CDF II detector. The asymmetry as a function of $m(b\bar{b})$ is consistent with zero, as well as with the predictions of the standard model. The measurement disfavors a simple model including an axigluon with a mass of 200 GeV/$c^2$ whereas a model containing a heavier 345 GeV/$c^2$ axigluon is not excluded. Search for Resonances Decaying to Top and Bottom Quarks with the CDF Experiment (1504.01536) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, F. Anza', G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, L. Bianchi, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. 
Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli April 7, 2015 hep-ph, hep-ex We report on a search for charged massive resonances decaying to top ($t$) and bottom ($b$) quarks in the full data set of proton-antiproton collisions at center-of-mass energy of $\sqrt{s} = 1.96$ TeV collected by the CDF~II detector at the Tevatron, corresponding to an integrated luminosity of 9.5 $fb^{-1}$. No significant excess above the standard model (SM) background prediction is observed. We set 95% Bayesian credibility mass-dependent upper limits on the heavy charged particle production cross section times branching ratio to $t b$. Using a SM extension with a $W^{\prime}$ and left-right-symmetric couplings as a benchmark model, we constrain the $W^{\prime}$ mass and couplings in the 300 to 900 GeV/$c^2$ range. The limits presented here are the most stringent for a charged resonance with mass in the range 300 -- 600 GeV/$c^2$ decaying to top and bottom quarks. Measurement of the Single Top Quark Production Cross Section and |Vtb| in Events with One Charged Lepton, Large Missing Transverse Energy, and Jets at CDF (1407.4031) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. 
Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, D. Hirschbuehl, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. 
Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli We report a measurement of single top quark production in proton-antiproton collisions at a center-of-mass energy of \sqrt{s} = 1.96 TeV using a data set corresponding to 7.5 fb-1 of integrated luminosity collected by the Collider Detector at Fermilab. We select events consistent with the single top quark decay process t \to Wb \to l{\nu}b by requiring the presence of an electron or muon, a large imbalance of transverse momentum indicating the presence of a neutrino, and two or three jets including at least one originating from a bottom quark. An artificial neural network is used to discriminate the signal from backgrounds. We measure a single top quark production cross section of 3.04+0.57-0.53 pb and set a lower limit on the magnitude of the coupling between the top quark and bottom quark |Vtb| > 0.78 at the 95% credibility level. Measurement of indirect CP-violating asymmetries in $D^0\to K^+K^-$ and $D^0\to \pi^+\pi^-$ decays at CDF (1410.5435) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. 
Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Jan. 6, 2015 hep-ph, hep-ex We report a measurement of the indirect CP-violating asymmetries ($A_{\Gamma}$) between effective lifetimes of anticharm and charm mesons reconstructed in $D^0\to K^+ K^-$ and $D^0\to \pi^+\pi^-$ decays. 
We use the full data set of proton-antiproton collisions collected by the Collider Detector at Fermilab experiment and corresponding to $9.7$~fb$^{-1}$ of integrated luminosity. The strong-interaction decay $D^{*+}\to D^0\pi^+$ is used to identify the meson at production as $D^0$ or $\overline{D}^0$. We statistically subtract $D^0$ and $\overline{D}^0$ mesons originating from $b$-hadron decays and measure the yield asymmetry between anticharm and charm decays as a function of decay time. We measure $A_\Gamma (K^+K^-) = (-0.19 \pm 0.15 (stat) \pm 0.04 (syst))\%$ and $A_\Gamma (\pi^+\pi^-)= (-0.01 \pm 0.18 (stat) \pm 0.03 (syst))\%$. The results are consistent with the hypothesis of CP symmetry and their combination yields $A_\Gamma = (-0.12 \pm 0.12)\%$.
CommonCrawl
Ring with three binary operations

A rather precocious student studying abstract algebra with me asked the following question: Are there interesting rings where there are not just two but three binary operations along with some appropriate distributivity properties?

ra.rings-and-algebras Deane Yang

$\begingroup$ For example, there are dendriform algebras: math.tamu.edu/~maguiar/depaul.pdf loic.foissy.free.fr/pageperso/article5.pdf . These have four binary operations, but one is the sum of two of the others and can be left out. Many algebras with complicated product structures ("complicated" meaning something like "the product of two simple things can be a sum of many simple things, rather than one single simple thing") are actually dendriform algebras, and the $\succ$ and $\prec$ operations simplify proofs of their properties (due to having simpler recursions). $\endgroup$ – darij grinberg Feb 5 '13 at 16:47

$\begingroup$ Also, the ring $\mathbf{Symm}$ of symmetric polynomials (in infinitely many variables) over $\mathbb Z$ has at least four operations: addition, multiplication, "second multiplication" and plethysm. I don't know how well this generalizes (I fear not too well). $\endgroup$ – darij grinberg Feb 5 '13 at 16:49

$\begingroup$ Darij, thanks for your comments (which should be answers)! $\endgroup$ – Deane Yang Feb 5 '13 at 16:56

$\begingroup$ @darij You left out "second plethysm", usually called "inner plethysm" $\endgroup$ – Bruce Westbury Feb 5 '13 at 17:07

$\begingroup$ @Darij: These are answers, not comments. $\endgroup$ – Martin Brandenburg Feb 5 '13 at 19:16

The real numbers $\mathbb{R}$ with the following three binary operations:

The maximum: $(x,y)\mapsto\max\{x,y\}$.
The sum: $(x,y)\mapsto x+y$.
The product: $(x,y)\mapsto x\cdot y$.

The maximum is to the sum what the sum is to the product, except for the fact that the maximum does not have inverses, nor a unit, i.e. $(\mathbb{R},\max,+)$ is a semiring, while $(\mathbb{R},+,\cdot)$ is a ring. Fernando Muro

$\begingroup$ Nice. Thanks for such a simple yet useful answer. $\endgroup$ – Deane Yang Feb 5 '13 at 22:39

$\begingroup$ You're welcome. That semiring structure on the reals is the motto of tropical geometry. There are very nice mathematics to learn starting with this observation. Some basic concepts are actually easy for young students. (Disclaimer: I'm not an expert in this topic). $\endgroup$ – Fernando Muro Feb 5 '13 at 23:16

$\begingroup$ Or the minimum function. Or any subring of $\mathbb{R}$ such as $\mathbb{Q}$, $\mathbb{Z}$, $\mathbb{Q}(\sqrt{2})$. $\endgroup$ – user30304 Feb 6 '13 at 13:28

An important example is the notion of a Gerstenhaber algebra. It is simultaneously a commutative ring and a Lie algebra, such that the product and bracket satisfy the Poisson identity, except all these things need to be understood in a differential graded sense. Dan Petersen

$\begingroup$ Isn't this just a "twisted" Lie object in the symmetric monoidal category of $\mathbb{Z}$-graded commutative algebras? $\endgroup$ – Martin Brandenburg Feb 5 '13 at 19:19

$\begingroup$ I had never seen anyone exclaim beautiful at being presented with Gerstenhaber algebras :-) $\endgroup$ – Mariano Suárez-Álvarez Feb 5 '13 at 19:40

$\begingroup$ @Martin: one glitch is that the underlying Lie algebra is supposed to be graded with respect to a different grading than the associative underlying algebra, obtained from the other one by a shift of $1$.
$\endgroup$ – Mariano Suárez-Álvarez Feb 5 '13 at 19:42

Very much in the spirit of Dan's answer, but more elementary, are Poisson algebras, associative algebras with Lie brackets that act as derivations. Axel Boldt

$\begingroup$ Thanks! I should have thought of this or at least the example of functions on a symplectic manifold. $\endgroup$ – Deane Yang Feb 5 '13 at 22:43

For experimenting you could use alg, a program which computes all finite models of a given theory. The best thing may be to pass the ball back to your student and ask him to use alg to find some interesting structures. For example, suppose we want a structure $(R, 0, +, -, \times, \&)$ such that $(R, 0, +, -)$ is a commutative group, $\times$ and $\&$ are associative, $\times$ and $\&$ distribute over $+$ and $\&$ distributes over $\times$ (I am making stuff up, the point is to experiment until something interesting is found). In alg the input file would be:

    Constant 0.
    Unary ~.
    Binary + * &.
    # 0, + is a commutative group
    Axiom plus_commutative: x + y = y + x.
    Axiom plus_associative: (x + y) + z = x + (y + z).
    Axiom zero_neutral_left: 0 + x = x.
    Axiom zero_neutral_right: x + 0 = x.
    Axiom negative_inverse: x + ~ x = 0.
    Axiom negative_inverse: ~ x + x = 0.
    Axiom zero_inverse: ~ 0 = 0.
    Axiom inverse_involution: ~ (~ x) = x.
    # * and & are associative
    Axiom mult_associative: (x * y) * z = x * (y * z).
    Axiom and_associative: (x & y) & z = x & (y & z).
    # Distributivity laws
    Axiom mult_distr_right: (x + y) * z = x * z + y * z.
    Axiom mult_distr_left: x * (y + z) = x * y + x * z.
    Axiom and_distr_right: (x + y) & z = (x & z) + (y & z).
    Axiom and_distr_left: x & (y + z) = (x & y) + (x & z).
    Axiom mult_and_distr_right: (x * y) & z = (x & z) * (y & z).
    Axiom mult_and_distr_left: x & (y * z) = (x & y) * (x & z).

Let us count how many of these are, up to isomorphism, of given sizes:

    $ ./alg.native --size 1-7 --count three.th

    size | count
    -----|------

Check the numbers [4, 3, 36, 3, 12, 3](http://oeis.org/search?q=4,3,36,3,12,3) on-line at oeis.org

We can also look at these structures, but that's the sort of thing a student should do. Here is a random one of size 4 that alg prints out when we omit --count:

    ~ | 0 a b c
    --+------------
      | 0 a b c

    + | 0 a b c
    --+------------
    0 | 0 a b c
    a | a 0 c b
    b | b c 0 a
    c | c b a 0

    * | 0 a b c
    --+------------
    0 | 0 0 0 0
    a | 0 a 0 a
    b | 0 0 b b
    c | 0 a b c

    & | 0 a b c
    --+------------
    a | 0 0 0 0
    b | 0 a b c

Up to size 7 I cannot actually see any interesting ones, there are always large blocks of 0's in $\&$. Other things should be tried out. Andrej Bauer

An interesting non-example is the Eckmann-Hilton theorem, stating that if a set is endowed with two associative unital binary operations that "commute" (i.e., if I get it correctly, each multiplication operator $a\mapsto a*b$ is a homomorphism with respect to the other multiplication) then the two operations are the same. This would exclude the existence of rings $(R, +,*,\circ)$ with two genuinely different "commuting" multiplications. Qfwfq
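To spell out the computation behind the Eckmann-Hilton statement (a sketch, assuming both operations are unital and satisfy the interchange law $(a \circ b) * (c \circ d) = (a * c) \circ (b * d)$, which is one way to make the loose "homomorphism" condition above precise): write $e$ for the unit of $*$ and $e'$ for the unit of $\circ$. Then
$$e = (e' \circ e) * (e \circ e') = (e' * e) \circ (e * e') = e',$$
so the two units coincide; call the common unit $1$. For any $a, b$,
$$a \circ b = (a * 1) \circ (1 * b) = (a \circ 1) * (1 \circ b) = a * b, \qquad b \circ a = (1 * b) \circ (a * 1) = (1 \circ a) * (b \circ 1) = a * b,$$
so the two operations agree and are commutative, as claimed in the answer above.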
An exponential field is a field with an additional unary operation $x\mapsto E(x)$ extending the usual idea of exponentiation. So it satisfies the usual law of exponents $E(a+b)=E(a)\cdot E(b)$ and also has $E(0)=1$. An exponential ring has an underlying ring, rather than field, and the exponentiation function is a homomorphism from the additive group of the ring to the multiplicative group of units.

The example of the real exponential field $\langle\mathbb{R},+,\cdot,e^x\rangle$ has been a principal focus of the research program in model theory that has led to the theory of o-minimality. Tarski had famously proved that the theory of real-closed fields $\langle\mathbb{R},+,\cdot,0,1,\lt\rangle$ is a decidable theory, and one of the original motivating questions, still open to my knowledge, is whether the first-order theory of the real exponential field is similarly decidable. Meanwhile, the o-minimalists are making huge progress on our understanding of the structure of definable sets in these and many other similar structures.

$\begingroup$ Isn't there a connection between exponential fields and Schanuel's conjecture? $\endgroup$ – Asaf Karagila Feb 5 '13 at 20:45

$\begingroup$ Oh, yes, there are some deep connections. Some of this is explained on the wikipedia pages to which I link. $\endgroup$ – Joel David Hamkins Feb 5 '13 at 20:50

Not only are there such examples, there is a very natural way of continuing the progression that starts with addition and multiplication. Begin with sets. Then monoids are just the monoidal objects in the category of sets. That ends the story at level 1. Now consider the objects that ended the story at level 1 which are also commutative, that is, commutative monoids. Then semirings are the monoidal objects in the category of commutative monoids. Here the monoidal operation on the category of commutative monoids is the tensor product, rather than the Cartesian product, but this is as it should be because of the tensor-hom adjunction: $\mathrm{Hom}(A\otimes B,C)=\mathrm{Hom}(B,\mathrm{Hom}(A,C))$. In other words, the composition of the representable functors $\mathrm{Hom}(B,-)$ and $\mathrm{Hom}(A,-)$ is the functor represented by $A\otimes B$. Thus for the present purposes, the tensor product for commutative monoids plays the role of the Cartesian product for sets.

Now what if we try to go one step further? Can we fill in the missing entries in the following table of analogies?

(Sets : Cartesian product : monoids) ::
(Commutative monoids : tensor product : semirings) ::
(Commutative semirings : ?? : ??)

The answer is yes, but there is one more twist to the story, which is that unlike in the categories of sets and commutative monoids, in the category of commutative semirings, representable functors $\mathrm{Hom}(A,-)$ take values all the way down in the category of sets, not in the category of semirings. This actually occurs already at the tensor product stage above, but we didn't see it because we were working with commutative monoids (i.e. modules over the semiring of natural numbers) instead of modules over more general semirings. If we let $K$ be any semiring, then $\mathrm{Hom}_K(A,-)$ takes values in abelian groups, not in $K$-modules. To make everything above work, we need $A$ to be a $K$-$K$-bimodule. Thus what we really want is to complete the following table of analogies.

(Sets : sets : Cartesian product : monoids) ::
($K$-modules : $K$-$K$-bimodules : tensor product $\otimes_K$ : semirings over $K$) ::
(Commutative $L$-algebras : ?? : ?? : ??),

where $L$ is a fixed commutative semiring. For the first ??, we want commutative $L$-algebras $A$ such that $\mathrm{Hom}(A,-)$ takes values in $L$-algebras, rather than sets. This extra structure will be the analogue of the right $K$-module structure above. So let us call it an $L$-$L$-bialgebra structure.
For instance, $A$ will need to have two co-operations $A\to A\otimes_L A$, which will induce a functorial ring structure on the objects $\mathrm{Hom}(A,B)$. If $L$ is a ring, then this is precisely the structure of a commutative $L$-algebra scheme on $\mathrm{Spec}(A)$. Then if $A$ and $B$ are $L$-$L$-bialgebras, we can compose the functors $\mathrm{Hom}(B,-)$ and $\mathrm{Hom}(A,-)$. The result is easily seen to be representable, and the representing object is denoted $A\odot B$. (So when $L$ is a ring, we are then taking two affine commutative $L$-algebra schemes over $L$, viewing them as endofunctors on the category of commutative $L$-algebras, and then composing them. This is not such a common thing to do, but the result is another affine commutative $L$-algebra scheme over $L$.) I like to call it the composition product of $A$ and $B$. Then we can define a composition $L$-algebra to be a monoid object in the category of $L$-$L$-bialgebras. Thus the final line of the analogy table above is

(Commutative $L$-algebras : $L$-$L$-bialgebras : composition product $\odot_L$ : composition $L$-algebras).

The monoidal operation on a composition $L$-algebra is usually denoted $\circ$ and is called composition or plethysm. So the usual hierarchy of operations "(1) addition, (2) multiplication" extends to "(3) plethysm".

There are many examples of composition $L$-algebras (but interestingly not too many when $L$ is a ring). The most basic example is the polynomial algebra $L[x]$, where $\circ$ is given by usual composition of polynomials. The $L$-$L$-bialgebra structure is the one such that $L[x]$ represents the identity functor on commutative $L$-algebras. A more interesting example is the polynomial algebra $A=\mathbb{C}[\partial^0, \partial^1,\dots]$ in infinitely many indeterminates, which we think of as all algebraic differential operators in one variable. Here $\circ$ is the usual composition of differential operators. (The $L$-$L$-bialgebra structure is determined by requiring each $\partial^i$ to be linear and to satisfy the appropriate Leibniz rule.) There are also exotic, arithmetic examples when $L$ is not a $\mathbb{Q}$-algebra. These are responsible for concepts like Witt vectors and $\Lambda$-rings. (When $L$ is a field of characteristic $0$, it is conjectured that any composition $L$-algebra can be generated by linear operators, and so all composition $L$-algebras should reduce to more familiar multilinear constructions, as with the differential operators above.) Perhaps the easiest one of these to give has already been mentioned by Darij Grinberg. It is the ring $\Lambda=\mathrm{Symm}$ of symmetric functions in infinitely many variables. Here addition and multiplication are as usual, and $\circ$ is the operation known as plethysm in the theory of symmetric functions. It is the composition algebra whose representations are $\Lambda$-rings and whose co-induction functor is the big Witt functor. (I haven't defined these concepts here.)

This is discussed in only a few places in the literature. In order of appearance: Tall-Wraith, Bergman-Hausknecht (this deals with general categories of a universal-algebraic nature), Borger-Wieland, Stacey-Whitehouse. And it seems that every paper on the subject uses a different term for what I called a composition algebra above. On my web page, I have slides from a talk I gave not so long ago on these things. JBorger
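As a quick concrete check on the most basic example above (only the polynomial algebra $L[x]$ with $\circ$ given by substitution is assumed here, nothing beyond what the answer states): composition distributes over addition and multiplication on one side,
$$(f+g)\circ h = f\circ h + g\circ h, \qquad (f\cdot g)\circ h = (f\circ h)\cdot(g\circ h),$$
since both sides are simply $f(h(x))+g(h(x))$ and $f(h(x))\,g(h(x))$ respectively, while in general $h\circ(f+g)\neq h\circ f + h\circ g$ (take $h(x)=x^2$). This one-sided behaviour is consistent with plethysm sitting at level (3) of the hierarchy rather than giving a second ring multiplication.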
The claim that there are two binary operations on rings is misleading. Rings are actually equipped with countably many $n$-ary operations, one for each noncommutative polynomial in $n$ variables over $\mathbb{Z}$. These generate the morphisms in a category with finite products, the Lawvere theory of rings $T$, which is a category with the property that finite product-preserving functors $T \to \text{Set}$ are the same thing as rings. It just happens to be the case that as a category with finite products, $T$ is generated by addition and multiplication. The Lawvere theory of commutative rings is similar except that the polynomials are commutative; incidentally, it may also be regarded as the category of affine spaces over $\mathbb{Z}$.

This gives a useful perspective from which to understand other ring-like structures. For example:

commutative Banach algebras are equipped with an $n$-ary operation for each holomorphic function $\mathbb{C}^n \to \mathbb{C}$.
smooth algebras like the algebras $C^{\infty}(M)$ of smooth functions on a smooth manifold are equipped with an $n$-ary operation for each smooth function $\mathbb{R}^n \to \mathbb{R}$.

Here is a general procedure for determining what operations are actually available to you when working with some mathematical objects. If $C$ is a concrete category and $F : C \to \text{Set}$ the forgetful functor, then one interpretation of "$n$-ary operation" is "natural transformation $F^n \to F$." If $C$ has finite coproducts and $F$ is representable by an object $a$, then by the Yoneda lemma these are the same thing as elements of $F(a \sqcup ... \sqcup a)$. This reproduces the obvious answers for groups, rings, etc., and when $C$ is the opposite of the category of smooth manifolds and $F : M \mapsto C^{\infty}(M)$ then we get that "$n$-ary operation" means element of $C^{\infty}(\mathbb{R}^n)$ as above. Qiaochu Yuan

$\begingroup$ But most of those infinite operations in a ring are derived ones. Counting them is sort of a display of love for formalities... $\endgroup$ – Mariano Suárez-Álvarez Feb 5 '13 at 19:38

$\begingroup$ No, it's a commitment to talking about mathematical objects instead of presentations of mathematical objects. Lawvere theories exist independent of a choice of generators in the same way that groups do. Some Lawvere theories (e.g. the Lawvere theory of smooth algebras) are best described all at once rather than using a presentation in the same way that some groups are. $\endgroup$ – Qiaochu Yuan Feb 5 '13 at 19:58

$\begingroup$ Can you imagine the classification of finite simple groups done using (even the notation required to handle) all derived operations in a group? Unless «talking about mathematical objects instead of presentations of mathematical objects» serves a purpose, it is just formalities. And, sure, in some situations, it does serve a purpose. —in finding significative examples of ring-with-three-binary-operations, not so much! $\endgroup$ – Mariano Suárez-Álvarez Feb 5 '13 at 20:03

$\begingroup$ Well, you started with «The claim that there are two binary operations on rings is misleading», which is rather difficult to misunderstand, and my point is that that is sort of backwards. All the other operations that show up when you view rings as a Lawvere theory are the result of forcing rings into a Lawvere theory —this may be useful at times (it is useful at times!) but it is just a (mostly harmless) side effect of adopting a specific point of view.
$\endgroup$ – Mariano Suárez-Álvarez Feb 5 '13 at 20:51

$\begingroup$ I think perhaps the sense in which this answer does not address the original question is that it shows ways of making lots of $n$-ary operations, but not ones which distribute over each other, which was perhaps the crux of the original question. $\endgroup$ – Noah Stein Feb 6 '13 at 16:42

This isn't an answer to this exact question, but it sounds like the student may also be interested in hearing about Hopf algebras. Peter Samuelson

$\begingroup$ They might be interested in Hopf algebras (e.g., group objects in the category of coalgebras) and also Hopf rings (e.g., ring objects in the category of coalgebras). $\endgroup$ – Paul Pearson Feb 6 '13 at 14:09

Take the ring of polynomials. More generally, any ring of functions (from a ring to itself). With the functions, you have 3 operations defined:

Pointwise addition: + such that f+g is the function t --> f(t) + g(t)
Pointwise multiplication: . such that f.g is the function t --> f(t).g(t)
Composition of functions: o such that f o g is the function t --> f(g(t))

This is defined for functions but in particular for polynomials on some ring R and for matrices with ring coefficients.

The paper "The Natural Chain of Binary Arithmetic Operations and Generalized Derivatives" by M. Carroll (link) is a great paper for undergrads that demonstrates an infinite number of binary operations (defined recursively and in terms of the exponential function) on the reals where the $i$th operation distributes over the $(i-1)$th operation. Aeryk

$\begingroup$ Curiously, in bounded arithmetic, integer-valued analogues of these functions (called "smash functions") are used: they are usually defined as $x\#_1y=xy$ and $x\#_{k+1}y=2^{|x|\#_k|y|}$, where $|x|=\lceil\log_2(x+1)\rceil$. (However, these no longer satisfy the associative and distributive laws exactly, only approximately.) $\endgroup$ – Emil Jeřábek Aug 19 '20 at 10:54

A differential-geometric example of such a thing would be differential forms (with operations of addition, wedge product and differentiation). More generally, one considers DGLAs, Differential Graded Lie Algebras (with the usual caveat that the Lie bracket is not quite commutative/associative); the operations are addition, derivation and the bracket (as well as multiplication by scalars as a free bonus). The main example is, of course, differential forms on a manifold with values in a Lie algebra. One uses DGLA's to describe deformations of pretty much anything under the sun, look here.

$L^2$ with addition, multiplication, and convolution, with both multiplication and convolution linear with respect to addition.

$\begingroup$ $L^2(\mathbb R)$ is not closed under multiplication, and is also not closed under convolution. You should probably use the Schwartz space $\mathit S(\mathbb R)$ instead. $\endgroup$ – André Henriques Dec 8 '15 at 23:35

$\begingroup$ @AndréHenriques: or replace $\mathbb R$ with a compact domain. $\endgroup$ – Michael Dec 9 '15 at 0:07

$\begingroup$ For $L^2(X)$ to be closed under multiplication, $X$ needs to be discrete. For $L^2(X)$ to be closed under convolution, $X$ needs to be compact. So the only $X$ that work are finite sets (by which I mean finite groups, otherwise, there is no such thing as convolution).
$\endgroup$ – André Henriques Dec 9 '15 at 11:28

Addition and multiplication over $\Bbb{R}$ can actually be thought of as two parts of an infinite chain of binary operations $b_n: \Bbb{R}^2 \rightarrow \Bbb{R}$ where each one distributes over the next-lower-level one: $$b_n(x, b_{n-1}(y, z)) = b_{n-1}(b_n(x, y), b_n(x, z))$$ for all $n \in \Bbb{Z}$. Let $b_0(x, y) = x + y$, and make the definitions \begin{align*} b_{k+1}(x, y) &:= e^{b_k(\ln(x), \ln(y))} \\ b_{k-1}(x, y) &:= \ln(b_k(e^x, e^y)) \end{align*} Using these definitions, we find, for example: \begin{align*} b_1(x, y) &= xy \\ b_2(x, y) &= e^{\ln(x) \ln(y)} \\ &... \\ b_{-1}(x, y) &= \ln(e^x + e^y) \\ b_{-2}(x, y) &= \ln(\ln(e^{e^x}+e^{e^y})) \\ &... \end{align*} So for instance, addition $b_0$ distributes over $b_{-1}$: $$b_0(x, b_{-1}(y, z)) = x + \ln(e^y + e^z) = \ln(e^x (e^y + e^z)) = \ln(e^{x+y} + e^{x+z}) = b_{-1}(b_0(x, y), b_0(x, z)),$$ and this can be proven inductively for all $n \in \Bbb{Z}$. Ignoring domain restrictions on logarithms, I can repeat this construction over any field equipped with an exponential function and its logarithmic inverse. I didn't need to use $e$ and $\ln$ here in particular--the choice of the exponential/log base doesn't actually affect $b_0$ (regular addition) and $b_1$ (regular multiplication), although in general it will affect higher and lower functions in the chain. Rivers McForge

$\begingroup$ This is a duplicate of this answer: mathoverflow.net/a/129558 . $\endgroup$ – Emil Jeřábek Aug 19 '20 at 10:48
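For one more worked instance, one level up (assuming $x, y, z > 0$ so that every logarithm below is defined), $b_2$ distributes over ordinary multiplication $b_1$ in the same way:
$$b_2(x, b_1(y, z)) = e^{\ln(x)\ln(yz)} = e^{\ln(x)\ln(y) + \ln(x)\ln(z)} = e^{\ln(x)\ln(y)}\, e^{\ln(x)\ln(z)} = b_1(b_2(x, y), b_2(x, z)),$$
mirroring the $n = 0$ case spelled out above.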
CommonCrawl
Heliophysics Seminars CPPG seminars R&R Seminars External Seminars Our Theory Seminars are usually held on Thursdays at 10:45am in T169 (come a little early for coffee, tea and cookies!) (All visitors to PPPL must have their host notify the Site Protection Division or the Departmental Administrator Jennifer Jones for entrance to the laboratory.) Generation of suprathermal electrons by collective processes in Maxwellian plasmas Sabrina F. Tigik, Instituto de Física, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil , abstract [#s1095, 06 Jan 2020] Weak turbulence theory is the standard formalism for treating nonlinear, low amplitude, kinetic instabilities in collisionless plasmas; therefore, it is concerned only with the description of collective plasma oscillations. However, the long-lasting timescale of nonlinear processes suggests that collisions might affect the late plasma dynamics, acting alongside nonlinear oscillatory effects [1]. Such cases, where collective and collisional processes coexist in the plasma dynamics, have been rigorously addressed only recently [2]. One of the outcomes of this extended formulation is a then-unknown mechanism named electrostatic bremsstrahlung emission. The electrostatic bremsstrahlung is a kind of transient radiation, emitted in all spectrum, caused by continuous interparticle interactions. The portion irradiated in the eigenmode frequency range is capable of altering the wave spectrum, which will then modify the velocity distribution through wave-particle resonance. Considering emissions in the Langmuir wave frequency range, and Maxwellian electrons as the initial state, I have analyzed the time evolution of the system. The result was the scattering off of a small fraction of the thermal electrons in the resonance region, to high velocities, creating a suprathermal population. The tail grows consistently in the beginning, but after long integration time, the growth rate slows down, indicating that the system is arriving at a new asymptotic equilibrium state [3]. References: [1] S. F. Tigik, L. F. Ziebell, P. H. Yoon, and E. P. Kontar. Two-dimensional time evolution of beam-plasma instability in the presence of binary collisions. Astronomy & Astrophysics, 586:A19, February 2016. [2] P. H. Yoon, L. F. Ziebell, E. P. Kontar, and R. Schlickeiser. Weak turbulence theory for collisional plasmas. Physical Review E, 93:033203, Mar 2016. [3] S. F. Tigik, L. F. Ziebell, and P. H. Yoon. Generation of suprathermal electrons by collective processes in collisional plasma. The Astrophysical Journal Letters, 849(2):L30, 2017. Remote participation via Zoom: https://zoom.us/j/8320719053 Flow effects on visco-resistive MHD in a periodic cylindrical tokamak Jervis Mendonca, Institute for Plasma Research, HBNI, India , abstract Flow and viscosity significantly modify resistive modes in a tokamak, and I have investigated these using the CUTIE code. These studies indicate that flow can be used to improve plasma duration and quality in a tokamak, and this has motivated my investigation. In this presentation, I have begun by studying the (2,1) tearing mode and found several new results, namely, that nature of stabilisation depended on whether axial or poloidal flows were used. I also observed that the sign of shear in helical flow mattered. This symmetry breaking is also seen in the nonlinear regime where the island saturation level is found to depend on the sign of the flows. 
I have proceeded to study the (1,1) internal kink mode using a Visco-Resistive MHD(V-RMHD) model. I have observed here that stabilisation due to axial flows in particular are affected by the viscosity regime. Symmetry breaking at higher viscosity in linear growth rates and nonlinear saturation levels as well is observed. In summary, for axial, poloidal, and most helical flow cases, there is flow induced stabilisation of the nonlinear saturation level in the high viscosity regime and destabilisation in the low viscosity regime. I have continued my studies in the two fluid regime. In the linear regime, we have studied how the growth rate as well as diamagnetic flow frequency of the modes changes due to fluid effects for a range of viscosity and resistivity values. I have also found diamagnetic drift stabilisation of the (1,1) mode in the two fluid case, that is, we have seen the growth rate of the (1,1) mode reduces with an increase in density gradient. In the nonlinear case, I investigate the evolution of the mode with imposed axial flow, poloidal and helical flows. I find the viscosity regime affects the nonlinear saturation regime. Wide q95 Windows for Edge-Localized-Mode Suppression by Resonant Magnetic Perturbations in the DIII-D Tokamak Qiming Hu, PPPL, abstract [#s1075, 12 Dec 2019] Edge-Localized-Mode (ELM) suppression by resonant magnetic perturbations (RMPs) typically occurs over narrow ranges in the plasma magnetic safety factor q95, however wide q95 windows of ELM suppression are favourable for the robust avoidance of ELMs in ITER and future reactors. Here we show from experiment and nonlinear two-fluid MHD simulation that wide q95 windows for ELM suppression are accessible in low-collisionality DIII-D plasmas relevant to ITER high power mission so long as the applied RMP strength exceeds the threshold for resonant field penetration at the pedestal top by ~1.5X. When the applied RMP is close to the threshold for resonant field penetration at the top of the pedestal, only isolated magnetic islands form near the pedestal top, producing narrow q95 windows of ELM suppression (dq95 ~ 0.1). However, as the threshold for field penetration decreases relative to the RMP amplitude, then multiple magnetic islands can be driven on adjacent rational surfaces near the pedestal top. Multiple magnetic islands lead to q95 window merging and wide regions of ELM suppression (up to dq95 ~ 0.7 seen in DIII-D). Nonlinear MHD simulations are in quantitative agreement with experiment and predict improved access to wide windows of ELM suppression in DIII-D by using n = 4 RMPs, due to the denser rational surfaces and the weak dependence of the penetration threshold and island width on toroidal mode number. Regimes of weak ITG/TEM modes for transport barriers without velocity shear Mike Kotschenreuther, University of Texas, Austin , abstract A deep dive into the distribution function: understanding phase space dynamics using continuum Vlasov-Maxwell simulations J. Juno, University of Maryland , abstract In collisionless and weakly collisional plasmas, the particle distribution function is a rich tapestry of the underlying physics. However, actually leveraging the particle distribution function to understand the dynamics of a weakly collisional plasma is challenging. The equation system of relevance, the Vlasov-Maxwell system of equations, is difficult to numerically integrate, and traditional methods such as the particle-in-cell method introduce counting noise into the distribution function. 
But motivated by the physics contained in the distribution function, we have implemented a novel continuum Vlasov-Maxwell method in the Gkeyll framework [1,2]. The algorithm uses the discontinuous Galerkin finite element method to produce a high order accurate solution, and a number of algorithmic breakthroughs have been made to produce a robust, cost efficient solver for the Vlasov-Maxwell system of equations. In this talk, I will present both our algorithmic work and the phase space dynamics of a number of plasma processes, focusing on two applications which make manifest the power and utility of a continuum Vlasov-Maxwell method. I will demonstrate how the high fidelity representation of the distribution function, combined with novel diagnostics, permits detailed analysis of the energization processes in a perpendicular collisionless shock. In addition, I will show a set of recent results on the evolution of kinetic instabilities driven by unstable beams of plasma, where the generation of small scale structures in velocity space has dramatic consequences for the overall macroscopic evolution of the plasma[3]. To further motivate the development of the continuum Vlasov-Maxwell method, I will also show how particle noise can modify the dynamics of the latter set of simulations with analogous particle-in-cell simulations. [1] J. Juno, A. Hakim, J. TenBarge, E. Shi, W. Dorland (2018). Discontinuous Galerkin algorithms for fully kinetic plasmas. Journal of Computational Physics, 353, 110—147. [2] A. Hakim, M. Francisquez, J. Juno, G. Hammett. "Conservative Discontinuous Galerkin Discretizations of Fokker-Planck Operators." submitted to Journal of Plasma Physics, 2019 [3] V. Skoutnev, A. Hakim, J. Juno, J. TenBarge. "Temperature-dependent Saturation of Weibel-type Instabilities in Counter-streaming Plasmas." Astrophysical Journal Letters, 872, 2. (2019) More Intelligent Execution of Ensembles of Nonlinear Kinetic Simulations: learning and improving efficiency and accuracy as we go - BARS, SHAD BARS and mini-BARS Bedros Afeyan, Polymath Research Inc. Machine learning of equivariant functions inspired by atomistic modelling with a special view towards fusion materials simulations Bastiaan J. Braams, Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands , abstract [#s1089, 27 Nov 2019] Over the past several years big data methods, including but not limited to use of deep convolutional neural networks, have been very successful in computer science applications and there is increasing effort to apply big data or machine learning methods to problems in physical science and engineering. Conversely we are seeing that problems from physical science are influencing machine learning research done in computer science environments. A very important application of big data methods in physical science where we see this mutual influence is the construction of effective interatomic potentials and force fields for atomistic modelling of molecular and condensed phase systems (e.g. [1]). This application shares features with certain applications in three-dimensional image processing in having data associated with point clouds and in seeking to represent functions that are invariant or covariant with respect to a permutation group (applied to the labelling of points in the cloud) and with respect to spatial groups of translations, rotations and inversion. 
Some by now almost classical big data approaches to the atomistic problem include use of Gaussian process approximation (kernel ridge regression) [2] and use of spherical wavelet expansions [3]. In addition deep neural networks are being applied (e.g. [4], [5]) and here we see the closest link to machine learning research with key words such as Point Cloud Convolutional Networks, Deep Sets, Spherical CNNs, Tensor Field Networks and Gauge Equivariant Neural Networks [6]. The presentation will provide a survey of these machine learning developments in the context of the application in physical science with a special view towards radiation damage simulations for fusion. [1] Ceriotti, Michele. "Atomistic machine learning between predictions and understanding." arXiv preprint arXiv:1902.05158 (2019). [2] Bartók, Albert P., and Gábor Csányi. "Gaussian approximation potentials: A brief tutorial introduction." International Journal of Quantum Chemistry 115, no. 16 (2015): 1051-1057. [3] Eickenberg, Michael, Georgios Exarchakis, Matthew Hirn, Stéphane Mallat, and Louis Thiry. "Solid harmonic wavelet scattering for predictions of molecule properties." The Journal of chemical physics 148, no. 24 (2018): 241732. [4] Schütt, Kristof T., Huziel E. Sauceda, P-J. Kindermans, Alexandre Tkatchenko, and K-R. Müller. "SchNet–A deep learning architecture for molecules and materials." The Journal of Chemical Physics 148, no. 24 (2018): 241722. [5] Zhang, Linfeng, Jiequn Han, Han Wang, Wissam Saidi, Roberto Car, and Weinan E. "End-to-end symmetry preserving inter-atomic potential energy model for finite and extended systems." In Advances in Neural Information Processing Systems, pp. 4441-4451. 2018. [6] Extensive further references to be provided in the presentation. An adjoint approach for the shape gradients of 3D MHD equilibria Elizabeth Paul, University of Maryland , abstract The design of modern stellarators often employs gradient-based optimization to navigate the high-dimensional spaces used to describe their geometry. However, computing the numerical gradient of a target function with respect to many parameters can be expensive. The adjoint method allows these gradients to be computed at a much lower cost and without the noise associated with finite differences. In addition to gradient-based optimization, the derivatives obtained from the adjoint method are valuable for local sensitivity analysis and tolerance quantification. A continuous adjoint method has been developed for obtaining the derivatives of functions of the MHD equilibrium equations with respect to the shape of the boundary of the domain or the shape of the electro-magnetic coils [1]. This approach is based on the generalization of the self-adjointness of the linearized MHD force operator. The adjoint equation corresponds to a perturbed force balance equation with the addition of a bulk force, rotational transform, or toroidal current perturbation. We numerically demonstrate this approach by adding a small perturbation to the non-linear VMEC [2] solution, obtaining an order 102-103 reduction in cost in comparison with a finite difference approach. Examples are presented for the shape gradient of the rotational transform and vacuum magnetic well, a proxy for MHD stability. The adjoint solution required for the magnetic ripple, a proxy for near-axis quasisymmetry, requires the addition of an anisotropic pressure tensor to the MHD force balance equation. This modification has been implemented in the ANIMEC [3] code. 
We furthermore demonstrate that this adjoint approach can be applied to compute shape gradients of two important figures of merit [4], the departure from quasisymmetry and the effective ripple in the low-collisionality neoclassical regime, but require the development of new equilibrium solvers. Finally, initial steps toward adjoint solutions with a linearized equilibrium approach will be presented. [1] Antonsen, T.M., Paul, E.J. & Landreman, M. 2019 Adjoint approach to calculating shape gradients for three-dimensional magnetic confinement equilibria. Journal of Plasma Physics 85 (2). [2] Hirshman, S.P. & Whitson, J.C. 1983 Steepest descent moment method for three-dimensional magnetohydrodynamic equilibria. Physics of Fluids 26 (12), 3553. [3] Cooper, W.A., Hirshman, S.P., Merazzi, S. & Gruber, R. 1992 3D magnetohydrodynamic equilibria with anisotropic pressure. Computer Physics Communications 72 (1), 1–13. [4] Paul, E.J., Antonsen, T.M., Landreman, M., Cooper, W.A. Adjoint approach to calculating shape gradients for three-dimensional magnetic confinement equilibria, Part II: Applications. Submitted to Journal of Plasma Physics. Learning data driven discretizations for partial differential equations Stephan Hoyer, Google, abstract The numerical solution of partial differential equations (PDEs) is challenging because of the need to resolve spatiotemporal features over wide length and timescales. Often, it is computationally intractable to resolve the finest features in the solution. The only recourse is to use approximate coarse-grained representations, which aim to accurately represent long-wavelength dynamics while properly accounting for unresolved small scale physics. Deriving such coarse grained equations is notoriously difficult, and often ad hoc. Here we introduce data driven discretization, a method for learning optimized approximations to PDEs based on actual solutions to the known underlying equations. Our approach uses neural networks to estimate spatial derivatives, which are optimized end-to-end to best satisfy the equations on a low resolution grid. The resulting numerical methods are remarkably accurate, allowing us to integrate in time a collection of nonlinear equations in one spatial dimension at resolutions 4-8x coarser than is possible with standard finite difference methods. Variational Principles and Applications of Local Topological Constants of Motion for Non-Barotropic Magnetohydrodynamics Asher Yahalom, Ariel University, Israel, abstract Variational principles for magnetohydrodynamics (MHD) were introduced by previous authors both in Lagrangian and Eulerian form. In this presentation we introduce simpler Eulerian variational principles from which all the relevant equations of non-barotropic MHD can be derived for certain field topologies. The variational principle is given in terms of five independent functions for non-stationary non-barotropic flows. This is less than the eight variables which appear in the standard equations of barotropic MHD, which are the magnetic field B, the velocity field v, the entropy s and the density ρ. The case of non-barotropic MHD in which the internal energy is a function of both entropy and density was not discussed in previous works which were concerned with the simplistic barotropic case. It is important to understand the role of entropy and temperature for the variational analysis of MHD. Thus we introduce a variational principle of non-barotropic MHD and show that five functions will suffice to describe this physical system.
We will also discuss the implications of the above analysis for topological constants. It will be shown that while cross helicity is not conserved for non-barotropic MHD a variant of this quantity is. The implications of this to non-barotropic MHD stability is discussed. [1] Asher Yahalom, "Simplified Variational Principles for non-Barotropic Magnetohydrodynamics," J. Plasma Phys. (2016), vol. 82, 905820204. [2] Asher Yahalom, "Non-Barotropic Magnetohydrodynamics as a Five Function Field Theory," International Journal of Geometric Methods in Modern Physics, No. 10 (November 2016), Vol. 13 1650130. [3] Asher Yahalom, "A Conserved Local Cross Helicity for Non-Barotropic MHD," Journal of Geophysical & Astrophysical Fluid Dynamics (2017), Vol. 111, No. 2, 131–137. [4] Asher Yahalom, "Non-Barotropic Cross-helicity Conservation Applications in Magnetohydrodynamics and the Aharanov - Bohm effect," Fluid Dynamics Research (2017), Volume 50, Number 1, 011406. First-principle formulation of resonance broadened quasilinear theory near an instability threshold Vinícius Duarte, PPPL, abstract [#s1069, 19 Sep 2019] A method is developed to analytically determine the resonance broadening function in quasilinear theory, due to either Krook or Fokker-Planck scattering collisions of marginally unstable plasma systems where discrete resonance instabilities are excited without any mode overlap. It is demonstrated that a quasilinear system that employs the calculated broadening functions reported here systematically recovers the nonlinear growth rate and mode saturation levels for near-threshold plasmas previously calculated from kinetic theory. The distribution function is also calculated, which enables precise determination of the characteristic collisional resonance width. [Based on preprint V. Duarte, N. Gorelenkov, R. White and H. Berk, The collisional resonance function in discrete-resonance quasilinear plasma systems, arXiv:1906.01780v1 (2019)] Global gyrokinetic PIC simulations of electromagnetic perturbations in fusion plasmas Alexey Mishchenko, IPP, abstract An overview of European gyrokinetic PIC codes will be presented with a focus on numerical schemes used in the electromagnetic regime. Simulations of Alfven modes destabilized by the fast particles and of the zonal flows generated by the unstable Alfven waves will be discussed. Gyrokinetic PIC simulations of the MHD-type instabilities will be addressed on example of the internal kink modes. Simulations of the transition from the ITG-dominated regime to the KBM regime and finally to the microtearing instabilities with plasma beta increasing will be shown. Next-gen distributed computing networks in support of research, discovery and innovation Dan Desjardin, Royal Military College of Canada/Distributed Compute Labs, abstract [#s1065, 15 Aug 2019] Distributed Compute Labs (DCL) is a Canadian educational nonprofit organization responsible for developing and deploying the Distributed Compute Protocol (DCP), a lightweight, easy-to-use, idiomatic, and powerful computing framework built on modern web technology, that allows any device — from smartphones to enterprise web servers — to contribute otherwise idle CPU and GPU capacity to secure and configurable general-purpose computing networks. By leveraging existing devices and infrastructure — a university's desktop fleet, for example — a large supply of latent computational resources becomes available at no additional capital cost. 
DCP makes it possible for everyone — from a student in Santa Barbara to a large enterprise in New York City — to have access to large quantities of cost-effective computing resources. In summary, the Distributed Compute Protocol democratizes access to digital infrastructure, reduces barriers, and unleashes innovation. Phase-space theory of the Dimits shift and cross-scale interactions in drift-wave turbulence Ilya Dodin, PPPL, abstract, slides Interactions of drift-wave turbulence with zonal flows, which are of interest due to their effect on fusion-plasma confinement, can be elucidated by using phase-space methods from quantum theory [1-7]. In this talk, I will show how applying these methods: (i) helps explain the cross-scale interactions between ITG and ETG turbulence seen in gyrokinetic simulations and also (ii) leads to a semi-quantitative analytic theory of the zonal-flow stability and the Dimits shift within the Terry-Horton model of drift-wave turbulence. [1] H. Zhu, Y. Zhou, and I. Y. Dodin, Nonlinear saturation and oscillations of collisionless zonal flows, New J. Phys. 21, 063009 (2019). [2] Y. Zhou, H. Zhu, and I. Y. Dodin, Formation of solitary zonal structures via the modulational instability of drift waves, Plasma Phys. Control. Fusion 61, 075003 (2019). [3] D. E. Ruiz, M. E. Glinsky, and I. Y. Dodin, Wave kinetic equation for inhomogeneous drift-wave turbulence beyond the quasilinear approximation, J. Plasma Phys. 85, 905850101 (2019). [4] H. Zhu, Y. Zhou, and I. Y. Dodin, On the Rayleigh-Kuo criterion for the tertiary instability of zonal flows, Phys. Plasmas 25, 082121 (2018). [5] H. Zhu, Y. Zhou, and I. Y. Dodin, On the structure of the drifton phase space and its relation to the Rayleigh-Kuo criterion of the zonal-flow stability, Phys. Plasmas 25, 072121 (2018). [6] H. Zhu, Y. Zhou et al., Wave kinetics of drift-wave turbulence and zonal flows beyond the ray approximation, Phys. Rev. E 97, 053210 (2018). [7] D. E. Ruiz, J. B. Parker et al., Zonal-flow dynamics from a phase-space perspective, Phys. Plasmas 23, 122304 (2016). Gyrokinetic study of RMP-driven plasma density and heat transport in tokamak edge plasma using MHD screened RMP field Robert Hager, PPPL, abstract, slides ITER plans to rely on RMP coils as the primary means for ELM control. However, puzzling observations on present-day experiments complicate understanding the underlying physics: plasma density is pumped out, which can lower the fusion efficiency in ITER, while the electron heat transport is still low in the pedestal. A kinetic-level understanding that includes most of the important physics is needed but has not been available. Gyrokinetic total-f simulation of the plasma transport driven by n=3 resonant magnetic perturbations (RMPs) in a DIII-D H-mode plasma is performed using the gyrokinetic code XGC. The RMP field is calculated in M3D-C1 and coupled into XGC in realistic divertor geometry with neutral particle recycling. The RMP field is stochastic around the pedestal foot but exhibits good KAM surfaces at the pedestal top and steep-slope region. The simulation qualitatively reproduces the experimental phenomena: plasma density is pumped out due to enhanced electrostatic turbulence while electron heat transport is low. In contrast to earlier gyrokinetic studies, the present simulation consistently combines neoclassical and turbulent transport, a fully nonlinear Fokker-Planck collision operator, neutral particle recycling, and the full 3-D electric field. Density pump-out is not seen without turbulence effects.
Reduction of the ExB shearing rate is likely to be mostly responsible for the enhanced edge turbulence, which is found to arise from trapped electron modes. Velocimetry and the aperture problem for 2D incompressible flows Tim Stoltzfus-Dueck, abstract, slides [#s1028, 16 May 2019] The inference of velocity fields from 2D movies of evolving conserved scalars (optical flow) is fundamentally ambiguous due to the well-known "aperture problem": velocities along isocontours of the scalar are not visible. This may even corrupt the inference of velocity fields averaged at scales longer than the typical length scale of features in the scalar field, as in the drift wave. However, for divergence-free flows, a stream-function formulation allows us to show that the "invisible velocity" vanishes in the surface average inside any closed scalar isocontour. This error-free averaged velocity may be used as an "anchor" for a more reliable inference of the larger-scale velocity field, or to test model-based optical-flow schemes. We have also used the stream-function formulation to derive a new method of optical flow for divergence-free flows. We discuss the new algorithm, including details of discretization, boundary conditions, and image preprocessing that can significantly affect its performance. A simple implementation of the new method is shown to work well for a number of synthetic movies, and is also applied to a GPI movie of edge turbulence in NSTX. Realisability of discontinuous MHD equilibria Adelle Wright, Australian National University, Canberra, abstract, slides Smooth 3D MHD equilibria with non-uniform pressure may not exist but, mathematically, there exist 3D MHD equilibria with non-uniform, stepped pressure profiles. The pressure jumps occur at surfaces with highly irrational values of rotational transform and generate singular current sheets. If physically realisable, how such states form dynamically remains to be understood. To be physically realisable states, MHD equilibria must exist for some non-trivial timescale, meaning they must at least be ideally stable and sufficiently stable to the fastest growing resistive instabilities. This presentation will discuss recent progress towards understanding discontinuous MHD equilibria via a stability analysis of a continuous cylindrical equilibrium model with radially localised pressure gradients, which examines how the resistive stability characteristics of the model change as the localisation of pressure gradients is increased to approach a discontinuous pressure profile in the zero-width limit. Kinetic Effects on Adiabatic Index via Geodesic Acoustic Continuum Calculations Fabio Camilo de Souza, University of Sao Paulo, Brazil, abstract, slides The Geodesic Acoustic Mode (GAM) is primarily an electrostatic oscillation with the dominant toroidal N=0 and poloidal M=0,1 mode numbers. First predicted within magnetohydrodynamic (MHD) theory [1], its frequency is proportional to the square root of the adiabatic index gamma. GAMs can also be treated by kinetic theory, which can include such effects as the plasma rotation of different species [2], fast ions with bump-on-tail-like [3] or slowing-down [4] distribution functions, and other models, to investigate the plasma equilibrium conditions. NOVA [5] is an ideal MHD code that computes the Alfvénic and acoustic continua and eigenmodes. The adiabatic index in NOVA is a fixed parameter, typically 5/3.
The acoustic oscillations are calculated using the prescribed gamma value and its coupling with the other continua. Kinetic calculations, in contrast, can include additional effects that make the GAM continuum more accurate. We modified NOVA to include a profile for gamma, given as a function of the magnetic surfaces. This modification makes the MHD acoustic continuum match the kinetic one. This kinetic gamma allows more accurate eigenmodes, which are strongly coupled to the GAM continuum structure, to be computed. This conclusion applies to all low-frequency oscillations, including GAMs, the Alfvén-acoustic BAAE modes, and others. Understanding the impact of discharge parameters on these modes can improve plasma transport control. Simulation results for DIII-D will be presented: an energetic-particle-induced GAM appears at the maximum of the modified GAM continuum, and the result matches the observed frequency of the N=0 oscillation. Implications for experiments in NSTX will be discussed. [1] N. Winsor, et al., Phys. Fluids 11, 2448 (1968) [2] A.G. Elfimov, et al., Phys. Plasmas 22, 114503 (2015) [3] F. Camilo de Souza, et al., Phys. Lett. A 381, 3066 (2017) [4] Z. Qiu, et al., Plasma Phys. Controlled Fusion 52, 095003 (2010) [5] C. Z. Cheng and M. S. Chance, Journal of Computational Phys., 71, 124-146 (1987) Mapping the Sawtooth Chris Smiet, PPPL, abstract The magnetic field in a tokamak defines a field line map: a mapping from a poloidal cross-section to that same cross-section by following magnetic field lines. Such a map must necessarily contain fixed points, of which the magnetic axis is an example. The Jacobian (the matrix of partial derivatives) $\mathsf{M}$ describes the mapping around such a fixed point to first order, and is an element of the Lie group $SL(2,\mathbb{R})$. Different elements of this group act on the Euclidean plane as rotations, shear mappings or hyperbolic fixed points. We look at a transition from an elliptic fixed point (field lines lie on nested surfaces around the fixed point) to an alternating hyperbolic fixed point (field lines map to opposite branches of hyperbolic surfaces) that can occur at $q=2, 2/3, 2/5, 2/7 ...$. Using the NOVA-K code we identify an ideally unstable mode that is localized on the axis and has a high growth rate when the safety factor is close to 2/3. This mode drives the fixed point into the alternating hyperbolic regime. The nonlinear evolution of this instability can lead to complete stochastization of a region near the axis. Though the sawtooth oscillation has long been observed, there is still disagreement between theoretical models and observations, and no model can reconcile all observations. We speculate that the above transition could explain some of the observations that do not fit other models, such as measurements of q below 1, snakes, and persistent Alfven Eigenmodes. Hyper-Resistive Model of UHE Cosmic Ray Acceleration by AGNs Ken Fowler, UC Berkeley, abstract, slides [#s1024, 25 Apr 2019] Ultra High Energy (UHE) cosmic rays (~ 10^20 eV) may be produced by known processes of acceleration by plasma turbulence in magnetized jets produced by Active Galactic Nuclei (AGNs). A simple model in which turbulence is represented as hyper-resistivity in Ohm's Law yields several predictions in sufficient agreement with observations to motivate further investigation.
Besides jet dimensions, these predictions include the unique extra-galactic cosmic ray energy spectrum ($\propto 1/E^3$) and a different interpretation of the synchrotron radiation by which AGN jets are observed. Crucial to the model is a new theory of jet propagation whereby un-collimated jets generated in General Relativistic MHD simulations evolve into a highly collimated structure, finally propagating at a speed of 0.01c, which explains jet dimensions, while relativistic acceleration parallel to field lines yields both cosmic rays and synchrotron radiation. [1] S. A. Colgate, T. K. Fowler, H. Li & J. Pino, 2014 ApJ 789, 144, on AGN jets [2] S. A. Colgate, T. K. Fowler, H. Li et al. 2015 ApJ 813, 136, on jet stability [3] T. K. Fowler & H. Li, 2016 J. Plas. Phys. 82, 595820513, on UHE acceleration [4] T. K. Fowler, H. Li, R. Anantua, 2019 ArXiV 2615445 Realistic 2D quasilinear modeling of fast ion relaxation Vinicius Duarte, PPPL, slides Fishbone instability and transport of energetic particles Guillaume Brochard, CEA, IRFM, abstract, slides Analytical and numerical studies have been carried out to linearly verify the newly implemented 6D PIC module in the nonlinear hybrid code XTOR-K [1],[2]. This code solves the two-fluid extended MHD equations in toroidal geometry while taking into account, self-consistently, kinetic ion populations. The verification has been performed against a linear model [3], developed from [4],[5],[6]. It solves non-perturbatively the kinetic internal kink dispersion relation, with the particularity that it takes into account non-resonant kinetic terms and passing particles, which have been revealed to be crucial features of the fishbone instability. A verification between the model and XTOR-K in its linear phase is presented, regarding the frequency and growth rate (ω, γ) of the internal kink, and the position in phase space of the resonant planes. As expected, the instability is stabilized on the kink branch and then destabilized on the fishbone branch. On the basis of this verification, a series of linear and nonlinear runs has been performed with XTOR-K. Firstly, a linear study of the alpha-induced fishbone instability on the ITER 15 MA equilibrium has been carried out. It highlights the linear thresholds of the instability in the diagram (q0, βh,0), with q0 the on-axis safety factor and βh,0 the on-axis kinetic beta. The fishbone mode is found to be unstable for ITER-relevant (q0, βh) pairs. Secondly, a first nonlinear simulation has been performed to study the nonlinear evolution of the wave-particle interaction between the kink rotation and the alpha particle precessional motion on the fishbone branch. The equilibrium is taken to have circular ITER-like flux surfaces and alpha particles at a peak energy of 1 MeV. Such a simulation explores one limit described in [7] where particle nonlinearities dominate, far above the linear threshold in βh,0. Results show strong chirping of the fishbone mode associated with the transport of resonant particles. It is found that, for this configuration, the fishbone instability transports 5% of the core's alphas toward q = 1. Once the resonant particles have reached q = 1, a classical internal kink grows. [1] H. Lütjens, J-F. Luciani, JCP, (2010), 229, 8130-8143 [2] D. Leblond, PhD thesis, (2011) [3] G. Brochard et al 2018 J. Phys.: Conf. Ser. 1125 012003 [4] D. Edery, X. Garbet, J-P. Roubin, A. Samain (1992), PPCF, 34, 6, 1089-1112 [5] F. Porcelli et al, Phys. Fluids B 4 (10), 1992 [6] F. Zonca, L.
Chen, Physics of Plasmas 21, 072121 (2014) [7] F. Zonca et al, New J. Phys. 17 (2015) 013052 Improving computational efficiency of kinetic simulations with physics, mathematics, and machine learning George Wilkie, Centrum Wiskunde & Informatica (CWI), abstract, slides Kinetic theory has made tremendous progress in recent decades thanks to reduced models and improved computational capacity. Some problems, especially in the non-Maxwellian regime, remain difficult even for large supercomputing clusters. In this talk, I will discuss how such problems can be solved on laptops with the right tools applied. Physical approximations can be made to reduce the burden of predicting the interaction between turbulence and energetic particles. To complement well-established physical reductions of the nonlinear Boltzmann equation, recent advances in applied mathematics are utilized for direct efficient solution. Throughout, I will discuss ongoing and potential applications of machine learning to improve efficiency even further. Simulating Relativistic Astroplasmas from Microphysics to Global Dynamics Kyle Parfrey, NASA Goddard Space Flight Center, abstract, slides [#s1007, 19 Mar 2019] The most extreme and surprising behaviors of black holes and neutron stars are driven by their surrounding magnetic fields and plasmas. Numerical simulations of these systems are complicated by the exotic physical conditions, requiring new approaches. I will present a range of computational methods which are well adapted to challenges such as strongly curved spacetime, energetically dominant electromagnetic fields, and pathological current sheets. In particular, I will describe how a new technique for general-relativistic plasma kinetics will aid in understanding black holes' particle acceleration and jet launching, and in interpreting future observations with the Earth-spanning Event Horizon Telescope. Effects of zonal flows on transport crossphase in Dissipative Trapped-Electron Mode turbulence Michael Leconte, National Fusion Research Institute, Korea, abstract, slides [#s992, 14 Mar 2019] Confinement regimes with edge transport barriers occur through the suppression of turbulent (convective) fluxes in the particle and/or thermal channels, i.e. $\Gamma = \sum_k \sqrt{|n_k|^2}\sqrt{|\phi_k|^2}\sin\delta^{n,\phi}_k$ and $Q = \sum_k \sqrt{|T_k|^2}\sqrt{|\phi_k|^2}\sin\delta^{T,\phi}_k$, respectively, for drift-wave turbulence. The quantity $|\phi_k|^2$ is the turbulence intensity, while $\delta^{n,\phi}_k$ and $\delta^{T,\phi}_k$ are the crossphases. For H-mode, a decorrelation theory predicts that it is the turbulence intensity $|\phi_k|^2$ that is mainly affected via flow-induced shearing of turbulent eddies [1]. However, for other regimes (e.g. I-mode), characterized by high energy confinement but low particle confinement, this decrease of turbulence amplitude cannot explain the decoupling of particle vs. thermal flux, since a suppression of turbulence intensity $|\phi_k|^2$ would necessarily affect both fluxes the same way. Here, we explore a possible new stabilizing mechanism: zonal flows may directly affect the transport crossphase. We show the effect of this novel mechanism on the turbulent particle flux, by using a simple fluid model [2,3] for the dissipative trapped-electron mode (DTEM), including zonal flows. We first derive the evolution equation for the transport crossphase $\delta_k$ between density and potential fluctuations, including contributions from the E × B nonlinearity [4].
By using a parametric interaction analysis including the back-reaction on the pump, we obtain a predator-prey-like system of equations for the pump amplitude $\phi_p$, the pump crossphase $\delta_p$, the zonal amplitude $\phi_z$ and the triad phase-mismatch $\Delta\delta$. The system displays limit-cycle oscillations in which the instantaneous DTEM growth rate, proportional to the crossphase, shows quasi-periodic relaxations during which it departs from the prediction of linear theory. This implies that the crossphase does not respond instantaneously to the driving gradient; instead there is a finite response time, which for DTEM corresponds to the inverse of the de-trapping rate $\nu = \nu_{ei}/\sqrt{\epsilon}$, with $\nu_{ei}$ the electron-ion collisionality and $\epsilon$ the inverse aspect ratio. [1] H. Biglari, P.H. Diamond, P.W. Terry, Phys. Fluids B 2, 1 (1990). [2] D. A. Baver, P.W. Terry, R. Gatto and E. Fernandez, Phys. Plasmas 9, 3318 (2002). [3] F.Y. Gang, P.H. Diamond, J.A. Crotinger and A.E. Koniges, Phys. Fluids 3, 955 (1991). [4] C.Y. An, B. Min and C.B. Kim, Plasma Phys. Controlled Fusion 59, 115006 (2017). Astrophysical Collisionless Shock and Current Sheet Instabilities: Particle Modeling and Laboratory Study Zhenyu Wang, Princeton University, abstract, slides In the first part of this talk, I will present the modelling and interpretation of a campaign of laser experiments designed to generate high Mach number magnetized collisionless shocks on OMEGA-EP. We compare the data to the results of 3-D PIC simulations, and describe the signatures of magnetized shock formation, including the early contact discontinuity formation stage, and a later magnetic reflection with magnetic overshoots. We explain the geometrical effects on the radiography introduced by the density gradient in the expanding plasma and by the curvature of the imposed magnetic field. We conclude that our experiments have reproducibly achieved magnetized shocks with Alfvenic Mach number 3 to 9 in laboratory conditions. In the second part, I will describe the gyrokinetic (GK) electron and fully kinetic (FK) ion particle (GeFi) simulation model and the particle simulation results of waves and current sheet instabilities. In the GeFi model, the GK electron approximation removes the high frequency electron gyromotion and plasma oscillation, but the electron finite Larmor radii effects are retained. For lower-hybrid waves, the GeFi results agree well with the fully kinetic explicit delta-f particle code and the fully kinetic Darwin particle code. Our 3-D GeFi and FK simulation results demonstrate the existence of the lower-hybrid-drift, kink and sausage instabilities in current sheets under finite guide magnetic fields with the realistic proton-to-electron mass ratio. Kinetic Simulations of Collisionless Plasmas Rahul Kumar, Princeton University, abstract, slides The particle-in-cell method has been remarkably successful in understanding physical processes occurring at the kinetic scales. I will discuss the implementation of the electromagnetic particle-in-cell method for collisionless plasmas in a self-developed code called PICTOR. I will focus on a few techniques employed to improve the performance, diagnostics, and scalability of the code. I will then briefly discuss two physics problems to illustrate the efficacy of PIC simulations in addressing a few outstanding problems in plasma physics. First, I will discuss preferential heating and acceleration of heavy ions in Alfvenic turbulence.
Then, I'll discuss how self-consistent PIC simulations, combined with measurements from the Voyager spacecraft, could be used to obtain a comprehensive understanding of the dynamics of the solar wind termination shock. Available energy of magnetically confined plasmas Per Helander, Max Planck Institute for Plasma Physics, Greifswald, Germany, abstract, slides [#s990, 28 Feb 2019] In this talk, the energy budget of a collisionless plasma subject to electrostatic fluctuations is studied. In particular, the excess of thermal energy over the minimum accessible to it under various constraints that limit the possible forms of plasma motion is considered. This excess measures how much thermal energy is "available" for conversion into plasma instabilities, and therefore constitutes a nonlinear measure of plasma stability. The "available energy" defined in this way becomes an interesting and useful quantity in situations where adiabatic invariants impose non-trivial constraints on the plasma motion. For instance, microstability properties of certain stellarators can be inferred directly from the available energy, without the need to solve the gyrokinetic equation. The technique also suggests that an electron-positron plasma confined by a dipole magnetic field could be entirely free from turbulence. Finite Larmor Radius effects at the high confinement mode pedestal and the related force-free steady state Wei-li Lee, PPPL, abstract For this talk, we will first relate our previous calculations of the radial electric field at the high confinement H-mode pedestal [W. W. Lee and R. B. White, Phys. Plasmas 24, 081204 (2017)] to the actual magnetic fusion experimental measurements. We will then discuss the new pressure balance due to the E x B current, which is induced by the resulting radial electric field, and its impact on the gyrokinetic MHD equations as well as their conservation properties in the force-free steady state. Impurity transport in stellarator plasmas Albert Mollén, IPP Greifswald, abstract, slides In contrast to tokamaks, where turbulence typically dominates, a substantial fraction of the radial energy and particle transport in stellarators can often be attributed to collisional processes. The kinetic calculation of collisional transport has for a long time relied on simplified models which use the "mono-energetic" approximation and the simple pitch-angle scattering collision operator, and which are radially local. But not all experimental observations have been satisfactorily explained, and in recent years more advanced numerical tools have appeared which relax some of the approximations. These improvements in the modelling can be of particular importance when analyzing the impurity transport. I will discuss how the calculation of the impurity transport has advanced in recent years, and what the latest observations in the Wendelstein 7-X stellarator are. Black Aurora, Bohm Diffusions, Quasi-Neutrality Mysteries Explained Kwan Chul Lee, NFRI, abstract, slides Gyrocenter-Shift Analysis has been developed based on the momentum exchange of ion-neutral interactions in magnetized plasmas. In the seven years since the last PPPL theory seminar on this topic, held on January 10, 2012, three papers on GCS analysis have been published [1-3]. This presentation will focus on the quasi-neutrality controversy after revisiting the basis of GCS analysis.
Explanations of the satellite measurements of the black aurora and of the transport phenomena known as Bohm diffusions will be presented, as well as a related KSTAR experimental proposal. [1] K. C. Lee, "Violation of Quasi-neutrality for Ion-neutral Charge-exchange Reactions in Magnetized Plasmas", J. of Korean Phys. Soc., Vol. 63, No.10, 1944 (2013) [2] Kwan Chul Lee, "Analysis of Bohm Diffusions Based on the Ion-Neutral Collisions", IEEE Trans. on Plasma Science, Vol. 43 No. 2, 494 (2015) [3] Kwan Chul Lee, "Electric field formation in three different plasmas: A fusion reactor, arc discharge, and the ionosphere", Phys. of Plasmas, Vol. 24, 112505 (2017) Bringing the golden standard into the silicon age Francesca Poli, PPPL, abstract, slides [#s975, 31 Jan 2019] A lot has happened since TRANSP made its first appearance about thirty years ago. The 'golden standard' for tokamak discharge analysis has evolved into a code that is capable of predicting heating and current drive, as well as thermal and particle transport. Its pool of users has expanded internationally to cover almost every tokamak operating nowadays, demanding modernization, increasing support and additional capabilities. This talk will review the strengths and weaknesses of TRANSP, and the plans for implementation of new physics modules targeting (as closely as possible) a Whole Device Model. It will discuss areas for partnership with the theory department and ongoing activities and plans in collaboration with the SciDAC projects, for extension of the transport outside the plasma boundary and for self-consistent calculations of transport and stability, including transport induced by energetic particles. Compressional Alfvén eigenmodes excited by runaway electrons Chang Liu, PPPL, abstract Runaway electron phenomena are an important topic in general, and critically important in tokamak disruption studies. Given their high potential for damaging effects, it is critical to understand the physics of their generation and find a mitigation strategy for ITER. Kinetic instabilities associated with high-energy runaway electrons have been shown to play an important role in the behavior of runaway electrons. Recently, a new kind of instability with magnetic signals in the Alfvén frequency range has been observed in disruption experiments in DIII-D, which is found to be related to the dissipation of the runaway electron current. In this talk, a candidate explanation for this phenomenon is presented, namely resonant interaction with compressional Alfvén eigenmodes (CAE). CAEs driven by energetic ions have been well studied in spherical tokamaks like NSTX. For CAEs driven by resonances with runaway electrons, the damping rate of the modes due to electron-ion collisions is calculated. The model is applied to a time-dependent kinetic simulation of runaway electrons, which includes the bounce-average effect and the enhanced ion pitch-angle scattering due to partial screening. The results match experiments qualitatively, and provide a promising way to diffuse runaway electrons before their energization. A brief overview of related research into runaway electrons is also given, indicating how this work fits into the wider effort to find mitigation strategies. Magnetohydrodynamical equilibria with current singularities and continuous rotational transform Yao Zhou, abstract, slides We revisit the Hahm-Kulsrud-Taylor (HKT) problem, a classic prototype problem for studying resonant magnetic perturbations and 3D magnetohydrodynamical equilibria.
We employ the boundary-layer techniques developed by Rosenbluth, Dagazian, and Rutherford (RDR) for the internal m=1 kink instability, while addressing the subtle difference in the matching procedure for the HKT problem. Pedagogically, the essence of RDR's approach becomes more transparent in the reduced slab geometry of the HKT problem. We then compare the boundary-layer solution, which yields a current singularity at the resonant surface, to the numerical solution obtained using a flux-preserving Grad-Shafranov solver. The remarkable agreement between the solutions demonstrates the validity and universality of RDR's approach. In addition, we show that RDR's approach consistently preserves the rotational transform, which hence stays continuous, contrary to a recent claim that RDR's solution contains a discontinuity in the rotational transform. Gyrokinetic continuum simulations of plasma turbulence in the Texas Helimak Tess Bernard, University of Texas - Austin, abstract [#s942, 13 Dec 2018] The first gyrokinetic simulations of plasma turbulence in the Texas Helimak device are presented. These have been performed using the Gkeyll (http://gkyl.readthedocs.io/) computational framework. The Helimak is a simple magnetized torus with a toroidal and vertical magnetic field and open field lines that terminate on conducting plates at the top and bottom of the device. It has features similar to the scrape-off layer region of tokamaks, such as bad curvature-driven instabilities and sheath boundary conditions on the end plates, which are included in these initial gyrokinetic simulations. A bias voltage can be applied across conducting plates to drive E x B flow and study the effect of velocity shear on turbulence suppression. Comparisons are presented between simulations and measurements from the experiment, showing qualitative similarities, including fluctuation amplitudes and equilibrium profiles that approach experimental values. There are also some important quantitative differences, and I discuss how certain physical and geometric effects may improve agreement in future results. Machine Learning Techniques for Analysis of High-Dimensional Chaotic Spatiotemporal Dynamics Jaideep Pathak, University of Maryland, abstract, slides High-dimensional chaos is a commonly observed feature of many interesting natural systems such as fluid flows or atmospheric dynamics. We formulate a machine learning approach for short-term prediction [1] and for understanding the long-term ergodic properties of the underlying dynamical processes of high-dimensional chaotic spatiotemporal dynamical systems. Specifically, we consider a situation in which limited duration time series data from some dynamical process is available, but a mechanistic, knowledge-based model of how that data is produced is either unavailable or too inaccurate to be useful. Our proposed approach, using a particular kind of recurrent neural network called a Reservoir Computer [2], is computationally efficient and scalable through parallelization to dynamical systems with arbitrarily high-dimensional chaotic attractors. Further, in the context of chaotic dynamical systems, machine learning can also be used in conjunction with an imperfect knowledge-based model for filling in gaps in our underlying mechanistic knowledge that can cause the model to be inaccurate. We demonstrate this technique using simulated data from the Kuramoto-Sivashinsky partial differential equation as well as the Lorenz '96 toy model of atmospheric dynamics [3]. [1] J. 
Pathak, B. Hunt, M. Girvan, Z. Lu, E. Ott, Phys. Rev. Lett. 120 (2018) 024102. [2] H. Jaeger, H. Haas, Science 304 (2004) 78–80. [3] E.N. Lorenz, in: T. Palmer, R. Hagedorn (eds.), Predict. Weather Clim., Cambridge University Press, Cambridge, 2006, pp. 40–58. From SOL turbulence to planetary magnetospheres: computational plasma physics at (almost) all scales using the Gkeyll code Ammar Hakim, PPPL, abstract, slides Gkeyll is a computational plasma physics package that aims to simulate plasmas at all scales. At present, the code contains solvers for three major equation systems: Vlasov-Maxwell equations, electromagnetic gyrokinetic equations and multi-fluid moment equations. These span the complete range of plasma physics: electromagnetic shocks, turbulence and first-principles sheath physics, requiring full kinetic treatment; turbulence in the tokamak core and SOL, requiring EM gyrokinetics; and planetary magnetospheres, requiring fluid treatment with proper accounting of kinetic effects to capture reconnection and current sheet dynamics. In this talk I will present the status of each of the major solvers implemented in the code, with emphasis on the features of the algorithms as well as the physics being studied. In particular, I will focus on recent progress in implementing a conservative algorithm for collisions; details of a novel algorithm for EM gyrokinetics in the symplectic formulation; and recent progress in developing a robust semi-implicit algorithm for multi-fluid moment equations. I will conclude with the short- and medium-term plans for the project as well as some ideas on strengthening advanced computing at PPPL, leveraging the work done in Gkeyll as well as other projects. https://gkeyll.readthedocs.io/en/latest/ Coupled core-edge gyrokinetic simulation and Tungsten impurity transport in JET with XGC Julien Dominski, abstract, slides Incoming exascale capabilities of supercomputers will enable whole-device simulations based on first-principles plasma physics. To take full advantage of these new capabilities, new numerical schemes and more complete physical models are developed in XGC. XGC is a whole-volume total-f gyrokinetic code optimized for simulation of edge plasma in magnetic fusion devices. One goal of the ECP project is to couple XGC with a core code, such as GENE, to optimize the efficiency of whole-device simulations. The current status of this core-edge coupling project will be presented, including the core-edge coupling scheme and the coupled GENE-XGC simulations. Another goal is to study the influence of tungsten and beryllium on the performance of ITER. The total-f gyrokinetic code XGC is thus being upgraded to simulate the physics of multi-species impurities in the whole volume, including the SOL. A first multi-species simulation of a JET plasma under tungsten contamination will be demonstrated, showing that the lower charge-state W can move inward into the core, but the higher charge-state W will move outward toward the pedestal top and accumulate, as seen in ASDEX-U. KBM nonlinear dynamics and first-principles-based classifiers with Machine Learning predictions for disruptions Ge Dong, Princeton University, abstract, slides [#s923, 20 Nov 2018] Kinetic ballooning modes (KBM) are widely believed to play a critical role in disruptive dynamics as well as turbulent transport in tokamaks.
While the nonlinear evolution of ballooning modes has been proposed as a mechanism for "detonation" in tokamak plasmas, the role of kinetic effects in such nonlinear dynamics remains largely unexplored. In this study, the saturation mechanism and nonlinear dynamics of KBMs are presented, based on global gyrokinetic simulation results of KBM nonlinear behavior. Instead of the finite-time singularity predicted by ideal MHD theory, the kinetic instability is shown to develop into an intermediate nonlinear regime of exponential growth, followed by a nonlinear saturation regulated by spontaneously generated zonal fields. In the intermediate nonlinear regime, rapid growth of a localized current sheet, which can mediate reconnection, is observed. In the future, the linear properties as well as nonlinear mode structures from the simulations could be incorporated into deep learning models for disruption predictions in the form of a new parameter/channel, as a first-principles physics guide to the AI. The deep learning model could in turn provide feedback on the sensitivity of the parameters, including the linear stability properties of various modes and the nonlinear dynamics of these instabilities, and thus automatically select new inputs for the first-principles codes. Electron-impact excitation of molecular hydrogen: dissociation and vibrationally resolved cross sections Dmitry Fursa, Curtin University, abstract Molecular hydrogen and its isotopologues are present in a range of vibrationally excited states in fusion, atmospheric, and interstellar plasmas. Electron-impact excitation cross sections resolved in both final and initial vibrational levels of the target are required for modeling the properties and dynamics, and controlling the conditions, of many low-temperature plasmas. Recently, the convergent close-coupling (CCC) method has been utilized to provide a comprehensive set of accurate excitation, ionization, and grand total cross sections for electrons scattering on H2 in the ground (electronic and vibrational) state, and calculations are being conducted to extend this data set to include cross sections resolved in all initial and final vibrational levels. In this talk I will review the available e-H2 collision data, discuss the resolution of a significant discrepancy between theory and experiment for excitation of the $b\,^3\Sigma_u^+$ state, and present estimates for dissociation of H2. Strong-flow gyrokinetics with a unified treatment of all length scales Amil Sharma, University of Warwick, abstract, slides Tokamak turbulence exhibits interaction on all length scales, but standard gyrokinetic treatments consider global-scale flows and gyroscale flows separately, and assume a separation between these length scales. However, the use of a small-vorticity ordering (Dimits, 2010) allows for the presence of large, time-varying flows on large length scales, whilst providing a unified treatment including shorter length scales near and below the gyroradius. Some examples of strong-flow generalisations of gyrokinetics are presented, followed by a description of the nuances of the equations and numerical implementation that use the ordering of Dimits (2010). Our Euler-Lagrange and Poisson equations contain an implicit dependence that appears as a partial time derivative of the E × B flow. This implicit dependence is analogous to the v||-formulation of gyrokinetics. However, as these implicit terms are small, we use an iterative scheme to resolve this.
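As a loose illustration of that kind of iterative treatment (a minimal sketch with made-up stand-in operators, not the actual strong-flow gyrokinetic equations), a field that enters its own equation only through a small implicit term can be converged with a simple fixed-point (Picard) iteration:

import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in linear operator (hypothetical)
b = rng.standard_normal(n)                     # explicit source term
eps = 0.05                                     # small implicit coupling

phi = b.copy()                                 # initial guess: neglect the implicit term
for k in range(20):
    phi_new = b + eps * (A @ phi)              # phi = b + eps*A*phi, solved by iteration
    if np.linalg.norm(phi_new - phi) < 1e-12 * np.linalg.norm(phi_new):
        break
    phi = phi_new
print(f"converged after {k + 1} iterations")   # geometric convergence at rate ~eps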
Additionally, we have developed a stand-alone Poisson solver based on that from the ORB5 code, and use this to simulate certain flow and density gradient driven instabilities in cylindrical geometry. Some Novel Features of Three Dimensional MagnetoHydroDynamic Plasma Rupak Mukherjee, Institute for Plasma Research, abstract, slides Within the framework of MagnetoHydroDynamics, a strong interplay exists between flows and magnetic fields, leading to several interesting pathways for energy cascade. In this talk I numerically demonstrate three examples of such interplay using our in-house developed DNS code G-MHD3D, which solves the three-dimensional single-fluid MHD equations. I also suggest analytical arguments in support of some of our numerical observations. The first problem concerns the nonlinear interaction of magnetic and kinetic energies within two- and three-dimensional MHD, leading to a periodic exchange of energy. Scaling of the energy exchange frequency with deviation from Alfven resonance and initial wave number of excitation is numerically determined. Results are qualitatively reproduced by analysing the set of single-fluid MHD equations through low-degree-of-freedom coupled ODEs obtained via a Galerkin procedure. Secondly, in three dimensions, at Alfven resonance, for some chaotic flows, the initial flow profile is found to "recur" periodically with the time evolution of the plasma. Such recurrence is unexpected in systems with a high number of degrees of freedom (e.g. 3D MHD). The primary cause of such phenomena is analysed using an effective number of active degrees of freedom present in the system. Finally, we present some preliminary results on large- and intermediate-scale magnetic field generation in plasmas, often called a "dynamo", obtained using our code. The growth rates of such 'fast' dynamos are compared for different parameters of the system, and the fastest dynamo for each parameter set is identified. Physics details and numerical aspects of the development of the code, the numerical protocols followed, direct numerical simulation results, numerical tools to diagnose the three-dimensional grid data, and the analytical arguments in support of the numerical observations will be presented in the talk. A gyrokinetic model for the tokamak periphery Rogerio Jorge, EPFL, abstract, slides We present a new gyrokinetic model that retains the fundamental elements of the plasma dynamics at the tokamak periphery, namely electromagnetic fluctuations at all scales, comparable amplitudes of background and fluctuating components, and a large range of collisionality regimes. Such a model is derived within a gyrokinetic full-F approach, which describes distribution functions arbitrarily far from equilibrium, by projecting the gyrokinetic equation onto a Hermite-Laguerre velocity-space polynomial basis to obtain a gyrokinetic moment hierarchy. The treatment of arbitrary collisionalities is performed by expressing the full Coulomb collision operator in gyrocentre phase space coordinates, and providing a closed formula for its gyroaverage in terms of the gyrokinetic moments. In the electrostatic regime and long-wavelength limit, the novel gyrokinetic hierarchy reduces to a drift-kinetic moment hierarchy that in the high collisionality regime further reduces to an improved set of drift-reduced Braginskii equations, which are widely used in scrape-off layer simulations. First insights on the linear modes described by our novel gyrokinetic model will be presented.
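To make the flavour of such a velocity-space moment expansion concrete, the following stand-alone sketch (my own one-dimensional illustration in the probabilists' Hermite basis, not the gyrokinetic Hermite-Laguerre hierarchy of the talk; the grid, drift, and truncation order are arbitrary) projects a distribution function onto a few Hermite moments and reconstructs it from them:

import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

v = np.linspace(-8.0, 8.0, 4001)
dv = v[1] - v[0]
f = np.exp(-0.5 * (v - 0.3) ** 2) / np.sqrt(2 * np.pi)    # drifting Maxwellian

def He(n, x):
    """Probabilists' Hermite polynomial He_n evaluated at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

# Hermite moments c_n = (1/n!) * integral of f(v) He_n(v) dv
N = 12
moments = [np.sum(f * He(n, v)) * dv / factorial(n) for n in range(N)]

# Reconstruct f(v) = w(v) * sum_n c_n He_n(v) with Gaussian weight w
w = np.exp(-0.5 * v ** 2) / np.sqrt(2 * np.pi)
f_rec = w * sum(c * He(n, v) for n, c in enumerate(moments))
print("max reconstruction error:", np.max(np.abs(f - f_rec)))  # small: a few moments suffice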
APS Invited dry run - Quasi-linear resonance broadened model for fast ion relaxation in the presence of Alfvénic instabilities Nikolai Gorelenkov, PPPL, abstract, slides [#s883, 25 Oct 2018] The resonance broadened quasi-linear (RBQ) model is developed for the problem of relaxing the fast energetic particle distribution function in constant-of-motion (COM) 3D space [N.N. Gorelenkov et al., Nucl. Fusion 58 (2018) 082016]. The model is generalized by using the QL theory [H. Berk et al., Phys. Plasmas'96] and carefully reexamining the wave-particle interaction (WPI) in the presence of realistic AE mode structures and pitch-angle scattering with the help of the guiding center code ORBIT. The RBQ model applied in realistic plasma conditions is improved by going beyond the perturbative-pendulum-like approximation for the wave-particle dynamics near the resonance. The resonance region is broadened but remains 2-3 times smaller than predicted by an earlier bump-on-tail QL model. In addition, the resonance broadening includes the Coulomb collisional or anomalous pitch-angle scattering. The RBQ code takes into account the beam ion diffusion in the direction of the canonical toroidal momentum. The wave-particle interaction is reduced to one-dimensional dynamics where, for the Alfvénic modes, the particle kinetic energy is typically nearly constant. The diffusion equation is solved simultaneously for all particles together with the evolution equation for the mode amplitudes. We apply the RBQ code to a DIII-D plasma with an elevated q-profile where the beam ion profiles show stiff transport properties [C. Collins et al. PRL'16]. The sources and sinks are included via the Krook operator. The properties of AE-driven fast ion distribution relaxation are studied for validation of the applied QL model against DIII-D discharges. Initial results show that the model is robust, numerically efficient, and can predict intermittent fast ion relaxation in present and future burning plasmas. Nonlinear interaction between toroidal Alfvén eigenmodes and tearing modes Z. W. Ma, Zhejiang University, abstract, slides [#s809, 15 Aug 2018] Nonlinear interaction between toroidal Alfvén eigenmodes (TAEs) and the tearing mode is investigated by using the hybrid code CLT-K. It is found that the $n=1$ TAE is first excited by isotropic energetic particles at the linear stage and reaches the first steady state due to wave-particle interaction. After the saturation of the $n=1$ TAE, the $m/n=2/1$ tearing mode grows continuously and reaches its steady state due to nonlinear mode-mode coupling; in particular, the $n=0$ component plays a very important role in the tearing mode saturation. The results suggest that the enhancement of the tearing mode activity with increasing resistivity could weaken the TAE frequency chirping through the interaction between the $p=1$ TAE resonance and the $p=2$ tearing mode resonance for passing particles in phase space, which is opposite to the classical physical picture in which the TAE frequency chirping is enhanced with increasing dissipation. Understanding coronal structures on the Sun Hardi Peter, Max Planck Institute for Solar System Research, Germany, abstract, slides [#s799, 17 Jul 2018] For decades, coronal heating has been a buzzword used as a motivation for coronal research. Depending on the level of detail one is interested in, this question could be considered anything from answered to not understood at all.
3D MHD models can now produce a corona in a numerical experiment that comes close to the real Sun in complexity. And the fact alone that in these models a three-dimensional, loop-dominated, time-variable corona is produced could be used as an argument that the problem of coronal heating is solved. However, careful inspection of these model results shows that despite their success they leave many fundamental questions unanswered. In this talk I will address some of these aspects, including the mass and energy exchange between chromosphere and corona, the apparent width of coronal loops, the energy source of hot active region core loops, and the internal structure of loops. In this sense this talk will pose more questions than it provides answers. Machine-learning driven correlation studies: multi-band frequency chirping at NSTX Ben Woods, University of York, abstract, slides [#s777, 28 Jun 2018] Magnetic perturbations in a very broadband range (<30 kHz to >1 GHz) are commonly measured on tokamaks such as NSTX by using Mirnov coils. The spectral behaviour of the perturbations can be categorised as quiescent, fixed-frequency, chirping, or avalanching. Here, 'chirping' modes experience a time-dependent frequency shift due to non-linear effects – in some cases, multiple plasma modes chirp in a near-concurrent fashion (mode 'avalanching'). Mode avalanching in the Alfvénic and super-Alfvénic frequency bands is typically correlated with fast-ion loss. However, the transition to this phase of mode behaviour is not fully understood. Traditional methods for characterising mode behaviour are highly labour-intensive for a human, so studying parametric dependences of plasma parameters on mode character proves difficult. Here, preliminary results are presented from machine-learning driven studies of correlations between different mode character (quiescent, fixed-frequency, chirping, avalanching) and weighted averages of plasma parameters obtained from TRANSP (v-fast/v-Alfvén, Q-profile, β-fast/β-Alfvén). The weighted averages form insightful metrics of the stability of plasma modes, allowing for correlations to be drawn (e.g. magnetic shear versus mode character). These results yield similar correlations to previous work by Fredrickson et al. [1] in both the kink/tearing/fishbone frequency band (~1-30 kHz), and the TAE band (~50-200 kHz). An overall framework is presented to utilise this tool for generic tokamaks, for possible future use on MAST-U and DIII-D. [1] E. D. Fredrickson, N.N. Gorelenkov et al., Nucl. Fusion 54, 093007 (2014) Conservative Discontinuous Galerkin Discretization for the Landau Collision Operator Alexander Frank, Technical University of Munich, abstract, slides This talk presents a mass-, momentum-, and energy-conserving discretization of the nonlinear Landau collision integral. The semi-discrete form is achieved using a modal discontinuous Galerkin method on a tensor product mesh and a recovery method to define second derivatives at the element boundaries. Combined with an explicit time-stepping scheme, this gives a fully discrete conservative form. The conservation properties are proven algebraically and shown numerically for a two-dimensional relaxation test problem.
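In the same spirit as those conservation tests, here is a small numerical check (an illustration only: it uses a simple 1-D Dougherty/Lenard-Bernstein-type relaxation in flux form, not the full Landau integral or the talk's DG scheme) of how the discrete density, momentum and energy moments behave over one time step; the conservative flux-difference form already conserves the density to round-off, while momentum and energy are conserved only approximately, which is exactly the residual error that a specially constructed discretization removes:

import numpy as np

v = np.linspace(-6.0, 6.0, 241)
dv = v[1] - v[0]
f = np.exp(-0.5 * (v - 0.5) ** 2) / np.sqrt(2 * np.pi)   # drifting Maxwellian

def moments(f):
    """Discrete density, momentum and kinetic-energy moments."""
    return np.array([np.sum(f) * dv,
                     np.sum(v * f) * dv,
                     np.sum(0.5 * v ** 2 * f) * dv])

n0, p0, e0 = moments(f)
u, T = p0 / n0, 2.0 * e0 / n0 - (p0 / n0) ** 2           # bulk velocity, temperature

# One explicit step of df/dt = d/dv[(v-u) f + T df/dv] in finite-volume (flux) form;
# differencing the face fluxes telescopes, so the density is conserved to round-off.
vf = v[:-1] + 0.5 * dv                                   # interior face positions
F = (vf - u) * 0.5 * (f[1:] + f[:-1]) + T * (f[1:] - f[:-1]) / dv
flux = np.concatenate(([0.0], F, [0.0]))                 # zero flux at velocity boundaries
dt = 0.01
f_new = f + dt * (flux[1:] - flux[:-1]) / dv

print("moment drifts:", moments(f_new) - moments(f))     # density at round-off; others small but nonzero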
Computing local sensitivity and tolerances for stellarators using shape gradients Matt Landreman, University of Maryland, abstract, slides Tight tolerances have been a leading driver of cost in recent stellarator experiments, so improved definition and control of tolerances can have significant impact on progress in the field. Here we relate tolerances to the shape gradient representation that has been useful for shape optimization in industry, used for example to determine which regions of a car or aerofoil most affect drag, and we demonstrate how the shape gradient can be computed for physics properties of toroidal plasmas. The shape gradient gives the local differential contribution to some scalar figure of merit (shape functional) caused by normal displacement of the shape. In contrast to derivatives with respect to quantities parameterizing a shape (e.g. Fourier amplitudes), which have been used previously for optimizing plasma and coil shapes, the shape gradient gives spatially local information and so is more easily related to engineering constraints. We present a method to determine the shape gradient for any figure of merit using the parameter derivatives that are already routinely computed for stellarator optimization, by solving a small linear system relating shape parameter changes to normal displacement. Examples of shape gradients for plasma and electromagnetic coil shapes are given. We also derive and present examples of an analogous representation of the local sensitivity to magnetic field errors; this magnetic sensitivity can be rapidly computed from the shape gradient. The shape gradient and magnetic sensitivity can both be converted into local tolerances, which inform how accurately the coils should be built and positioned, where trim coils and structural supports for coils should be placed, and where magnetic material and current leads can best be located. Both sensitivity measures provide insight into shape optimization, enable systematic calculation of tolerances, and connect physics optimization to engineering criteria that are more easily specified in real space than in Fourier space. Energy-, momentum-, density-, and positivity-preserving spatio-temporal discretizations for the nonlinear Landau collision operator with exact H-theorems Eero Hirvijoki, PPPL, abstract, slides [#s737, 31 May 2018] This talk explores energy-, momentum-, density-, and positivity-preserving spatio-temporal discretizations for the nonlinear Landau collision operator. We discuss two approaches, namely direct Galerkin formulations and discretizations of the underlying infinite-dimensional metriplectic structure of the collision integral. The spatial discretizations are chosen to reproduce the time-continuous conservation laws that correspond to Casimir invariants and to guarantee the positivity of the distribution function. Both the direct and the metriplectic discretization are demonstrated to have exact H-theorems and unique, physically exact equilibrium states. Most importantly, the two approaches are shown to coincide, given the chosen Galerkin method. A temporal discretization, preserving all of the mentioned properties, is achieved with so-called discrete gradients. The proposed algorithm successfully translates all properties of the infinite-dimensional time-continuous Landau collision operator to time- and space-discrete sparse-matrix equations suitable for numerical simulation. 
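As a toy illustration of the discrete-gradient idea mentioned above (a minimal sketch, not the seminar's actual metriplectic scheme for the Landau operator): for a quadratic Hamiltonian the implicit midpoint average is an exact discrete gradient, so each step of dz/dt = S grad H(z) conserves the discrete energy to round-off, mirroring the exact conservation structure discussed in the abstract:

import numpy as np

def step(z, dt):
    """One implicit-midpoint (discrete-gradient) step of dz/dt = S grad H(z),
    with H(q, p) = (q**2 + p**2)/2 and skew-symmetric structure matrix S."""
    S = np.array([[0.0, 1.0], [-1.0, 0.0]])
    A = np.eye(2) - 0.5 * dt * S     # (I - dt/2 S) z_new = (I + dt/2 S) z_old
    B = np.eye(2) + 0.5 * dt * S
    return np.linalg.solve(A, B @ z)

z = np.array([1.0, 0.0])             # initial (q, p)
H0 = 0.5 * z @ z
for _ in range(10_000):
    z = step(z, dt=0.1)
print("relative energy drift:", abs(0.5 * z @ z - H0) / H0)   # stays at round-off level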
Spontaneous reconnection in thin current sheet: the "ideal" tearing mode Anna Tenerani, Department of Earth, Planetary and Space Sciences, UCLA, abstract, slides Magnetic field reconnection is considered one of the most important mechanisms of magnetic energy conversion and reorganization acting in astrophysical and laboratory plasmas. Although our knowledge has been greatly advanced in the last few decades, the problem of how magnetic reconnection can be triggered explosively in weakly collisional (quasi-ideal) plasmas (as observed, e.g., in solar flares, geomagnetic substorms and sawtooth crashes in tokamaks) still remains an open field of research. Here we discuss a possible scenario for the triggering of explosive reconnection via the onset of an 'ideal' tearing instability within forming current sheets, and we show results from MHD numerical simulations that support our scenario. We demonstrate that the same reasoning, if applied recursively, can describe the complete nonlinear disruption of the original current sheet until microscopic marginally-stable current sheets are formed. We show that the 'ideal' tearing mode provides a general framework that can be extended to include other effects, such as kinetic effects and different current profiles. Design of a new quasi-axisymmetric stellarator equilibrium Sophia Henneberg, IPP Greifswald, abstract, slides A new quasi-axisymmetric, two-field-period stellarator configuration has been designed following a broad study using the optimization code ROSE (Rose Optimizes Stellarator Equilibria). Because of the toroidal symmetry of the magnetic field strength, quasi-axisymmetric stellarators share many neoclassical properties of tokamaks, such as a comparable bootstrap current, which can be employed to simplify the coil structure, which is favorable for finding compact equilibria. The ROSE code optimizes the plasma boundary calculated with VMEC based on a set of physical and engineering criteria. Various aspect ratios, numbers of field periods and iota profiles are investigated. As an evaluation of the design, the bootstrap current, the ideal MHD stability, the fast-particle losses, and the existence of islands are examined. The main result of this extensive study – a compact, MHD-stable, two-field-period stellarator with a small fast-particle loss fraction – will be presented. Bringing the GBS plasma turbulent simulation code from limited to diverted configurations Paola Paruta, École Polytechnique Fédérale de Lausanne, abstract The GBS code has been developed to describe the plasma turbulent behaviour in the SOL [1,2], by solving the two-fluid drift-reduced Braginskii equations [3]. We report on the implementation of diverted magnetic equilibria in GBS: by abandoning flux coordinate systems, which are not defined at the X-point, the model equations are written in toroidal coordinates, and a 4th-order finite difference scheme is used for the implementation of the spatial operators on staggered poloidal and toroidal grids. The GBS numerical implementation is verified through the Method of Manufactured Solutions [4]. Its convergence properties are tested. First results of TCV-like simulations are presented. [1] F. D. Halpern, P. Ricci et al., J. Comput. Phys. 315, 388 (2016) [2] P. Ricci, F. D. Halpern et al., Plasma Phys. Control. Fusion 54, 124047 (2012) [3] A. Zeiler, J. F. Drake & B. Rogers, Phys. Plasmas 4, 2134 (1997) [4] Patrick J. Roache, J. Fluids Eng.
124, 4 (2002) Modeling Resonant Field Penetration and Its Effects on Transport in the DIII-D Tokamak Qiming Hu, PPPL, abstract, slides [#s665, 05 Apr 2018] We use the nonlinear cylindrical two-fluid MHD code TM1 [1-4] to model the effects of multiple magnetic islands in the DIII-D tokamak. The TM1 code has been previously used to study classical and neoclassical tearing mode (TM) stability, TM stabilization by ECCD, and plasma response (transport and stability) to resonant magnetic perturbations (RMPs). Recently, TM1 has been used to understand the effects of multiple locked magnetic islands on heat transport in DIII-D Ohmic plasmas. It is found that the co-existence of 2/1, 3/1 and 4/1 locked islands can produce a large (~50%) reduction in the central electron temperature, even without island overlap. However, the observed reduction in the edge temperature requires island overlap and stochasticity within the TM1 model. X-ray imaging reveals the appearance of multiple locked islands in the plasma edge at the time of the thermal collapse [5], consistent with TM1 modeling. For ELM suppression studies, we analyze the nonlinear evolution of edge magnetic islands in response to resonant magnetic perturbations [6]. The observed increase in pedestal toroidal rotation with the decrease in the core toroidal rotation is shown to be quantitatively consistent with TM1 modeling of multiple locked non-overlapping magnetic islands in the plasma edge. The TM1 results reveal interesting physics effects of locked modes that motivate further study using more comprehensive physics models. [1] Qingquan Yu, Phys. Plasmas 4, 1047 (1997) [2] Q. Yu, S. Günter & B. D. Scott, Phys. Plasmas 10, 797 (2003) [3] Q. Yu, Nucl. Fusion 50, 025014 (2010) [4] S. Günter, Q. Yu et al., J. Comput. Phys. 209, 354 (2005) [5] X. D. Du et al., Phys. Rev. Lett. (submitted) [6] R. Nazikian, C. Paz-Soldan et al., Phys. Rev. Lett. 114, 105002 (2015) Two etudes on unexpected behaviour of drift-wave turbulence near stability threshold Alex Schekochihin, University of Oxford, abstract, slides I will discuss some recent results — numerical and experimental — on the nature of drift-wave turbulence in MAST, obtained in the doctoral theses of my students Ferdinand van Wyk [1,4], Michael J. Fox [2] and Greg Colyer [3]. At ion scales, in the presence of flow shear, we find numerically [1,4] a type of transition to turbulence that is new (as far as we know) in tokamaks, but reminiscent of some fluid dynamical phenomena (e.g., pipe flows or accretion discs in astrophysics): close to threshold, the nonlinear saturated state and the associated anomalous heat transport are dominated by long-lived coherent structures, which drift across the domain, have finite amplitudes, but are not volume filling; as the system is taken away from the threshold into the more unstable regime, the number of these structures increases until they overlap and a more conventional chaotic state emerges. Such a transition has its roots in the subcritical nature of the turbulence in the presence of flow shear. It can be diagnosed in terms of the breaking of the statistical up-down symmetry of the turbulence: this manifests itself in the form of tilted two-point correlation functions and skewed distributions of the fluctuating density field, found both in simulations and in BES-measured density fields in MAST [2].
The governing (order) parameter in the system is the distance from the threshold, rather than individual values of equilibrium gradients; the symmetries — and drift-wave/zonal-flow turbulence of conventional type — are restored away from the threshold. The experiment appears to lie just at the edge of this latter transition rather than at the exact stability threshold. At electron scales in MAST, the conventional streamer-dominated state of ETG turbulence turns out to be a long-time transient, during which an initially unimportant zonal component continues to grow slowly, eventually leading to a new saturated state dominated by zonal modes, rather similar to ITG turbulence [3]. In this regime, the heat flux turns out to be proportional to the collision rate, in approximate agreement with the experimentally observed collisionality scaling of the energy confinement in MAST. Our explanation of this effect is based on a model of ETG turbulence dominated by zonal–nonzonal interactions and on an analytically derived scaling of the zonal-mode damping rate with the electron–ion collisionality. These developments open some intriguing possibilities both for enterprising theoreticians tired of the V&V routine and for ingenious experimentalists interested in making use of tokamaks to probe transitions to turbulence in nonlinear plasma systems. [1] F. van Wyk, E. G. Highcock et al., J. Plasma Phys. 82, 905820609 (2016) [2] M. F. J. Fox, F. van Wyk et al., Plasma Phys. Control. Fusion 59, 034002 (2017) [3] G. J. Colyer, A. A. Schekochihin et al., Plasma Phys. Control. Fusion 59, 055002 (2017) [4] F. van Wyk, E. G. Highcock et al., Plasma Phys. Control. Fusion 59, 114003 (2017) An adjoint method for gradient-based optimization of stellarator coil shapes Elizabeth Joy Paul, University of Maryland, abstract, slides We present a method for stellarator coil design via gradient-based optimization of the coil-winding surface. The REGCOIL [Landreman, Nucl. Fusion 57, 046003 (2017)] approach is used to obtain the coil shapes on the winding surface using a continuous current potential. We apply the adjoint method to calculate derivatives of the objective function, allowing for efficient computation of analytic gradients while eliminating the numerical noise of approximate derivatives. We are able to improve engineering properties of the coils by targeting the root-mean-square current density in the objective function. We obtain winding surfaces for W7-X and HSX which simultaneously decrease the normal magnetic field on the plasma surface and increase the surface-averaged distance between the coils and the plasma in comparison with the actual winding surfaces. The coils computed on the optimized surfaces feature a smaller toroidal extent and curvature and increased inter-coil spacing. A technique for computing the local sensitivity of figures of merit to normal displacement of the winding surface is presented, with potential applications for understanding engineering tolerances. Reduced MHD and gyrokinetic studies on auroral plasmas Tomo-Hiko Watanabe, Nagoya University, abstract, slides The reduced MHD and gyrokinetics developed from fusion theory have been applied to a variety of topics in space and astrophysical plasmas. Our theoretical and numerical methods developed from fusion studies at NU have also helped us to understand key issues in auroral physics.
Here, I would like to discuss some of these topics, such as the structural formation of auroras and their dynamics, where the competing processes of the ballooning, Kelvin-Helmholtz, and feedback instabilities in magnetosphere-ionosphere (M-I) coupling are investigated [1, 2]. Furthermore, in the nonlinear stage of auroral growth, the M-I coupling system transitions into an Alfvénic turbulence state [2]. The auroral turbulence results from interactions of shear Alfvén waves propagating in opposite directions, and is an interesting application of the Goldreich-Sridhar theory. I would also like to discuss the gyrokinetic extension of the auroral theory including auroral electron acceleration [3]. [1] T.-H. Watanabe, Phys. Plasmas 17, 022904 (2010) [2] Tomo-Hiko Watanabe, Hiroaki Kurata & Shinya Maeyama, New J. Phys. 18, 125010 (2016) [3] T.-H. Watanabe, Geophys. Res. Lett. 41, 6071 (2014) Parametric Instability, Inverse Cascade, and the $1/f$ Range of Solar-Wind Turbulence Ben Chandran, University of New Hampshire, abstract, slides Turbulence likely plays an important role in generating the solar wind, and spacecraft measurements indicate that solar-wind turbulence is largely non-compressive and Alfvén-wave-like. Although compressive fluctuations are sub-dominant, Alfvén waves in the solar wind couple to compressive slow magnetosonic waves ("slow waves") via the parametric-decay instability. In this instability, an outward-propagating Alfvén wave decays into an outward-propagating slow wave and an inward-propagating Alfvén wave. In this talk, I will describe a weak-turbulence calculation of the nonlinear evolution of the parametric instability in the solar wind at wavelengths much greater than the ion inertial length under the assumption that slow waves, once generated, are rapidly damped. I'll show that the parametric instability leads to an inverse cascade of Alfvén-wave quanta and present several exact solutions to the wave kinetic equations. I will also present a numerical solution to the wave kinetic equations for the solar-wind-relevant case in which most of the Alfvén waves initially propagate away from the Sun in the plasma rest frame. In this case, the outward-propagating Alfvén waves evolve toward a $1/f$ frequency spectrum that shows promising agreement with spacecraft measurements of interplanetary turbulence in the fast solar wind. I will also present predictions that will be tested by NASA's upcoming Solar Probe Plus mission, which will travel much closer to the Sun than any previous spacecraft. On the Global Attractor of 2D Incompressible Turbulence with Random Forcing John Bowman, U. Alberta, abstract, slides We revisit bounds on the projection of the global attractor in the energy-enstrophy plane obtained by Dascaliuc, Foias, and Jolly [1,2]. In addition to providing more elegant proofs of some of the required nonlinear identities, the treatment is extended from the case of constant forcing to the more realistic case of random forcing. Numerical simulations in particular often use a stochastic white-noise forcing to achieve a prescribed mean energy injection rate. The analytical bounds are illustrated numerically for the case of white-noise forcing. [1] R. Dascaliuc, C. Foias & M.S. Jolly, J. Dynam. Differential Equations 17, 643 (2005) [2] R. Dascaliuc, C. Foias & M.S. Jolly, J. Differential Equations 248, 792 (2010) Magnetic Reconnection in Three Dimensional Space Prof.
Allen Boozer, Columbia University, abstract, slides The breaking of magnetic field line connections is of fundamental importance in essentially all applications of plasma physics, from the laboratory to astrophysics. For sixty years the theory of magnetic reconnection has been focused on two-coordinate models. When dissipative time scales far exceed natural evolution times, such models are not realistic for ordinary three dimensional space. The ideal (dissipationless) evolution of a magnetic field is shown, in general, to lead to a state in which the magnetic field lines change their connections on an Alfvénic (inertial), not resistive, time scale. Only a finite mass of the lightest current carrier, the electron, is required. During the reconnection, the gradient in $j_\parallel/B$ relaxes while conserving magnetic helicity in the reconnecting region. This implies a definite amount of energy is released from the magnetic field and transferred to shear Alfvén waves, which in turn transfer their energy to the plasma. When there is a strong non-reconnecting component of the magnetic field, called a guide field, $j_\parallel/B$ obeys the same evolution equation as that of an impurity being mixed into a fluid by stirring. Although the enhancement of mixing by stirring has been recognized by every cook for many millennia, the analogous effect in magnetic reconnection is not generally recognized. An interesting mathematical difference is that a three-coordinate model is required for the enhancement of magnetic reconnection, while only two coordinates are required in fluid mixing. The issue is the number of spatial coordinates required to obtain an exponential spatial separation of magnetic field lines versus streamlines of a fluid flow. A moment approach to plasma fluid/kinetic theory and closures Jeong-Young Ji, Utah State University, abstract, slides A system of exact fluid equations always involves more unknowns than equations. This is called the closure problem. An important aspect of obtaining quantitative closures is an accurate account of collisional effects. Recently, analytical calculations of the Landau (Fokker-Planck) collision operator, as well as the derivation of an infinite hierarchy of moment equations, have been carried out using expansions of the distribution function in terms of irreducible Hermite polynomials. In this talk, I will present solutions to the moment hierarchy that provide closure for the set of five-moment fluid equations. In the collisional limit, improved Braginskii closures are obtained by increasing the number of moments and considering the ion-electron collision effects. For magnetized plasmas, I highlight the effect of long mean free path and derive parallel integral closures for arbitrary collisionality. Finally, I will show how the integral closures can be used to study radial transport due to magnetic field fluctuations and electron parallel transport for arbitrary collisionality. Overview of ENN and Fusion Technology Dr. Minsheng Liu, ENN Sci. & Tech. Co. Ltd., China, abstract The talk includes three parts: (1) an overview of ENN; (2) ENN research areas and achievements; and (3) the ENN fusion technology roadmap. The Development and Applications of CLT Wei Zhang, Zhejiang University, abstract CLT is an explicit, three-dimensional, fully toroidal, non-reduced, Hall-MHD code developed at Zhejiang University. Using CLT, I find that electron diamagnetic rotation, which the code describes well, can significantly modify the dynamics of the tearing mode.
It can also affect the characteristics of the sawtooth oscillations. In addition, I have studied the influence of driven current on tearing mode instabilities, which can explain some experimental data from EAST. CLT is now being updated to study the influence of RMPs (Resonant Magnetic Perturbations) on tearing mode instabilities. Preliminary results show that the threshold for 'mode locking' increases with the frequency of the RMPs, consistent with theoretical predictions. The JET 2020 Program: D-T and Alpha physics with the ITER-Like Wall M. Romanelli, JET, Culham Science Centre, abstract, slides The 2018-2019 JET program in preparation for DTE2 was launched in November 2017 and will start in March 2018 with the first of the 2018 analysis and modeling campaigns, followed by the first experimental campaign in D lasting until October 2018. The program will also feature an H campaign followed by a full T campaign in 2019, and aims at studying differences in plasma dynamics related to operation with pure and mixed hydrogen isotopes, along with the impact of fast particles on heating, transport and confinement; these will be followed by an H campaign with NBI in H and a D campaign for finalising the plasma preparation for DTE2. Extensive modeling and analysis have been devoted to the preparation of successful scenarios that will allow achieving, during D-T operations, a fusion power of 15 MW sustained for at least 5 s and observing clear alpha-particle effects. In this seminar I'll present the striking results on the isotope (H/D) effect on transport and global confinement from the latest experimental campaigns, along with the plans for experiments on transport and confinement in the 2018-2020 campaigns. Understanding mechanisms underlying ohmic breakdown in a tokamak by considering multi-dimensional plasma responses Min-Gu Yoo, Seoul National University, abstract, slides Ohmic breakdown is generally used to produce initial plasmas in tokamaks. However, the complex electromagnetic structure of tokamaks has obscured the physical mechanism of ohmic breakdown for several decades. Previous studies ignored plasma responses to external electromagnetic fields and adopted only the simplest Townsend avalanche theory. However, we found clear evidence that experimental results cannot be explained by the Townsend theory. Here, we propose a completely new type of breakdown mechanism that systematically considers multi-dimensional plasma responses in the complex electromagnetic topology. Regarding the plasma response, self-electric fields produced by space charge were found to be crucial, significantly reducing the plasma density growth rate and enhancing perpendicular transport via $\small {\bf E}\times{\bf B}$ drifts. A particle simulation code, BREAK, clearly captured these effects and provided a remarkable reproduction of the mysterious experimental results in KSTAR. These new physical insights into the complex electromagnetic topology provide a general design guideline for a robust breakdown scenario in a tokamak fusion reactor. Investigation of whistler-electron interaction in Earth's radiation belt Lei Zhao, New Mexico Consortium, abstract, slides In this talk, I will focus on how to describe the energetic electron dynamics through quasilinear whistler-electron interactions in Earth's magnetosphere. First, we explore gyro-resonant wave-particle interactions and quasi-linear diffusion in different magnetic field configurations related to the 17 March 2013 storm.
We consider the Earth's magnetic dipole field as a reference, and compare the results against non-dipole field configurations corresponding to quiet and stormy conditions. The latter are obtained with RAM-SCB, a code that models the Earth's ring current and provides a realistic model of the Earth's magnetic field. By applying quasi-linear theory, the bounce- and magnetic local time (MLT) averaged electron pitch angle, mixed term and energy diffusion coefficients are calculated for each magnetic field configuration. The results show that the diffusion coefficients become quite independent of the magnetic field configuration for relativistic electrons (~1 MeV), while a realistic model of the magnetic field configuration is necessary to adequately describe the diffusion rates of lower-energy electrons (~100 keV). In the second part of the talk, a Test Particle Model (TPM) is used to explore the limitations of quasilinear theory when applied to whistler wave-electron resonance. We consider the influence of wave amplitude and wave bandwidth on the wave-particle interaction within the quasilinear theory limit. The results imply that quasilinear theory tends to break down more easily for energetic particles, even at small wave amplitude. Broad wave bandwidths allow for more stochasticity of particles in phase space, whereas electron phase trapping and bunching (nonlinear wave-particle interaction phenomena) dominate for a narrow-bandwidth spectrum that resembles a monochromatic wave. Nonlinear ECDI and anomalous transport in $\small {\bf E} \times {\bf B}$ discharges S. Janhunen, U. Saskatchewan, abstract, slides Cross-field anomalous transport is an important feature affecting the operation and performance of $\small {\bf E} \times {\bf B}$ discharges. Instabilities excited by the $\small {\bf E} \times {\bf B}$ flow cause anomalous current to develop, manifested in the nonlinear regime as a large-amplitude coherent wave driven by the energy input from the unstable cyclotron resonances. A persistent train of soliton-like waves characterized by the fundamental harmonic of the electron cyclotron oscillations appears in the ion density. Simultaneously, there is an observable energy cascade toward long wavelength (inverse cascade), which is manifested by the formation of the long-wavelength envelope of the wave train in 1D simulations. It is shown that the long-wavelength part of the turbulent spectrum provides a dominant contribution to anomalous electron mobility [1]. We present results from high-fidelity 1D3V and 2D3V particle-in-cell simulations for a simplified Hall-effect-thruster-like plasma. We describe the non-linear evolution of the system and speculate on mechanisms behind the coherent structures and their interactions. In 2D, the picture is complicated by the existence of a simultaneous long-wavelength mode (modified two-stream instability), in addition to the non-linear cascades observable in 1D. [1] S. Janhunen, A. Smolyakov et al., Phys. Plasmas 25, 011608 (2018) [2] Movies: (1) 1D case with $T_e = 10$ eV; (2) 1D case with $T_e = 0$ eV; (3) ion density fluctuations in 2D simulation; (4) electron temperature in 2D simulation. A fast integral equation based solver for the computation of Taylor states in toroidal geometries Antoine Cerfon, Courant Institute, New York University, abstract, slides The stellarator equilibrium code SPEC [S.R. Hudson et al., Phys.
Plasmas 19, 112502 (2012)] computes 3D equilibria by subdividing the plasma into separate regions assumed to have undergone Taylor relaxation to a minimum energy state subject to conserved fluxes and magnetic helicity, and separated by ideal MHD barriers. In this talk, we present a numerical scheme for the fast and high-order computation of Taylor states in toroidal regions based on an integral formulation of the problem. Our formulation offers the advantage that the unknowns are only defined on the boundary of the toroidal regions. As a result, high accuracy is reached with a small number of unknowns, leading to a code which is fast and has low memory requirements. In the context of SPEC, in which the locations of the ideal MHD interfaces are iteratively updated until force balance is satisfied at each interface, our formulation makes it possible to apply the entire iterative procedure without ever discretizing the plasma volume, only discretizing the ideal interfaces. This is joint work with M. O'Neil, L. Greengard, and L.-M. Imbert-Gerard. The shearing modes approach to the theory of plasma shear flow Vladimir Mikhailenko, Pusan National University, South Korea, abstract, slides The basic point in understanding the evolution of instabilities and turbulence in plasma shear flows across the magnetic field is the proper treatment of the persistent distortion of the perturbations by the shearing flow, particularly when spectral transforms are applied to the governing equations. The problem of extracting a separate spatial Fourier mode in the stability theory of plasma shear flows, and the related problem of the limits of applicability of spectral methods based on dispersion equations to such plasmas, may be resolved by employing non-modal fluid and kinetic theory, which is grounded in the methodology of shearing modes and fully accounts for the persistent deformation of the perturbations by the sheared flow. That theory shows that applying the shearing-mode methodology and convective-shearing coordinates, and solving the initial value problem in wave-vector and time variables instead of using static spatial Fourier modes and a spectral transform in time, has a decisive impact on understanding the wave-particle interaction in the shearing flow. The theory recovers the main linear and nonlinear processes and the corresponding numerous characteristic time scales, which may be observable in experiments and numerical simulations but cannot be distinguished when the spectral transform in time is applied. The most famous is the "quench rule", which was first detected in numerical simulations but was not confirmed analytically in shear-flow stability calculations based on spectral transforms in time. The primary intent of this report is to show that a non-modal approach is decisive in reconciling observational evidence with the stability theory of plasma shear flows, and to suggest more frequent use of that approach. Gyrokinetic simulation of boundary plasma in contact with a material wall Seung-Hoe Ku, PPPL, abstract, slides [#s449, 11 Sep 2017] Boundary plasma is in a non-equilibrium statistical state governed by self-organization among multiscale physics, and needs to be modeled with total-f gyrokinetic equations.
A unique particle-in-cell technique will be introduced that has enabled XGC to be the world's first, and only, gyrokinetic code that simulates the boundary plasma across the magnetic separatrix into the scrape-off layer in contact with a material wall. Examples of successful boundary physics discoveries enabled by the technique will also be presented. Role of electron physics in 3D two-fluid 10-moment simulations of Ganymede's magnetosphere Liang Wang, U. New Hampshire, abstract, slides We studied the role of electron physics in 3D two-fluid 10-moment simulations of Ganymede's magnetosphere. The model captures non-ideal physics like the Hall effect, the electron inertia, and anisotropic, non-gyrotropic pressure effects. A series of analyses were carried out: 1) The resulting magnetic field topology and electron and ion convection patterns were investigated. The magnetic fields were shown to agree reasonably well with in-situ measurements by the Galileo satellite. 2) The physics of collisionless magnetic reconnection were carefully examined in terms of the current sheet formation and decomposition of the generalized Ohm's law. The importance of pressure anisotropy and non-gyrotropy in supporting the reconnection electric field is confirmed. 3) We compared surface "brightness" morphology, represented by surface electron and ion pressure contours, with oxygen emission observed by the Hubble Space Telescope (HST). The correlation between the observed emission morphology and spatial variability in electron/ion pressure was demonstrated. We also briefly discussed the relevance of this work to the future JUICE mission. A new coil design code FOCUS for designing stellarator coils without the winding surface Caoxiang Zhu, U. Sci. & Tech. China, Hefei, China, abstract, slides Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal "winding" surface and suffer from the difficulties of nonlinear optimization. A new coil design method, the FOCUS code, is introduced by representing each discrete coil as an arbitrary, closed space curve. The first and second derivatives of the target function, which covers both physical requirements and engineering constraints, are calculated analytically. We have employed several advanced nonlinear optimization algorithms, like the nonlinear conjugate gradient and the modified Newton method, for minimizing the target function. Numerical illustrations show that the new method can be applied to different types of coils for various configurations with great flexibility and robustness. An extended application for analyzing the error field sensitivity is also presented. Stochastic modelling of fluctuations in scrape-off layer plasmas Ralph Kube, UIT, Tromso, Norway, abstract, slides Scrape-off layer plasmas feature intermittent, large-amplitude fluctuations which are attributed to the radial outwards propagation of plasma blobs through this volume. We introduce a stochastic model which describes scrape-off layer time series by superposing uncorrelated pulses with variable amplitude. The resulting time series is Gamma distributed, with the lowest-order statistical moments given by the pulse parameters. The power spectral density is governed by the pulse shape: for a double-exponential pulse shape, the power spectral density presents a flat region at low frequencies and a steep power-law scaling at high frequencies.
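A minimal sketch of a shot-noise (filtered Poisson) time series of the type described above, with uncorrelated pulses, exponentially distributed amplitudes and a one-sided exponential pulse shape; the parameter values are illustrative and are not taken from the talk:

# Illustrative shot-noise time series: uncorrelated pulses with exponentially
# distributed amplitudes arriving as a Poisson process. The intermittency
# parameter tau_d/tau_w controls the (Gamma-like) amplitude statistics.
import numpy as np

rng = np.random.default_rng(1)
dt, nt = 1e-2, 200_000          # time step and number of samples (a.u.)
tau_d = 1.0                     # pulse duration
tau_w = 5.0                     # average waiting time between pulses
t = np.arange(nt) * dt

signal = np.zeros(nt)
n_pulses = rng.poisson(nt * dt / tau_w)
arrivals = rng.uniform(0.0, nt * dt, n_pulses)
amplitudes = rng.exponential(1.0, n_pulses)
for t0, a in zip(arrivals, amplitudes):
    mask = t >= t0
    signal[mask] += a * np.exp(-(t[mask] - t0) / tau_d)

print("mean:", signal.mean(), " relative fluctuation:", signal.std() / signal.mean())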
Predictions from this model are compared to fluctuation measurements of electron density, temperature and plasma potential by mirror Langmuir probes in the SOL during an ohmic L-mode discharge in Alcator C-Mod. Simulation of resonant wave-particle interaction in tokamaks Meng Li, IFS, The University of Texas, Austin, abstract, slides We present a numerical procedure for modeling the resonant response of energetic particles to waves in tokamaks. Using the Littlejohn Lagrangian for guiding-center motion, we use action-angle variables to reduce simulations of the fast-ion dynamics to one dimension. The transformation also involves the construction of canonical straight-field-line coordinates, which renders a Hamiltonian description of the guiding-center motion. This module can be integrated with the modified MHD code AEGIS to simulate wave-particle interactions. Tearing mode dynamics and sawtooth oscillation in Hall-MHD Zhiwei Ma, Zhejiang University, Hangzhou, China, abstract, slides The tearing mode instability is one of the most important dynamic processes in space and laboratory plasmas. Hall effects, resulting from the decoupling of electron and ion motions, can cause fast growth and rotation of the tearing-mode perturbation structure and become non-negligible. A high-accuracy nonlinear MHD code (CLT) has been developed to study Hall effects on the dynamic evolution of tearing modes in tokamak geometry. It is found that the rotation speed of the mode structure from the simulation is in good agreement with that from analytical theory for a single tearing mode. The linear growth rate increases with increasing ion skin depth. The self-consistently generated rotation largely alters the dynamic behaviors of the double tearing mode and the sawtooth oscillation. Continuum kinetic schemes for Vlasov-Maxwell equations, with applications to laboratory and space plasmas Increasingly accurate laboratory experiments and satellite observations have led to a "golden age" in plasma physics. Detailed kinetic features, including distribution functions, can now be measured in-situ, putting severe strain on theory and modeling to explain the experiments/observations. The Particle-In-Cell (PIC) method remains a powerful and widely used tool to study such kinetic physics numerically. Recently, complementing the PIC approach, significant progress has been made in discretizing the Vlasov equation directly, treating it as a partial differential equation (PDE) in 6D phase-space. In this talk, I present a high-order discontinuous Galerkin (DG) algorithm to solve the Vlasov equation. This continuum scheme leads to noise-free solutions and, with the use of specially optimized computational kernels, can be very efficient, in particular for problems in which the structure of the distribution function and its higher moments is required. In addition, with a proper choice of basis functions and numerical fluxes, the scheme conserves energy exactly, while conserving momentum to a high degree of accuracy. Applications of the scheme to (a) the kinetic saturation mechanism of the Weibel-like instability and (b) turbulence in the solar wind are presented. We demonstrate a novel mechanism for the nonlinear saturation of the Weibel instability that comes about from a balance between filamentation and a secondary electrostatic two-stream instability.
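The talk describes a high-order DG discretization; as a much simpler illustration of the continuum (grid-based, noise-free) approach to the Vlasov equation, here is a split-step spectral/semi-Lagrangian sketch of the 1D1V Vlasov-Poisson system in a weak Landau-damping setup. All parameters are illustrative and the scheme is not the one presented in the talk.

# Split-step sketch of 1D1V Vlasov-Poisson for electrons (charge -1) on a
# neutralizing ion background: df/dt + v df/dx - E df/dv = 0, dE/dx = 1 - int f dv.
import numpy as np

nx, nv, L, vmax = 64, 256, 4 * np.pi, 6.0
dt, nsteps, alpha, k0 = 0.1, 200, 0.01, 0.5
x = np.linspace(0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
kx = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)

X, V = np.meshgrid(x, v, indexing="ij")
f = (1 + alpha * np.cos(k0 * X)) * np.exp(-V**2 / 2) / np.sqrt(2 * np.pi)

def advect_x(f, dt):
    # exact shift x -> x - v*dt for each v column, done spectrally in x
    fhat = np.fft.fft(f, axis=0)
    return np.real(np.fft.ifft(fhat * np.exp(-1j * np.outer(kx, v) * dt), axis=0))

def efield(f):
    rho = 1.0 - np.trapz(f, v, axis=1)           # ions = neutralizing background
    rhohat = np.fft.fft(rho)
    Ehat = np.zeros_like(rhohat)
    Ehat[1:] = rhohat[1:] / (1j * kx[1:])        # solve dE/dx = rho in Fourier space
    return np.real(np.fft.ifft(Ehat))

def advect_v(f, E, dt):
    # electron acceleration dv/dt = -E, so trace back to v + E*dt row by row
    out = np.empty_like(f)
    for i in range(nx):
        out[i] = np.interp(v + E[i] * dt, v, f[i], left=0.0, right=0.0)
    return out

for _ in range(nsteps):
    f = advect_x(f, dt / 2)
    f = advect_v(f, efield(f), dt)
    f = advect_x(f, dt / 2)

print("density perturbation amplitude:", np.abs(np.trapz(f, v, axis=1) - 1).max())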
We use 5D simulations of turbulent plasmas to study detailed kinetic physics of magnetized turbulence, showing that the solution contains a remarkable amount of detail in the distribution function, leading to new insights into the nature of kinetic wave-particle exchange in turbulent plasmas. Nonthermal particle acceleration in magnetic reconnection and turbulence in collisionless relativistic plasmas Dmitri Uzdensky, IAS & U. Colorado Boulder, abstract, slides One of the key recurrent themes in high-energy plasma astrophysics is relativistic nonthermal particle acceleration (NTPA), necessary to explain the bright X-ray and gamma-ray flaring emission with ubiquitous power-law spectra in astrophysical objects such as pulsar wind nebulae, hot accretion flows and coronae of accreting black holes, and black-hole powered relativistic jets in active galactic nuclei and gamma-ray bursts. Two leading physical processes often invoked as possible NTPA mechanisms are collisionless magnetic reconnection and turbulence. In order to understand these processes, as well as their resulting observable radiation signatures, I have recently initiated a broad theoretical and computational research program in kinetic radiative plasma astrophysics. This program employs large-scale first-principles particle-in-cell kinetic simulations (including those that self-consistently incorporate radiation-reaction effects) coupled with analytical theory. In this talk I will review the resulting progress that we have achieved in recent years towards understanding and quantitative characterization of NTPA in reconnection and turbulence over a broad range of physical regimes. I will present 2D and 3D simulation results that demonstrate that both reconnection and turbulence in relativistic collisionless astrophysical plasmas can robustly produce non-thermal energy spectra with power-law indices that show an intriguingly similar characteristic dependence on the plasma magnetization. I will also describe the effects of strong radiative cooling on reconnection and turbulence. Hot Particle Equilibrium code (HPE) with plasma anisotropy and toroidal rotation Leonid Zakharov, LiWFusion, abstract, slides The HPE code represents the extension of ESC (Equilibrium and Stability Code) to tokamak equilibria with plasma anisotropy and toroidal rotation. In addition to the conventional 1-D input profiles of plasma pressure $p(a)$ and safety factor $q(a)$, HPE accepts the poloidal Mach number ${\cal M}(a)$ of plasma rotation and a 2-D parallel pressure profile of hot particles $p_\parallel(a,B)$ as input. Here, $a$ is the normalized radial flux coordinate of the magnetic configuration and $B$ is the magnetic field strength. The HPE code includes the effect of the finite width of hot-particle orbits; for the case of powerful NBI injection, the code can generate plasma equilibria for theory needs as well as use experimental (or kinetic simulation) data for the interpretation of experiments. Anomalous Diffusion across a Tera Gauss Field in accreting Neutron Stars Russell Kulsrud, PPPL, abstract, slides A large amount of mass falls on the polar region of a neutron star in X-ray binaries, and the question is whether the mass is completely frozen onto the field lines or can diffuse through them. In this talk we present a mechanism for the latter possibility. A strong MHD instability occurs in the top layers of the neutron star, driven by the incoming mass. This instability has the same properties as the Schwarzschild instability in the solar convection zone.
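For context on the stellar analogy invoked here (a standard textbook statement, not taken from the talk), the Schwarzschild criterion says a stratified layer is convectively unstable when the temperature gradient is steeper than the adiabatic one, equivalently when the specific entropy decreases upward:
$$\left|\frac{dT}{dz}\right| \;>\; \left|\frac{dT}{dz}\right|_{\rm ad} = \frac{g}{c_p} \qquad\Longleftrightarrow\qquad \frac{ds}{dz} < 0,$$
with $z$ measured upward against gravity $g$, $c_p$ the specific heat at constant pressure (ideal-gas form), and $s$ the specific entropy.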
It gives rise to a turbulent cascade which mixes up the field lines, so that lines originally far apart can come within a resistive diffusion distance of each other and transfer mass between them. However, the lines of force themselves are not disrupted. This leads to an equilibrium which is marginal with respect to the instability, just as happens in the Schwarzschild case. Modeling substorm dipolarizations and particle injections in the terrestrial magnetosphere Konstantin Kabin, Royal Military College of Canada, abstract, slides Increased fluxes of energetic electrons and ions in the inner magnetosphere of the Earth are often associated with sudden reconfigurations of the magnetotail, often referred to as substorm dipolarizations. We describe a novel model of the magnetotail which is easily controlled by several adjustable parameters, such as the thickness of the tail and the location of the transition from dipole-like to tail-like magnetic field lines. This model is fully three-dimensional and includes the day-night asymmetry of the terrestrial magnetosphere; however, the field lines are confined to the meridional planes. Our model is well suited to studies of magnetotail dipolarizations, which we consider to be the tailward movements of the transition between dipole-like and tail-like field lines. We also study the effects of a dipolarizing electromagnetic pulse propagating towards the Earth. The calculated electric and magnetic fields are used to describe the motion of electrons and ions and the changes in their energies. In some cases, particle energies increase by a factor of 25 or more. The energized particles are transported earthward, where they are often observed by geostationary satellites as substorm injections. The energization level obtained in our model is reasonably consistent with satellite and ground-based observations (e.g., by riometers), and therefore we consider our scenario of the dipolarization process to be feasible. DCON for Stellarators Alan Glasser, Fusion Theory & Computation, Inc., abstract, slides We report the development of a new version of DCON for nonaxisymmetric toroidal plasmas, e.g. stellarators. The DCON code is widely used for fast and accurate determination of the ideal MHD stability of axisymmetric toroidal plasmas. Minimization of the ideal MHD energy principle $\delta W$ is reduced to adaptive numerical integration of a coupled set of ordinary differential equations for the complex poloidal Fourier harmonics of the perturbed normal displacement. For a periodic cylindrical plasma, both the poloidal and toroidal coordinates are ignorable, allowing for treatment of single harmonics $m$ and $n$. For an axisymmetric toroidal plasma, poloidal symmetry is broken, causing different $m$'s to couple and requiring simultaneous treatment of $M$ harmonics. For a nonaxisymmetric plasma, toroidal symmetry is also broken, causing different $n$'s to couple and requiring treatment of $N$ harmonics. For a stellarator with field period $l$, e.g. $l = 5$ for W7-X, each toroidal harmonic $n$ is coupled to toroidal harmonics $n+k \, l$, for all integers $k$. Both $M$ and $N$ are truncated on the basis of convergence. Singular surfaces occur at all values of the safety factor $q = m/n$ in the plasma. The DCON equations have been generalized to allow for multiple $n$'s with coupling matrices $F$, $K$, and $G$. An interface has been developed for the nonaxisymmetric equilibrium code VMEC, which provides values for these matrices.
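As a concrete illustration of the toroidal coupling rule just stated (an illustrative sketch, not DCON source code), the family of coupled toroidal harmonics for a given seed harmonic and field period can be enumerated as follows:

# Enumerate the toroidal harmonics coupled to a given n for a stellarator with
# field period l, i.e. the family n + k*l for integer k, truncated at |n'| <= nmax.
def coupled_toroidal_harmonics(n, l, nmax):
    kmax = (nmax + abs(n)) // l + 1
    return sorted(n + k * l for k in range(-kmax, kmax + 1) if abs(n + k * l) <= nmax)

# Example: W7-X-like field period l = 5, seed harmonic n = 1, truncation nmax = 12
print(coupled_toroidal_harmonics(1, 5, 12))   # -> [-9, -4, 1, 6, 11]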
Their Fourier harmonics are fit to cubic splines as a function of the radial coordinate $s$, allowing for adaptive integration of the ODEs. The status of the code development will be presented. Disintegration threshold of Langmuir solitons in inhomogeneous plasmas Yasutaro Nishimura, National Cheng Kung University, Taiwan, abstract, slides The dynamics of Langmuir solitons in inhomogeneous plasmas is investigated numerically employing the Zakharov equations. The solitons are accelerated toward the lower background density side. For steep density gradients, the balance between the electric field part of the soliton and the density cavity breaks down and the solitons disintegrate. The disintegration threshold is obtained by regarding the electric field part of the soliton as a point mass moving along the self-generated potential well produced by the density cavity. On the other hand, when the density gradient is below the threshold, Langmuir solitons adjust themselves by expelling the imbalanced portion as density cavities at the sound velocity. When the gradient is below the threshold, the electric field part of the soliton bounces back and forth within the potential well. The study is extended to kinetic simulation. The generation mechanism of high-energy electron tails in the presence of solitons is discussed. The electron distribution function resembles a Lorentzian. The particle acceleration is explained as a transport process toward the high-energy side due to the overlap of multiple resonant islands in phase space. Gyrokinetic simulation of fast L-H bifurcation dynamics in a realistic diverted tokamak edge geometry Despite its critical importance in the fusion program and over 30 years of H-mode operation, there has been no fundamental understanding at the kinetic level of how the H-mode bifurcation occurs. We report the first observation of an edge transport barrier formation event in an electrostatic gyrokinetic simulation carried out in a realistic C-Mod-like diverted tokamak edge geometry under strong forcing by a high rate of heat deposition. The results show that the synergistic action between two multiscale dynamics, the turbulent Reynolds-stress-driven [1] and the neoclassical X-point orbit-loss-driven [2] sheared ${\bf E}\times{\bf B}$ flows, quenches turbulent transport and forms a transport barrier just inside the last closed magnetic flux surface. The synergism helps reconcile experimental reports of the key role of turbulent stress in the bifurcation [3, and references therein] with some other experimental observations that ascribe the bifurcation to X-point orbit loss/neoclassical effects [4,5]. The synergism could also explain other experimental observations that identified a strong correlation between the L-H transition and the orbit-loss-driven ${\bf E}\times{\bf B}$ shearing rate [6,7]. The synergism is consistent with the general experimental observation that the L-H bifurcation is more difficult with the $\nabla B$-drift away from the single-null X-point, in which case the X-point orbit-loss effect is weaker [2]. [1] P.H. Diamond, S-I Itoh et al., Plasma Phys. Controlled Fusion 47, R35 (2005) [2] C.S. Chang, Seung-Hoe Ku & H. Weitzner, Phys. Plasmas 9, 3884 (2002) [3] G.R. Tynan, M. Xu et al., Nucl. Fusion 53, 073053 (2013) [4] T. Kobayashi, K. Itoh et al., Phys. Rev. Lett. 111, 035002 (2013) [5] M. Cavedon, T. Pütterich et al., Nucl. Fusion 57, 014002 (2017) [6] D.J. Battaglia, C.S. Chang et al., Nucl. Fusion 53, 113032 (2017) [7] S.M. Kaye, R.
Maingi et al., Nucl. Fusion 51, 113109 (2011) Parasitic momentum flux in the tokamak core Timothy Stoltzfus-Dueck, PPPL, abstract Tokamak plasmas rotate spontaneously in the absence of applied torque. This so-called "intrinsic rotation" may be very important for future low-torque devices such as ITER, since rotation can stabilize certain instabilities. In the mid-radius "gradient region", which reaches from the sawtooth inversion radius out to the pedestal top, intrinsic rotation profiles are sometimes flat and sometimes hollow. Profiles may even transition suddenly between these two states, an unexplained phenomenon referred to as rotation reversal. Theoretical efforts to identify the origin of the mid-radius rotation shear have focused primarily on quasilinear models, in which the phase relationships of some selected instability result in a nondiffusive momentum flux ("residual stress"). In contrast to these efforts, the present work demonstrates the existence of a robust, fully nonlinear symmetry-breaking momentum flux that follows from the free-energy flow in phase space and does not depend on any assumed linear eigenmode structure. The physical origin is an often-neglected portion of the radial ${\bf E}\times {\bf B}$ drift, which is shown to drive a symmetry-breaking outward flux of co-current momentum whenever free energy is transferred from the electrostatic potential to ion parallel flows [1]. The resulting rotation peaking is counter-current and scales as temperature over plasma current. As originally demonstrated by Landau [2], this free-energy transfer (thus also the corresponding residual stress) becomes inactive when frequencies are much higher than the ion transit frequency, which may explain the observed relation of density and counter-current rotation peaking in the core. Simple estimates suggest that this mechanism may be consistent with experimental observations, in both hollow and flat rotation regimes. [1] T. Stoltzfus-Dueck, Phys. Plasmas 24, 030702 (2017) [2] L. Landau, J. Exp. Theor. Phys. 16, 574 (1946); English translation in J. Phys. (USSR) 10, 25 (1946) Transport in the Coupled Pedestal and Scrape-off layer region of H-mode plasmas Michael Churchill, PPPL, abstract, slides Understanding the multi-scale neoclassical and turbulence physics in the edge region (pedestal + scrape-off layer) is required in order to reliably predict performance in future fusion devices. I will present research exploring characteristics of this transport using the family of X-point Gyrokinetic Codes (XGC). First, the variation of pressure in the scrape-off layer (important to understand in order to avoid divertor wall degradation) is widely believed to follow simple fluid prescriptions, due to high collisionality. However, simulation results in the near-SOL indicate a significant departure from the simple fluid models, even after including additional terms from neutral drag and the Chew-Goldberger-Low form of parallel ion viscosity to the parallel momentum balance. Second, turbulence characteristics in the edge region show nonlocal behavior, including convective transport of turbulent eddies ("blobs") born just inside the closed field line region out into the SOL. These large intermittent structures can be created even in the absence of collisions in the simulation. Tracking these structures show that on average their radial velocity is very small, even in the near-SOL, while their poloidal velocity is significant. 
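A toy sketch of how blob velocities can be estimated by tracking thresholded density structures between two frames (illustrative only, not the XGC analysis; function and parameter names are made up):

# Toy blob tracking: threshold the density fluctuation field, label connected
# structures, and estimate a velocity from the displacement of the largest
# structure's centre of mass between two frames separated by dt.
import numpy as np
from scipy import ndimage

def blob_velocity(frame0, frame1, dt, dx=1.0, dy=1.0, nsigma=2.5):
    def largest_blob_com(frame):
        dn = frame - frame.mean()
        mask = dn > nsigma * dn.std()
        labels, nlab = ndimage.label(mask)
        if nlab == 0:
            return None
        sizes = ndimage.sum(mask, labels, index=range(1, nlab + 1))
        biggest = 1 + int(np.argmax(sizes))
        return np.array(ndimage.center_of_mass(dn, labels, biggest))
    c0, c1 = largest_blob_com(frame0), largest_blob_com(frame1)
    if c0 is None or c1 is None:
        return None
    d = c1 - c0                                  # (row, column) displacement in pixels
    return d[1] * dx / dt, d[0] * dy / dt        # (vx, vy)

rng = np.random.default_rng(2)
a = rng.normal(size=(128, 128))
b = np.roll(a, 3, axis=1)                        # same field shifted by 3 columns
print(blob_velocity(a, b, dt=1.0))               # vx close to 3.0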
The potential structure within these blobs is monopolar, with a peak shifted from the density structure, contrary to the dipolar structure expected from analytical models for blob generation and transport based on interchange turbulence. Finally, we search for coherent phase-space structures in blobs; however, only broad regions of velocity space are found to show significant structure. Improved Neoclassical Transport Simulation for Helical Plasmas Botsz Huang, The Graduate University for Advanced Studies, Japan, abstract This work contains the following studies: (1) a benchmark of neoclassical transport models that clarifies the impact of the approximations in their drift-kinetic equations for helical plasmas; and (2) application of the local models to a quantitative evaluation of the bootstrap current in the FFHR-d1 DEMO. In part 1, this work reports benchmarks of neoclassical transport codes based on several local drift-kinetic models. The drift-kinetic models are zero-orbit-width (ZOW), zero-magnetic-drift (ZMD), DKES-like, and global, as classified in [1]. The magnetic geometries of HSX, LHD, and W7-X are employed. It is found that the assumption of ${\bf E} \times {\bf B}$ incompressibility causes discrepancies in the neoclassical radial flux and parallel flow among the models when ${\bf E} \times {\bf B}$ is sufficiently large compared to the magnetic drift velocities, for example, ${\cal M}_p ≤ 0.4$, where ${\cal M}_p$ is the poloidal Mach number. On the other hand, when ${\bf E} \times {\bf B}$ and the magnetic drift velocities are comparable, the tangential magnetic drift plays a role in suppressing the unphysical peaking of the neoclassical radial fluxes seen in the other local models at $E_r ≃ 0$. In low-collisionality plasmas, the tangential drift suppresses such unphysical behavior in the radial transport well. This work demonstrates that the ZOW model not only mitigates the unphysical behavior but also enables evaluation of the bootstrap current in LHD at low computational cost compared to the global model. In part 2, the impact of parallel momentum conservation on the bootstrap current is demonstrated for the ZOW, DKES, and PENTA models. The ZOW model is extended to include the effect of the ion parallel mean flow on the electron-ion parallel friction. The DKES model employs pitch-angle scattering as the collision operator. The PENTA code employs the Sugama-Nishimura method to correct the momentum balance of the DKES results. Therefore, the ZOW model and the PENTA code both conserve parallel momentum in like-species collisions and include the electron-ion parallel frictions. The work shows that the ZOW model and the PENTA code agree well in their bootstrap current calculations, which verifies the reliability of the bootstrap current calculation with the ZOW model and the PENTA code for the FFHR-d1 DEMO. [1] Seikichi Matsuoka, Shinsuke Satake et al., Phys. Plasmas 22, 072511 (2015) Recent progress of understanding 3D magnetic topology in stellarators and tokamaks Yasuhiro Suzuki, National Institute for Fusion Science, Japan, abstract, slides Recent progress in 3D equilibrium calculations will be reported. The 3D equilibrium calculation is fundamental to understanding the magnetic topology. In particular, for stellarators, topological changes along beta sequences have been found, and how to maintain good flux surfaces in a general 3D configuration at finite plasma beta is an important issue.
On the other hand, the 3D equilibrium calculation is also an important issue for tokamaks, because RMPs are widely used to control stability and transport. In this talk, recent results of 3D equilibrium calculations based on the HINT code, a 3D equilibrium code that does not assume perfectly nested flux surfaces, will be presented. Impacts of the beta sequences and of plasma rotation will be discussed. Self-organizing knots Christopher Smiet, Leiden University, The Netherlands, abstract, slides Magnetic helicity, a measure of the linking and knotting of magnetic field lines, is a conserved quantity in ideal MHD. In the presence of resistivity, helicity constrains the rate at which magnetic energy can be dissipated. When a localized, helical magnetic field is set to relax in a low-resistance high-beta plasma, the magnetic pressure drives the plasma to expand whilst the helicity is still approximately conserved. Using numerical simulations I show how this interplay gives rise to a novel MHD equilibrium: the initially linked field lines self-organize to form a structure where field lines lie on nested toroidal surfaces of constant pressure. The Lorentz forces are balanced by the gradient in pressure, with a minimum in pressure on the magnetic axis. Interestingly, the rotational transform is nearly constant on all magnetic surfaces, making the structure topologically nearly identical to a famous knotted structure in topology: the Hopf fibration. I will explore the nature of this equilibrium, and how it relates geometrically to the structure of the Hopf map. Additional dynamics give rise to phenomena that are well known from magnetic confinement devices; magnetic islands can occur at rational surfaces, and in certain regimes the equilibrium becomes nonaxisymmetric, triggering a marginal core-interchange mechanism. Development and application of BOUT++ for large-scale turbulence simulation Jarrod Leddy, The University of York, UK, abstract, slides The transport of heat and particles in the relatively collisional edge regions of magnetically confined plasmas is a scientifically challenging and technologically important problem. Understanding and predicting this transport requires the self-consistent evolution of plasma fluctuations, global profiles, and flows, but the numerical tools capable of doing this in realistic (diverted) geometry are only now being developed. BOUT++ is one such tool that has seen many recent developments towards this goal. A novel coordinate system has been developed to improve the resolution around the X-point and strike points in the divertor region. A 5-field reduced 2-fluid plasma model for the study of instabilities and turbulence in magnetised plasmas has been built on the BOUT++ framework that allows the evolution of global profiles, electric fields and flows on transport timescales, with flux-driven cross-field transport determined self-consistently by electromagnetic turbulence. Models for neutral evolution have also been included, and the interaction of these neutrals with the plasma is characterised through charge exchange, recombination, ionisation, and radiation. Simulation results for linear devices, MAST-U, and DIII-D are presented that shed light on the nature of plasma-neutral interaction, detachment in the super-X divertor, and turbulence in diverted geometry.
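A zero-dimensional toy of the plasma-neutral particle balance just described (illustrative only, not the BOUT++ neutral model): ionization converts neutrals into plasma and recombination does the reverse, while charge exchange, which mainly exchanges momentum and energy, is omitted here. The rate coefficients are made-up constants.

# 0D plasma-neutral particle balance with electron-impact ionization and
# recombination; n is the plasma density, nn the neutral density.
import numpy as np
from scipy.integrate import solve_ivp

k_iz, k_rec = 3e-14, 5e-20          # m^3/s, illustrative constant rate coefficients

def rhs(t, y):
    n, nn = y
    iz, rec = k_iz * n * nn, k_rec * n * n
    return [iz - rec, rec - iz]     # total particle number is conserved

sol = solve_ivp(rhs, (0.0, 5e-3), [1e18, 5e18], method="LSODA", rtol=1e-8)
print("final plasma / neutral density:", sol.y[0, -1], sol.y[1, -1])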
Studies on proton effective heating in magnetic reconnection by means of particle simulations Shunsuke Usami, National Institute for Fusion Science, Japan, abstract By means of two-dimensional electromagnetic particle simulations, the ion heating mechanism is investigated in magnetic reconnection with a guide magnetic field. These simulations mimic the dynamics of two torus plasmas merging through magnetic reconnection in a spherical tokamak (ST) device. It is found that protons are effectively heated in the downstream by the pickup mechanism, since a ring-like structure of the proton velocity distribution, which is theoretically predicted to be formed by picked-up ions, is observed in a local region of the downstream. Furthermore, based on the theory of J. F. Drake et al., only heavy ions were believed to be heated by the pickup mechanism; however, it is pointed out that pickup of protons is consistent with that theory when the upstream plasma beta is much less than 1, a condition that can be satisfied in STs. Low-Frequency $\delta f$ PIC Models with Fully Kinetic Ions Benjamin J. Sturdevant, University of Colorado at Boulder, USA, abstract, slides A fully kinetic ion model is useful for the verification of gyrokinetic turbulence simulations in certain regimes where the gyrokinetic model may break down due to the lack of small ordering parameters. For a fully kinetic ion model to be of value, however, it must first be able to accurately simulate low-frequency drift-type instabilities typically well within the domain of gyrokinetics. In this talk, we present a fully kinetic ion model formulated with weak gradient drive terms and applied to the ion-temperature-gradient (ITG) instability. A $\delta f$ implementation in toroidal geometry is discussed, where orthogonal coordinates are used for the particle dynamics, but field-line-following coordinates are used for the field equation, allowing for high resolution of the field-aligned mode structure. Variational methods are formulated for integrating the particle equations of motion, allowing for accuracy on a long time scale with modest timestep sizes. Finally, an implicit orbit averaging and sub-cycling scheme for the fully kinetic ion model is considered. Exact collisional plasma fluid theories Eero Hirvijoki/David Pfefferlé, PPPL, abstract, slides Following Grad's procedure, an expansion of the velocity space distribution functions in terms of multi-index Hermite polynomials is carried out to derive a consistent set of collisional fluid equations for plasmas. The velocity-space moments of the often troublesome nonlinear Landau collision operator are evaluated exactly, and to all orders with respect to the expansion. The collisional moments are shown to be generated by applying gradients to two well-known functions, namely the Rosenbluth-MacDonald-Judd-Trubnikov potentials for a Gaussian distribution. The expansion can be truncated at arbitrary order with quantifiable error, providing a consistent and systematic alternative to the Chapman-Enskog procedure which, in plasma physics, amounts to the famous Braginskii equations. To illustrate our approach, we provide the collisional ten-moment equations and prove explicitly that the exact, nonlinear expressions for the momentum- and energy-transfer rate satisfy the correct conservation properties.
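An illustrative one-dimensional version of the Hermite-moment expansion idea used above (the actual work employs multi-index Hermite polynomials in 3-D velocity space): expand $f(v) = N(v)\,\sum_n c_n \mathrm{He}_n(v)$, with $N(v)$ the unit Gaussian and $\mathrm{He}_n$ the probabilists' Hermite polynomials, using the orthogonality $\int \mathrm{He}_m \mathrm{He}_n N\,dv = n!\,\delta_{mn}$.

# Compute Hermite coefficients of a shifted Maxwellian and check reconstruction.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

v = np.linspace(-8, 8, 4001)
N = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)
u = 0.5                                                # mean flow of the test distribution
f = np.exp(-(v - u) ** 2 / 2) / np.sqrt(2 * np.pi)     # shifted Maxwellian

nmax = 12
coeffs = []
for n in range(nmax + 1):
    He_n = hermeval(v, [0.0] * n + [1.0])              # He_n(v)
    coeffs.append(np.trapz(f * He_n, v) / factorial(n))

# reconstruct and check convergence of the truncated expansion
f_rec = N * sum(c * hermeval(v, [0.0] * n + [1.0]) for n, c in enumerate(coeffs))
print("c_0..c_3 =", np.round(coeffs[:4], 4), " max error =", np.abs(f - f_rec).max())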
Recent advances in the variational formulation for reduced Vlasov-Maxwell equations Alain Brizard, Saint Michael's College, Colchester, USA, abstract, slides The talk presents recent advances in the variational formulation of reduced Vlasov-Maxwell equations. First, the variational formulations of guiding-center Vlasov-Maxwell theory based on Lagrange, Euler, and Euler-Poincaré variational principles are presented. Each variational principle yields a different approach to deriving guiding-center polarization and magnetization effects into the guiding-center Maxwell equations. The conservation laws of energy, momentum, and angular momentum are also derived by Noether method, where the guiding-center stress tensor is now shown to be explicitly symmetric. Next, the Eulerian variational principle for the nonlinear electromagnetic gyrokinetic Vlasov-Maxwell equations is presented in the parallel-symplectic representation, where the gyrocenter Poisson bracket contains contributions from the perturbed magnetic field. Discrete Exterior Calculus Discretization of the Navier-Stokes Equations Ravindra Samtaney, King Abdullah University Sci. & Technology, abstract, slides A conservative discretization of incompressible Navier-Stokes equations over surface simplicial meshes is developed using discrete exterior calculus (DEC). The DEC discretization is carried out for the exterior calculus form of Navier-Stokes equations, where the velocity field is represented by a 1-form. A distinguishing feature of our method is the use of an algebraic discretization of the interior product operator and a combinatorial discretization of the wedge product. Numerical experiments for flows over surfaces reveal a second order accuracy for the developed scheme for structured-triangular meshes, and first order accuracy for general unstructured meshes. The mimetic character of many of the DEC operators provides exact conservation of both mass and vorticity, in addition to superior kinetic energy conservation. The employment of various discrete Hodge star definitions based on both circumcentric and barycentric dual meshes is also demonstrated. The barycentric Hodge star allows the discretization to admit arbitrary simplicial meshes instead of being limited only to Delaunay meshes, as in previous DEC-based discretizations. The convergence order attained through the circumcentric Hodge operator is retained when using the barycentric Hodge. The discretization scheme is presented in detail along with numerical test cases demonstrating its numerical convergence and conservation properties. Preliminary results regarding the implementation of hybrid (circumcentric/barycentric) Hodge star operator are also presented. We conclude with some ideas for employing a similar method for magnetohydrodynamics. Global electromagnetic gyrokinetic and hybrid simulations of Alfvén eigenmodes Michael Cole, Max Planck Institute for Plasma Physics, Greifswald , abstract, slides The pursuit of commercial fusion power has driven the development of increasingly complex and complete numerical simulation tools in plasma physics. Recent work with the EUTERPE particle-in-cell code has made possible global, electromagnetic, fully gyrokinetic and fluid-gyrokinetic hybrid simulations in a broad parameter space, where previously global gyrokinetic simulations had been hampered by the so-called 'cancellation problem'. This has been applied to the simulation of the interaction between Alfvén eigenmodes and energetic particles. 
In this talk, the range of numerical methods used will be detailed, and it will be shown with practical examples that self-consistent global simulations may be necessary for even a qualitatively accurate prediction of the perturbation of the magnetic field and the fast-particle transport due to wave-particle interaction. A brief outline will be given of the future direction of this work, such as the possibility of gyrokinetic simulation of the interaction between fine-scale turbulence and MHD modes. Sparse grid techniques for particle-in-cell schemes Lee Ricketson, Courant Institute, New York University, abstract, slides The particle-in-cell (PIC) method has long been the standard technique for kinetic plasma simulation across many applications. The downside, though, is that quantitatively accurate, 3-D simulations require vast computing resources. A prominent reason for this complexity is that the statistical figure of merit is the number of particles per cell. In 3-D, the number of cells grows rapidly with grid resolution, necessitating an astronomical number of particles. To address this challenge, we propose the use of sparse grids: by a clever combination of the results from a variety of grids, each of which is well resolved in at most one coordinate direction, we achieve similar accuracy to that of a full grid, but with far fewer grid cells, thereby dramatically reducing the statistical error. We present results from test cases that demonstrate the new scheme's accuracy and efficiency. We also discuss the limitations of the approach and, in particular, its need for an intelligent choice of coordinate system. Optical collapse and nonlinear laser beam combining Pavel Lushnikov, U. New Mexico, abstract, slides Many nonlinear systems of partial differential equations admit spontaneous formation of singularities in a finite time (blow up). Blow up is often accompanied by a dramatic contraction of the spatial extent of the solution, which is called collapse. A collapse in a nonlinear Schrodinger equation (NLSE) describes the self-focusing of an intense laser beam in a nonlinear Kerr medium (such as ordinary glass), with the propagation distance $z$ playing the role of time. The NLSE in dimension two (two transverse coordinates) corresponds to the stationary self-focusing of the laser beam, eventually causing optical damage, as has been routinely observed in experiments since the 1960s. The NLSE in dimension three (two transverse coordinates and time) is responsible for the formation of an optical bullet, making the pulse much shorter in time in addition to the spatial self-focusing. We address the universal self-similar scaling near collapse. In the critical 2D case the collapsing solutions have the form of a rescaled soliton, such that the $z$-dependence of that scale determines the $z$-dependent collapse width $L(z)$ and amplitude $\sim 1/L(z)$. At the leading order $L(z) \sim (z_c-z)^{1/2}$, where $z_c$ is the collapse time, with the required log-log modification of that scaling. Log-log scaling for the NLSE was first obtained asymptotically in the 1980s and later proven in 2006. However, it remained a puzzle that this scaling was never clearly observed in simulations or experiment. We found that the classical log-log modification requires double-exponentially large amplitudes of the solution $\sim 10^{10^{100}}$, which is unrealistic to achieve in either physical experiments or numerical simulations.
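For reference, the log-log-modified collapse law mentioned above is commonly quoted (in the asymptotic analyses of the 1980s referenced here) in the form
$$L(z) \simeq \left(\frac{2\pi\,(z_c - z)}{\ln\ln\frac{1}{z_c - z}}\right)^{1/2},$$
which approaches the bare $(z_c-z)^{1/2}$ scaling only extremely slowly; this is the standard textbook form, and the exact prefactor should be checked against the talk's references.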
In contrast, we developed a new asymptotic theory which is valid starting from a quite moderate (about three-fold) increase of the solution amplitude compared with the initial conditions. We use that new theory to propose a nonlinear combining of multiple laser beams into a diffraction-limited beam by beam self-focusing in a Kerr medium. Multiple beams with total power above critical are combined in the near field and propagated through a multimode optical fiber. Random fluctuations during propagation first trigger the formation of strong optical turbulence. During subsequent propagation, the inverse cascade of optical turbulence tends to increase the transverse spatial scale of the fluctuations until it efficiently triggers a strong optical collapse event, producing a diffraction-limited beam with the critical power. On Degenerate Lagrangians, Noncanonical Hamiltonians, Dirac Constraints and their Discretization Michael Kraus, Max Planck Institute for Plasma Physics, Garching, abstract, slides Most systems encountered in plasma physics are Hamiltonian and therefore have a rich geometric structure, most importantly symplecticity and conservation of momentum maps. As most of these systems are formulated in noncanonical coordinates, they are not amenable to standard symplectic discretisation methods, which are popular for the integration of canonical Hamiltonian systems. Variational integrators, which can be seen as the Lagrangian equivalent to symplectic methods, seem to provide an alternative route towards the systematic derivation of structure-preserving numerical methods for such systems. However, for noncanonical Hamiltonian systems the corresponding Lagrangian is often found to be degenerate. This degeneracy gives rise to instabilities of the variational integrators which need to be overcome in order to make long-time simulations possible. In this talk, recent attempts to devise long-time stable structure-preserving integrators for noncanonical Hamiltonian and degenerate Lagrangian systems will be reviewed. The guiding-centre system will be used to exemplify the problems which arise for such systems and to demonstrate the good long-time fidelity of the newly developed integrators. Development and applications of Verification and Validation procedures Fabio Riva, EPFL, Switzerland, abstract The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of three separate tasks: code verification, a mathematical task aimed at assessing that the physical model is correctly implemented in a simulation code; solution verification, which evaluates the numerical errors affecting a simulation; and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. To perform a code verification, we propose to use the method of manufactured solutions, a methodology that we have generalized to PIC codes, overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise. The solution verification procedure we put forward is based on Richardson extrapolation, used as a higher-order estimate of the exact solution. These verification procedures were applied to GBS, a three-dimensional fluid code for SOL plasma turbulence simulation based on a finite difference scheme, and to a unidimensional, electrostatic, collisionless PIC code.
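As a minimal sketch of the Richardson-extrapolation step in the solution-verification procedure described above (a generic textbook illustration, not the GBS implementation; the toy derivative problem and the function name richardson are placeholders):

```python
import numpy as np

def richardson(f_h, f_h2, p, r=2.0):
    """Combine solutions f_h (spacing h) and f_h2 (spacing h/r) of a scheme of
    formal order p into a higher-order estimate of the exact solution, and
    return that estimate together with the error estimate for f_h2."""
    f_star = f_h2 + (f_h2 - f_h) / (r**p - 1.0)
    return f_star, f_star - f_h2

# Toy check: second-order central difference for d/dx sin(x) at x = 1.
x, h = 1.0, 0.1
d_h  = (np.sin(x + h) - np.sin(x - h)) / (2 * h)       # spacing h
d_h2 = (np.sin(x + h / 2) - np.sin(x - h / 2)) / h     # spacing h/2
est, err = richardson(d_h, d_h2, p=2)
print(est - np.cos(x), err)   # extrapolated error is far below the h/2 error
```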
To perform a detailed validation of GBS against experimental measurements, we generalized the magnetic geometry of the simulation code to include elongation and non-zero triangularity, and we investigated theoretically the impact of plasma shaping effects on SOL turbulence. An experimental campaign is now planned on TCV, to validate our findings against experimental measurements in tokamak limited configurations. Modeling efforts in hybrid kinetic-MHD and fully kinetic theories Cesare Tronci, University of Surrey, UK , abstract, slides Over the decades, multiscale modeling efforts have resorted to powerful methods, such as asymptotic/perturbative expansions and/or averaging techniques. As a result of these procedures, finer scale terms are typically discarded in the fundamental equations of motion. Although this process has led to well consolidated plasma models, consistency issues may emerge in certain cases especially concerning the energy balance. This may lead to the presence of spurious instabilities that are produced by nonphysical energy sources. The talk proposes alternative techniques based on classical mechanics and its underlying geometric principles. Inspired by Littlejohn's guiding-center theory, the main idea is to apply physical approximations to the action principle (or the Hamiltonian structure) underlying the fundamental system, rather than operating directly on its equations of motion. Here, I will show how this method provides new energy-conserving variants of hybrid kinetic-MHD models, which suppress the spurious instabilities emerging in previous non-conservative schemes. Also, this method allows for quasi-neutral approximations of fully kinetic Vlasov theories, thereby neglecting both radiation and Langmuir oscillations. Extending geometrical optics: A Lagrangian theory for vector waves Daniel Ruiz, Princeton University , abstract Even diffraction aside, the commonly known equations of geometrical optics (GO) are not entirely accurate. GO considers wave rays as classical particles, which are completely described by their coordinates and momenta, but rays have another degree of freedom, namely, polarization. As a result, wave rays can behave as particles with spin. A well-known example of polarization dynamics is wave-mode conversion, which can be interpreted as rotation of the (classical) ``wave spin.'' However, there are other less-known manifestations of the wave spin, such as polarization precession and polarization-driven bending of ray trajectories. This talk presents recent advances in extending and reformulating GO as a first-principle Lagrangian theory, whose effective-gauge Hamiltonian governs both mentioned polarization phenomena simultaneously. Examples and numerical results are presented. When applied to classical waves, the theory correctly predicts the polarization-driven divergence of left- and right- polarized electromagnetic waves in isotropic media, such as dielectrics and nonmagnetized plasmas. In the case of particles with spin, the formalism also yields a point-particle Lagrangian model for the Dirac electron, i.e. the relativistic spin-1/2 electron, which includes both the Stern-Gerlach spin potential and the Bargmann-Michel-Telegdi spin precession. 
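For orientation, the lowest-order scalar GO limit that such a Lagrangian theory extends is just Hamiltonian ray tracing, $\dot{\mathbf{x}} = \partial\omega/\partial\mathbf{k}$, $\dot{\mathbf{k}} = -\partial\omega/\partial\mathbf{x}$; the sketch below integrates these equations for a toy isotropic refractive-index profile (an illustrative choice, not an example from the talk) and, by construction, carries none of the polarization physics discussed above.

```python
import numpy as np

c = 1.0
def n(x):                       # toy refractive-index bump (illustrative only)
    return 1.0 + 0.5 * np.exp(-np.dot(x, x))

def omega(x, k):                # isotropic dispersion relation w = c|k|/n(x)
    return c * np.linalg.norm(k) / n(x)

def grads(x, k, eps=1e-6):      # finite-difference gradients of omega
    gx = np.array([(omega(x + eps*e, k) - omega(x - eps*e, k)) / (2*eps)
                   for e in np.eye(2)])
    gk = np.array([(omega(x, k + eps*e) - omega(x, k - eps*e)) / (2*eps)
                   for e in np.eye(2)])
    return gx, gk

x, k, dt = np.array([-3.0, 0.3]), np.array([1.0, 0.0]), 0.01
for _ in range(1000):           # forward-Euler ray tracing
    gx, gk = grads(x, k)
    x, k = x + dt * gk, k - dt * gx
print(x, k)                     # the ray is deflected toward the high-index bump
```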
Additionally, the same theory contributes, perhaps unexpectedly, to the understanding of ponderomotive effects in both wave and particle dynamics; e.g., the formalism allows to obtain the ponderomotive Hamiltonian for a Dirac electron interacting with an arbitrarily large electromagnetic laser field with spin effects included. Understanding and Predicting Profile Structure and Parametric Scaling of Intrinsic Rotation Weixing Wang, PPPL , abstract This talk reports on a recent advancement in developing physical understanding and a first-principles-based model for predicting intrinsic rotation profiles in magnetic fusion experiments, including ITER. It is shown for the first time that turbulent fluctuation-driven residual stress (a non-diffusive component of momentum flux) can account for both the shape and magnitude of the observed intrinsic toroidal rotation profile. The model predictions of core rotation based on global gyrokinetic simulations agree well with the experimental measurements for a set of DIII-D ECH discharges. The characteristic dependence of residual stress and intrinsic rotation profile structure on the multi-dimensional parametric space covering turbulence type, q-profile structure, collisionality and up-down asymmetry in magnetic geometry has been studied with the goal of developing physics understanding needed for rotation profile control and optimization. Finally, the first-principles-based model is applied to elucidating the ρ∗-scaling and predicting rotations in ITER regime. Laser-Driven Magnetized Collisionless Shocks Derek Schaeffer, Princeton University , abstract Collisionless shocks -- supersonic plasma flows in which the interaction length scale is much shorter than the collisional mean free path -- are common phenomena in space and astrophysical systems, including the solar wind, coronal mass ejections, supernovae remnants, and the jets of active galactic nuclei. These systems have been studied for decades, and in many the shocks are believed to efficiently accelerate particles to some of the highest observed energies. Only recently, however, have laser and diagnostic capabilities evolved sufficiently to allow the detailed study in the laboratory of the microphysics of collisionless shocks over a large parameter regime. We present experiments that demonstrate the formation of collisionless shocks utilizing the Phoenix laser laboratory and the LArge Plasma Device (LAPD) at UCLA. We also show recent observations of magnetized collisionless shocks on the Omega EP laser facility that extend the LAPD results to higher laser energy, background magnetic field, and ambient plasma density, and that may be relevant to recent experiments on strongly driven magnetic reconnection. Lastly, we discuss a new experimental regime for shocks with results from high-repetition (1 Hz), volumetric laser-driven measurements on the LAPD. These large parameter scales allow us to probe the formation physics of collisionless shocks over several Alfvenic Mach numbers ($M_A$), from shock precursors (magnetosonic solitons with $M_A<1$) to subcritical ($M_A<3$) and supercritical ($M_A>3$) shocks. The results show that collisionless shocks can be generated using a laser-driven magnetic piston, and agree well with both 2D and 3D hybrid and PIC simulations. 
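A small helper in the spirit of the regime classification quoted above (the thresholds are those stated in the abstract; the plasma numbers in the example are placeholders, not LAPD or Omega EP values):

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [H/m]

def alfven_mach(v_flow, B, n_i, m_i):
    """Alfvenic Mach number M_A = v_flow / v_A for ion density n_i [m^-3],
    ion mass m_i [kg], and magnetic field B [T]."""
    v_alfven = B / np.sqrt(MU0 * n_i * m_i)
    return v_flow / v_alfven

def shock_regime(M_A):
    if M_A < 1.0:
        return "precursor (magnetosonic soliton)"
    if M_A < 3.0:
        return "subcritical shock"
    return "supercritical shock"

M_A = alfven_mach(v_flow=4.0e5, B=0.03, n_i=1.0e19, m_i=1.67e-27)  # placeholders
print(round(M_A, 2), shock_regime(M_A))
```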
Additionally, using radiation-hydrodynamic modeling and measurements from multiple diagnostics, the different shock regimes are characterized with dimensionless formation parameters, allowing us to place disparate experiments in a common and predictive framework. Radiation effects on the runaway electron avalanche Chang Liu, Princeton University, abstract, slides Runaway electrons are a critical area of research in tokamak disruptions. A thermal quench on ITER can result in avalanche production of a large population of runaway electrons and a transfer of the plasma current to runaway electron carriers. The potential damage caused by the highly energetic electron beam poses a significant challenge for ITER to achieve its mission. It is therefore extremely important to have a quantitative understanding of the runaway electron avalanche process. It is found that the radiative energy loss and the pitch angle scattering from radiative E&M fields play an important role in determining the runaway electron distribution in momentum space. In this talk we discuss three kinds of radiation from runaway electrons: synchrotron radiation, Cerenkov radiation, and electron cyclotron emission (ECE) radiation. Synchrotron radiation, which mainly comes from the cyclotron motion of highly relativistic runaway electrons, dominates the energy loss of runaway electrons in the high-energy regime. The Cerenkov radiation from runaway electrons gives an additional correction to the Coulomb logarithm in the collision operator, which changes the avalanche growth rate. The ECE emission mainly comes from electrons in the energy range $1.2<\gamma<3$, and provides an important approach for diagnosing the runaway electron distribution in momentum and pitch angle. We developed a novel tool to self-consistently calculate normal mode scattering of runaway electrons using the quasi-linear method, and implemented it in the well-developed runaway electron kinetic simulation code CODE. Using this we successfully reproduce the experimental ECE signal qualitatively. Plasmoid formation in a laboratory and large-volume flux closure during simulations of Coaxial Helicity Injection in NSTX-U Fatima Ebrahimi, PPPL and Princeton University, abstract In NSTX-U, transient Coaxial Helicity Injection (CHI) is the primary method for current generation without reliance on the solenoid. A CHI discharge is generated by driving current along open field lines (the injector flux) that connect the inner and outer divertor plates on NSTX/NSTX-U, and has generated over 200 kA of toroidal current on closed flux surfaces in NSTX. Extrapolation of the concept to larger devices requires an improved understanding of the physics of flux closure and the governing parameters that maximize the fraction of injected flux that is converted to useful closed flux. Here, through comprehensive resistive MHD NIMROD simulations conducted for the NSTX and NSTX-U geometries, two new major findings will be reported. First, formation of an elongated Sweet-Parker current sheet and a transition to plasmoid instability has for the first time been demonstrated by realistic global simulations [1]. This is the first observation of plasmoid instability in a laboratory device configuration predicted by realistic MHD simulations and then supported by experimental camera images from NSTX. Second, simulations have now, for the first time, been able to show large-fraction conversion of injected open flux to closed flux in the NSTX-U geometry [2].
Consistent with the experiment, simulations also show that reconnection could occur at every stage of the helicity injection phase. The influence of 3D effects, and the parameter range that supports these important new findings is now being studied to understand the impact of toroidal magnetic field and the electron temperature, both of which are projected to increase in larger ST devices. [1] F. Ebrahimi & R. Raman, Phys. Rev. Lett. 114, 205003 (2015) [2] F. Ebrahimi & R. Raman, Nucl. Fusion 56, 044002 (2016) Generation of helium and oxygen EMIC waves by the bunch distribution of oxygen ions associated with weak fast magnetosonic shocks in the magnetosphere Lou-Chuang Lee, Academia Sinica, Taiwan , abstract, slides Electromagnetic ion cyclotron (EMIC) waves are often observed in the magnetosphere with frequency usually in the proton and helium cyclotron bands and sometimes in the oxygen band. The temperature anisotropy, caused by injection of energetic ions or by compression of magnetosphere, can efficiently generate proton EMIC waves, but not as efficient for helium or oxygen EMIC waves. Here we propose a new generation mechanism for helium and oxygen EMIC waves associated with weak fast magnetosonic shocks, which are observed in the magnetosphere. These shocks can be associated with either dynamic pressure enhancement or shocks in the solar wind and can lead to the formation of a "bunch" distribution in the perpendicular velocity plane of oxygen ions. The oxygen bunch distribution can excite strong helium EMIC waves and weak oxygen and proton waves. The dominant helium EMIC waves are strong in quasi-perpendicular propagation and show harmonics in frequency spectrum of Fourier analysis. The proposed mechanism can explain the generation and some observed properties of helium and oxygen EMIC waves in the magnetosphere. Penetration and amplification of resonant perturbations in 3D ideal-MHD equilibria Stuart Hudson, PPPL , abstract The nature of ideal-MHD equilibria in three-dimensional geometry is profoundly affected by resonant surfaces, which beget a non-analytic dependence of the equilibrium on the boundary. Furthermore, non-physical currents arise in equilibria with continuously-nested magnetic surfaces and smooth pressure and rotational-transform profiles. We demonstrate that three-dimensional, ideal-MHD equilibria with nested surfaces and δ-function current densities that produce a discontinuous rotational-transform are well defined and can be computed both perturbatively and using fully-nonlinear equilibrium calculations. The results are of direct practical importance: we predict that resonant magnetic perturbations penetrate past the rational surface (i.e. "shielding" is incomplete, even in purely ideal-MHD) and that the perturbation is amplified by plasma pressure, increasingly so as stability limits are approached. Gyrokinetic projection of the divertor heat-flux width from present tokamaks to ITER C.S. Chang, PPPL , abstract The total-f edge gyrokinetic code XGC1 shows that the divertor heat-flux width $λ_q$ in three US tokamaks (DIII-D for conventional aspect ratio, NSTX for tight aspect ratio, and C-Mod for high BP) obeys the experimentally observed $λ_q\propto 1/B_P^\gamma$ scaling in the so called "sheath-dominant regime." The low-beta edge plasma is non-thermal and approaches the quasi-steady state in a kinetic non-diffusive time scale. Nonlinear Fokker-Planck-Landau collision operator is used. Monte-Carlo neutral atoms are recycled near the material wall. 
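For reference, extracting the exponent in a $\lambda_q \propto 1/B_P^\gamma$ relation is a one-line least-squares fit on log-log axes; the sketch below uses placeholder values rather than the DIII-D, NSTX, or C-Mod data analyzed in the talk.

```python
import numpy as np

# Placeholder (B_P [T], lambda_q [mm]) pairs -- illustrative numbers only.
B_p      = np.array([0.2, 0.4, 0.8, 1.2])
lambda_q = np.array([5.0, 2.6, 1.2, 0.9])

# Fit lambda_q = C * B_p**(-gamma) via linear regression in log-log space.
slope, intercept = np.polyfit(np.log(B_p), np.log(lambda_q), 1)
gamma, C = -slope, np.exp(intercept)
print(f"lambda_q ≈ {C:.2f} * B_p^(-{gamma:.2f})")
```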
Successful validation of the XGC1 simulation results on three US tokamak devices will be presented in the so called "sheath-limited" regime. It is found that $λ_q$ on DIII-D, NSTX, and lower-$B_P$ C-Mod is dominated by the neoclassical orbit dynamics of the supra-thermal ions. However, C-Mod at higher $B_P$ shows blob-dominance, while still fitting into the $λ_q\propto 1/B_P^\gamma$ graph. Predictive simulation on ITER shows that $λ_q$ is over 5 times greater than that predicted by the empirical $λ_q\propto 1/B_P^\gamma$ scaling. Relativistic Electrons and Magnetic Reconnection in ITER Allen Boozer, Columbia University , abstract, slides ITER, the largest scientific project ever undertaken, "has been designed to prove the feasibility of fusion" energy. That mission could be compromised if the net current in the plasma were transferred to relativistic-electron carriers, which can result in great damage to the device. More than a hundred-fifty papers on the topic have appeared in the twenty years since it was realized that a sudden drop in the plasma temperature could cause such a transfer. The theoretical papers have focused on electron runaway when magnetic field lines remain confined to the plasma volume. Experiments and their simulation imply most magnetic field lines intercept the walls when a sudden drop in the temperature occurs. In simulations not all of the magnetic field lines intercept the walls. If these non-intercepting flux tubes survive until outer magnetic surfaces reform, an especially dangerous situation arises in which the relativistic electrons can be lost in a short pulse along a narrow flux tube. Maxwell's equations imply the conservation of magnetic helicity during fast magnetic reconnections. Helicity conservation clarifies features, such as the spike in the plasma current during a thermal quench, and shows that a rapid, ~1ms, acceleration of electrons can occur. Magnetic reconnection can occur on a time scale of tens of Alfvén transit times either due to the formation of multiple plasmoids or as a result of the exponential sensitivity to non-ideal effects when an ideal evolution causes neighboring magnetic field lines to increase their separation exponentially with distance along the lines. In either case, magnetic flux tubes that had different force-free parallel currents $j_{\parallel}/B$ and different poloidal magnetic fluxes external to their surfaces can be joined in a fast reconnection. The differing parallel currents relax by Alfvén waves, and the change in the poloidal flux associated with a given enclosed toroidal magnetic flux is given by helicity conservation. Based on the observed time scale of current spikes, a few hundred toroidal transits are required to relax the $j_{\parallel}/B$ profile, which is consistent with distance along magnetic field lines implied by the speed of the electron temperature drop. Helicity conservation implies that although changes in the poloidal flux can occur on a time scale determined by the Alfvén transit time, the magnetic helicity and most of the poloidal flux can change only on a long resistive, $L/\mathcal{R}$, time scale. Rapid plasma terminations require plasma cooling. The potential for damage, the magnitude of the extrapolation from existing devices, and the importance of the atypical—incidents that occur once in a thousand shots—make theory and simulation essential for ensuring that relativistic runaway electrons will not prevent ITER from achieving its mission. The U.S. 
DoE Office of Science has established a Simulation Center for Runaway Electron Avoidance and Mitigation (SCREAM) with two-year funding of \$3.9M. The danger of electron runaway to relativistic energies can be avoided in magnetically confined fusion systems by making the net current, i.e. the $j_{\parallel}/B$ profile, sufficiently small. This is possible in the non-axisymmetric stellarator geometry though not in axisymmetric tokamaks, such as ITER. Variational current-coupling gyrokinetic-MHD Josh Burby, Courant Institute, New York University, abstract, slides In this talk I will describe the details and derivation of a new current-coupling gyrokinetic-MHD model. In particular, I will show that the model can be derived from a variational principle. Energy and hot charge are conserved exactly regardless of the form of the background magnetic field. Likewise, when the background field admits a continuous rotation or translation symmetry, the corresponding component of the total momentum is conserved. The theory relies on a new gauge-invariant formulation of the motion of gyrocenters in prescribed electromagnetic fields, and this will be described in detail. Implicit Multiscale Full Kinetics as an Alternative to Gyrokinetics Scott E. Parker, University of Colorado, abstract Recent progress has been made in developing full kinetic Lorentz force ion dynamics using implicit multiscale techniques [1]. It is now possible to capture low-frequency physics along with finite Larmor radius (FLR) effects with a fully kinetic multiscale $\delta f$ particle simulation. The utility of such a model is the ability to verify gyrokinetics in situations where the smallness of the ordering parameters is questionable. Additionally, such a model can help identify which higher-order terms in gyrokinetics might be important. Orbit averaging and sub-cycling are utilized with an implicit particle time advance based on variational principles. This produces stable and accurate ion trajectories on long time scales. Excellent agreement with the gyrokinetic dispersion relation is obtained, including full FLR effects. Ion Bernstein waves and the compressional Alfvén wave are easily suppressed with the implicit time advance. We have developed a global toroidal electrostatic adiabatic-electron Lorentz-ion code. We will report preliminary linear results benchmarking Lorentz ions with gyrokinetics for the Cyclone base case. We will begin by reviewing recent results from the GEM code simulating electromagnetic gyrokinetic turbulence in the edge pedestal, where the timestep required is comparable to the ion cyclotron period. [1] Benjamin J. Sturdevant, Scott E. Parker et al., J. Comput. Phys. 316, 519 (2016) Current status of the LHD and the prospect for the Deuterium experiment Dr. Masaki Osakabe, National Institute for Fusion Science, Japan Dissipation and Intermittency in Gyrokinetic Turbulence and Beyond Jason TenBarge, University of Maryland, abstract, slides Turbulence is a ubiquitous process in space and astrophysical plasmas that serves to mediate the transfer of large-scale motions to small scales at which the turbulence can be dissipated and the plasma heated. In situ solar wind observations and direct numerical simulations demonstrate that sub-proton scale turbulence is dominated by highly anisotropic and intermittent, low frequency, kinetic Alfvénic fluctuations.
I will review recent work on the dissipation of Alfvénic turbulence observed in gyrokinetic simulations and discuss the coherent structures and intermittency associated with the turbulence, which suggest a non-local and non-self-similar energy cascade. Moving beyond the confines of gyrokinetics, I will also briefly discuss work on a full Eulerian Vlasov-Maxwell code, Gkeyll, being developed at Princeton and the University of Maryland. Effective resistivity in collisionless magnetic reconnection Zhi-Wei Ma, Zhejiang University, China, abstract [#s80, 18 Aug 2016] The well-known physical mechanism for fast magnetic reconnection in collisionless plasmas is that the off-diagonal terms of the electron-pressure tensor give rise to a large electric field in the reconnection region. The electron-pressure tensor, fully associated with electron kinetic effects, is difficult to implement in the MHD model. In this talk, we try to use a simple relation $E = \eta J$ (where $\eta$ is an effective resistivity) to describe fast reconnection in collisionless plasmas. The physical mechanism and formulation of the effective resistivity are addressed. Turbulence in shear MHD flows: Implications for accretion disks Farrukh Nauman, Niels Bohr International Academy, Copenhagen, abstract, slides Accretion flows are found in a large variety of astrophysical systems, from protoplanetary disks to active galactic nuclei. Our present understanding of such flows is severely limited by both observational and numerical resolution. I will discuss some new numerical results on zero-magnetic-flux shear MHD turbulence and its relation to the magnetic Prandtl number. I will then briefly discuss the effects of rotation on large-scale magnetic fields. My talk will end with some speculations about how one might construct a self-consistent model for accretion flows based on our current understanding. Dynamics of the ELMs in pre-crash and crash suppressed period in KSTAR Hyeon Park, NFRI/UNIST, Korea, abstract, slides [#s77, 28 Jul 2016] Following the first operation of H-mode in KSTAR in 2009, studies of edge localized modes (ELMs) have been actively conducted. A unique in-vessel control coil (IVCC) set (top, middle and bottom) capable of generating resonant (and non-resonant) magnetic perturbations (RMPs) at low n(=1,2) was successfully utilized to suppress and/or mitigate the ELM-crash in KSTAR. Extensive study of the dynamics of the ELMs in both the pre-crash and crash-suppressed phases under magnetic perturbation with the 2D/3D Electron Cyclotron Emission Imaging (ECEI) system revealed new phenomenology of the ELMs and ELM-crash dynamics that was not available from conventional diagnostics. Since the first 2D images of the ELM time evolution from growth through saturation to crash, detailed images of the ELMs leading to the crash, together with the fast RF emission (<200 MHz) signal, have demonstrated that the pre-crash events are complex. The measured 2D image of the ELM was validated by direct comparison with a synthetic 2D image from the BOUT++ code, and a non-linear modelling study is in progress. Recently, the observed dynamics of the ELMs on both the high- and low-field sides, such as asymmetries in intensity, mode number and rotation direction, cast doubt on the peeling-ballooning mode picture. The response of the high-field-side ELMs to the RMP was more pronounced than that of the low-field side. Other studies include the observation of multiple modes and sudden mode-number transitions.
During the ELM-crash suppression experiment, various types of ELM-crash patterns were observed and often the suppression was marginal. The observed semi-coherent turbulence spectra under the RMP provided an evidence of non-linear interaction between the ELMs and turbulence. Statistical origin and properties of kappa distributions George Livadiotis, Southwest Research Institute , abstract, slides Classical particle systems reside at thermal equilibrium with their velocity distribution function stabilized into a Maxwell distribution. On the contrary, collisionless and correlated particle systems, such as space and astrophysical plasmas, are characterized by a non-Maxwellian behavior, typically described by so-called $\kappa$ distributions, or combinations thereof. Empirical $\kappa$ distributions have become increasingly widespread across space and plasma physics. A breakthrough in the field came with the connection of $\kappa$ distributions to non-extensive statistical mechanics. Understanding the statistical origin of $\kappa$ distributions was the cornerstone of further theoretical developments and applications, some of which will be presented in this talk: (i) The physical meaning of thermal parameters, e.g., temperature and kappa index; (ii) the multi-particle description of $\kappa$ distributions; (iii) the generalization to phase-space $\kappa$ distribution of a Hamiltonian with non-zero potential; (iv) the Sackur-Tetrode entropy for $\kappa$ distributions, and (v) the existence of a large-scale phase-space cell, characteristic of collisionless space plasmas, indicating a new quantization constant, $\hbar ^* \sim 10^{-22} Js$. Explosive Solution of a Time Delayed Nonlinear Cubic Equation Derived for Fluids (Hickernell) and Plasmas (Berk-Breizman) Herb Berk, Institute for Fusion Studies, U. Texas at Austin, abstract, slides [#s61, 27 Jun 2016] This presentation will describe new explosive attractor solutions to the universal cubic delay equation found in both the fluid [1] and (for a kinetic system) in the plasma literature [2]. The cubic delay equation describes a system governed by a control parameter $\phi$ (in plasmas its value is determined by the linear properties of the kinetic response). The simulation of the temporal evolution reveals the development of an explosive mode, i.e. a mode growing without bound in a finite time. The two main features of the response are: (1) a well-known explosive envelope $(t_0-t)^{-5/2}$, with $t_0$ the blow-up time of the amplitude; (2) a spectrum with ever-increasing oscillation frequencies whose values depend on the parameter $\phi$. Analytic modeling explains the results and quantitatively nearly replicates the attractor solutions found in the simulations. These analytic attractor solutions are linearly stable except in some cases where the nonlinear solution needs to be corrected to include higher harmonics. Our analysis explains almost all of the rather complicated numerical attractor solutions for the cubic delay equation. [1] F.J. Hickernell, J. Fluid Mech. 142, 431 (1984) [2] B.N. Breizman, H.L. Berk et al., Phys. Plasmas 4, 1559 (1997) Physics of tokamak flow relaxation to equilibrium Bruce Scott, Max-Planck-IPP, EURATOM Association, abstract The theorem for toroidal angular momentum conservation within gyrokinetic field theory is used as a starting point for consideration of flow equilibration at low frequencies (less than fast-Alfvén or gyrofrequencies). 
Quasineutrality and perpendicular MHD force balance are inputs to the theory and therefore never violated. However, the gyrocenter densities are not ambipolar in equilibrium, since the flow vorticity is given by their difference. From an arbitrary initial state, flows evolve acoustically and via Landau damping into divergence balance, in which radial force balance of the electric field is a part. On collisional time scales, which in the tokamak core are longer, the neoclassical electric field is brought into balance by collisions, and it is only on these slow time scales that the collisional transport is ambipolar (ie, the time derivative of the vorticity is small). Computations from 2014 showing the relaxation on tokamak core spatial scales are displayed. I will also give relevant cases of edge-layer relaxation and discuss the dependence on the finite poloidal gyroradius. Total-$f$ two-species gyrokinetic relaxation cases from 2009/10 are available to show that the basic processes in fluid and gyrokinetic models are the same for these purposes. Equilibrium Potential Well due to Finite Larmor Radius Effects at the Tokamak Edge W. W. Lee, PPPL , abstract, slides We present a novel mechanism for producing the equilibrium potential well near the edge of a tokamak. Briefly, because of the difference in gyroradii between electrons and ions, an equilibrium electrostatic potential is generated in the presence of spatial inhomogeneity of the background plasma, which, in turn, produces a well associated with the radial electric field, $E_r$, as observed at the edge of many tokamak experiments. We will show that this theoretically predicted $E_r$ field, which can be regarded as producing a long radial wave length zonal flow, agrees well with recent experimental measurements. The work is in collaboration with R. B. White [1]. [1] W.W. Lee & R.B. White, PPPL Report 5254 (2016) Predicting solar magnetic activity and its implications for global dynamo models Nishant Kumar Singh, KTH Royal Institute of Technology, abstract, slides Using the solar surface mode, i.e. the $f$-mode, we attempt to predict the emergence of active regions (ARs) in the days before they can be seen in magnetograms. Our study is motivated by earlier numerical findings of Singh et al. [1], who showed that, in the presence of a nonuniform magnetic field which is concentrated a few scale heights below the surface, the $f$-mode fans out in the diagnostic $k$-$\omega$ diagram at high wavenumbers. Here we exploit this property using data from the Helioseismic and Magnetic Imager aboard the Solar Dynamics Observatory, and show for about six ARs that at large latitudinal wavenumbers (corresponding to horizontal scales of around 3000 km), the $f$-mode displays strengthening about two days prior to AR formation and thus provides a new precursor for AR formation. I will also discuss ways to isolate signals from newly forming ARs in a crowded environment where existing ones are expected to pollute the neighbouring patches. The idea that the $f$-mode is perturbed days before any visible magnetic activity occurs on the surface can be important in constraining dynamo models aiming at understanding the global magnetic activity of the Sun. [1] Nishant K. 
Singh, Harsha Raichur & Axel Brandenburg, arxiv.org/abs/1601.00629 (2014) Statistical analysis of turbulent transport for flux driven toroidal plasmas Johan Anderson, Chalmers University of Technology, abstract, slides During recent years an overwhelming body of evidence shows that the overall transport of heat and particles is, to a large part, caused by intermittency (or bursty events) related to coherent structures. A crucial question in plasma confinement is thus the prediction of the probability distribution functions (PDFs) of the transport due to these structures and of their formation. This work provides a theoretical interpretation of numerically generated PDFs of intermittent plasma transport events, as well as offering an explanation for elevated PDF tails of heat flux. Specifically, we analyse time traces of heat flux generated by global nonlinear gyrokinetic simulations of ion-temperature-gradient turbulence by the GKNET software [1]. The simulation framework is global, flux-driven and considers adiabatic electrons. In the simulations, SOC type intermittent bursts are frequently observed and transport is often regulated by non-diffusive processes, thus the PDFs of e.g. heat flux are in general non-Gaussian with enhanced tails. A key finding of this study is that the intermittent process in the context of drift-wave turbulence appears to be independent of the specific modelling framework, opening the way to the prediction of its salient features. Although the same PDFs were previously found in local gyrokinetic simulations [2], there are some unique features present inherently coming from the global nature of the physics. The main part of this work consists in providing a theoretical interpretation of the PDFs of radial heat flux. The numerically generated time traces are processed with Box–Jenkins modelling in order to remove deterministic autocorrelations, thus retaining their stochastic parts only. These PDFs have been shown to agree very well with analytical predictions based on a fluid model, on applying the instanton method. In this talk, the theory and comparisons to the numerical work will be presented. The result points to a universality in the modelling of the intermittently stochastic process while the analytical theory offers predictive capability, extending the previous result to be globally applicable. [1] K. Imadera et al., 25th FEC, TH/P5-8 (2014) [2] Johan Anderson & Pavlos Xanthopoulos, Phys. Plasmas 17, 110702 (2010) Tim Stoltzfus-Dueck, PPPL, abstract, slides A higher-order portion of the ${\bf E}\times {\bf B}$ drift causes an outward flux of co-current momentum when electrostatic potential energy is transferred to ion-parallel flows. The robust symmetry breaking follows from the free energy flow in phase space and does not depend on any assumed linear eigenmode structure. The resulting rotation peaking is counter-current and scales as temperature over plasma current. 
This peaking mechanism can only act when there are adequate fluctuations at low enough frequencies to excite ion parallel flows, which may explain some experimental observations related to rotation reversals. Experimental study of the role of electron pressure in fast magnetic reconnection with a guide field Will Fox, PPPL, abstract Magnetic reconnection, the change of magnetic topology in the presence of plasma, is observed in space and in the laboratory; it enables the explosive energy release by plasma instabilities, as in solar flares or magnetospheric substorms, and the change in topology allows the rapid heat transport associated with sawtooth relaxation and self-organization in RFPs. In numerous environments, especially in toroidal confinement devices, reconnection proceeds in the presence of a net guide field. We report detailed laboratory observations in MRX of the structure of reconnection current sheets with a guide field in a two-fluid plasma regime (ion gyro-radius comparable to the current sheet width). We observe experimentally for the first time the quadrupolar electron pressure variation in the ion-diffusion region, an analogue of the quadrupolar "Hall" magnetic fields in anti-parallel reconnection. The quadrupolar pressure perturbation was originally predicted by extended MHD simulation as essential to balancing the large parallel reconnection electric fields over the ion-scale current sheet. We observe that electron density variations dominate temperature variations and may provide a new diagnostic of reconnection with finite guide field for fusion experiments and spacecraft missions. We discuss consequences for force balance in the reconnection layer and implications for fast reconnection in fusion devices. 2D Full-wave simulations of plasma waves in space and tokamak plasmas Eun-Hwa Kim, PPPL, abstract, slides [#s12, 28 Apr 2016] A 2D full-wave simulation code (so-called FW2D) has recently been developed. This code currently solves the cold plasma wave equations using the finite element method. One advantage of using the finite element method is that the local basis functions can be readily adapted to boundary shapes and can be packed in such a way as to provide higher resolution in regions where needed. We have constructed a 2D triangular mesh given a specified boundary and a target mesh density function. Moreover, the density of the mesh can be specified based on the expected wavelength obtained from solution of the local dispersion relation (except close to resonances) so that the most efficient resolution is used. Another advantage of this wave code is its short running time. For instance, with 24,395 nodes, approximately 300 seconds of CPU time are required to obtain a solution. The wave code has been successfully applied to describe low frequency waves in Earth's and Mercury's multi-ion magnetospheres. The results include (a) mode conversion from the incoming fast to the transverse wave modes at the ion-ion hybrid resonance, (b) mode coupling and polarization reversal between left-handed (i.e., electromagnetic ion cyclotron waves: EMIC waves) and right-handed polarized waves (i.e., fast mode), and (c) refraction and reflection of field-aligned propagating EMIC waves near the heavier ion cyclotron frequency. Very recently the FW2D code has been extended to tokamak geometry to examine radio frequency (RF) waves in the scrape-off layer (SOL) of tokamaks, which is the region of the plasma between the last closed flux surface and the tokamak vessel.
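Since the SOL can be treated as a cold plasma, the dielectric response that enters such a wave solver reduces to the standard Stix parameters S, D and P; a minimal sketch for a single-ion-species plasma is given below (textbook expressions; the frequency, density and field values are placeholders, not FW2D inputs).

```python
import numpy as np

EPS0, E_CHARGE = 8.854e-12, 1.602e-19
M_E, M_I = 9.109e-31, 1.673e-27            # electron and proton masses [kg]

def stix_SDP(omega, n_e, B):
    """Cold-plasma Stix parameters for a quasi-neutral electron-proton plasma."""
    S, D, P = 1.0, 0.0, 1.0
    for m, q in ((M_E, -E_CHARGE), (M_I, E_CHARGE)):
        wp2 = n_e * E_CHARGE**2 / (EPS0 * m)   # plasma frequency squared
        wc = q * B / m                         # signed cyclotron frequency
        S -= wp2 / (omega**2 - wc**2)
        D += (wc / omega) * wp2 / (omega**2 - wc**2)
        P -= wp2 / omega**2
    return S, D, P

# Placeholder SOL-like parameters: f = 50 MHz, n_e = 1e18 m^-3, B = 2 T.
print(stix_SDP(omega=2 * np.pi * 50e6, n_e=1e18, B=2.0))
```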
The SOL region is important for RF wave heating of tokamaks because significant wave power loss can occur in this region. This code is ideal for waves in SOL plasma, because realistic boundary shapes and arbitrary density structures can be easily adopted in the code and the SOL plasma can be approximated as cold plasma. Parallel electron force balance and the L-H transition Tim Stoltzfus-Dueck, Princeton University, abstract, slides In a popular description of the L-H transition, energy transfer to the mean flows directly depletes kinetic energy from turbulent fluctuations, resulting in suppression of the turbulence and a corresponding transport bifurcation. However, electron parallel force balance couples non-zonal velocity fluctuations with electron pressure fluctuations on rapid timescales, comparable with the electron transit time. For this reason, energy in the non-zonal velocity stays in a fairly fixed ratio to electron thermal free energy, at least for frequency scales much slower than electron transit. In order for direct depletion of the energy in turbulent fluctuations to cause the L-H transition, energy transfer via Reynolds stress must therefore drain enough energy to significantly reduce the sum of the free energy in non-zonal velocities and electron pressure fluctuations. At low $k$, the electron thermal free energy is much larger than the energy in non-zonal velocities, posing a stark challenge for this model of the L-H transition. Realistic characterizations of chirping instabilities in tokamaks Vinícius Duarte, PPPL, abstract, slides [#s14, 31 Mar 2016] In tokamak plasmas, the dynamics of phase-space structures with their associated frequency chirping is a topic of major interest in connection with mechanisms for fast ion losses. The onset of phase-space holes and clumps which produce chirping phenomena has been theoretically shown to be related to the emergence of an explosive solution of an integro-differential, nonlocal cubic equation [1,2] that governs the early mode amplitude evolution in the nonlinear regime near marginal stability. We have extended the analysis of the model to quantitatively account for multiple resonance surfaces of a given mode in the presence of drag and diffusion (due to collisions and micro-turbulence) operators. Then a more realistic criterion is found that takes into account the details of the mode structure and the variation of transport coefficients in phase space, to determine whether steady-state solutions can or cannot exist. Stable, steady-state solutions indicate that chirping oscillations do not arise, while the lack of steady solutions due to the predominance of drag is indicative that a frequency chirping response is likely in a plasma. Waves measured in experiments have been analyzed using the NOVA and NOVA-K codes, with which we can realistically account for the mode structure and varying resonance responses spread over phase space. In the experiments presently analyzed, compatibility has been found between the theoretical predictions for whether chirping should or should not arise and the experimental observation or lack of observation of toroidicity-induced Alfvén eigenmodes in NSTX, DIII-D and TFTR. We have found that stochastic diffusion due to wave micro-turbulence is the dominant energetic particle transport mechanism in many plasma experiments, and its strength is the key as to whether chirping solutions are likely to arise. [1] H.L. Berk, B.N. Breizman & M. Pekker, Phys. Rev. Lett. 76, 1256 (1996) [2] M.K. Lilley, B.N. 
Breizman & S.E. Sharapov, Phys. Rev. Lett. 102, 195003 (2009) Speed-Limited Particle-in-Cell (SLPIC) Method John R. Cary, U. Colorado, Boulder, abstract, slides The Speed-Limited Particle-In-Cell (SLPIC) method reduces computational requirements for simulations that evolve on ion time scales while keeping appropriate kinetic electron effects. This method works by introducing an ansatz for the distribution function that allows the new unknown phase-space function to be solved by the method of characteristics, where those characteristics move slowly through phase space. Therefore, the solution can be obtained by particle-in-cell (PIC) methods in which the electrons have speeds much smaller than their actual speeds, leading to a much relaxed numerical (CFL) stability condition on the time step. Ultimately, the time step can be increased by a factor of the square root of the ion-electron mass ratio. SLPIC can be easily implemented in existing PIC codes as it requires no changes to deposition and field solution. Its explicit nature makes it ideal for modern computing architectures with vector instruction sets. Free-Boundary Axisymmetric Plasma Equilibria: Computational Methods and Applications Holger Heumann, INRIA, France, abstract, slides We present a comprehensive survey of the various computational methods for finding axisymmetric plasma equilibria. Our focus is on free-boundary plasma equilibria, where either poloidal field coil currents or the temporal evolution of voltages in poloidal field circuit systems are given data. Centered around a piecewise linear finite element representation of the poloidal flux map, our approach allows in large part the use of established numerical schemes. The coupling of a finite element method and a boundary element method gives consistent numerical solutions for equilibrium problems in unbounded domains. We formulate a Newton-type method for the discretized non-linear problem to tackle the various non-linearities, including the free plasma boundary. The Newton method guarantees fast convergence and is the main building block for the inverse equilibrium problems that we discuss as well. The inverse problems aim either at finding poloidal field coil currents that ensure a desired shape and position of the plasma or at finding the evolution of the voltages in the poloidal field circuit systems that ensure a prescribed evolution of the plasma shape and position. We provide equilibrium simulations for the tokamaks ITER and WEST to illustrate performance and application areas. Mixed finite-element/finite difference method for toroidal field-aligned elliptic electromagnetic equations Salomon Janhunen, PPPL, abstract [#s17, 29 Feb 2016] Gyrokinetic simulations -- such as those performed by the XGC code -- provide a self-consistent framework to investigate a wide range of physics in strongly magnetized high temperature laboratory plasmas, including global modes usually considered to be in the realm of MHD simulations. However, present simulation models generally concentrate on short-wavelength electromagnetic modes, mostly for reasons of field-solver performance. To incorporate more global fluid-like modes, non-zonal long-wavelength physics also needs to be retained. In this work we present the development of a fully 3D mixed FEM/FDM electromagnetic field solver for use in the gyrokinetic code XGC1.
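To make the finite-element ingredient of such a field solver concrete, here is a minimal 1D Poisson example with piecewise-linear elements, element-by-element assembly of a sparse matrix, and a direct solve (a generic textbook sketch, not the XGC1 solver; the manufactured source is chosen so the exact solution is known).

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

# Solve -u'' = f on [0, 1] with u(0) = u(1) = 0 using piecewise-linear FEM.
N = 100                                        # number of elements
x = np.linspace(0.0, 1.0, N + 1)
h = np.diff(x)
f = lambda s: np.pi**2 * np.sin(np.pi * s)     # manufactured source: u = sin(pi x)

A = lil_matrix((N + 1, N + 1))
b = np.zeros(N + 1)
for e in range(N):                             # element-by-element assembly
    ke = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # stiffness
    fe = 0.5 * h[e] * f(0.5 * (x[e] + x[e + 1]))               # midpoint-rule load
    for a in range(2):
        b[e + a] += fe
        for c in range(2):
            A[e + a, e + c] += ke[a, c]

A = csr_matrix(A)
u = np.zeros(N + 1)                            # homogeneous Dirichlet ends
u[1:N] = spsolve(A[1:N, 1:N], b[1:N])
print(np.abs(u - np.sin(np.pi * x)).max())     # small discretization error
```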
We present optimization for use on massively parallel computational platforms, investigate numerical accuracy characteristics using the method of manufactured solutions, and evaluate the importance of field-line-length calculations for the stability of the discretization. We also invite discussion on the importance of the perpendicular vector potential for pressure-driven modes. Gyrokinetic Particle Simulation of Fast Electron Driven Beta-induced Alfvén Eigenmodes Wenlu Zhang, Chinese Academy of Science & U.C. Irvine, abstract The fast electron driven beta-induced Alfvén eigenmode (e-BAE) in toroidal plasmas is investigated for the first time using global gyrokinetic particle simulations, where the fast electrons are described by the drift kinetic model. The phase space structure shows that only the precessional resonance is responsible for the e-BAE excitation, while the fast-ion driven BAE can be excited through all the channels, such as transit, drift-bounce, and precessional resonance. Frequency chirping is observed in nonlinear simulations with both weak and strong drives in the absence of sources and sinks, which provides a complement to the standard `bump-on-tail` paradigm for the frequency chirping of Alfvén eigenmodes. For the weakly driven nonlinear case, the frequency is observed to be in phase with the particle energy flux, and the mode structure is almost the same as in the linear stage. In the strongly driven nonlinear case, the BAAE is excited along with the BAE after the BAE mode saturates. Analysis of nonlinear wave-particle interactions shows that the frequency chirping is induced by the nonlinear evolution of the coherent structures in the energetic-particle phase space, where the dynamics of the coherent structure is controlled by the formation and destruction of phase-space islands of energetic particles in the canonical variables. Zonal flow and zonal field are found to affect wave-particle resonance in the nonlinear e-BAE simulations. Theory for Transport Properties of Warm Dense Matter Scott Baalrud, University of Iowa, abstract, slides [#s5, 18 Feb 2016] Progress in a number of research frontiers relies upon an accurate description of the transport coefficients of warm and hot dense matter, characterized by densities near those of solids and temperatures ranging from several to hundreds of eV. Examples include inertial confinement fusion, the evolution of giant planets, exoplanets, and other compact astrophysical objects such as white dwarf stars, as well as numerous high energy density laboratory experiments. These conditions are too dense for standard plasma theories to apply and too hot for condensed matter theories to apply. The challenge is to account for the combined effects of strong Coulomb coupling of ions and quantum degeneracy of electrons. This seminar will discuss the first theory to provide fast and accurate predictions of ionic transport coefficients in this regime. The approach combines two recent developments. One is the effective potential theory (EPT), which is a physically motivated approach to extend plasma kinetic theory into the strong coupling regime. The second is a new average atom model, which provides accurate radial density distributions at high-density conditions, accounting for effects such as pressure ionization. Results are compared with state-of-the-art orbital-free density functional theory computations, revealing that the theory is accurate from high temperature through the warm dense matter regime, breaking down when the system exhibits liquid-like behaviors.
A number of properties are considered, including diffusion, viscosity and thermal conductivity. Nonlinear Fishbone Dynamics in Spherical Tokamaks with Toroidal Rotation Feng Wang, PPPL, abstract, slides The fishbone is one of the most important energetic-particle-driven modes in tokamaks. A numerical study of the nonlinear dynamics of the fishbone has been carried out in this work. Realistic parameters with finite toroidal plasma rotation are used to understand nonlinear frequency chirping in NSTX. We have carried out a systematic study of nonlinear frequency chirping and energetic particle dynamics. It is found that, linearly, the mode is driven by both trapped particles and passing particles, with resonance condition $\omega_{d} \simeq \omega$ for trapped particles and $\omega_{\phi}+\omega_{\theta}\simeq\omega$ for passing particles, where $\omega_{d}$ is the trapped-particle toroidal precession frequency, and $\omega_{\phi}$, $\omega_{\theta}$ are the passing-particle transit frequencies in the toroidal and poloidal directions. As the mode grows, trapped resonant particles oscillate and move outward radially, which reduces their precessional frequency. We believe this is the main reason for the mode frequency chirping down. Finally, as the mode frequency chirps down, initially non-resonant particles with lower precessional frequencies become resonant in the nonlinear regime. This effect can sustain the quasi-steady-state mode amplitude observed in the simulation. Generation of anomalously energetic suprathermal electrons due to collisionless interaction of an electron beam with a nonuniform plasma Igor Kaganovich, PPPL, abstract, slides Electrons emitted by electrodes surrounding or immersed in the plasma are accelerated by the sheath electric field and become electron beams penetrating the plasma. Recently, it was reported that the electron energy distribution measured in a dc-rf discharge with 800 V dc voltage has not only a peak at 800 eV, corresponding to the electrons emitted from the dc-biased electrode, but also a significant fraction of accelerated electrons with energies up to several hundred eV. Particle-in-cell simulation results show that the acceleration may be caused by effects related to the plasma nonuniformity. The electron beam excites plasma waves whose wavelength and phase speed gradually decrease towards the anode. The short waves near the anode accelerate plasma bulk electrons to suprathermal energies because of multiple interactions of electrons with the wave region [1]. The two-stream instability of an electron beam propagating in a finite-size plasma placed between two electrodes was also studied analytically. It is shown that the growth rate in such a system is much smaller than that of an infinite plasma or a finite-size plasma with periodic boundary conditions. We show that even if the width of the plasma matches the resonance condition for standing waves, standing waves do not develop but instead transform into a spatially growing wave, whose growth rate is small compared to that of the standing wave in a system with periodic boundary conditions. The frequency and growth rate as a function of plasma width form a band structure [2]. [1] D. Sydorenko, I.D. Kaganovich et al., "Generation of anomalously energetic suprathermal electrons by an electron beam interacting with a nonuniform plasma", arXiv:1503.05048 (2015) [2] I.D.
Kaganovich & D.Sydorenko, "Band Structure of the Growth Rate of the Two-Stream Instability of an Electron Beam Propagating in a Bounded Plasma" arXiv:1503.04695 (2015). Nonlinear Alfvén waves in generalized magnetohydrodynamics - a creation on Casimir leaves Hamdi M. Abdelhamid, University of Tokyo, abstract Alfvén waves are the most typical electromagnetic phenomena in magnetized plasma. In particular, nonlinear Alfvén waves deeply affect various plasma regimes in laboratory as well as in space, which have a crucial role in plasma heating, turbulence, etc. Large-amplitude Alfvén waves are observed in various systems in space and laboratories, demonstrating an interesting property that the wave shapes are stable even in the nonlinear regime. The ideal magnetohydrodynamics (MHD) model predicts that an Alfvén wave keeps an arbitrary shape constant when it propagates on a homogeneous ambient magnetic field. Here we investigate the underlying mechanism invoking a more accurate framework, generalized MHD. When we take into account dispersion effects (we consider both ion and electron inertial effects), the wave forms are no longer arbitrary. Interestingly, these "small-scale effects" change the picture completely; the large-scale component of the wave cannot be independent of the small scale component, and the coexistence of them forbids the large scale component to have a free wave form. This is a manifestation of the nonlinearity-dispersion interplay, which is somewhat different from that of solitons. The Casimir invariants of the system is the root cause of this interesting property. Runaway Mitigation Issues on ITER Allen Boozer, Columbia University, abstract, slides [#s2, 25 Jan 2016] The plasma current in ITER can be transferred from near thermal to relativistic electrons by the runaway phenomenon. If such a current of relativistic electrons were to strike the chamber walls in ITER, the machine could be out of commission for many months. For ITER to be operable as a research device, the shortest credible time between such events must be years. The physics of the runaway process is remarkably simple and clear. The major uncertainty is what range of plasma conditions may arise in post thermal quench ITER plasmas. Consequently, a focused effort that includes theory, experiments, and engineering could relatively quickly clarify whether ITER will be operable with the envisioned mitigation strategy and what mitigation strategies could enhance the credibility that ITER will be operable. Non-twist map bifurcation of drift-lines and drift-island formation in saturated 3D MHD equilibria David Pfefferlé, PPPL, abstract Based on non-canonical perturbation theory of the field-line action [1], guiding-centre drift equations are identified as perturbed magnetic field-line equations. In this context, passing-particle orbits are called drift-lines, and their topology is completely determined by the magnetic configuration. In axisymmetric tokamak fields, drift-lines lie on shifted flux-surfaces, i.e. drift-surfaces, the shift being proportional at lowest order to the parallel gyro-radius and the q-profile [2]. Field-lines as well as drift-lines produce island structures at rational surfaces [3] only when a non-axisymmetric magnetic component is added. The picture is different in the case of 3D saturated MHD equilibrium like the helical core associated with a non-resonant internal kink mode. 
Assuming nested flux-surfaces, such bifurcated states, expected for a reversed q-profile with qmin close to yet above unity [4] and conveniently obtained in VMEC [5], feature integrable field-lines. The helical drift-lines, however, become resonant with the axisymmetric component in the region of qmin and spontaneously generate drift-islands. Due to the locally reversed-shear q-profile, the drift-island structure follows the bifurcation/reconnection mechanism found in non-twist maps [6, 7]. This result provides a theoretical interpretation of NBI fast-ion helical hot-spots in Long-Lived Modes [8] as well as snake-like impurity density accumulation during internal MHD activity [9]. [1] John R. Cary & Robert G. Littlejohn, Ann. Physics 151, 1 (1983) [2] Machiel de Rover, Niek J. Lopes Cardozo & Attila Montvai, Phys. Plasmas 3, 4478 (1996) [3] M. Brambilla & A.J. Lichtenberg, Nucl. Fusion 13, 517 (1973) [4] F.L. Waelbroeck, Phys. Fluids B 1, 499 (1989) [5] W.A. Cooper, J.P. Graves et al., Phys. Rev. Lett. 105, 035003 (2010) [6] P.J. Morrison, Phys. Plasmas 7, 2279 (2000) [7] R. Balescu, Phys. Rev. E 58, 3781 (1998) [8] D. Pfefferlé, J.P. Graves et al., Nucl. Fusion 54, 064020 (2014) [9] L. Delgado-Aparicio, L. Sugiyama et al., Phys. Rev. Lett. 110, 065006 (2013) Nonlinear quantum electrodynamics in strong laser fields: From basic concepts to electron-positron photoproduction Sebastian Meuren, Max-Planck-Institut für Kernphysik, Germany, abstract, slides [#s20, 14 Jan 2016] Since the work of Sauter in 1931, it has been known that quantum electrodynamics (QED) exhibits a so-called "critical" electromagnetic field scale at which the quantum interaction between photons and macroscopic electromagnetic fields becomes nonlinear. One prominent example is the importance of light-light interactions in vacuum at this scale, which violate the superposition principle of classical electrodynamics. Furthermore, an electromagnetic field becomes unstable in this regime, as electron-positron pairs can be spontaneously created from the vacuum at the expense of electromagnetic-field energy (Schwinger mechanism). Unfortunately, the QED critical field scale is so high that experimental investigations are challenging. One promising pathway to explore QED in the nonlinear domain with existing technology consists in combining modern (multi-)petawatt optical laser systems with highly energetic particles. The suitability of this approach was first demonstrated in the mid-1990s at the seminal SLAC E-144 experiment. Since then, laser technology has developed continuously, heralding the dawn of a new era of strong-field QED experiments. For instance, the basic processes of nonlinear Compton scattering and Breit-Wheeler pair production are expected to influence laser-matter interactions, and in particular plasma physics, at soon-to-be-available laser intensities. Therefore, a considerable effort is being undertaken to include these processes in the particle-in-cell (PIC) codes used for numerical simulations. In the first part of the talk, the most prominent nonlinear QED phenomena are presented and discussed on a qualitative level. Afterwards, the mathematical formalism needed for calculations with strong plane-wave background fields is introduced with an emphasis on fundamental concepts. Finally, the nonlinear Breit-Wheeler process is considered in more depth. In particular, the semiclassical approximation is elaborated, which serves as a starting point for the implementation of quantum processes into PIC codes.
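For reference (this value is not part of the abstract above), the critical field in question is the Sauter-Schwinger field, $E_{\mathrm{cr}} = m_e^2 c^3/(e\hbar) \approx 1.3\times 10^{18}\ \mathrm{V/m}$, i.e., the field strength that performs work $m_e c^2$ on an electron over a reduced Compton wavelength; the strongest laser fields available today lie several orders of magnitude below this value.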
Attempting a Theoretical Framework for High Energy Density Matter Swadesh Mahajan, U. Texas, Austin, abstract, slides Two distinct approaches to construct a theoretical framework for fully relativistic high energy density (HED) systems- in particular, an assembly of particles in the field of an electromagnetic (EM) wave of arbitrary magnitude- are explored: 1) The Statistical-Mechanical model for the HED matter is built through the following steps: First, the eigenvalue problem for a quantum relativistic particle immersed in an arbitrary strength EM field is solved; the resulting energy eigenvalue (dependent on $A$, $\omega$ and $k$) is invoked to define the appropriate Boltzmann factor for constructing expressions for physical variables for a weakly interacting system of these field-dressed particles. The fluid equations are the conservation laws, 2) In the second approach, an equivalent nonlinear quantum mechanics is constructed to represent a hot fluid with and without internal degrees of freedom (like spin). Representative initial results are displayed and discussed. Some notable results are: 1) fundamental changes in the particle energy momentum relationship, 2) The EM wave induces anisotropy in the energy momentum tensor, 3) the EM wave splits the spin-degenerate states, 4) the propagation characteristics of the EM wave are modified by thermal and field effects causing differential self-induced transparency, 5) Particle trapping and ``pushing'' by the high amplitude EM wave. Attempts will be made to highlight testable predictions. The electrostatic response to edge islands induced by Resonant Magnetic Perturbations Gianluca Spizzo, Consorzio RFX, Italy, abstract, slides [#s22, 03 Dec 2015] Measurements of plasma potential have been experimentally determined in great detail in the edge of the RFX reversed-field pinch (RFP) [1], and of the TEXTOR tokamak, with applied magnetic perturbations (MP's) [2]. Generally speaking, the potential has the form $\Phi(r,t,z) = \Phi_0 \sin u$, with $u$ the helical angle: this fact implies a strong correlation between the magnetic field topology and the poloidal/toroidal modulation of the measured plasma potential. In a chaotic tokamak edge, the ion and electron drifts yield a predominantly electron driven radial diffusion when approaching the island X-point, while ion diffusivities are generally an order of magnitude smaller. In the RFP the picture is more complicated, since X-points can act both as drivers of electron diffusion, or dynamical traps (reduced electron diffusion), depending on the helicity of the dominant island [3]. In both devices, this differential electron-to-ion diffusion, causes a strong radial electric field structure pointing outward (inward) from the island O-point. An analytical model for the plasma potential is implemented in the code Orbit [4], and analyses of the ambipolar flow shows that both ion- and electron-dominated transport regimes can exist, which are known as ion and electron roots in stellarators. Moreover, the good agreement found between measured and modeled plasma potential supports that a magnetic island in the plasma edge can act as convective cell. These findings and comparison with stellarator literature, suggests that the role of magnetic islands as convective cells and hence as major radial particle transport drivers could be a generic mechanism in 3D plasma boundary layers. [1] P. Scarin, N. Vianello et al., Nucl. Fusion 51, 073002 (2011) [1] N. Vianello, C. Rea et al., Plasma Phys. Control. F. 
57, 014027 (2015) [2] G. Ciaccio, O. Schmitz et al., Phys. Plasmas 22, 102516 (2015) [3] G. Spizzo, N. Vianello et al., Phys. Plasmas 21, 056102 (2014) [4] R.B. White & M.S. Chance, Phys. Fluids 27, 2455 (1984)
CommonCrawl
Verifying the correctness of a Sudoku solution A Sudoku is solved correctly, if all columns, all rows and all 9 subsquares are filled with the numbers 1 to 9 without repetition. Hence, in order to verify if a (correct) solution is correct, one has to check by definition 27 arrays of length 9. Q1: Are there verification strategies that reduce this number of checks ? Q2: What is the minimal number of checks that verify the correctness of a (correct) solution ? (Image sources from Wayback Machine: first second) The following simple observation yields an improved verification algorithm: At first enumerate rows, columns and subsquares as indicated in pic 2. Suppose the columns $c_1,c_2,c_3$ and the subsquares $s_1, s_4$ are correct (i.e. contain exactly the numbers 1 to 9). Then it's easy to see that $s_7$ is correct as well. This shows: (A1) If all columns, all rows and 4 subsquares are correct, then the solution is correct. Now suppose all columns and all rows up to $r_9$ and the subsquares $s_1,s_2,s_4,s_5$ are correct. By the consideration above, $s_7,s_8,s_9$ and $s_3,s_6$ are correct. Moreover, $r_9$ has to be correct, too. For, suppose a number, say 1, occurs twice in $r_9$. Since the subsquares are correct, the two 1's have be in different subsquares, say $s_7,s_8$. Hence the 1's from rows $r_7, r_8$ both have to lie in $s_9$, i.e. $s_9$ isn't correct. This is the desired contradiction. Hence (A1) can be further improved to (A2) If all columns and all rows up to one and 4 subsquares are correct, then the solution is correct. This gives as upper bound for Q2 the need of checking 21 arrays of length 9. Q3: Can the handy algorithm (A2) be further improved ? co.combinatorics combinatorial-game-theory sudoku jeq RalphRalph $\begingroup$ You can recognize a mathematician in that he always checks his Sudoku is correct because he knows that at this point he has only proved unicity :) $\endgroup$ – François Brunault Apr 29 '13 at 20:45 $\begingroup$ If we consider the $c_i$, $r_j$ and $s_k$ as elements in the free abelian group with basis $\{1,\ldots,9\}$, then relations of the form $r_1+r_2+r_3=s_1+s_2+s_3$ show that e.g. correctness of $r_1,r_2,r_3,s_1,s_2$ implies correctness of $s_3$. $\endgroup$ – François Brunault Apr 29 '13 at 20:55 $\begingroup$ (A2) doesn't work if three of the four subsquares are aligned. $\endgroup$ – Denis Serre Apr 29 '13 at 21:03 $\begingroup$ @François: Exactly! I think your conclusion in the 2nd comment is just the row-version of what I described for columns in the paragraph after the pics. $\endgroup$ – Ralph Apr 29 '13 at 21:30 $\begingroup$ @Denis: Thanks. I'll correct it later. $\endgroup$ – Ralph Apr 29 '13 at 21:35 $\DeclareMathOperator\span{span}$Here is an argument which works for general $n\times n$ Sudokus, $n\ge 2$, using some ideas from the other answers (namely, casting the problem in terms of linear algebra as in François Brunault's answer, and the notion of alternating paths below is related to the even sets as in Tony Huynh's answer, attributed to Zack Wolske). I will denote the cells as $s_{ijkl}$ with $0\le i,j,k,l< n$, where $i$ identifies the band, $j$ the stack, $k$ the row within band $i$, and $l$ the column within stack $j$. Rows, columns, and blocks are denoted $r_{ik},c_{jl},b_{ij}$ accordingly. Let $X=\{r_{ik},c_{jl},b_{ij}:i,j,k,l< n\}$ be the set of all $3n^2$ checks. For $S\subseteq X$ and $x\in X$, I will again denote by $S\models x$ the consequence relation "every Sudoku grid satisfying all checks from $S$ also satisfies $x$". 
Let $V$ be the $\mathbb Q$-linear space with basis $X$, and $V_0$ be the span of the vectors $\sum_kr_{ik}-\sum_jb_{ij}$ for $i< n$, and $\sum_lc_{jl}-\sum_ib_{ij}$ for $j< n$. Lemma 1: If $x\in\span(S\cup V_0)$, then $S\models x$. Proof: A grid $G$ induces a linear mapping $\phi_G$ from $V$ into an $n^2$-dimensional such that for any $x'\in X$, the $i$th coordinate of $\phi_G(x')$ gives the number of occurrences of the number $i$ in $x'$. We have $\phi_G(V_0)=0$, and $G$ satisfies $x'$ iff $\phi_G(x')$ is the constant vector $\vec 1$. If $x=\sum_i\alpha_ix_i+y$, where $x_i\in S$ and $y\in V_0$, then $\phi_G(x)=\vec\alpha$ for $\alpha:=\sum_i\alpha_i$. The same holds for every grid $G'$ satisfying $S$; in particular, it holds for any valid grid, which has $\phi_{G'}(x)=\vec1$, hence $\alpha=1$. QED We intend to prove that the converse holds as well, so assume that $x\notin\span(S\cup V_0)$. We may assume WLOG $x=r_{00}$ or $x=b_{00}$, and we may also assume that $r_{i0}\notin S$ whenever $r_{ik}\notin S$ for some $k$, and $c_{j0}\notin S$ whenever $c_{jl}\notin S$ for some $l$. By assumption, there exists a linear function $\psi\colon V\to\mathbb Q$ such that $\psi(S\cup V_0)=0$, and $\psi(x)\ne0$. The space of all linear functions on $V$ vanishing on $V_0$ has dimension $3n^2-2n$, and one checks easily that the following functions form its basis: $\omega_{ik}$ for $0\le i< n$, $0< k< n$: $\omega_{ik}(r_{ik})=1$, $\omega_{ik}(r_{i0})=-1$. $\eta_{jl}$ for $0\le j< n$, $0< l< n$: $\eta_{jl}(c_{jl})=1$, $\eta_{jl}(c_{j0})=-1$. $\xi_{ij}$ for $i,j< n$: $\xi_{ij}(r_{i0})=\xi_{ij}(c_{j0})=\xi_{ij}(b_{ij})=1$. (The functions are zero on basis elements not shown above.) We can thus write $$\psi=\sum_{ik}u_{ik}\omega_{ik}+\sum_{jl}v_{jl}\eta_{jl}+\sum_{ij}z_{ij}\xi_{ij}.$$ If $r_{ik}\in S$, $k\ne0$, then $0=\psi(r_{ik})=u_{ik}$, and similarly $c_{jl}\in S$ for $l\ne0$ implies $v_{jl}=0$. Thus, the functions $\omega_{ik}$ and $\eta_{jl}$ that appear in $\psi$ with a nonzero coefficient individually vanish on $S$. The only case when they can be nonzero on $x$ is $\omega_{0k}$ if $x=r_{00}$ and $r_{00},r_{0k}\notin S$, but then taking any valid grid and swapping cells $s_{0000}$ and $s_{00k0}$ shows that $S\nvDash x$ and we are done. Thus we may assume that the first two sums in $\psi$ vanish on $S\cup\{x\}$, and therefore the third one vanishes on $S$ but not on $x$, i.e., WLOG $$\psi=\sum_{ij}z_{ij}\xi_{ij}.$$ That $\psi$ vanishes on $S$ is then equivalent to the following conditions on the matrix $Z=(z_{ij})_{i,j< n}$: $z_{ij}=0$ if $b_{ij}\in S$, $\sum_jz_{ij}=0$ if $r_{i0}\in S$, $\sum_iz_{ij}=0$ if $c_{j0}\in S$. Let us say that an alternating path is a sequence $e=e_p,e_{p+1},\dots,e_q$ of pairs $e_m=(i_m,j_m)$, $0\le i_m,j_m< n$, such that $i_m=i_{m+1}$ if $m$ is even, and $j_m=j_{m+1}$ if $m$ is odd, the indices $i_p,i_{p+2},\dots$ are pairwise distinct, except that we may have $e_p=e_q$ if $q-p\ge4$ is even, likewise for the $j$s. If $m$ is even, the incoming line of $e_m$ is the column $c_{j_m0}$, and its outgoing line is the row $r_{i_m0}$. If $m$ is odd, we define it in the opposite way. An alternating path for $S$ is an alternating path $e$ such that $b_{i_mj_m}\notin S$ for every $m$, and either $e_p=e_q$ and $q-p\ge4$ is even ($e$ is an alternating cycle), or the incoming line of $e_p$ and the outgoing line of $e_q$ do not belong to $S$. Every alternating path $e$ induces a matrix $Z_e$ which has $(-1)^m$ at position $e_m$ for $m=p,\dots,q$, and $0$ elsewhere. 
It is easy to see that if $e$ is an alternating path for $S$, then $Z_e$ satisfies conditions 1, 2, 3. Lemma 2: The space of matrices $Z$ satisfying 1, 2, 3 is spanned by matrices induced by alternating paths for $S$. Proof: We may assume that $Z$ has integer entries, and we will proceed by induction on $\|Z\|:=\sum_{ij}|z_{ij}|$. If $Z\ne 0$, pick $e_0=(i_0,j_0)$ such that $z_{i_0j_0}>0$. If the outgoing line of $e_0$ is outside $S$, we put $q=0$, otherwise condition 2 guarantees that $z_{i_0,j_1}< 0$ for some $j_1$, and we put $i_1=i_0$, $e_1=(i_1,j_1)$. If the outgoing line of $e_1$ is outside $S$, we put $q=1$, otherwise we find $i_2$ such that $z_{i_2j_1}>0$ by condition 3, and put $j_2=j_1$. Continuing in this fashion, one of the following things will happen sooner or later: The outgoing line of the last point $e_m$ constructed contains another point $e_{m'}$ (and therefore two such points, unless $m'=0$). In this case, we let $p$ be the maximal such $m'$, we put $q=m+1$, $e_q=e_p$ to make a cycle, and we drop the part of the path up to $e_{p-1}$. The outgoing line of $e_m$ is outside $S$. We put $q=m$. In the second case, we repeat the same construction going backwards from $e_0$. Again, either we find a cycle, or the construction stops with an $e_p$ whose incoming line is outside $S$. Either way, we obtain an alternating path for $S$ (condition 1 guarantees that $b_{i_mj_m}\notin S$ for every $m$). Moreover, the nonzero entries of $Z_e$ have the same sign as the corresponding entries of $Z$, thus $\|Z-Z_e\|<\|Z\|$. By the induction hypothesis, $Z-Z_e$, and therefore $Z$, is a linear combination of some $Z_e$s. QED Now, Lemma 2 implies that we may assume that our $\psi$ comes from a matrix $Z=Z_e$ induced by an alternating path $e=e_p,\dots,e_q$. Assume that $G$ is a valid Sudoku grid that has $1$ in cells $s_{i_mj_m00}$ for $m$ even, and $2$ for $m$ odd. Let $G'$ be the grid obtained from $G$ by exchanging $1$ and $2$ in these positions. Then $G'$ violates the following checks: $b_{i_mj_m}$ for each $m$. If $e$ is not a cycle, the incoming line of $e_p$, and the outgoing line of $e_q$. Since $e$ is an alternating path for $S$, none of these is in $S$. On the other hand, $\psi(x)\ne0$ implies that $x$ is among the violated checks, hence $S\nvDash x$. It remains to show that such a valid grid $G$ exists. We can now forget about $S$, and then it is easy to see that every alternating path can be completed to a cycle, hence we may assume $e$ is a cycle. By applying Sudoku permutations and relabelling the sequence, we may assume $p=0$, $i_m=\lfloor m/2\rfloor$, $j_m=\lceil m/2\rceil$ except that $i_q=j_q=j_{q-1}=0$. We are thus looking for a solution of the following grid: $$\begin{array}{|ccc|ccc|ccc|ccc|ccc|} \hline 1&&&2&&&&&&&&&&&&\\ \strut&&&&&&&&&&&&&&&\\ \strut&&&&&&&&&&&&&&&\\ \hline &&&1&&&2&&&&&&&&&\\ &&&&&&&&&&&&&&\cdots&\\ &&&&&&&&\ddots&&&&&&&\\ \hline 2&&&&&&&&&1&&&&&&\\ \strut&&&&&&&&&&&&&&&\\ \strut&&&&&&&&&&&&&&&\\ \hline \strut&&&&&&&&&&&&&&&\\ \strut&&&&\vdots&&&&&&&&&&&\\ \strut&&&&&&&&&&&&&&&\\ \hline \end{array}$$ where the upper part is a $q'\times q'$ subgrid, $q'=q/2$. If $q'=n$, we can define the solution easily by putting $s_{ijkl}=(k+l,j-i+l)$, where we relabel the numbers $1,\dots,n^2$ by elements of $(\mathbb Z/n\mathbb Z)\times(\mathbb Z/n\mathbb Z)$, identifying $1$ with $(0,0)$ and $2$ with $(0,1)$. In the general case, we define $s_{ijkl}=(k+l+a_{ij}-b_{ij},l+a_{ij})$. 
It is easy to check that this is a valid Sudoku if the columns of the matrix $A=(a_{ij})$ and the rows of $B=(b_{ij})$ are permutations of $\mathbb Z/n\mathbb Z$. We obtain the wanted pattern if we let $a_{ij}=b_{ij}=j-i\bmod{q'}$ for $i,j< q'$, and extend this in an arbitrary way so that the columns of $A$ and the rows of $B$ are permutations. This completes the proof that $x\notin\span(S\cup V_0)$ implies $S\nvDash x$. This shows that $\models$ is a linear matroid, and we get a description of maximal incomplete sets of checks by means of alternating paths. We can also describe the minimal dependent sets. Put $$D_{R,C}=\{r_{ik}:i\in R,k< n\}\cup\{c_{jl}:j\in C,l< n\}\cup\{b_{ij}:(i\in R\land j\notin C)\lor(i\notin R\land j\in C)\}$$ for $R,C\subseteq\{0,\dots,n-1\}$. If $R$ or $C$ is nonempty, so is $D_{R,C}$, and $$\sum_{i\in R}\Bigl(\sum_kr_{ik}-\sum_jb_{ij}\Bigr)-\sum_{j\in C}\Bigl(\sum_lc_{jl}-\sum_ib_{ij}\Bigr)\in V_0$$ shows that $D_{R,C}$ is dependent. On the other hand, if $D$ is a dependent set, there is a linear combination $$\sum_i\alpha_i\Bigl(\sum_kr_{ik}-\sum_jb_{ij}\Bigr)-\sum_j\beta_j\Bigl(\sum_lc_{jl}-\sum_ib_{ij}\Bigr)\ne0$$ where all basic vectors with nonzero coefficients come from $D$. If (WLOG) $\alpha:=\alpha_{i_0}\ne0$, put $R=\{i:\alpha_i=\alpha\}$ and $C=\{j:\beta_j=\alpha\}$. Then $R\ne\varnothing$, and $D_{R,C}\subseteq D$. On the one hand, this implies that every minimal dependent set is of the form $D_{R,C}$. On the other hand, $D_{R,C}$ is minimal unless it properly contains some $D_{R',C'}$, and this can happen only if $R'\subsetneq R$ and $C=C'=\varnothing$ or vice versa. Thus $D_{R,C}$ is minimal iff $|R|+|C|=1$ or both $R,C$ are nonempty. This also provides an axiomatization of $\models$ by rules of the form $D\smallsetminus\{x\}\models x$, where $x\in D=D_{R,C}$ is minimal. It is easy to see that if $R=\{i\}$ and $C\ne\varnothing$, the rules for $D_{R,C}$ can be derived from the rules for $D_{R,\varnothing}$ and $D_{\varnothing,\{j\}}$ for $j\in C$, hence we can omit these. (Note that the remaining sets $D_{R,C}$ are closed, hence the corresponding rules have to be included in every axiomatization of $\models$.) To sum it up: Theorem: Let $n\ge2$. $S\models x$ if and only if $x\in\span(S\cup V_0)$. In particular, $\models$ is a linear matroid. All minimal complete sets of checks have cardinality $3n^2-2n$. (One such set consists of all checks except for one row from each band, and one column from each stack.) The closed sets of $\models$ are intersections of maximal closed sets, which are complements of Sudoku permutations of the sets $\{b_{00},b_{01},b_{11},b_{12},\dots,b_{mm},b_{m0}\}$ for $0< m< n$ $\{c_{00},b_{00},b_{01},b_{11},b_{12},\dots,b_{mm},r_{m0}\}$ for $0\le m< n$ $\{c_{00},b_{00},b_{01},b_{11},b_{12},\dots,b_{m-1,m},c_{m1}\}$ for $0\le m< n$ The minimal dependent sets of $\models$ are the sets $D_{R,C}$, where $R,C\subseteq\{0,\dots,n-1\}$ are nonempty, or $|R|+|C|=1$. $\models$ is the smallest consequence relation such that $D_{R,C}\smallsetminus\{x\}\models x$ whenever $x\in D_{R,C}$ and either $|R|,|C|\ge2$, or $|R|+|C|=1$. Emil Jeřábek supports MonicaEmil Jeřábek supports Monica $\begingroup$ I'm sorry for the number of edits. I'm going to stop here. $\endgroup$ – Emil Jeřábek supports Monica May 7 '13 at 14:59 $\begingroup$ This is great work. Thanks a lot. The direction proved in Lemma 1 by using François' linear map is particularly elegant. I tried hard to find a similar approach for the other direction, but didn't succeed yet. 
$\endgroup$ – Ralph May 17 '13 at 10:05 The consequence relation $\models$ defined in Emil Jeřábek's answer is a matroid. In fact, it is a linear matroid. Let $X=\{r_1,\ldots,r_9,c_1,\ldots,c_9,b_1,\ldots,b_9\}$ be the set of possible checks. Recall that given $S \subset X$ and $x \in X$, the notation $S \models x$ means that every Sudoku grid which is valid on $S$ is also valid on $x$. We may embed $X$ into the free abelian group $V$ generated by the 81 cells of the Sudoku grid, by mapping a check $x \in X$ to the formal sum of the cells contained in $x$. The span of $X$ has rank $21$, and the kernel of the natural map $\mathbf{Z}X \to V$ is generated by the six relations of the form $r_1+r_2+r_3-b_1-b_2-b_3$. Proposition. We have $S \models x$ if and only if $x \in \operatorname{Vect}(S)$. Proof. By Proposition 2 from Emil's answer, the consequence relations $\models$ and $\vdash$ coincide, so we may work with $\vdash$. Let us prove that $S \vdash x$ implies $x \in \operatorname{Vect}(S)$. By transitivity, we may assume $S=D \backslash \{x\}$ for some $D \in \mathcal{D}$. It is straightforward to check that $x \in \operatorname{Vect}(D \backslash \{x\})$ in each case (i)-(iv). Conversely, let us assume $x=\sum_{s \in S} \lambda_s s$ for some $\lambda_s \in \mathbf{Z}$. Since the elements of $X$ have degree 9, we have $\sum_{s \in S} \lambda_s = 1$. Any Sudoku grid provides a linear map $\phi : V \to E$, where $E$ is the free abelian group with basis $\{1,\ldots,9\}$ (map each cell to the digit it contains). If the grid is valid on $S$ then $\phi(s)=[1]+\cdots+[9]$ for every $s \in S$, and thus $\phi(x)=[1]+\cdots+[9]$, which means that the grid is valid on $x$. QED Note that a set of checks $S$ is complete if and only if $\operatorname{Vect}(S)=\operatorname{Vect}(X)$. In particular, the minimal complete sets are those which form a basis of $\operatorname{Vect}(X)$, and it is now clear that every such set has cardinality $21$. We also obtain a description of the independent sets : these are exactly the sets which are linearly independent when considered in $V$. Any independent set may be extended to a minimal complete set (we may have worked with $\mathbf{Q}$-coefficients instead of $\mathbf{Z}$-coefficients above). François BrunaultFrançois Brunault $\begingroup$ Does the matroid property easily generalize to $n \times n$-Sudoku? $\endgroup$ – Sam Hopkins May 4 '13 at 5:01 $\begingroup$ Just for reference, I asked this as a separate question mathoverflow.net/questions/129600/is-there-a-sudoku-matroid $\endgroup$ – Tony Huynh May 4 '13 at 6:37 $\begingroup$ @Sam : Good question, I don't know the answer. In fact, I should say that all the difficult work here is contained in Emil's answer. In particular we need $\models$ = $\vdash$ (his Proposition 2) in order to prove the matroid property, and this is done by a case-by-case analysis, so it is not clear what happens for higher values of $n$. $\endgroup$ – François Brunault May 4 '13 at 18:02 $\begingroup$ @François: You managed to algebraicify the problem. Great. Thanks. $\endgroup$ – Ralph May 17 '13 at 10:06 One can use information theoretic considerations to obtain lower bounds for the number of checks. I'll prove that at least 15 checks are necessary. Proof. First note that for any two rows $r_i$ and $r_j$ (contained in the same band), it is easy to construct a Sudoku which is correct everywhere except $r_i$ and $r_j$. Thus, one must check at least 2 rows from each band, and hence at least 6 rows. 
By symmetry, one must also check at least 6 columns. Next, we define a $4$-set of $3 \times 3$ squares to be a corner set if they are the corners of a rectangle. For any corner set $S$, it is easy to construct a Sudoku which is correct on all rows, columns, and squares except for $S$. Note that any set of squares which meets all corner sets must have size at least 3 Thus, we must check at least 3 squares. $6+6+3=15.$ Edit. Here is an improvement that shows that 16 checks are in fact necessary. This idea is due to Zack Wolske (see the comments below). Call a subset of $3 \times 3$ squares an even set if it contains an even number of squares from each row and column of squares. Note that a corner set is an even set. Lemma. If $S$ is a set of at most three squares, then the complement of $S$ contains a non-empty even set. The only non-trivial verification is if $S$ is a transversal, in which case the complement of $S$ is itself an even set of size 6. This lemma shows that at least 4 squares must be checked. To see this suppose that we have only checked at most three squares. By the Lemma, we may select a non-empty even set $E$ contained in the squares we have not checked. We next label the center cell of each square in $E$ with a $1$ or a $2$ such that each row and column is either completely unlabelled or contains exactly one $1$ and one $2$. Clearly, we can extend this partial labelling to a fully correct Sudoku. If we then flip $1$ and $2$ in the center cells of $E$, we obtain a Sudoku that is incorrect on each square in $E$, but correct on all other squares, rows and columns. Thus, we must check 4 squares as claimed. $6+6+4=16$. Edit 2. I now can prove that at least 18 checks are necessary. Recall that we have so far established that at least 6 rows (at least 2 from each band), and 6 columns (at least 2 from each stack), and 4 squares are necessary. Therefore, suppose in a minimum set of checks $V$ we have checked $6+r'$ rows, $6+c'$ columns and $4+s'$ squares. Note that for each unchecked square $x$, it cannot be the case that at most two columns of $x$ and at most two rows of $x$ are checked. If so, there would be a cell of $x$ such that the row containing $x$, the column containing $x$ and the square containing $x$ are all unchecked, which is a contradiction. If $s' \geq 2$, then we are done. So, we have checked at most 5 squares. In particular, the set of unchecked squares are not all in the same column or same row. Thus, there are two unchecked squares that are in different rows and in different columns. As mentioned, both of these unchecked squares must have all rows checked or all columns checked. Therefore, $r'+c' \geq 2$, and we are done. Edit 3. I can now prove that at least 19 checks are necessary. Using the notation from the previous edit, if $s' \geq 3$, we are done. We define a band $B$ to be tight (for $V$), if $V$ uses all three rows of $B$. If $s'=2$, then at least one band or one stack must be tight, so we are done. If $s'=1$, then by the previous edit we have $r'+c' \geq 2$, and we are done. The only remaining possibility is if $s'=0$. Thus, there are 5 unchecked squares. Observe that any set of 5 squares must either contain a transversal, a band, or a stack. If the unchecked squares contain a transversal, then $r'+c' \geq 3$ (since the sum of the tight bands and tight stacks must be at least 3). By symmetry, we may assume that there is an unchecked band. Lemma. If there is an unchecked band $B$, then at least two stacks are tight. Proof. 
If not, by symmetry we may assume that $s_1, s_2, s_3$ are unchecked and that $c_1$ and $c_4$ are unchecked. By taking a correct Sudoku and swapping the first entry and fourth entries of the first row, we obtain a Sudoku that is correct everywhere, except $s_1, s_2, c_1$, and $c_4$, which is a contradiction. By the lemma, there are at least two tight stacks. If there are three, then $c' \geq 3$, so we are done. If there are exactly two tight stacks, then the band $B$ itself must be tight, otherwise there is a cell whose row, column and square are all unchecked. Hence $c'+r' \geq 3$, and we are again done. Remark. There is quite a bit of slack in these arguments, so with enough case analysis, I think one can get to 21 with $\epsilon$ new ideas. $\begingroup$ Thanks for the comment. The proposed solution in the original question shows that it is possible to get away with only checking 4 squares (provided you check all the rows and columns also), so I am a bit confused. $\endgroup$ – Tony Huynh Apr 30 '13 at 6:34 $\begingroup$ @Denis: sudoku works on 81 squares, not $3^3 = 27$ as you propose. I think there is no symmetry between rows and squares. E.g., every row intersects 9 distinct columns but every square only intersects three columns. Could you elaborate? $\endgroup$ – Marek Apr 30 '13 at 9:14 $\begingroup$ @Tony : If we only check 6 rows and 6 columns, then there are 9 cells which can be altered indifferently, so we need to check all 9 squares. Using this kind of reasoning, it seems you can improve your lower bound to 18. $\endgroup$ – François Brunault Apr 30 '13 at 12:12 $\begingroup$ @Francois: Agreed. My bounds are absolute lower bounds on r,c and s (where r, c and s are what you think they are), while the quantity we are actually interested in is r+c+s. By considering the interaction between cells, rows and columns the bound can be improved (probably to 21). $\endgroup$ – Tony Huynh Apr 30 '13 at 18:16 $\begingroup$ I think you can do $1$ better for s, assuming I have the right construction in mind which works except on a corner set (flip pairs of elements along each side of the rectangle, so that rows keep the same elements on horizontal flips, and the number of vertical flips is even, and vice versa for columns). This just requires an even number of squares be chosen in each row and column, which you do with a corner set, taking $2$, $2$, and $0$. You can make the same construction with $2$ squares in each row and column (like the complement of a minimal set of squares that meets all corner sets). $\endgroup$ – Zack Wolske Apr 30 '13 at 18:47 I couldn't find all my original notes, but I have reconstructed the gist of it. First, validity of Sudoku grids is preserved by transposition, permutations of bands, permutations of rows within bands, permutations of stacks, and permutations of columns within stacks. Below I will use "Sudoku permutation" as a short-hand for permutations from the group generated by these transformations. Also, I will write "blocks" instead of what is called "squares" in the question, since the latter is commonly used to denote single cells. Let us say that a set of checks $S\subseteq\{r_1,\dots,r_9,c_1,\dots,c_9,b_1,\dots,b_9\}$ is complete if every Sudoku grid satisfying the checks from $S$ satisfies all checks (i.e., it is a valid grid). One can characterize complete sets of checks by describing either minimal complete sets, or maximal incomplete sets. I will mostly refer to complements of these sets, as they have less elements. 
As was already observed in the question, there are complete sets of 21 checks. (I prefer the following symmetric solution: check all blocks, two rows in each band, and two columns in each stack.) It follows from the description below that this number is optimal, as all minimal complete sets of checks have 21 elements. Proposition 1. The following are equivalent: $S$ is complete. All checks can be derived from $S$ by means of the following rule: if $S$ includes all but one of the 6 checks contained in a given band or stack, add the remaining one to $S$. $S$ is not included in the complement of a Sudoku permutation of one of the following sets: a) $\{r_1,r_2\}$ b) $\{r_1,c_1,b_1\}$ c) $\{b_1,b_2,b_4,b_5\}$ d) $\{r_1,r_4,b_1,b_4\}$ e) $\{r_1,c_1,b_2,b_4,b_5\}$ f) $\{b_2,b_3,b_4,b_6,b_7,b_8\}$ g) $\{r_1,r_4,b_1,b_5,b_7,b_8\}$ h) $\{r_1,c_1,b_3,b_5,b_6,b_7,b_8\}$ $S$ includes the complement of a Sudoku permutation of one of the following sets (there may well be some errors in the list, but the only relevant information is that they all have 6 elements): $\{r_1,r_4,r_7,c_1,c_4,c_7\}$, $\{r_1,r_4,c_1,c_4,c_7,b_7\}$, $\{r_1,r_4,c_1,c_4,b_6,b_9\}$, $\{r_1,r_4,c_1,c_4,b_6,b_8\}$, $\{r_1,c_1,c_4,c_7,b_6,b_9\}$, $\{r_1,c_1,c_4,c_7,b_6,b_8\}$, $\{r_1,c_1,c_4,b_3,b_6,b_9\}$, $\{r_1,c_1,c_4,b_3,b_6,b_8\}$, $\{r_1,c_1,c_4,b_3,b_5,b_8\}$, $\{r_1,c_1,c_4,b_3,b_5,b_7\}$, $\{r_1,c_1,c_4,b_5,b_6,b_9\}$, $\{r_1,c_1,c_4,b_5,b_6,b_8\}$, $\{r_1,c_1,b_2,b_3,b_4,b_7\}$, $\{r_1,c_1,b_2,b_3,b_6,b_7\}$, $\{r_1,c_1,b_2,b_3,b_6,b_8\}$, $\{r_1,c_1,b_2,b_3,b_6,b_9\}$, $\{r_1,c_1,b_3,b_6,b_7,b_8\}$, $\{r_1,c_1,b_3,b_6,b_8,b_9\}$, $\{r_1,c_1,b_3,b_5,b_7,b_8\}$, $\{r_1,c_1,b_3,b_5,b_8,b_9\}$, $\{c_1,c_4,c_7,b_1,b_4,b_7\}$, $\{c_1,c_4,c_7,b_1,b_4,b_8\}$, $\{c_1,c_4,c_7,b_1,b_5,b_9\}$, $\{c_1,c_4,b_2,b_3,b_5,b_8\}$, $\{c_1,c_4,b_2,b_3,b_5,b_9\}$, $\{c_1,c_4,b_2,b_3,b_6,b_9\}$, $\{c_1,c_4,b_2,b_3,b_6,b_7\}$, $\{c_1,c_4,b_2,b_5,b_6,b_7\}$, $\{c_1,b_1,b_2,b_3,b_4,b_7\}$, $\{c_1,b_1,b_2,b_3,b_4,b_8\}$, $\{c_1,b_1,b_2,b_3,b_5,b_9\}$, $\{c_1,b_1,b_2,b_3,b_6,b_9\}$, $\{c_1,b_1,b_2,b_4,b_6,b_7\}$, $\{c_1,b_2,b_3,b_4,b_6,b_9\}$, $\{c_1,b_2,b_3,b_6,b_7,b_8\}$, $\{c_1,b_3,b_4,b_6,b_7,b_8\}$, $\{c_1,b_2,b_3,b_4,b_7,b_8\}$. Proof (part): $2\to1$ follows from the soundness of the rule: if, say, a grid satisfies 3 block checks and two row checks incident with the same band, each number occurs three times in the band, and twice in the checked rows, hence it occurs once in the remaining row. $4\to2$: draw 37 pictures, and chase applications of the rule. $1\to3$: For each of the cases a–h, we need to find an invalid grid which satisfies checks outside the given set. a) Take any valid grid, and swap the elements in cells 1:1 and 2:1 (that's row and column number). b) Take a valid grid, and modify cell 1:1. c) There exists a valid grid with 1 in cells 1:1, 4:4, and 2 in cells 1:4, 4:1. Exchange 1 and 2 in these four cells. d) Take a valid grid, and swap the elements in cells 1:1 and 4:1. e) Do the same as in c), but leave 1 in cell 1:1. f) There exists a valid grid with 1 in cells 1:4, 4:7, 7:1, and 2 in cells 1:7, 4:1, and 7:4. Exchange 1 and 2 in these six cells. 
$$\begin{array}{|ccc|ccc|ccc|} \hline 3&4&5&\color{green}1&6&7&\color{green}2&8&9\\ 6&7&8&3&2&9&4&1&5\\ 9&1&2&4&5&8&3&6&7\\ \hline \color{green}2&3&4&5&7&6&\color{green}1&9&8\\ 5&6&9&8&1&2&7&3&4\\ 7&8&1&9&3&4&5&2&6\\ \hline \color{green}1&9&6&\color{green}2&4&5&8&7&3\\ 4&2&7&6&8&3&9&5&1\\ 8&5&3&7&9&1&6&4&2\\ \hline \end{array}$$ g) Do the same as in f), but leave cells 1:7 and 4:7 unchanged. This is g) up to permutation. h) Do the same as in f), but leave one of the six cells unchanged. (Again, up to permutation.) $3\to4$: This is a tedious but straightforward case analysis, much easier done with pictures than with words, so I'm omitting it. Let us consider a more general problem: a set of checks $S$ implies a check $x$, written $S\models x$, if every Sudoku grid satisfying all checks from $S$ also satisfies $x$. Thus defined $\models$ is a consequence relation (or closure operator). Note that $S$ is complete iff $S\models x$ for every $x$ (i.e., iff $S$ is inconsistent in the usual consequence relation terminology). Let $\mathcal D$ be the set of all Sudoku permutations of the following sets: (i) $\{r_1,r_2,r_3,b_1,b_2,b_3\}$, (ii) $\{r_1,\dots,r_9,c_1,\dots,c_9\}$, (iii) $\{r_1,\dots,r_9,c_1,\dots,c_6,b_3,b_6,b_9\}$, (iv) $\{r_4,\dots,r_9,c_4,\dots,c_9,b_2,b_3,b_4,b_7\}$. (I don't know how to draw decent pictures this time, as everything overlaps everything else.) Define $S\vdash x$ to be the consequence relation axiomatized by rules of the form $D\smallsetminus\{x\}\vdash x$, where $x\in D\in\mathcal D$. Let $\mathcal M$ be the set of all complements of Sudoku permutations of the sets a, ..., h above. The third consequence relation is defined as follows: $S\Vdash x$ iff $S\subseteq M$ implies $x\in M$ for every $M\in\mathcal M$. (In other words, closed sets of $\Vdash$ are exactly the intersections of subfamilies of $\mathcal M$.) Proposition 2: ${\models}={\vdash}={\Vdash}$. $S\vdash x\implies S\models x$: This amounts to showing that $D\smallsetminus\{x\}\models x$ for $x\in D\in\mathcal D$. For example, let $D$ be the set in (iv), and $x=b_3$. Fix a Sudoku grid satisfying $\{r_4,\dots,r_9,c_4,\dots,c_9,b_2,b_4,b_7\}$, and let $n=1,\dots,9$. The number $n$ occurs 3 times in the bottom band by $r_7,r_8,r_9$, one of which occurrences is in $b_7$, hence it occurs twice in $b_8\cup b_9$. The same argument shows that it occurs twice in $b_5\cup b_6$ and in $b_5\cup b_8$, hence it occurs twice in $b_6\cup b_9$. Since there are three occurrences in the rightmost stack by $c_7,c_8,c_9$, $n$ occurs once in $b_3$. As $n$ was arbitrary, this means that $b_3$ is correct. $S\models x\implies S\Vdash x$: This means that for every set $M\in\mathcal M$ and $x\notin M$, there exists a grid satisfying $M$ and not $x$. We have verified this in the proof of Proposition 1. $S\Vdash x\implies S\vdash x$: We need to show that if $S$ is a maximal set such that $S\nvdash x$, there is $M\in\mathcal M$ such that $S\subseteq M$ and $x\notin M$. This is again done by a case analysis. By symmetry, it suffices to consider the cases $x=r_1$ and $x=b_1$. I will briefly write down the proof so that it does not appear that I'm making unjustified claims all the time. Case $x=r_1$: We have $r_1\notin S$. If $r_2\notin S$ or $r_3\notin S$, we are done by a), hence assume $r_2,r_3\in S$. As $S$ is closed under the (i) rule, some block from the first band is missing from $S$. By symmetry, we may assume $b_1\notin S$. If some column incident with $b_1$ is missing, we are done by b), hence assume $c_1,c_2,c_3\in S$. 
By (i) for the first stack, WLOG $b_4\notin S$. If some row from the middle band is missing, we are done by d), hence assume $r_4,r_5,r_6\in S$. By (i) for the middle band, WLOG $b_5\notin S$. If some column in the middle stack is missing, e) applies, hence assume $c_4,c_5,c_6\in S$. Case 1: $r_7,r_8,r_9\in S$. Then some column, WLOG $c_7$, is missing from $S$ by (ii). If $b_3\notin S$ or $b_6\notin S$, we are done by b) or e), respectively. Thus $b_3,b_6\in S$, hence $b_9\notin S$ by (iii). By (iv), $b_7\notin S$ or $b_8\notin S$, hence e) or h) applies. Case 2: some row, WLOG $r_7$, is missing. Then $b_7,b_8\in S$ unless d) or g) applies. By (iv), $b_3\notin S$ or $b_6\notin S$, hence we are done by d) or g), respectively, unless $b_9\in S$. Then WLOG $c_7\notin S$ by (iii), hence we are done by b) or e). Case $x=b_1$: If some row and column incident with $b_1$ are missing from $S$, we are done by b), hence WLOG $r_1,r_2,r_3\in S$. By (i), WLOG $b_2\notin S$. If columns are missing in both the first two stacks, we are done by d), hence WLOG $c_1,c_2,c_3\in S$. Then WLOG $b_4\notin S$ by (i). If $b_5\notin S$, we are done by c), hence assume $b_5\in S$. If $b_6,b_8\notin S$, then $b_3,b_7,b_9\in S$ unless c) or f) applies, thus some row and column incident with $b_9$ are missing by (i), hence we are done by h). If $b_6,b_8\in S$, some row and column incident with $b_5$ are missing by (i), hence we are done by e). Thus, we can assume $b_6\in S$ and $b_8\notin S$. By (i), WLOG $r_4\notin S$. Then $r_7,r_8,r_9\in S$ unless g) applies, and $c_4,c_5,c_6\in S$ unless e) applies. If $b_7\notin S$, c) applies, otherwise $b_9\notin S$ by (i). By (iii), WLOG $c_7\notin S$, hence we are done by h). I know next to nothing about matroid theory so I leave it to others to figure out, but the symmetric form of the rules defining $\vdash$ makes me suspect that the closure operator is in fact a matroid. Nice! It is not hard to see that any minimal complete set is a maximal independent set. Do you know whether the converse holds? – François Brunault May 2 '13 at 14:28 Emil, thanks. I'll work through the details of your answer (and the other answers and comments) at the weekend and reply at the beginning of next week. – Ralph May 2 '13 at 15:25 @François: I don't know. I find independence rather difficult to check by hand. – Emil Jeřábek supports Monica May 2 '13 at 17:01 All the consequence relations coming from $\mathcal{D}$ are linear, in the sense that $x \in \operatorname{Vect}(D \backslash \{x\})$ for every $x \in D \in \mathcal{D}$ (here we view $r_i$, $c_j$, $b_k$ as formal linear combinations of Sudoku cells). I think we can deduce from this that the Steinitz exchange axiom holds, and thus $\models$ is indeed a matroid. – François Brunault May 3 '13 at 16:05 Doesn't this imply that we can check a system is optimal by just checking you can't remove any of the rows, columns, or blocks? That cuts down dramatically on the case work. – Will Sawin May 4 '13 at 2:46
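To make the counting concrete, here is a minimal Python sketch (mine, not from the thread) of the symmetric 21-check verification mentioned in Emil Jeřábek's answer: all nine blocks, two rows in each band and two columns in each stack. By Proposition 1 this set of checks is complete, so a completed grid passing it is fully correct.

def units_21(grid):
    """Yield 21 checks: the 9 blocks, 2 rows per band and 2 columns per stack."""
    for bi in range(3):
        for bj in range(3):  # all nine 3x3 blocks
            yield [grid[3*bi + di][3*bj + dj] for di in range(3) for dj in range(3)]
    for band in range(3):
        for k in range(2):   # two rows in each band (the third is implied)
            yield list(grid[3*band + k])
    for stack in range(3):
        for l in range(2):   # two columns in each stack (the third is implied)
            yield [grid[r][3*stack + l] for r in range(9)]

def is_correct(grid):
    """True iff the completed 9x9 grid passes the 21 checks (hence all 27)."""
    return all(sorted(unit) == list(range(1, 10)) for unit in units_21(grid))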
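The linear-algebra characterization in the answers ($S \models x$ exactly when $x$ lies in the span of $S$ together with the band/stack relations $V_0$) can likewise be tested mechanically. A small illustrative rank-test sketch, again not from the thread:

import numpy as np

def check_vector(kind, i):
    """Indicator vector in R^81 of row i, column i or block i (kind 'r', 'c' or 'b')."""
    v = np.zeros(81)
    for r in range(9):
        for c in range(9):
            unit = {'r': r, 'c': c, 'b': 3 * (r // 3) + c // 3}[kind]
            if unit == i:
                v[9 * r + c] = 1.0
    return v

V0 = []  # band relations (rows minus blocks) and stack relations (columns minus blocks)
for t in range(3):
    V0.append(sum(check_vector('r', 3*t + k) for k in range(3)) -
              sum(check_vector('b', 3*t + j) for j in range(3)))
    V0.append(sum(check_vector('c', 3*t + l) for l in range(3)) -
              sum(check_vector('b', 3*j + t) for j in range(3)))

def implies(S, x):
    """S |= x iff adding x's vector does not raise the rank of span(S u V0)."""
    A = np.array([check_vector(k, i) for k, i in S] + V0)
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.vstack([A, check_vector(*x)]))

# e.g. within the first band, its three blocks and two of its rows imply the third row:
print(implies([('b', 0), ('b', 1), ('b', 2), ('r', 0), ('r', 1)], ('r', 2)))  # True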
CommonCrawl
AI & SOCIETY February 2015 , Volume 30, Issue 1, pp 89–116 | Cite as Social media analytics: a survey of techniques, tools and platforms Bogdan Batrinca Philip C. Treleaven This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and News services. This has led to an 'explosion' of data services, software tools for scraping and analysis and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discussed the requirement of an experimental computational environment for social media research and presents as an illustration the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques that are presented in this paper are valid at the time of writing this paper (June 2014), but they are subject to change since social media data scraping APIs are rapidly changing. Social media Scraping Behavior economics Sentiment analysis Opinion mining NLP Toolkits Software platforms Social media is defined as web-based and mobile-based Internet applications that allow the creation, access and exchange of user-generated content that is ubiquitously accessible (Kaplan and Haenlein 2010). Besides social networking media (e.g., Twitter and Facebook), for convenience, we will also use the term 'social media' to encompass really simple syndication (RSS) feeds, blogs, wikis and news, all typically yielding unstructured text and accessible through the web. Social media is especially important for research into computational social science that investigates questions (Lazer et al. 2009) using quantitative techniques (e.g., computational statistics, machine learning and complexity) and so-called big data for data mining and simulation modeling (Cioffi-Revilla 2010). This has led to numerous data services, tools and analytics platforms. However, this easy availability of social media data for academic research may change significantly due to commercial pressures. In addition, as discussed in Sect. 2, the tools available to researchers are far from ideal. They either give superficial access to the raw data or (for non-superficial access) require researchers to program analytics in a language such as Java. 
We start with definitions of some of the key techniques related to analyzing unstructured textual data: Natural language processing—(NLP) is a field of computer science, artificial intelligence and linguistics concerned with the interactions between computers and human (natural) languages. Specifically, it is the process of a computer extracting meaningful information from natural language input and/or producing natural language output. News analytics—the measurement of the various qualitative and quantitative attributes of textual (unstructured data) news stories. Some of these attributes are: sentiment, relevance and novelty. Opinion mining—opinion mining (sentiment mining, opinion/sentiment extraction) is the area of research that attempts to make automatic systems to determine human opinion from text written in natural language. Scraping—collecting online data from social media and other Web sites in the form of unstructured text and also known as site scraping, web harvesting and web data extraction. Sentiment analysis—sentiment analysis refers to the application of natural language processing, computational linguistics and text analytics to identify and extract subjective information in source materials. Text analytics—involves information retrieval (IR), lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization and predictive analytics. 1.2 Research challenges Social media scraping and analytics provides a rich source of academic research challenges for social scientists, computer scientists and funding bodies. Challenges include: Scraping—although social media data is accessible through APIs, due to the commercial value of the data, most of the major sources such as Facebook and Google are making it increasingly difficult for academics to obtain comprehensive access to their 'raw' data; very few social data sources provide affordable data offerings to academia and researchers. News services such as Thomson Reuters and Bloomberg typically charge a premium for access to their data. In contrast, Twitter has recently announced the Twitter Data Grants program, where researchers can apply to get access to Twitter's public tweets and historical data in order to get insights from its massive set of data (Twitter has more than 500 million tweets a day). Data cleansing—cleaning unstructured textual data (e.g., normalizing text), especially high-frequency streamed real-time data, still presents numerous problems and research challenges. Holistic data sources—researchers are increasingly bringing together and combining novel data sources: social media data, real-time market & customer data and geospatial data for analysis. Data protection—once you have created a 'big data' resource, the data needs to be secured, ownership and IP issues resolved (i.e., storing scraped data is against most of the publishers' terms of service), and users provided with different levels of access; otherwise, users may attempt to 'suck' all the valuable data from the database. Data analytics—sophisticated analysis of social media data for opinion mining (e.g., sentiment analysis) still raises a myriad of challenges due to foreign languages, foreign words, slang, spelling errors and the natural evolving of language. Analytics dashboards—many social media platforms require users to write APIs to access feeds or program analytics models in a programming language, such as Java. 
While reasonable for computer scientists, these skills are typically beyond most (social science) researchers. Non-programming interfaces are required for giving what might be referred to as 'deep' access to 'raw' data, for example, configuring APIs, merging social media feeds, combining holistic sources and developing analytical models. Data visualization—visual representation of data whereby information that has been abstracted in some schematic form with the goal of communicating information clearly and effectively through graphical means. Given the magnitude of the data involved, visualization is becoming increasingly important. 1.3 Social media research and applications Social media data is clearly the largest, richest and most dynamic evidence base of human behavior, bringing new opportunities to understand individuals, groups and society. Innovative scientists and industry professionals are increasingly finding novel ways of automatically collecting, combining and analyzing this wealth of data. Naturally, doing justice to these pioneering social media applications in a few paragraphs is challenging. Three illustrative areas are: business, bioscience and social science. The early business adopters of social media analysis were typically companies in retail and finance. Retail companies use social media to harness their brand awareness, product/customer service improvement, advertising/marketing strategies, network structure analysis, news propagation and even fraud detection. In finance, social media is used for measuring market sentiment and news data is used for trading. As an illustration, Bollen et al. (2011) measured sentiment of random sample of Twitter data, finding that Dow Jones Industrial Average (DJIA) prices are correlated with the Twitter sentiment 2–3 days earlier with 87.6 percent accuracy. Wolfram (2010) used Twitter data to train a Support Vector Regression (SVR) model to predict prices of individual NASDAQ stocks, finding 'significant advantage' for forecasting prices 15 min in the future. In the biosciences, social media is being used to collect data on large cohorts for behavioral change initiatives and impact monitoring, such as tackling smoking and obesity or monitoring diseases. An example is Penn State University biologists (Salathé et al. 2012) who have developed innovative systems and techniques to track the spread of infectious diseases, with the help of news Web sites, blogs and social media. Computational social science applications include: monitoring public responses to announcements, speeches and events especially political comments and initiatives; insights into community behavior; social media polling of (hard to contact) groups; early detection of emerging events, as with Twitter. For example, Lerman et al. (2008) use computational linguistics to automatically predict the impact of news on the public perception of political candidates. Yessenov and Misailovic (2009) use movie review comments to study the effect of various approaches in extracting text features on the accuracy of four machine learning methods—Naive Bayes, Decision Trees, Maximum Entropy and K-Means clustering. Lastly, Karabulut (2013) found that Facebook's Gross National Happiness (GNH) exhibits peaks and troughs in-line with major public events in the USA. 1.4 Social media overview For this paper, we group social media tools into: Social media data—social media data types (e.g., social network media, wikis, blogs, RSS feeds and news, etc.) and formats (e.g., XML and JSON). 
This includes data sets and increasingly important real-time data feeds, such as financial data, customer transaction data, telecoms and spatial data. Social media programmatic access—data services and tools for sourcing and scraping (textual) data from social networking media, wikis, RSS feeds, news, etc. These can be usefully subdivided into: Data sources, services and tools—where data is accessed by tools which protect the raw data or provide simple analytics. Examples include: Google Trends, SocialMention, SocialPointer and SocialSeek, which provide a stream of information that aggregates various social media feeds. Data feeds via APIs—where data sets and feeds are accessible via programmable HTTP-based APIs and return tagged data using XML or JSON, etc. Examples include Wikipedia, Twitter and Facebook. Text cleaning and storage tools—tools for cleaning and storing textual data. Google Refine and DataWrangler are examples for data cleaning. Text analysis tools—individual or libraries of tools for analyzing social media data once it has been scraped and cleaned. These are mainly natural language processing, analysis and classification tools, which are explained below. Transformation tools—simple tools that can transform textual input data into tables, maps, charts (line, pie, scatter, bar, etc.), timeline or even motion (animation over timeline), such as Google Fusion Tables, Zoho Reports, Tableau Public or IBM's Many Eyes. Analysis tools—more advanced analytics tools for analyzing social data, identifying connections and building networks, such as Gephi (open source) or the Excel plug-in NodeXL. Social media platforms—environments that provide comprehensive social media data and libraries of tools for analytics. Examples include: Thomson Reuters Machine Readable News, Radian 6 and Lexalytics. Social network media platforms—platforms that provide data mining and analytics on Twitter, Facebook and a wide range of other social network media sources. News platforms—platforms such as Thomson Reuters providing commercial news archives/feeds and associated analytics. 2 Social media methodology and critique The two major impediments to using social media for academic research are firstly access to comprehensive data sets and secondly tools that allow 'deep' data analysis without the need to be able to program in a language such as Java. The majority of social media resources are commercial and companies are naturally trying to monetize their data. As discussed, it is important that researchers have access to open-source 'big' (social media) data sets and facilities for experimentation. Otherwise, social media research could become the exclusive domain of major companies, government agencies and a privileged set of academic researchers presiding over private data from which they produce papers that cannot be critiqued or replicated. Recently, there has been a modest response, as Twitter and Gnip are piloting a new program for data access, starting with 5 all-access data grants to select applicants. 2.1 Methodology Research requirements can be grouped into: data, analytics and facilities. 2.1.1 Data Researchers need online access to historic and real-time social media data, especially the principal sources, to conduct world-leading research: Social network media—access to comprehensive historic data sets and also real-time access to sources, possibly with a (15 min) time delay, as with Thomson Reuters and Bloomberg financial data. 
News data—access to historic data and real-time news data sets, possibly through the concept of 'educational data licenses' (cf. software license). Public data—access to scraped and archived important public data; available through RSS feeds, blogs or open government databases. Programmable interfaces—researchers also need access to simple application programming interfaces (APIs) to scrape and store other available data sources that may not be automatically collected. 2.1.2 Analytics Currently, social media data is typically either available via simple general routines or require the researcher to program their analytics in a language such as MATLAB, Java or Python. As discussed above, researchers require: Analytics dashboards—non-programming interfaces are required for giving what might be termed as 'deep' access to 'raw' data. Holistic data analysis—tools are required for combining (and conducting analytics across) multiple social media and other data sets. Data visualization—researchers also require visualization tools whereby information that has been abstracted can be visualized in some schematic form with the goal of communicating information clearly and effectively through graphical means. 2.1.3 Facilities Lastly, the sheer volume of social media data being generated argues for national and international facilities to be established to support social media research (cf. Wharton Research Data Services https://wrds-web.wharton.upenn.edu): Data storage—the volume of social media data, current and projected, is beyond most individual universities and hence needs to be addressed at a national science foundation level. Storage is required both for principal data sources (e.g., Twitter), but also for sources collected by individual projects and archived for future use by other researchers. Computational facility—remotely accessible computational facilities are also required for: a) protecting access to the stored data; b) hosting the analytics and visualization tools; and c) providing computational resources such as grids and GPUs required for processing the data at the facility rather than transmitting it across a network. 2.2 Critique Much needs to be done to support social media research. As discussed, the majority of current social media resources are commercial, expensive and difficult for academics to obtain full access. In general, access to important sources of social media data is frequently restricted and full commercial access is expensive. Siloed data—most data sources (e.g., Twitter) have inherently isolated information making it difficult to combine with other data sources. Holistic data—in contrast, researchers are increasingly interested in accessing, storing and combining novel data sources: social media data, real-time financial market & customer data and geospatial data for analysis. This is currently extremely difficult to do even for Computer Science departments. Analytical tools provided by vendors are often tied to a single data set, maybe limited in analytical capability, and data charges make them expensive to use. There are an increasing number of powerful commercial platforms, such as the ones supplied by SAS and Thomson Reuters, but the charges are largely prohibitive for academic research. Either comparable facilities need to be provided by national science foundations or vendors need to be persuaded to introduce the concept of an 'educational license.' 
3 Social media data Clearly, there is a large and increasing number of (commercial) services providing access to social networking media (e.g., Twitter, Facebook and Wikipedia) and news services (e.g., Thomson Reuters Machine Readable News). Equivalent major academic services are scarce. We start by discussing the types of data and formats produced by these services. 3.1 Types of data Although we focus on social media, as discussed, researchers are continually finding new and innovative sources of data to bring together and analyze. So when considering textual data analysis, we should consider multiple sources (e.g., social networking media, RSS feeds, blogs and news) supplemented by numeric (financial) data, telecoms data, geospatial data and potentially speech and video data. Using multiple data sources is certainly the future of analytics. Broadly, data subdivides into: Historic data sets—previously accumulated and stored social/news, financial and economic data. Real-time feeds—live data feeds from streamed social media, news services, financial exchanges, telecoms services, GPS devices and speech. And into: Raw data—unprocessed computer data straight from source that may contain errors or may be unanalyzed. Cleaned data—correction or removal of erroneous (dirty) data caused by disparities, keying mistakes, missing bits, outliers, etc. Value-added data—data that has been cleaned, analyzed, tagged and augmented with knowledge. 3.2 Text data formats The four most common formats used to mark up text are: HTML, XML, JSON and CSV. HTML—HyperText Markup Language (HTML) is the well-known markup language for web pages and other information that can be viewed in a web browser. HTML consists of HTML elements, which include tags enclosed in angle brackets (e.g., <div>), within the content of the web page. XML—Extensible Markup Language (XML) is the markup language for structuring textual data using <tag>…</tag> to define elements. JSON—JavaScript Object Notation (JSON) is a text-based open standard designed for human-readable data interchange and is derived from JavaScript. CSV—a comma-separated values (CSV) file contains the values in a table as a series of ASCII text lines organized such that each column value is separated by a comma from the next column's value and each row starts a new line. For completeness, HTML and XML are so-called markup languages (markup and content) that define a set of simple syntactic rules for encoding documents in a format that is both human readable and machine readable. A markup comprises start-tags (e.g., <tag>), content text and end-tags (e.g., </tag>). Many feeds use JavaScript Object Notation (JSON), the lightweight data-interchange format based on a subset of the JavaScript Programming Language. JSON is a language-independent text format that uses conventions familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python and many others. JSON's basic types are: Number, String, Boolean, Array (an ordered sequence of values, comma-separated and enclosed in square brackets) and Object (an unordered collection of key:value pairs). The JSON format is illustrated in Fig. 1 for a query on the Twitter API on the string 'UCL,' which returns two 'text' results from the Twitter user 'uclnews.'
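Complementing Fig. 1, the short Python sketch below shows how a JSON result of this general shape might be parsed once retrieved; the 'statuses', 'user' and 'text' field names mirror those used by Twitter's search endpoint but are illustrative rather than a definitive schema.

import json

# A small, hand-written response in the general shape returned by a
# Twitter-style search endpoint (illustrative field names and values only).
raw = '''
{
  "statuses": [
    {"created_at": "Mon 06 Jan 2014 10:15:00",
     "user": {"screen_name": "uclnews"},
     "text": "UCL research featured in Nature"},
    {"created_at": "Tue 07 Jan 2014 09:30:00",
     "user": {"screen_name": "uclnews"},
     "text": "New UCL spin-out company launched"}
  ]
}
'''

data = json.loads(raw)            # JSON Object -> Python dict, Array -> list
for status in data["statuses"]:
    print(status["created_at"], status["user"]["screen_name"], status["text"])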
JSON Example Comma-separated values are not a single, well-defined format but rather refer to any text file that: (a) is plain text using a character set such as ASCII, Unicode or EBCDIC; (b) consists of text records (e.g., one record per line); (c) with records divided into fields separated by delimiters (e.g., comma, semicolon and tab); and (d) where every record has the same sequence of fields. 4 Social media providers Social media data resources broadly subdivide into those providing: Freely available databases—repositories that can be freely downloaded, e.g., Wikipedia (http://dumps.wikimedia.org) and the Enron e-mail data set available via http://www.cs.cmu.edu/~enron/. Data access via tools—sources that provide controlled access to their social media data via dedicated tools, both to facilitate easy interrogation and also to stop users 'sucking' all the data from the repository. An example is Google Trends. These are further subdivided into: Free sources—repositories that are freely accessible, but whose tools protect or may limit access to the 'raw' data in the repository, such as the range of tools provided by Google. Commercial sources—data resellers that charge for access to their social media data. Gnip and DataSift provide commercial access to Twitter data through a partnership, and Thomson Reuters to news data. Data access via APIs—social media data repositories providing programmable HTTP-based access to the data via APIs (e.g., Twitter, Facebook and Wikipedia). 4.1 Open-source databases A major open source of social media is Wikipedia, which offers free copies of all available content to interested users (Wikimedia Foundation 2014). These databases can be used for mirroring, database queries and social media analytics. They include dumps from any Wikimedia Foundation project: http://dumps.wikimedia.org/, English Wikipedia dumps in SQL and XML: http://dumps.wikimedia.org/enwiki/, etc. Another example of freely available data for research is the World Bank data, i.e., the World Bank Databank (http://databank.worldbank.org/data/databases.aspx), which provides over 40 databases, such as Gender Statistics, Health Nutrition and Population Statistics, Global Economic Prospects, World Development Indicators and Global Development Finance, and many others. Most of the databases can be filtered by country/region, series/topics or time (years and quarters). In addition, tools are provided to allow reports to be customized and displayed in table, chart or map formats. 4.2 Data access via tools As discussed, most commercial services provide access to social media data via online tools, both to control access to the raw data and increasingly to monetize the data. 4.2.1 Freely accessible sources Google, with tools such as Trends and Insights, is a good example of this category. Google is the largest 'scraper' in the world, but it does its best to 'discourage' scraping of its own pages. (For an introduction to how to surreptitiously scrape Google—and avoid being 'banned'—see http://google-scraper.squabbel.com.) Google's strategy is to provide a wide range of packaged tools, such as Google Analytics, rather than the programmable HTTP-based APIs that are, from a researcher's viewpoint, more useful. Figure 2 illustrates how Google Trends displays a particular search term, in this case 'libor.' Using Google Trends you can compare up to five topics at a time and also see how often those topics have been mentioned and in which geographic regions the topics have been searched for the most.
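As a sketch of programmatic access to Google Trends data, the snippet below uses the third-party pytrends package, an unofficial wrapper around the Trends site rather than a Google-supported API; the package, its method names and the 'libor' query mirroring Fig. 2 are assumptions made for illustration.

# Requires: pip install pytrends   (unofficial, third-party wrapper; an assumption)
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["libor"], timeframe="today 5-y")  # up to five terms can be compared

interest = pytrends.interest_over_time()     # pandas DataFrame of relative interest by date
by_region = pytrends.interest_by_region()    # relative interest per geographic region
print(interest.tail())
print(by_region.sort_values("libor", ascending=False).head())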
4.2.2 Commercial sources There is an increasing number of commercial services that scrape social networking media and then provide paid-for access via simple analytics tools. (The more comprehensive platforms with extensive analytics are reviewed in Sect. 8.) In addition, companies such as Twitter are both restricting free access to their data and licensing their data to commercial data resellers, such as Gnip and DataSift. Gnip is the world's largest provider of social data. Gnip was the first to partner with Twitter to make their social data available, and it has since been the first to work with Tumblr, Foursquare, WordPress, Disqus, StockTwits and other leading social platforms. Gnip delivers social data to customers in more than 40 countries, and Gnip's customers deliver social media analytics to more than 95% of the Fortune 500. Real-time data from Gnip can be delivered as a 'Firehose' of every single activity or via PowerTrack, a proprietary filtering tool that allows users to build queries around only the data they need. PowerTrack rules can filter data streams based on keywords, geo boundaries, phrase matches and even the type of content or media in the activity. The company then offers enrichments to these data streams such as Profile Geo (to add significantly more usable geo data for Twitter), URL expansion and language detection to further enhance the value of the data delivered. In addition to real-time data access, the company also offers Historical PowerTrack and Search API access for Twitter, which give customers the ability to pull any Tweet since the first message on March 21, 2006. Gnip provides access to premium data feeds (Gnip's 'Complete Access' sources are publishers that have an agreement with Gnip to resell their data) and free data feeds (Gnip's 'Managed Public API Access' sources provide access to normalized and consolidated free data from their APIs, although this requires Gnip's paid services for the Data Collectors) via its dashboard (see Fig. 3). The user only sees the feeds in the dashboard that were paid for under a sales agreement. To select a feed, the user clicks on a publisher and then chooses a specific feed from that publisher, as shown in Fig. 3. Different types of feeds serve different use cases and correspond to different types of queries and API endpoints on the publisher's source API. After selecting the feed, the user is assisted by Gnip in configuring it with any required parameters before it begins collecting data. This includes adding at least one rule. Under 'Get Data' → 'Advanced Settings' the user can also configure how often the feed queries the source API for data (the 'query rate') and choose between the publisher's native data format and Gnip's Activity Streams format (XML for Enterprise Data Collector feeds). Gnip Dashboard, Publishers and Feeds 4.3 Data feed access via APIs For researchers, arguably the most useful sources of social media data are those that provide programmable access via APIs, typically using HTTP-based protocols. Given their importance to academics, here we review individually wikis, social networking media, RSS feeds, news, etc. 4.3.1 Wiki media Wikipedia (and wikis in general) provides academics with large open-source repositories of user-generated (crowd-sourced) content. What is not widely known is that Wikipedia provides HTTP-based APIs that allow programmable access and searching (i.e., scraping), returning data in a variety of formats including XML.
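As a brief sketch, the request below retrieves the first ten categories with the prefix 'hollywood' (the same query shown as a raw URL below), asking for JSON rather than XML output; the requests package is assumed to be installed, and the response field names follow the default MediaWiki result format.

import requests

# MediaWiki API endpoint used by English Wikipedia.
URL = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "list": "allcategories",
    "acprefix": "hollywood",
    "acprop": "size",
    "format": "json",        # request format=xml instead for an XML response
}

resp = requests.get(URL, params=params,
                    headers={"User-Agent": "research-scraper-example"})
resp.raise_for_status()
for cat in resp.json()["query"]["allcategories"]:
    # In the default result format the category name is stored under the "*" key.
    print(cat.get("*"), cat.get("size"))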
In fact, the API is not unique to Wikipedia but part of MediaWiki's (http://www.mediawiki.org/) open-source toolkit and hence can be used with any MediaWiki-based wikis. The wiki HTTP-based API works by accepting requests containing one or more input arguments and returning strings, often in XML format, that can be parsed and used by the requesting client. Other formats supported include JSON, WDDX, YAML, or PHP serialized. Details can be found at: http://en.wikipedia.org/w/api.php?action=query&list=allcategories&acprop=size&acprefix=hollywood&format=xml. The HTTP request must contain: a) the requested 'action,' such as query, edit or delete operation; b) an authentication request; and c) any other supported actions. For example, the above request returns an XML string listing the first 10 Wikipedia categories with the prefix 'hollywood.' Vaswani (2011) provides a detailed description of how to scrape Wikipedia using an Apache/PHP development environment and an HTTP client capable of transmitting GET and PUT requests and handling responses. 4.3.2 Social networking media As with Wikipedia, popular social networks, such as Facebook, Twitter and Foursquare, make a proportion of their data accessible via APIs. Although many social networking media sites provide APIs, not all sites (e.g., Bing, LinkedIn and Skype) provide API access for scraping data. While more and more social networks are shifting to publicly available content, many leading networks are restricting free access, even to academics. For example, Foursquare announced in December 2013 that it will no longer allow private check-ins on iOS 7, and has now partnered with Gnip to provide a continuous stream of anonymized check-in data. The data is available in two packages: the full Firehose access level and a filtered version via Gnip's PowerTrack service. Here, we briefly discuss the APIs provided by Twitter and Facebook. 4.3.2.1 Twitter The default account setting keeps users' Tweets public, although users can protect their Tweets and make them visible only to their approved Twitter followers. However, less than 10 % of all the Twitter accounts are private. Tweets from public accounts (including replies and mentions) are available in JSON format through Twitter's Search API for batch requests of past data and Streaming API for near real-time data. Search API—Query Twitter for recent Tweets containing specific keywords. It is part of the Twitter REST API v1.1 (it attempts to comply with the design principles of the REST architectural style, which stands for Representational State Transfer) and requires an authorized application (using oAuth, the open standard for authorization) before retrieving any results from the API. Streaming API—A real-time stream of Tweets, filtered by user ID, keyword, geographic location or random sampling. One may retrieve recent Tweets containing particular keywords through Twitter's Search API (part of REST API v1.1) with the following API call: https://api.twitter.com/1.1/search/tweets.json?q=APPLE and real-time data using the streaming API call: https://stream.twitter.com/1/statuses/sample.json. Twitter's Streaming API allows data to be accessed via filtering (by keywords, user IDs or location) or by sampling of all updates from a select amount of users. Default access level 'Spritzer' allows sampling of roughly 1 % of all public statuses, with the option to retrieve 10 % of all statuses via the 'Gardenhose' access level (more suitable for data mining and research applications). 
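A minimal sketch of calling the Search API endpoint above from Python is shown below; it assumes the requests and requests_oauthlib packages are installed and that the four OAuth credentials (placeholders here) have been obtained by registering an application with Twitter.

import requests
from requests_oauthlib import OAuth1   # third-party package, assumed installed

# Credentials obtained by registering an application with Twitter (placeholders).
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

resp = requests.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    params={"q": "APPLE", "count": 10, "result_type": "mixed"},
    auth=auth,
)
resp.raise_for_status()
for tweet in resp.json().get("statuses", []):
    print(tweet["created_at"], tweet["user"]["screen_name"], tweet["text"])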
In social media, streaming APIs are often called Firehose—a syndication feed that publishes all public activities as they happen in one big stream. Twitter has recently announced the Twitter Data Grants program, where researchers can apply to get access to Twitter's public tweets and historical data in order to get insights from its massive set of data (Twitter has more than 500 million tweets a day); research institutions and academics will not get the Firehose access level; instead, they will only get the data set needed for their research project. Researchers can apply for it at the following address: https://engineering.twitter.com/research/data-grants. Twitter results are stored in a JSON array of objects containing the fields shown in Fig. 4. The JSON array consists of a list of objects matching the supplied filters and the search string, where each object is a Tweet and its structure is clearly specified by the object's fields, e.g., 'created_at' and 'from_user'. The example in Fig. 4 consists of the output of calling Twitter's GET search API via http://search.twitter.com/search.json?q=financial%20times&rpp=1&include_entities=true&result_type=mixed where the parameters specify that the search query is 'financial times,' one result per page, each Tweet should have a node called 'entities' (i.e., metadata about the Tweet) and list 'mixed' results types, i.e., include both popular and real-time results in the response. Example Output in JSON for Twitter REST API v1 4.3.2.2 Facebook Facebook's privacy issues are more complex than Twitter's, meaning that a lot of status messages are harder to obtain than Tweets, requiring 'open authorization' status from users. Facebook currently stores all data as objects1 and has a series of APIs, ranging from the Graph and Public Feed APIs to Keyword Insight API. In order to access the properties of an object, its unique ID must be known to make the API call. Facebook's Search API (part of Facebook's Graph API) can be accessed by calling https://graph.facebook.com/search?q=QUERY&type=page. The detailed API query format is shown in Fig. 5. Here, 'QUERY' can be replaced by any search term, and 'page' can be replaced with 'post,' 'user,' 'page,' 'event,' 'group,' 'place,' 'checkin,' 'location' or 'placetopic.' The results of this search will contain the unique ID for each object. When returning the individual ID for a particular search result, one can use https://graph.facebook.com/ID to obtain further page details such as number of 'likes.' This kind of information is of interest to companies when it comes to brand awareness and competition monitoring. Facebook Graph API Search Query Format The Facebook Graph API search queries require an access token included in the request. Searching for pages and places requires an 'app access token', whereas searching for other types requires a user access token. Replacing 'page' with 'post' in the aforementioned search URL will return all public statuses containing this search term.2 Batch requests can be sent by following the procedure outlined here: https://developers.facebook.com/docs/reference/api/batch/. Information on retrieving real-time updates can be found here: https://developers.facebook.com/docs/reference/api/realtime/. Facebook also returns data in JSON format and so can be retrieved and stored using the same methods as used with data from Twitter, although the fields are different depending on the search type, as illustrated in Fig. 6. 
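A minimal sketch of issuing the Graph API search described above (mirroring the Fig. 6 query on 'Centrica') is given below; the access token is a placeholder, and the endpoint and 'data' response field follow the description in the text rather than any particular current version of the Graph API.

import requests

ACCESS_TOKEN = "APP_ACCESS_TOKEN"   # placeholder; obtained via Facebook app registration

resp = requests.get(
    "https://graph.facebook.com/search",
    params={"q": "Centrica", "type": "page", "access_token": ACCESS_TOKEN},
)
resp.raise_for_status()
for page in resp.json().get("data", []):
    page_id = page["id"]
    # A follow-up call on the unique ID returns further details such as 'likes'.
    details = requests.get("https://graph.facebook.com/" + page_id,
                           params={"access_token": ACCESS_TOKEN}).json()
    print(page.get("name"), page_id, details.get("likes"))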
Facebook Graph API Search Results for q='Centrica' and type='page' 4.3.3 RSS feeds A large number of Web sites already provide access to content via RSS feeds. This is the syndication standard for publishing regular updates to web-based content, based on a type of XML file that resides on an Internet server. For Web sites, RSS feeds can be created manually or automatically (with software). An RSS Feed Reader reads the RSS feed file, finds what is new, converts it to HTML and displays it. The program fragment in Fig. 7 shows the code for the control and channel statements for the RSS feed. The channel statements define the overall feed or channel; there is one set of channel statements per RSS file. Example RSS Feed Control and Channel Statements 4.3.4 Blogs, news groups and chat services Blog scraping is the process of scanning through a large number of blogs, usually daily, searching for and copying content. This process is conducted through automated software. Figure 8 illustrates example code for Blog Scraping. This involves getting a Web site's source code via Java's URL Class, which can then be parsed via Regular Expressions to capture the target content. Example Code for Blog Scraping 4.3.5 News feeds News feeds are delivered in a variety of textual formats, often as machine-readable XML documents, JSON or CSV files. They include numerical values, tags and other properties that represent the underlying news stories. For testing purposes, historical information is often delivered via flat files, while live data for production is processed and delivered through direct data feeds or APIs. Figure 9 shows a snippet of the software calls to retrieve filtered NY Times articles. Scraping New York Times Articles Having examined the 'classic' social media data feeds, as an illustration of scraping innovative data sources we will briefly look at geospatial feeds. 4.3.6 Geospatial feeds Much of the 'geospatial' social media data comes from mobile devices that generate location- and time-sensitive data. One can differentiate between four types of mobile social media feeds (Kaplan 2012): Location and time sensitive—exchange of messages with relevance for one specific location at one specific point in time (e.g., Foursquare). Location sensitive only—exchange of messages with relevance for one specific location, which are tagged to a certain place and read later by others (e.g., Yelp and Qype). Time sensitive only—transfer of traditional social media applications to mobile devices to increase immediacy (e.g., posting Twitter messages or Facebook status updates). Neither location nor time sensitive—transfer of traditional social media applications to mobile devices (e.g., watching a YouTube video or reading a Wikipedia entry). With increasingly advanced mobile devices, notably smartphones, the content (photos, SMS messages, etc.) has geographical identification added, i.e., it is 'geotagged.' These geospatial metadata are usually latitude and longitude coordinates, though they can also include altitude, bearing, distance, accuracy data or place names. GeoRSS is an emerging standard to encode geographic location into a web feed, with two primary encodings: GeoRSS Geography Markup Language (GML) and GeoRSS Simple. Example tools are GeoNetwork Opensource, a free comprehensive cataloging application for geographically referenced information, and FeedBurner, a web feed provider that can also provide geotagged feeds if the specified feed's settings allow it. As an illustration, Fig.
10 shows the pseudo-code for analyzing a geospatial feed. Pseudo-code for Analyzing a Geospatial Feed 5 Text cleaning, tagging and storing The importance of 'quality versus quantity' of data in social media scraping and analytics cannot be overstated (i.e., garbage in and garbage out). In fact, many details of analytics models are defined by the types and quality of the data. The nature of the data will also influence the database and hardware used. Naturally, unstructured textual data can be very noisy (i.e., dirty). Hence, data cleaning (or cleansing, scrubbing) is an important area in social media analytics. The process of data cleaning may involve removing typographical errors or validating and correcting values against a known list of entities. Specifically, text may contain misspelled words, quotations, program codes, extra spaces, extra line breaks, special characters, foreign words, etc. So in order to achieve high-quality text mining, it is necessary to conduct data cleaning at the first step: spell checking, removing duplicates, finding and replacing text, changing the case of text, removing spaces and non-printing characters from text, fixing numbers, number signs and outliers, fixing dates and times, transforming and rearranging columns, rows and table data, etc. Having reviewed the types and sources of raw data, we now turn to 'cleaning' or 'cleansing' the data to remove incorrect, inconsistent or missing information. Before discussing strategies for data cleaning, it is essential to identify possible data problems (Narang 2009): Missing data—when a piece of information existed but was not included for whatever reason in the raw data supplied. Problems occur with: a) numeric data when 'blank' or a missing value is erroneously substituted by 'zero' which is then taken (for example) as the current price; and b) textual data when a missing word (like 'not') may change the whole meaning of a sentence. Incorrect data—when a piece of information is incorrectly specified (such as decimal errors in numeric data or wrong word in textual data) or is incorrectly interpreted (such as a system assuming a currency value is in $ when in fact it is in £ or assuming text is in US English rather than UK English). Inconsistent data—when a piece of information is inconsistently specified. For example, with numeric data, this might be using a mixture of formats for dates: 2012/10/14, 14/10/2012 or 10/14/2012. For textual data, it might be as simple as: using the same word in a mixture of cases, mixing English and French in a text message, or placing Latin quotes in an otherwise English text. 5.1 Cleansing data A traditional approach to text data cleaning is to 'pull' data into a spreadsheet or spreadsheet-like table and then reformat the text. For example, Google Refine 3 is a standalone desktop application for data cleaning and transformation to various formats. Transformation expressions are written in proprietary Google Refine Expression Language (GREL) or JYTHON (an implementation of the Python programming language written in Java). Figure 11 illustrates text cleansing. Text Cleansing Pseudo-code 5.2 Tagging unstructured data Since most of the social media data is generated by humans and therefore is unstructured (i.e., it lacks a pre-defined structure or data model), an algorithm is required to transform it into structured data to gain any insight. Therefore, unstructured data need to be preprocessed, tagged and then parsed in order to quantify/analyze the social media data. 
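Before tagging, a simple normalization pass of the kind listed above (removing markup remnants, URLs, non-printing characters and extra whitespace) is typically applied first; the sketch below is a minimal regular-expression-based Python example and is far less capable than a dedicated tool such as Google Refine.

import re
import unicodedata

def clean_text(text):
    """Minimal cleaning: normalize unicode, strip markup remnants, URLs and extra whitespace."""
    text = unicodedata.normalize("NFKC", text)       # fold unusual unicode forms
    text = re.sub(r"<[^>]+>", " ", text)              # drop embedded HTML/XML tags
    text = re.sub(r"https?://\S+", " ", text)         # drop URLs
    text = re.sub(r"[^\x20-\x7E]", " ", text)         # drop non-printing/special characters
    text = re.sub(r"\s+", " ", text)                  # collapse spaces and line breaks
    return text.strip().lower()

print(clean_text("Great   product!!<br/> see https://example.com \u2013 LOVED it"))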
Adding extra information to the data (i.e., tagging the data) can be performed manually or via rules engines, which seek patterns or interpret the data using techniques such as data mining and text analytics. Algorithms exploit the linguistic, auditory and visual structure inherent in all of the forms of human communication. Tagging the unstructured data usually involve tagging the data with metadata or part-of-speech (POS) tagging. Clearly, the unstructured nature of social media data leads to ambiguity and irregularity when it is being processed by a machine in an automatic fashion. Using a single data set can provide some interesting insights. However, combining more data sets and processing the unstructured data can result in more valuable insights, allowing us to answer questions that were impossible beforehand. 5.3 Storing data As discussed, the nature of the social media data is highly influential on the design of the database and possibly the supporting hardware. It would also be very important to note that each social platform has very specific (and narrow) rules around how their respective data can be stored and used. These can be found in the Terms of Service for each platform. For completeness, databases comprise: Flat file—a flat file is a two-dimensional database (somewhat like a spreadsheet) containing records that have no structured interrelationship, that can be searched sequentially. Relational database—a database organized as a set of formally described tables to recognize relations between stored items of information, allowing more complex relationships among the data items. Examples are row-based SQL databases and column-based kdb + used in finance. noSQL databases—a class of database management system (DBMS) identified by its non-adherence to the widely used relational database management system (RDBMS) model. noSQL/newSQL databases are characterized as: being non-relational, distributed, open-source and horizontally scalable. 5.3.1 Apache (noSQL) databases and tools The growth of ultra-large Web sites such as Facebook and Google has led to the development of noSQL databases as a way of breaking through the speed constraints that relational databases incur. A key driver has been Google's MapReduce, i.e., the software framework that allows developers to write programs that process massive amounts of unstructured data in parallel across a distributed cluster of processors or stand-alone computers (Chandrasekar and Kowsalya 2011). It was developed at Google for indexing Web pages and replaced their original indexing algorithms and heuristics in 2004. The model is inspired by the 'Map' and 'Reduce' functions commonly used in functional programming. MapReduce (conceptually) takes as input a list of records, and the 'Map' computation splits them among the different computers in a cluster. The result of the Map computation is a list of key/value pairs. The corresponding 'Reduce' computation takes each set of values that has the same key and combines them into a single value. A MapReduce program is composed of a 'Map()' procedure for filtering and sorting and a 'Reduce()' procedure for a summary operation (e.g., counting and grouping). Figure 12 provides a canonical example application of MapReduce. This example is a process to count the appearances of each different word in a set of documents (MapReduce 2011). 
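A minimal, single-process Python sketch of that word-count example follows; it mimics the Map and Reduce phases conceptually only and does not distribute work across a cluster as Hadoop would.

from itertools import groupby
from operator import itemgetter

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce: sum the values for each key (word)."""
    for word, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield word, sum(count for _, count in group)

docs = ["the cat sat on the mat", "the dog chased the cat"]
print(dict(reduce_phase(map_phase(docs))))   # e.g., {'cat': 2, 'chased': 1, ...}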
The Canonical Example Application of MapReduce 5.3.1.1 Apache open-source software The research community is increasingly using Apache software for social media analytics. Within the Apache Software Foundation, three levels of software are relevant: Cassandra/hive databases—Apache Cassandra is an open source (noSQL) distributed DBMS providing a structured 'key-value' store. Key-value stores allow an application to store its data in a schema-less way. Related noSQL database products include: Apache Hive, Apache Pig and MongoDB, a scalable and high-performance open-source database designed to handle document-oriented storage. Since noSQL databases are 'structure-less,' it is necessary to have a companion SQL database to retain and map the structure of the corresponding data. Hadoop platform—is a Java-based programming framework that supports the processing of large data sets in a distributed computing environment. An application is broken down into numerous small parts (also called fragments or blocks) that can be run on systems with thousands of nodes involving thousands of terabytes of storage. Mahout—provides implementations of distributed or otherwise scalable analytics (machine learning) algorithms running on the Hadoop platform. Mahout4 supports four classes of algorithms: a) clustering (e.g., K-Means, Fuzzy C-Means) that groups text into related groups; b) classification (e.g., Complementary Naive Bayes classifier) that uses supervised learning to classify text; c) frequent itemset mining takes a set of item groups and identifies which individual items usually appear together; and d) recommendation mining (e.g., user- and item-based recommenders) that takes users' behavior and from that tries to find items users might like. 6 Social media analytics techniques As discussed, opinion mining (or sentiment analysis) is an attempt to take advantage of the vast amounts of user-generated text and news content online. One of the primary characteristics of such content is its textual disorder and high diversity. Here, natural language processing, computational linguistics and text analytics are deployed to identify and extract subjective information from source text. The general aim is to determine the attitude of a writer (or speaker) with respect to some topic or the overall contextual polarity of a document. 6.1 Computational science techniques Automated sentiment analysis of digital texts uses elements from machine learning such as latent semantic analysis, support vector machines, bag-of-words model and semantic orientation (Turney 2002). In simple terms, the techniques employ three broad areas: Computational statistics—refers to computationally intensive statistical methods including resampling methods, Markov chain Monte Carlo methods, local regression, kernel density estimation and principal components analysis. Machine learning—a system capable of the autonomous acquisition and integration of knowledge learnt from experience, analytical observation, etc. (Murphy 2012). These sub-symbolic systems further subdivide into: Supervised learning such as Regression Trees, Discriminant Function Analysis, Support Vector Machines. Unsupervised learning such as Self-Organizing Maps (SOM), K-Means. Machine Learning aims to solve the problem of having huge amounts of data with many variables and is commonly used in areas such as pattern recognition (speech, images), financial algorithms (credit scoring, algorithmic trading) (Nuti et al. 
2011), energy forecasting (load, price) and biology (tumor detection, drug discovery). Figure 13 illustrates the two learning types of machine learning and their algorithm categories. Machine Learning Overview Complexity science—complex simulation models of difficult-to-predict systems derived from statistical physics, information theory and nonlinear dynamics. The realm of physicists and mathematicians. These techniques are deployed in two ways: Data mining—knowledge discovery that extracts hidden patterns from huge quantities of data, using sophisticated differential equations, heuristics, statistical discriminators (e.g., hidden Markov models), and artificial intelligence machine learning techniques (e.g., neural networks, genetic algorithms and support vector machines). Simulation modeling—simulation-based analysis that tests hypotheses. Simulation is used to attempt to predict the dynamics of systems so that the validity of the underlying assumption can be tested. 6.1.1 Stream processing Lastly, we should mention stream processing (Botan et al 2010). Increasingly, analytics applications that consume real-time social media, financial 'ticker' and sensor networks data need to process high-volume temporal data with low latency. These applications require support for online analysis of rapidly changing data streams. However, traditional database management systems (DBMSs) have no pre-defined notion of time and cannot handle data online in near real time. This has led to the development of Data Stream Management Systems (DSMSs) (Hebrail 2008)—processing in main memory without storing the data on disk—that handle transient data streams on-line and process continuous queries on these data streams. Example commercial systems include: Oracle CEP engine, StreamBase and Microsoft's StreamInsight (Chandramouli et al. 2010). 6.2 Sentiment analysis Sentiment is about mining attitudes, emotions, feelings—it is subjective impressions rather than facts. Generally speaking, sentiment analysis aims to determine the attitude expressed by the text writer or speaker with respect to the topic or the overall contextual polarity of a document (Mejova 2009). Pang and Lee (2008) provide a thorough documentation on the fundamentals and approaches of sentiment classification and extraction, including sentiment polarity, degrees of positivity, subjectivity detection, opinion identification, non-factual information, term presence versus frequency, POS (parts of speech), syntax, negation, topic-oriented features and term-based features beyond term unigrams. 6.2.1 Sentiment classification Sentiment analysis divides into specific subtasks: Sentiment context—to extract opinion, one needs to know the 'context' of the text, which can vary significantly from specialist review portals/feeds to general forums where opinions can cover a spectrum of topics (Westerski 2008). Sentiment level—text analytics can be conducted at the document, sentence or attribute level. Sentiment subjectivity—deciding whether a given text expresses an opinion or is factual (i.e., without expressing a positive/negative opinion). Sentiment orientation/polarity—deciding whether an opinion in a text is positive, neutral or negative. Sentiment strength—deciding the 'strength' of an opinion in a text: weak, mild or strong. Perhaps, the most difficult analysis is identifying sentiment orientation/polarity and strength—positive (wonderful, elegant, amazing, cool), neutral (fine, ok) and negative (horrible, disgusting, poor, flakey, sucks) due to slang. 
A popular approach is to assign orientation/polarity scores (+1, 0, −1) to all words: positive opinion (+1), neutral opinion (0) and negative opinion (−1). The overall orientation/polarity score of the text is the sum of orientation scores of all 'opinion' words found. However, there are various potential problems in this simplistic approach, such as negation (e.g., there is nothing I hate about this product). One method of estimating sentiment orientation/polarity of the text is pointwise mutual information (PMI) a measure of association used in information theory and statistics. 6.2.2 Supervised learning methods There are a number of popular computational statistics and machine learning techniques used for sentiment analysis. For a good introduction, see (Khan et al 2010). Techniques include: Naïve Bayes (NB)—a simple probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions (when features are independent of one another within each class). Maximum entropy (ME)—the probability distribution that best represents the current state of knowledge is the one with largest information-theoretical entropy. Support vector machines (SVM)—are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Logistic regression (LR) model—is a type of regression analysis used for predicting the outcome of a categorical (a variable that can take on a limited number of categories) criterion variable based on one or more predictor variables. Latent semantic analysis—an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text (Kobayashi and Takeda 2000). The bag-of-words model is a simplifying representation commonly used in natural language processing and IR, where a sentence or a document is represented as an unordered collection of words, disregarding grammar and even word order. This is a model traditionally applied to sentiment analysis thanks to its simplicity. 6.2.2.1 Naïve Bayes classifier (NBC) As an example of sentiment analysis, we will describe briefly a Naive Bayes classifier (Murphy 2006). The Naive Bayes classifier is general purpose, simple to implement and works well for a range of applications. It classifies data in two steps: Training step—using the training samples, the method estimates the parameters of a probability distribution, assuming features are conditionally independent given the class. Analysis/testing step—For any unseen test sample, the method computes the posterior probability of that sample belonging to each class. The method then classifies the test sample according to the largest posterior probability. Using the Naïve Bayes classifier, the classifier calculates the probability for a text to belong to each of the categories you test against. The category with the highest probability for the given text wins: $${\text{classify}}\left( {{\text{word}}_{1} , {\text{word}}_{2} , \ldots {\text{word}}_{n} } \right) = \mathop {\arg \hbox{max} }\limits_{\text{cat}} P\left( {\text{cat}} \right)*\mathop \prod \limits_{i = 1}^{n} P({\text{word}}_{i} |{\text{cat}})$$ Figure 14 provides an example of sentiment classification using a Naïve Bayes classifier in Python. 
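In the same spirit as Fig. 14, the sketch below is a minimal Python example using NLTK's NaiveBayesClassifier with a toy bag-of-words training set; the training sentences are invented for illustration, and a real application would use a large labeled corpus.

from nltk.classify import NaiveBayesClassifier   # requires: pip install nltk

def features(sentence):
    """Bag-of-words features: each word present in the sentence maps to True."""
    return {word: True for word in sentence.lower().split()}

train = [
    (features("wonderful elegant amazing cool"), "positive"),
    (features("i loved this great product"), "positive"),
    (features("horrible disgusting poor flakey"), "negative"),
    (features("this sucks and i hate it"), "negative"),
]

classifier = NaiveBayesClassifier.train(train)
print(classifier.classify(features("an amazing and elegant experience")))   # -> 'positive'
print(classifier.classify(features("poor build quality, it sucks")))        # -> 'negative'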
There are a number of Naïve Bayes classifier programs available in Java, including the jBNC toolkit (http://jbnc.sourceforge.net), WEKA (www.cs.waikato.ac.nz/ml/weka) and Alchemy API (www.alchemyapi.com/api/demo.html). Sentiment Classification Example using Python We next look at the range of Social Media tools available, starting with 'tools' and 'toolkits,' and in the subsequent chapter at 'comprehensive' social media platforms. Since there are a large number of social media textual data services, tools and platforms, we will restrict ourselves examining a few leading examples. 7 Social media analytics tools Opinion mining tools are crowded with (commercial) providers, most of which are skewed toward sentiment analysis of customer feedback about products and services. Fortunately, there is a vast spectrum of tools for textual analysis ranging from simple open-source tools to libraries, multi-function commercial toolkits and platforms. This section focuses on individual tools and toolkits for scraping, cleaning and analytics, and the next chapter looks at what we call social media platforms that provide both archive data and real-time feeds, and as well as sophisticated analytics tools. 7.1 Scientific programming tools Popular scientific analytics libraries and tools have been enhanced to provide support for sourcing, searching and analyzing text. Examples include: R—used for statistical programming, MATLAB—used for numeric scientific programming, and Mathematica—used for symbolic scientific programming (computer algebra). Data processing and data modeling, e.g., regression analysis, are straightforward using MATLAB, which provides time-series analysis, GUI and array-based statistics. MATLAB is significantly faster than the traditional programming languages and can be used for a wide range of applications. Moreover, the exhaustive built-in plotting functions make it a complex analytics toolkit. More computationally powerful algorithms can be developed using it in conjunction with the packages (e.g., FastICA in order to perform independent component analysis). Python can be used for (natural) language detection, title and content extraction, query matching and, when used in conjunction with a module such as scikit-learn, it can be trained to perform sentiment analysis, e.g., using a Naïve Bayes classifier. Another example, Apache UIMA (Unstructured Information Management Applications) is an open-source project that analyzes 'big data' and discovers information that is relevant to the user. 7.2 Business toolkits Business Toolkits are commercial suites of tools that allow users to source, search and analyze text for a range of commercial purposes. SAS Sentiment Analysis Manager, part of the SAS Text Analytics program, can be used for scraping content sources, including mainstream Web sites and social media outlets, as well as internal organizational text sources, and creates reports that describe the expressed feelings of consumers, customers and competitors in real time. RapidMiner (Hirudkar and Sherekar 2013), a popular toolkit offering an open-source Community Edition released under the GNU AGPL and also an Enterprise Edition offered under a commercial license. RapidMiner provides data mining and machine learning procedures including: data loading and transformation (Extract, Transform, Load, a.k.a. ETL), data preprocessing and visualization, modeling, evaluation, and deployment. 
RapidMiner is written in Java and uses learning schemes and attribute evaluators from the Weka machine learning environment and statistical modeling schemes from the R project. Other examples are Lexalytics that provides a commercial sentiment analysis engine for many OEM and direct customers; and IBM SPSS Statistics is one of the most used programs for statistical analysis in social science. 7.3 Social media monitoring tools Social media monitoring tools are sentiment analysis tools for tracking and measuring what people are saying (typically) about a company or its products, or any topic across the web's social media landscape. In the area of social media monitoring examples include: Social Mention, (http://socialmention.com/), which provides social media alerts similarly to Google Alerts; Amplified Analytics (http://www.amplifiedanalytics.com/), which focuses on product reviews and marketing information; Lithium Social Media Monitoring; and Trackur, which is an online reputation monitoring tool that tracks what is being said on the Internet. Google also provides a few useful free tools. Google Trends shows how often a particular search-term input compares to the total search volume. Another tool built around Google Search is Google Alerts—a content change detection tool that provides notifications automatically. Google also acquired FeedBurner—an RSS feeds management—in 2007. 7.4 Text analysis tools Text analysis tools are broad-based tools for natural language processing and text analysis. Examples of companies in the text analysis area include: OpenAmplify and Jodange whose tools automatically filter and aggregate thoughts, feelings and statements from traditional and social media. There are also a large number of freely available tools produced by academic groups and non-governmental organizations (NGO) for sourcing, searching and analyzing opinions. Examples include Stanford NLP group tools and LingPipe, a suite of Java libraries for the linguistic analysis of human language (Teufl et al 2010). A variety of open-source text analytics tools are available, especially for sentiment analysis. A popular text analysis tool, which is also open source, is Python NLTK—Natural Language Toolkit (www.nltk.org/), which includes open-source Python modules, linguistic data and documentation for text analytics. Another one is GATE (http://gate.ac.uk/sentiment). We should also mention Lexalytics Sentiment Toolkit which performs automatic sentiment analysis on input documents. It is powerful when used on a large number of documents, but it does not perform data scraping. Other commercial software for text mining include: AeroText, Attensity, Clarabridge, IBM LanguageWare, SPSS Text Analytics for Surveys, Language Computer Corporation, STATISTICA Text Miner and WordStat. 7.5 Data visualization tools The data visualization tools provide business intelligence (BI) capabilities and allow different types of users to gain insights from the 'big' data. The users can perform exploratory analysis through interactive user interfaces available on the majority of devices, with a recent focus on mobile devices (smartphones and tablets). The data visualization tools help the users identify patterns, trends and relationships in the data which were previously latent. Fast ad hoc visualization on the data can reveal patterns and outliers, and it can be performed on large-scale data sets frameworks, such as Apache Hadoop or Amazon Kinesis. Two notable visualization tools are SAS Visual Analytics and Tableau. 
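As a small illustration of the kind of exploratory chart such visualization tools produce, the sketch below plots daily positive and negative mention counts with matplotlib; the counts are invented purely for illustration.

import matplotlib.pyplot as plt

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
positive = [120, 150, 90, 200, 170]    # invented daily counts of positive mentions
negative = [60, 45, 110, 80, 70]       # invented daily counts of negative mentions

plt.figure(figsize=(6, 3))
plt.plot(days, positive, marker="o", label="positive")
plt.plot(days, negative, marker="o", label="negative")
plt.ylabel("mentions per day")
plt.title("Sentiment of brand mentions over a week (illustrative data)")
plt.legend()
plt.tight_layout()
plt.show()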
7.6 Case study: SAS Sentiment Analysis and Social Media Analytics SAS is the leading advanced analytics software for BI, data management and predictive analytics. SAS Sentiment Analysis (SAS Institute 2013) automatically rates and classifies opinions. It also performs data scraping from Web sites, social media and internal file systems. Then, it processes in a unified format to evaluate relevance with regard to its pre-defined topics. SAS Sentiment Analysis identifies trends and emotional changes. Experts can refine the sentiment models through an interactive workbench. The tool automatically assigns sentiment scores to the input documents as they are retrieved in real time. SAS Sentiment Analysis combines statistical modeling and linguistics (rule-based natural language processing techniques) in order to output accurate sentiment analysis results. The tool monitors and evaluates sentiment changes over time; it extracts sentiments in real time as the scraped data is being retrieved and generates reports showing patterns and detailed reactions. The software identifies where (i.e., on what channel) the topic is being discussed and quantifies perceptions in the market as the software scrapes and analyzes both internal and external content about your organization (or the concept you are analyzing) and competitors, identifying positive, neutral, negative or 'no sentiment' texts in real time. SAS Sentiment Analysis and SAS Social Media Analytics have a user-friendly interface for developing models; users can upload sentiment analysis models directly to the server in order to minimize the manual model deployment. More advanced users can use the interactive workbench to refine their models. The software includes graphics to illustrate instantaneously the text classification (i.e., positive, negative, neutral or unclassified) and point-and-click exploration in order to drill the classified text into detail. The tool also provides some workbench functionality through APIs, allowing for automatic/programmatic integration with other modules/projects. Figure 15 illustrates the SAS Social Media Analytics graphical reports, which provide user-friendly sentiment insights. The SAS software has crawling plugins for the most popular social media sites, including Facebook, Twitter, Bing, LinkedIn, Flickr and Google. It can also be customized to crawl any Web site using the mark-up matcher; this provides a point-and-click interface to indicate what areas need to be extracted from an HTML or XML. SAS Social Media Analytics gathers online conversations from popular networking sites (e.g., Facebook and Twitter), blogs and review sites (e.g., TripAdvisor and Priceline), and scores the data for influence and sentiment. It provides visualization tools for real-time tracking; it allows users to submit customized queries and returns a geographical visualization with brand-specific commentary from Twitter, as illustrated in Fig. 16. Graphical Reports with Sentiment Insights SAS Visualization of Real-Time Tracking via Twitter 8 Social media analytics platforms Here, we examine comprehensive social media platforms that combine social media archives, data feeds, data mining and data analysis tools. Simply put, the platforms are different from tools and toolkits since platforms are more comprehensive and provide both tools and data. 
They broadly subdivide into: News platforms—platforms such as Thomson Reuters providing news archives/feeds and associated analytics, targeting companies such as financial institutions seeking to monitor market sentiment in news. Social network media platforms—platforms that provide data mining and analytics on Twitter, Facebook and a wide range of other social network media sources. Providers typically target companies seeking to monitor sentiment around their brands or products. 8.1 News platforms The two most prominent business news feed providers are Thomson Reuters and Bloomberg. Computers read news in real time and automatically provide key indicators and meaningful insights. The news items are automatically retrieved, analyzed and interpreted in a few milliseconds. The machine-readable news indicators can potentially improve quantitative strategies, risk management and decision making. Examples of machine-readable news include: Thomson Reuters Machine Readable News, Bloomberg's Event-Driven Trading Feed and AlphaFlash (Deutsche Börse's machine-readable news feed). Thomson Reuters Machine Readable News (Thomson Reuters 2012a, b, c) has Reuters News content dating back to 1987, and comprehensive news from over 50 third parties dating back to 2003, such as PR Newswire, Business Wire and the Regulatory News Service (LSE). The feed offers full text and comprehensive metadata via streaming XML. Thomson Reuters News Analytics uses Natural Language Processing (NLP) techniques to score news items on tens of thousands of companies and nearly 40 commodities and energy topics. Items are measured across the following dimensions: Author sentiment—metrics for how positive, negative or neutral the tone of the item is, specific to each company in the article. Relevance—how relevant or substantive the story is for a particular item. Volume analysis—how much news is happening on a particular company. Uniqueness—how new or repetitive the item is over various time periods. Headline analysis—denotes special features such as broker actions, pricing commentary, interviews, exclusives and wrap-ups.
It uses the WEKA machine learning library and provides access to data sources such as Excel, Access, Oracle, IBM, MySQL, PostgreSQL and text files. Mozenda provides a point-and-click user interface for extracting specific information from Web sites and allows automation and data export to CSV, TSV or XML files. DataSift provides access to both real-time and historical social data from the leading social networks and millions of other sources, enabling clients to aggregate, filter, gain insights and discover trends from the billions of public social conversations. Once the data is aggregated and processed (i.e., DataSift can filter and add context, such as enrichments—language processing, geodata and demographics—and categorization—spam detection, intent identification and machine learning), customers can use pre-built integrations with popular BI tools, application and developer tools to deliver the data into their businesses, or use the DataSift APIs to stream real-time data into their applications.

A growing number of social media analytics platforms are being founded. Other notable platforms that handle sentiment and semantic analysis of Web and Web 2.0-sourced material include Google Analytics, HP Autonomy IDOL (Intelligent Data Operating Layer), IBM SPSS Modeler, Adobe SocialAnalytics, GraphDive, Keen IO, Mass Relevance, Parse.ly, ViralHeat, Socialbakers, DachisGroup, evolve24, OpenAmplify and AdmantX. Recently, more specialized social analytics platforms have emerged. One of them is iSpot.tv, which launched its own social media analytics platform that matches television ads with mentions on Twitter and Facebook. It provides real-time reports about when and where an ad appears, together with what people are saying about it on social networks (iSpot.tv monitors almost 80 different networks). Thomson Reuters has recently announced that it is now incorporating Twitter sentiment analysis for the Thomson Reuters Eikon market analysis and trading platform, providing visualizations and charts based on the sentiment data. In the previous year, Bloomberg incorporated tweets related to specific companies in a wider data stream.

8.3 Case study: Thomson Reuters News Analytics

Thomson Reuters News Analytics (TRNA) provides a huge news archive with analytics to read and interpret news, offering meaningful insights. TRNA scores news items on over 25,000 equities and nearly 40 topics (commodities and energy). The platform scrapes and analyzes news data in real time and feeds the data into other programs/projects or quantitative strategies. TRNA uses an NLP system from Lexalytics, one of the linguistics technology leaders, that can track news sentiment over time and scores text across the various dimensions mentioned in Sect. 8.1. The platform's text scoring and metadata comprise more than 80 fields (Thomson Reuters 2010), such as:

Item type—stage of the story: Alert, Article, Updates or Corrections.
Item genre—classification of the story, e.g., interview, exclusive and wrap-up.
Headline—alert or headline text.
Relevance—varies from 0 to 1.0.
Prevailing sentiment—can be 1, 0 or −1.
Positive, neutral, negative—more detailed sentiment indication.
Broker action—denotes broker actions: upgrade, downgrade, maintain, undefined or whether it is the broker itself.
Price/market commentary—used to flag items describing pricing/market commentary.
Topic codes—describes what the story is about, e.g., RCH = Research, RES = Results, RESF = Results Forecast, MRG = Mergers and Acquisitions.

A snippet of the news sentiment analysis is illustrated in Fig. 17.

Fig. 17: Thomson Reuters News Discovery Application with Sentiment Analysis

In 2012, Thomson Reuters extended its machine-readable news offering to include sentiment analysis and scoring for social media. TRNA's extension is called Thomson Reuters News Analytics (TRNA) for Internet News and Social Media, which aggregates content from over four million social media channels and 50,000 Internet news sites. The content is then analyzed by TRNA in real time, generating a quantifiable output across dimensions such as sentiment, relevance, novelty, volume, category and source ranks. This extension uses the same extensive metadata tagging (across more than 80 fields). TRNA for Internet News and Social Media is a powerful platform for analyzing, tagging and filtering millions of public and premium sources of Internet content, turning big data into actionable ideas. The platform also provides a way to visually analyze the big data. It can be combined with Panopticon Data Visualization Software in order to reach meaningful conclusions more quickly with visually intuitive displays (Thomson Reuters 2012a, b, c), as illustrated in Fig. 18.

Fig. 18: Combining TRNA for Internet News and Social Media with Panopticon Data Visualization Software

Thomson Reuters also expanded the News Analytics service with MarketPsych Indices (Thomson Reuters 2012a, b, c), which allow for real-time psychological analysis of news and social media. The Thomson Reuters MarketPsych Indices (TRMI) service gains a quantitative view of market psychology as it attempts to identify human emotion and sentiment. It is a complement to TRNA and uses NLP processing created by MarketPsych (http://www.marketpsych.com), a leading company in behavioral psychology in financial markets. Behavioral economists have extensively investigated whether emotions affect markets in predictable ways, and TRMI attempts to measure the state of 'emotions' in real time in order to identify patterns as they emerge. TRMI has two key indicator types:

Emotional indicators (sentiments)—emotions such as Gloom, Fear, Trust, Uncertainty, Innovation, Anger, Stress, Urgency, Optimism and Joy.
Buzz metrics—these indicate how much something is being discussed in the news and social media and include macroeconomic themes (e.g., Litigation, Mergers, Volatility, Financials sector, Airlines sector and Clean Technology sector).

The platform from Thomson Reuters allows news and social media to be exploited to spot opportunities and capitalize on market inefficiencies (Thomson Reuters 2013).

9 Experimental computational environment for social media

As discussed in Sect. 2 (methodology and critique), researchers arguably require a comprehensive experimental computational environment/facility for social media research with the following attributes:

9.1 Data

Data scraping—the ability, through easily programmable APIs, to scrape any type of social media (social networking media, RSS feeds, blogs, wikis, news, etc.).
Data streaming—to access and combine real-time feeds and archived data for analytics.
Data storage—a major facility for storing principal data sources and for archiving data collected for specific projects.
Data protection/security—the stored data needs to be protected to stop users attempting to 'suck it out' of the facility. Access to certain data sets may need to be restricted and charges may be levied on access (cf. Wharton Research Data Services).
Programmable interfaces—researchers need access to simple application programming interfaces (APIs) to scrape and store other available data sources that may not be automatically collected.

9.2 Analytics

Analytics dashboards—non-programming interfaces are required for giving what might be referred to as 'deep' access to 'raw' data.
Programmable analytics—programming interfaces are also required so users can deploy advanced data mining and computer simulation models using MATLAB, Java and Python.
Stream processing—facilities are required to support analytics on streamed real-time data feeds, such as Twitter feeds, news feeds and financial tick data.
High-performance computing—lastly, the environment needs to support non-programming interfaces to MapReduce/Hadoop, NoSQL databases and Grids of processors.
Decentralized analytics—if researchers are to combine social media data with highly sensitive/valuable proprietary data held by governments, financial institutions, retailers and other commercial organizations, then the environment needs in the future to support decentralized analytics across distributed data sources in a highly secure way. Realistically, this is best facilitated at a national or international level.

To provide some insight into the structure of an experimental computational environment for social media (analytics), below we present the system architecture of the UCL SocialSTORM analytics platform developed by Dr. Michal Galas and his colleagues (Galas et al. 2012) at University College London (UCL). University College London's social media streaming, storage and analytics platform (SocialSTORM) is a cloud-based 'central hub' platform, which facilitates the acquisition of text-based data from online sources such as Twitter, Facebook, RSS media and news. The system includes facilities to upload and run Java-coded simulation models to analyze the aggregated data, which may comprise scraped social data and/or users' own proprietary data.

9.3 System architecture

Figure 19 shows the architecture of the SocialSTORM platform, and the following section outlines the key components of the overall system. The basic idea is that each external feed has a dedicated connectivity engine (API) and this streams data to the message bus, which handles internal communication, analytics and storage.

Fig. 19: SocialSTORM Platform Architecture

Connectivity engines—the connectivity modules communicate with the external data sources, including Twitter and Facebook's APIs, financial blogs, and various RSS and news feeds. The platform's APIs are continually being expanded to incorporate other social media sources as required. Data is fed into SocialSTORM in real time, including a random sample of all public updates from Twitter, providing gigabytes of text-based data every day.
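To make the data flow just described more concrete, the short Java sketch below shows one way a dedicated connectivity engine could publish parsed feed items onto an internal message bus that push-style subscribers listen to. This is an illustrative sketch only: the class names (Message, MessageBus, PushModel) and their methods are assumptions made for exposition and are not the actual SocialSTORM API.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal internal representation of a parsed feed item (JSON/XML parsing omitted).
final class Message {
    final String source;   // e.g. "twitter" or "rss"
    final long timestamp;  // time of publication
    final String text;     // raw text payload
    Message(String source, long timestamp, String text) {
        this.source = source; this.timestamp = timestamp; this.text = text;
    }
}

// Push-style consumers register for incoming messages and are triggered once per message.
interface PushModel {
    void onMessage(Message m);
}

// Toy message bus: connectivity engines publish, models and storage modules subscribe.
final class MessageBus {
    private final List<PushModel> subscribers = new CopyOnWriteArrayList<>();
    void subscribe(PushModel model) { subscribers.add(model); }
    void publish(Message m) {
        for (PushModel model : subscribers) {
            model.onMessage(m);
        }
    }
}

public class BusSketch {
    public static void main(String[] args) {
        MessageBus bus = new MessageBus();
        // A trivial "model" that just prints each message; a real model would score sentiment, etc.
        bus.subscribe(m -> System.out.println(m.source + " @ " + m.timestamp + ": " + m.text));
        // A connectivity engine would parse an external feed and publish each item onto the bus.
        bus.publish(new Message("rss", System.currentTimeMillis(), "example headline"));
    }
}

A pull-style model, described under the platform components below, would instead query the stored data on a schedule rather than subscribing to the bus.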
Messaging bus—the message bus serves as the internal communication layer which accepts the incoming data streams (messages) from the various connectivity engines, parses these (from either JSON or XML format) into an internal representation of data in the platform, distributes the information across all the interested modules and writes the various data to the appropriate tables of the main database.
Data warehouse—the database supports terabytes of text-based entries, which are accompanied by various types of metadata to expand the potential avenues of research. Entries are organized by source and accurately time-stamped with the time of publication, as well as being tagged with topics for easy retrieval by simulation models. The platform currently uses HBase, but in future might use Apache Cassandra or Hive.
Simulation manager—the simulation manager provides an external API for clients to interact with the data for research purposes, including a web-based GUI whereby users can select various filters to apply to the data sets before uploading a Java-coded simulation model to perform the desired analysis on the data. This facilitates all client access to the data warehouse and also allows users to upload their own data sets for aggregation with UCL's social data for a particular simulation. There is also the option to switch between historical mode (which mines data existing at the time the simulation is started) and live mode (which 'listens' to incoming data streams and performs analysis in real time).

9.4 Platform components

The platform comprises the following modules, which are illustrated in Fig. 20:

Fig. 20: Environment System Architecture and Modules

Back-end services—this provides the core of the platform functionalities. It is a set of services that allow connections to data providers, propagation, processing and aggregation of data feeds, execution and maintenance of models, as well as their management in a multiuser environment.
Front-end client APIs—this provides a set of programmatic and graphical interfaces that can be used to interact with the platform to implement and test analytical models. The programmatic access provides model templates to simplify access to some of the functionalities and defines the generic structure of every model in the platform. The graphical user interface allows visual management of analytical models. It enables the user to visualize data in various forms, provides data watch grid capabilities, provides a dynamic visualization of group behavior of data and allows users to observe information on events relevant to the user's environment.
Connectivity engine—this functionality provides a means of communication with the outside world, with financial brokers, data providers and others. Each of the outside venues utilized by the platform has a dedicated connector object responsible for control of communication. This is possible because each of the outside institutions provides either a dedicated API or uses a standard communication protocol (e.g., the FIX protocol or a JSON/XML-based protocol). The platform provides a generalized interface to allow standardization of a variety of connectors.
Internal communication layer—the idea behind the use of the internal messaging system in the platform draws from the concept of event-driven programming. Analytical platforms utilize events as a main means of communication between their elements. The elements, in turn, are either producers or consumers of the events. This approach significantly simplifies the architecture of such a system while making it scalable and flexible for further extensions.
Aggregation database—this provides fast and robust DBMS functionality for an entry-level aggregation of data, which is then filtered, enriched, restructured and stored in big data facilities. Aggregation facilities enable analytical platforms to store, extract and manipulate large amounts of data. The storage capabilities of the Aggregation element not only allow replay of historical data for modeling purposes, but also enable other, more sophisticated tasks related to the functioning of the platform, including model risk analysis, evaluation of the performance of models and many more.
Client SDK—this is a complete set of APIs (Application Programming Interfaces) that enable development, implementation and testing of new analytical models with use of the developer's favorite IDE (Integrated Development Environment). The SDK allows connection from the IDE to the server side of the platform to provide all the functionalities the user may need to develop and execute models.
Shared memory—this provides a buffer-type functionality that speeds up the delivery of temporal/historical data to models and the analytics-related elements of the platform (i.e., the statistical analysis library of methods) and, at the same time, reduces the memory usage requirement. The main idea is to have a central point in the memory (RAM) of the platform that manages and provides temporal/historical data from the current point in time back to a specified number of timestamps in history. Since the memory is shared, no model has to keep and manage history by itself. Moreover, since the memory is kept in RAM rather than in files or the DBMS, access to it is instant and bounded only by the performance of the hardware and the platform on which the buffers work.
Model templates—the platform supports two generic types of models: push and pull. The push type registers itself to listen to a specified set of data streams during initialization, and the execution of the model logic is triggered each time a new data feed arrives at the platform. This type is dedicated to very quick, low-latency, high-frequency models, and the speed is achieved at the cost of small shared memory buffers. The pull model template executes and requests data on its own, based on a schedule. Instead of using the memory buffers, it has a direct connection to the big data facilities and hence can request as much historical data as necessary, at the expense of speed.

10 Conclusions

As discussed, the easy availability of APIs provided by Twitter, Facebook and news services has led to an 'explosion' of data services and software tools for scraping and sentiment analysis, and of social media analytics platforms. This paper surveys some of the social media software tools and, for completeness, introduces social media scraping, data cleaning and sentiment analysis. Perhaps the biggest concern is that companies are increasingly restricting access to their data to monetize their content. It is important that researchers have access to computational environments and especially 'big' social media data for experimentation. Otherwise, computational social science could become the exclusive domain of major companies, government agencies and a privileged set of academic researchers presiding over private data from which they produce papers that cannot be critiqued or replicated.
Arguably what is required are public-domain computational environments and data facilities for quantitative social science, which can be accessed by researchers via a cloud-based facility.

Notes

An object may be a person, a page, a picture or an event. Details of the information retrieved in status updates can be found here: https://developers.facebook.com/docs/reference/api/status/.
More information about Google Refine is found in its documentation wiki: https://github.com/OpenRefine/OpenRefine/wiki.
Apache Mahout project page: http://mahout.apache.org/.
http://rapid-i.com/.

Acknowledgments

The authors would like to acknowledge Michal Galas who led the design and implementation of the UCL SocialSTORM platform with the assistance of Ilya Zheludev, Kacper Chwialkowski and Dan Brown. Dr. Christian Hesse of Deutsche Bank is also acknowledged for collaboration on News Analytics.

References

Botan I et al (2010) SECRET: a model for analysis of the execution semantics of stream processing systems. Proc VLDB Endow 3(1–2):232–243
Salathé M et al (2012) Digital epidemiology. PLoS Comput Biol 8(7):1–5
Bollen J, Mao H, Zeng X (2011) Twitter mood predicts the stock market. J Comput Sci 2(3):1–8
Chandramouli B et al (2010) Data stream management systems for computational finance. IEEE Comput 43(12):45–52
Chandrasekar C, Kowsalya N (2011) Implementation of MapReduce algorithm and Nutch distributed file system in Nutch. Int J Comput Appl 1:6–11
Cioffi-Revilla C (2010) Computational social science. Wiley Interdiscip Rev Comput Stat 2(3):259–271
Galas M, Brown D, Treleaven P (2012) A computational social science environment for financial/economic experiments. In: Proceedings of the Computational Social Science Society of the Americas, vol 1, pp 1–13
Hebrail G (2008) Data stream management and mining. In: Fogelman-Soulié F, Perrotta D, Piskorski J, Steinberger R (eds) Mining massive data sets for security. IOS Press, pp 89–102
Hirudkar AM, Sherekar SS (2013) Comparative analysis of data mining tools and techniques for evaluating performance of database system. Int J Comput Sci Appl 6(2):232–237
Kaplan AM (2012) If you love something, let it go mobile: mobile marketing and mobile social media 4x4. Bus Horiz 55(2):129–139
Kaplan AM, Haenlein M (2010) Users of the world, unite! The challenges and opportunities of social media. Bus Horiz 53(1):59–68
Karabulut Y (2013) Can Facebook predict stock market activity? SSRN eLibrary, pp 1–58. http://ssrn.com/abstract=2017099. Accessed 2 Feb 2014
Khan A, Baharudin B, Lee LH, Khan K (2010) A review of machine learning algorithms for text-documents classification. J Adv Inf Technol 1(1):4–20
Kobayashi M, Takeda K (2000) Information retrieval on the web. ACM Comput Surv 32(2):144–173
Lazer D et al (2009) Computational social science. Science 323:721–723
Lerman K, Gilder A, Dredze M, Pereira F (2008) Reading the markets: forecasting public opinion of political candidates by news analysis. In: Proceedings of the 22nd international conference on computational linguistics, vol 1, pp 473–480
MapReduce (2011) What is MapReduce? http://www.mapreduce.org/what-is-mapreduce.php. Accessed 31 Jan 2014
Mejova Y (2009) Sentiment analysis: an overview, pp 1–34. http://www.academia.edu/291678/Sentiment_Analysis_An_Overview. Accessed 4 Nov 2013
Murphy KP (2006) Naive Bayes classifiers. University of British Columbia, pp 1–8. http://www.ic.unicamp.br/~rocha/teaching/2011s1/mc906/aulas/naivebayes.pdf
Murphy KP (2012) Machine learning: a probabilistic perspective. Chapter 1: Introduction. MIT Press, pp 1–26
Narang RK (2009) Inside the black box. Hoboken, New Jersey
Nuti G, Mirghaemi M, Treleaven P, Yingsaeree C (2011) Algorithmic trading. IEEE Comput 44(11):61–69
Pang B, Lee L (2008) Opinion mining and sentiment analysis. Found Trends Inf Retr 2(1–2):1–135
SAS Institute Inc (2013) SAS sentiment analysis factsheet. http://www.sas.com/resources/factsheet/sas-sentiment-analysis-factsheet.pdf. Accessed 6 Dec 2013
Teufl P, Payer U, Lackner G (2010) From NLP (natural language processing) to MLP (machine language processing). In: Kotenko I, Skormin V (eds) Computer network security. Springer, Berlin Heidelberg, pp 256–269
Thomson Reuters (2010) Thomson Reuters news analytics. http://thomsonreuters.com/products/financial-risk/01_255/News_Analytics_-_Product_Brochure-_Oct_2010_1_.pdf. Accessed 1 Oct 2013
Thomson Reuters (2012a) Thomson Reuters machine readable news. http://thomsonreuters.com/products/financial-risk/01_255/TR_MRN_Overview_10Jan2012.pdf. Accessed 5 Dec 2013
Thomson Reuters (2012b) Thomson Reuters MarketPsych Indices. http://thomsonreuters.com/products/financial-risk/01_255/TRMI_flyer_2012.pdf. Accessed 7 Dec 2013
Thomson Reuters (2012c) Thomson Reuters news analytics for internet news and social media. http://thomsonreuters.com/business-unit/financial/eurozone/112408/news_analytics_and_social_media. Accessed 7 Dec 2013
Thomson Reuters (2013) Machine readable news. http://thomsonreuters.com/machine-readable-news/?subsector=thomson-reuters-elektron. Accessed 18 Dec 2013
Turney PD (2002) Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In: Proceedings of the 40th annual meeting on Association for Computational Linguistics, pp 417–424
Vaswani V (2011) Hook into Wikipedia information using PHP and the MediaWiki API. http://www.ibm.com/developerworks/web/library/x-phpwikipedia/index.html. Accessed 21 Dec 2012
Westerski A (2008) Sentiment analysis: introduction and the state of the art overview. Universidad Politecnica de Madrid, Spain, pp 1–9. http://www.adamwesterski.com/wpcontent/files/docsCursos/sentimentA_doc_TLAW.pdf. Accessed 14 Aug 2013
Wikimedia Foundation (2014) Wikipedia:Database download. http://en.wikipedia.org/wiki/Wikipedia:Database_download. Accessed 18 Apr 2014
Wolfram SMA (2010) Modelling the stock market using Twitter. MSc thesis, School of Informatics, University of Edinburgh, pp 1–74. http://homepages.inf.ed.ac.uk/miles/msc-projects/wolfram.pdf. Accessed 23 Jul 2013
Yessenov K, Misailovic S (2009) Sentiment analysis of movie review comments, pp 1–17. http://people.csail.mit.edu/kuat/courses/6.863/report.pdf. Accessed 16 Aug 2013
Get rolling on your preparation for AMC 10 with Cheenta. This post has all the AMC 10 Algebra previous year Questions, year-wise. Try out these problems: AMC 10A, 2021, Problem 1 What is the value of $\left(2^{2}-2\right)-\left(3^{2}-3\right)+\left(4^{2}-4\right)$ (E) 12 Portia's high school has 3 times as many students as Lara's high school. The two high schools have a total of 2600 students. How many students does Portia's high school have? (A) 600 (D) 2000 (E) 2050 The sum of two natural numbers is 17,402 . One of the two numbers is divisible by 10 . If the units digit of that number is erased, the other number is obtained. What is the difference of these two numbers? (A) 10,272 (B) 11,700 (C) 13,362 (D) 14,238 (E) 15,426 A cart rolls down a hill, travelling 5 inches the first second and accelerating so that during each successive 1 -second time interval, it travels inches more than during the previous 1 -second interval. The cart takes 30 seconds to reach the bottom of the hill. How far, in inches, does it travel? The quiz scores of a class with $k>12$ students have a mean of 8 . The mean of a collection of 12 of these quiz scores is 14 . What is the mean of the remaining quiz scores in terms of $k$ ? (A) $\frac{14-8}{k-12}$ (B) $\frac{8 k-168}{k-12}$ (C) $\frac{14}{12}-\frac{8}{k}$ (D) $\frac{14(k-12)}{k^{2}}$ (E) $\frac{14(k-12)}{8 k}$ Chantal and Jean start hiking from a trailhead toward a fire tower. Jean is wearing a heavy backpack and walks slower. Chantal starts walking a 4 miles per hour. Halfway to the tower, the trail becomes really steep, and Chantal slows down to 2 miles per hour. After reaching the tower, she immediately turns around and descends the steep part of the trail at 3 miles per hour. She meets Jean at the halfway point. What was Jean's average speed, in miles per hour, until they meet? (A) $\frac{12}{13}$ (B) $1$ (C) $\frac{13}{12}$ (D) $\frac{24}{13}$ (E) $2$ AMC 10B, 2021, Problem 2 What is the value of $\sqrt{(3-2 \sqrt{3})^{2}}+\sqrt{(3+2 \sqrt{3})^{2}}$ ? (B) $4 \sqrt{3}-6$ (D) $4 \sqrt{3}$ (E) $4 \sqrt{3}+6$ AMC 10B, 2021, Problem 15 The real number $x$ satisfies the equation $x+\frac{1}{x}=\sqrt{5}$. What is the value of $x^{11}-7 x^{7}+x^{3} ?$ (A) $-1$ (C) $1$ (D) $2$ (E) $\sqrt{5}$ AMC 10A, 2020, Problem 22 Hiram's algebra notes are 50 pages long and are printed on 25 sheets of paper; the first sheet contains pages 1 and 2 , the second sheet contains pages 3 and 4 , and so on. One day he leaves his notes on the table before leaving for lunch, and his roommate decides to borrow some pages from the middle of the notes. When Hiram comes back, he discovers that his roommate has taken a consecutive set of sheets from the notes and that the average (mean) of the page numbers on all remaining sheets is exactly 19 . How many sheets were borrowed? What value of $x$ satisfies $x-\frac{3}{4}=\frac{5}{12}-\frac{1}{3} ?$ (A) $-\frac{2}{3}$ (B) $\frac{7}{36}$ (C) $\frac{7}{12}$ (D) $\frac{2}{3}$ (E) $\frac{5}{6}$ The numbers $3,5,7, a$, and $b$ have an average (arithmetic mean) of $15$ . What is the average of $a$ and $b$ ? Assuming $a\neq3$, $b\neq4$, and $c\neq5$, what is the value in simplest form of the following expression? $\frac{a-3}{5-c} \cdot \frac{b-4}{3-a} \cdot \frac{c-5}{4-b}$ (C) $\frac{a b c}{60}$ (D) $\frac{1}{a b c}-\frac{1}{60}$ (E) $\frac{1}{60}-\frac{1}{a b c}$ What is the sum of all real numbers $x$ for which $|x^2-12x+34|=2?$ (A) $12$ (B) $15$ (C) $18$ (D) $21$ (E) $25$ Real numbers $x$ and $y$ satisfy $x + y = 4$ and $x \cdot y = -2$. 
What is the value of $x+\frac{x^{3}}{y^{2}}+\frac{y^{3}}{x^{2}}+y ?$ (A) $360$ (B) $400$ (C) $420$ (D) $440$ (E) $480$

What is the value of $1-(-2)-3-(-4)-5-(-6)?$ (A) $-20$ (B) $-3$

The ratio of $w$ to $x$ is $4: 3$, the ratio of $y$ to $z$ is $3: 2$, and the ratio of $z$ to $x$ is $1: 6$. What is the ratio of $w$ to $y$ ? (A) $4: 3$ (B) $3: 2$ (C) $8: 3$ (D) $4: 1$ (E) $16: 3$

Driving along a highway, Megan noticed that her odometer showed 15951 (miles). This number is a palindrome: it reads the same forward and backward. Then 2 hours later, the odometer displayed the next higher palindrome. What was her average speed, in miles per hour, during this 2-hour period?

How many positive even multiples of 3 less than 2020 are perfect squares?

How many ordered pairs of integers $(x, y)$ satisfy the equation $x^{2020}+y^{2}=2 y$? (E) infinitely many

The decimal representation of consists of a string of zeros after the decimal point, followed by a 9 and then several more digits. How many zeros are in that initial string of zeros after the decimal point?

What is the remainder when $2^{202}+202$ is divided by $2^{101}+2^{51}+1$ ? (E) 202

What is the value of $2^{\left(0^{\left(1^{9}\right)}\right)}+\left(\left(2^{0}\right)^{1}\right)^{9} ?$ (A) $0$

What is the hundreds digit of $(20!-15!) ?$

Ana and Bonita were born on the same date in different years, $n$ years apart. Last year Ana was 5 times as old as Bonita. This year Ana's age is the square of Bonita's age. What is $n$ ?

Let $p, q$, and $r$ be the distinct roots of the polynomial $x^{3}-22 x^{2}+80 x-67$. It is given that there exist real numbers $A, B$, and $C$ such that $\frac{1}{s^{3}-22 s^{2}+80 s-67}=\frac{A}{s-p}+\frac{B}{s-q}+\frac{C}{s-r}$ for all $s \notin\{p, q, r\}$. What is $\frac{1}{A}+\frac{1}{B}+\frac{1}{C}$ ?

Alicia had two containers. The first was $\frac{5}{6}$ full of water and the second was empty. She poured all the water from the first container into the second container, at which point the second container was $\frac{3}{4}$ full of water. What is the ratio of the volume of the first container to the volume of the second container? (A) $\frac{5}{8}$ (B) $\frac{4}{5}$ (C) $\frac{7}{8}$ (D) $\frac{9}{10}$ (E) $\frac{11}{12}$

Consider the statement, "If $n$ is not prime, then $n-2$ is prime." Which of the following values of $n$ is a counterexample to this statement?

In a high school with 500 students, $40 \%$ of the seniors play a musical instrument, while $30 \%$ of the non-seniors do not play a musical instrument. In all, $46.8 \%$ of the students do not play a musical instrument. How many non-seniors play a musical instrument?

All lines with equation $a x+b y=c$ such that $a, b, c$ form an arithmetic progression pass through a common point. What are the coordinates of that point? (A) $(-1,2)$ (B) $(0,1)$ (C) $(1,-2)$ (D) $(1,0)$ (E) $(1,2)$

Two jars each contain the same number of marbles, and every marble is either blue or green. In Jar 1 the ratio of blue to green marbles is $9: 1$, and the ratio of blue to green marbles in Jar 2 is $8: 1$. There are 95 green marbles in all. How many more blue marbles are in Jar 1 than in Jar $2 ?$

What is the sum of all real numbers $x$ for which the median of the numbers $4,6,8,17$, and $x$ is equal to the mean of those five numbers? (D) $\frac{15}{4}$ (E) $\frac{35}{4}$

Henry decides one morning to do a workout, and he walks $\frac{3}{4}$ of the way from his home to his gym. The gym is 2 kilometers away from Henry's home. At that point, he changes his mind and walks $\frac{3}{4}$ of the way from where he is back toward home.
When he reaches that point, he changes his mind again and walks $\frac{3}{4}$ of the distance from there back toward the gym. If Henry keeps changing his mind when he has walked $\frac{3}{4}$ of the distance toward either the gym or home from the point where he last changed his mind, he will get very close to walking back and forth between a point $A$ kilometers from home and a point $B$ kilometers from home. What is $|A-B|$ ?

What is the value of $\left(\left((2+1)^{-1}+1\right)^{-1}+1\right)^{-1}+1 ?$ (B) $\frac{11}{7}$

Liliane has $50 \%$ more soda than Jacqueline, and Alice has $25 \%$ more soda than Jacqueline. What is the relationship between the amounts of soda that Liliane and Alice have? (A) Liliane has $20 \%$ more soda than Alice. (B) Liliane has $25 \%$ more soda than Alice. (C) Liliane has $45 \%$ more soda than Alice. (D) Liliane has $75 \%$ more soda than Alice. (E) Liliane has $100 \%$ more soda than Alice.

A unit of blood expires after $10 !=10 \cdot 9 \cdot 8 \cdots 1$ seconds. Yasin donates a unit of blood at noon of January 1. On what day does his unit of blood expire? (A) January 2 (B) January 12 (C) January 22 (D) February 11 (E) February 12

Sangho uploaded a video to a website where viewers can vote that they like or dislike a video. Each video begins with a score of 0, and the score increases by 1 for each like vote and decreases by 1 for each dislike vote. At one point Sangho saw that his video had a score of 90, and that $65 \%$ of the votes cast on his video were like votes. How many votes had been cast on Sangho's video at that point?

Joe has a collection of 23 coins, consisting of 5-cent coins, 10-cent coins, and 25-cent coins. He has 3 more 10-cent coins than 5-cent coins and the total value of his collection is 320 cents. How many more 25-cent coins does Joe have than 5-cent coins? (E) 4

Suppose that real number $x$ satisfies $\sqrt{49-x^{2}}-\sqrt{25-x^{2}}=3$. What is the value of $\sqrt{49-x^{2}}+\sqrt{25-x^{2}}$ ? (B) $\sqrt{33}+8$ (D) $2 \sqrt{10}+4$

Kate bakes a 20-inch by 18-inch pan of cornbread. The cornbread is cut into pieces that measure 2 inches by 2 inches. How many pieces of cornbread does the pan contain?

Sam drove 96 miles in 90 minutes. His average speed during the first 30 minutes was $60 \mathrm{mph}$ (miles per hour), and his average speed during the second 30 minutes was $65 \mathrm{mph}$. What was his average speed, in mph, during the last 30 minutes?

A three-dimensional rectangular box with dimensions $X, Y$, and $Z$ has faces whose surface areas are $24,24,48,48,72$, and 72 square units. What is $X+Y+Z ?$

Joey and Chloe and their daughter Zoe all have the same birthday. Joey is 1 year older than Chloe, and Zoe is exactly 1 year old today. Today is the first of the 9 birthdays on which Chloe's age will be an integral multiple of Zoe's age. What will be the sum of the two digits of Joey's age the next time his age is a multiple of Zoe's age?

What is the value of $(2(2(2(2(2(2+1)+1)+1)+1)+1)+1)$ ?

Pablo buys popsicles for his friends. The store sells single popsicles for $\$ 1$ each, 3-popsicle boxes for $\$ 2$ each, and 5-popsicle boxes for $\$ 3$. What is the greatest number of popsicles that Pablo can buy with $\$ 8$ ?

Mia is "helping" her mom pick up 30 toys that are strewn on the floor. Mia's mom manages to put 3 toys into the toy box every 30 seconds, but each time immediately after those 30 seconds have elapsed, Mia takes 2 toys out of the box. How much time, in minutes, will it take Mia and her mom to put all 30 toys into the box for the first time?
(A) $13.5$ (C) $14.5$ (E) $15.5$ The sum of two nonzero real numbers is 4 times their product. What is the sum of the reciprocals of the two numbers? Minnie rides on a flat road at 20 kilometers per hour (kph), downhill at $30 \mathrm{kph}$, and uphill at $5 \mathrm{kph}$. Penny rides on a flat road at $30 \mathrm{kph}$, downhill at $40 \mathrm{kph}$, and uphill at $10 \mathrm{kph}$. Minnie goes from town $A$ to town $B$, a distance of $10 \mathrm{~km}$ all uphill, then from town $B$ to town $C$, a distance of 15 $\mathrm{km}$ all downhill, and then back to town $A$, a distance of $20 \mathrm{~km}$ on the flat. Penny goes the other way around using the same route. How many more minutes does it take Minnie to complete the 45 -km ride than it takes Penny? Joy has 30 thin rods, one each of every integer length from $1 \mathrm{~cm}$ through $30 \mathrm{~cm}$. She places the rods with lengths $3 \mathrm{~cm}, 7 \mathrm{~cm}$, and $15 \mathrm{~cm}$ on a table. She then wants to choose a fourth rod that she can put with these three to form a quadrilateral with positive area. How many of the remaining rods can she choose as the fourth rod? Every week Roger pays for a movie ticket and a soda out of his allowance. Last week, Roger's allowance was $A$ dollars. The cost of his movie ticket was $20 \%$ of the difference between $A$ and the cost of his soda, while the cost of his soda was $5 \%$ of the difference between $A$ and the cost of his movie ticket. To the nearest whole percent, what fraction of $A$ did Roger pay for his movie ticket and soda? (A) $9 \%$ (B) $19 \%$ (C) $22 \%$ (D) $23 \%$ (E) $25 \%$ There are 10 horses, named Horse 1, Horse $2, \ldots$, Horse 10. They get their names from how many minutes it takes them to run one lap around a circular race track: Horse $k$ runs one lap in exactly $k$ minutes. At time 0 all the horses are together at the starting point on the track. The horses start running in the same direction, and they keep running around the circular track at their constant speeds. The least time $S>0$, in minutes, at which all 10 horses will again simultaneously be at the starting point is $S=2520$. Let $T>0$ be the least time, in minutes, such that at least 5 of the horses are again at the starting point. What is the sum of the digits of $T$ ? Mary thought of a positive two-digit number. She multiplied it by 3 and added 11 . Then she switched the digits of the result, obtaining a number between 71 and 75 , inclusive. What was Mary's number? Sofia ran 5 laps around the 400-meter track at her school. For each lap, she ran the first 100 meters at an average speed of 4 meters per second and the remaining 300 meters at an average speed of 5 meters per second. How much time did Sofia take running the 5 laps? (A) 5 minutes and 35 seconds (B) 6 minutes and 40 seconds (C) 7 minutes and 5 seconds (D) 7 minutes and 25 seconds (E) 8 minutes and 10 seconds Real numbers $x, y$, and $z$ satisfy the inequalities $0<x<1,-1<y<0$, and $1<z<2$. Which of the following numbers is necessarily positive? (A) $y+x^{2}$ (B) $y+x z$ (C) $y+y^{2}$ (D) $y+2 y^{2}$ (E) $y+z$ Supposed that $x$ and $y$ are nonzero real numbers such that $\frac{3 x+y}{x-3 y}=-2$. What is the value of $\frac{x+3 y}{3 x-y}$ ? Camilla had twice as many blueberry jelly beans as cherry jelly beans. After eating 10 pieces of each kind, she now has three times as many blueberry jelly beans as cherry jelly beans. How many blueberry jelly beans did she originally have? 
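A worked sketch for the last problem above (added for illustration; not part of the original post): if Camilla originally had $c$ cherry jelly beans, she had $2c$ blueberry ones, and after eating 10 of each, $2c-10=3(c-10)$, so $c=20$ and she originally had $2c=40$ blueberry jelly beans.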
Samia set off on her bicycle to visit her friend, traveling at an average speed of 17 kilometers per hour. When she had gone half the distance to her friend's house, a tire went flat, and she walked the rest of the way at 5 kilometers per hour. In all it took her 44 minutes to reach her friend's house. In kilometers rounded to the nearest tenth, how far did Samia walk? (A) $2.0$ (B) $2.2$ (C) $2.8$ (D) $3.4$ (E) $4.4$

The lines with equations $a x-2 y=c$ and $2 x+b y=-c$ are perpendicular and intersect at $(1,-5)$. What is $c ?$ (A) $-13$

At Typico High School, $60 \%$ of the students like dancing, and the rest dislike it. Of those who like dancing, $80 \%$ say that they like it, and the rest say that they dislike it. Of those who dislike dancing, $90 \%$ say that they dislike it, and the rest say that they like it. What fraction of students who say they dislike dancing actually like it? (A) $10 \%$ (E) $33 \frac{1}{3} \%$

Elmer's new car gives $50 \%$ better fuel efficiency, measured in kilometers per liter, than his old car. However, his new car uses diesel fuel, which is $20 \%$ more expensive per liter than the gasoline his old car used. By what percent will Elmer save money if he uses his new car instead of his old car for a long trip? (B) $26 \frac{2}{3} \%$ (C) $27 \frac{7}{9} \%$ (D) $33 \frac{1}{3} \%$

There are $20$ students participating in an after-school program offering classes in yoga, bridge, and painting. Each student must take at least one of these three classes, but may take two or all three. There are $10$ students taking yoga, $13$ taking bridge, and $9$ taking painting. There are $9$ students taking at least two classes. How many students are taking all three classes?

What is the value of $\frac{11 !-10 !}{9 !}$ ?

For what value of $x$ does $10^{x} \cdot 100^{2 x}=1000^{5}$ ?

For every dollar Ben spent on bagels, David spent 25 cents less. Ben paid $\$ 12.50$ more than David. How much did they spend in the bagel store together? (A) $\$ 37.50$ (B) $\$ 50.00$ (C) $\$ 87.50$ (D) $\$ 90.00$ (E) $\$ 92.50$

A rectangular box has integer side lengths in the ratio $1: 3: 4$. Which of the following could be the volume of the box?

Ximena lists the whole numbers $1$ through $30$ once. Emilio copies Ximena's numbers, replacing each occurrence of the digit $2$ by the digit $1$. Ximena adds her numbers and Emilio adds his numbers. How much larger is Ximena's sum than Emilio's?

The mean, median, and mode of the $7$ data values $60,100, x, 40,50,200,90$ are all equal to $x$. What is the value of $x$ ?

Trickster Rabbit agrees with Foolish Fox to double Fox's money every time Fox crosses the bridge by Rabbit's house, as long as Fox pays $40$ coins in toll to Rabbit after each crossing. The payment is made after the doubling. Fox is excited about his good fortune until he discovers that all his money is gone after crossing the bridge three times. How many coins did Fox have at the beginning?

What is the value of $\frac{2 a^{-1}+\frac{a^{-1}}{2}}{a}$ when $a=\frac{1}{2}$ ?

Let $x=-2016$. What is the value of $\left|\left||x|-x\right|-|x|\right|-x$ ? (A) $-2016$

The mean age of Amanda's $4$ cousins is $8$, and their median age is $5$. What is the sum of the ages of Amanda's youngest and oldest cousins?

Laura added two three-digit positive integers. All six digits in these numbers are different. Laura's sum is a three-digit number $S$.
What is the smallest possible value for the sum of the digits of $S ?$

The ratio of the measures of two acute angles is $5: 4$, and the complement of one of these two angles is twice as large as the complement of the other. What is the sum of the degree measures of the two angles?

What is the tens digit of $2015^{2016}-2017 ?$

A thin piece of wood of uniform density in the shape of an equilateral triangle with side length 3 inches weighs $12$ ounces. A second piece of the same type of wood, with the same thickness, also in the shape of an equilateral triangle, has side length of $5$ inches. Which of the following is closest to the weight, in ounces, of the second piece? (B) $16.0$ (D) $33.3$

At Megapolis Hospital one year, multiple-birth statistics were as follows: sets of twins, triplets, and quadruplets accounted for 1000 of the babies born. There were four times as many sets of triplets as sets of quadruplets, and there were three times as many sets of twins as sets of triplets. How many of these 1000 babies were in sets of quadruplets?

The sum of an infinite geometric series is a positive number $S$, and the second term in the series is 1. What is the smallest possible value of $S$ ? (A) $\frac{1+\sqrt{5}}{2}$ (C) $\sqrt{5}$

What is the value of $\left(2^{0}-1+5^{2}-0\right)^{-1} \times 5 ?$ (A) $-125$ (B) $-120$

A box contains a collection of triangular and square tiles. There are $25$ tiles in the box, containing $84$ edges total. How many square tiles are there in the box?

Pablo, Sofia, and Mia got some candy eggs at a party. Pablo had three times as many eggs as Sofia, and Sofia had twice as many eggs as Mia. Pablo decides to give some of his eggs to Sofia and Mia so that all three will have the same number of eggs. What fraction of his eggs should Pablo give to Sofia? (A) $\frac{1}{12}$

Mr. Patrick teaches math to 15 students. He was grading tests and found that when he graded everyone's test except Payton's, the average grade for the class was $80$. After he graded Payton's test, the test average became $81$. What was Payton's score on the test?

The sum of two positive numbers is 5 times their difference. What is the ratio of the larger number to the smaller number?

How many terms are in the arithmetic sequence $13,16,19, \ldots, 70,73$ ?

Two years ago Pete was three times as old as his cousin Claire. Two years before that, Pete was four times as old as Claire. In how many years will the ratio of their ages be $2: 1$ ?

The ratio of the length to the width of a rectangle is $4: 3$. If the rectangle has a diagonal of length $d$, then the area may be expressed as $k d^{2}$ for some constant $k$. What is $k$ ?

Points $(\sqrt{\pi}, a)$ and $(\sqrt{\pi}, b)$ are distinct points on the graph of $y^{2}+x^{4}=2 x^{2} y+1$. What is $|a-b|$ ? (B) $\frac{\pi}{2}$ (D) $\sqrt{1+\pi}$ (E) $1+\sqrt{\pi}$

Consider the set of all fractions $\frac{x}{y}$, where $x$ and $y$ are relatively prime positive integers. How many of these fractions have the property that if both numerator and denominator are increased by $1$, the value of the fraction is increased by $10 \%$ ?

If $y+4=(x-2)^{2}$, $x+4=(y-2)^{2}$, and $x \neq y$, what is the value of $x^{2}+y^{2}$ ?

What is the value of $2-(-2)^{-2}$ ?

Marie does three equally time-consuming tasks in a row without taking breaks. She begins the first task at $1: 00 \mathrm{PM}$ and finishes the second task at $2: 40 \mathrm{PM}$. When does she finish the third task?
(A) $3:10$ PM (B) $3:30$ PM (C) $4:00$ PM (D) $4:10$ PM (E) $4:30$ PM

Kaashish has written down one integer two times and another integer three times. The sum of the five numbers is $100$, and one of the numbers is $28$. What is the other number?

Consider the operation "minus the reciprocal of," defined by $a \diamond b=a-\frac{1}{b}$. What is $((1 \diamond 2) \diamond 3)-(1 \diamond(2 \diamond 3))$ ? (A) $-\frac{7}{30}$ (B) $-\frac{1}{6}$ (E) $\frac{7}{30}$

The line $12 x+5 y=60$ forms a triangle with the coordinate axes. What is the sum of the lengths of the altitudes of this triangle? (B) $\frac{360}{17}$ (C) $\frac{107}{5}$ (E) $\frac{281}{13}$

Let $a, b$, and $c$ be three distinct one-digit numbers. What is the maximum value of the sum of the roots of the equation $(x-a)(x-b)+(x-b)(x-c)=0 ?$

What is $10 \cdot\left(\frac{1}{2}+\frac{1}{5}+\frac{1}{10}\right)^{-1}$ ? (C) $\frac{25}{2}$ (D) $\frac{170}{3}$

Bridget bakes 48 loaves of bread for her bakery. She sells half of them in the morning for $\$ 2.50$ each. In the afternoon she sells two thirds of what she has left, and because they are not fresh, she charges only half price. In the late afternoon she sells the remaining loaves at a dollar each. Each loaf costs $\$ 0.75$ for her to make. In dollars, what is her profit for the day?

On an algebra quiz, $10 \%$ of the students scored 70 points, $35 \%$ scored 80 points, $30 \%$ scored 90 points, and the rest scored 100 points. What is the difference between the mean and median score of the students' scores on this quiz?

Nonzero real numbers $x, y, a$, and $b$ satisfy $x<a$ and $y<b$. How many of the following inequalities must be true? (I) $x+y<a+b$ (II) $x-y<a-b$ (III) $x y<a b$ (IV) $\frac{x}{y}<\frac{a}{b}$

Which of the following numbers is a perfect square? (A) $\frac{14 ! 15 !}{2}$ (B) $\frac{15 ! 16 !}{2}$ (C) $\frac{16 ! 17 !}{2}$ (D) $\frac{17 ! 18 !}{2}$ (E) $\frac{18 ! 19 !}{2}$

Five positive consecutive integers starting with $a$ have average $b$. What is the average of 5 consecutive integers that start with $b$ ? (A) $a+3$ (B) $a+4$ (C) $a+5$ (D) $a+6$ (E) $a+7$

A customer who intends to purchase an appliance has three coupons, only one of which may be used:
Coupon 1: $10 \%$ off the listed price if the listed price is at least $\$ 50$.
Coupon 2: $\$ 20$ off the listed price if the listed price is at least $\$ 100$.
Coupon 3: $18 \%$ off the amount by which the listed price exceeds $\$ 100$.
For which of the following listed prices will coupon 1 offer a greater price reduction than either coupon $2$ or coupon $3$ ? (A) $\$ 179.95$ (B) $\$ 199.95$ (C) $\$ 219.95$ (D) $\$ 239.95$ (E) $\$ 259.95$

David drives from his home to the airport to catch a flight. He drives 35 miles in the first hour, but realizes that he will be 1 hour late if he continues at this speed. He increases his speed by 15 miles per hour for the rest of the way to the airport and arrives 30 minutes early. How many miles is the airport from his home?

Leah has $13$ coins, all of which are pennies and nickels. If she had one more nickel than she has now, then she would have the same number of pennies and nickels. In cents, how much are Leah's coins worth?

What is $\frac{2^{3}+2^{3}}{2^{-3}+2^{-3}} ?$

Randy drove the first third of his trip on a gravel road, the next 20 miles on pavement, and the remaining one-fifth on a dirt road. In miles, how long was Randy's trip? (E) $\frac{300}{7}$

Susie pays for 4 muffins and 3 bananas. Calvin spends twice as much paying for 2 muffins and 16 bananas.
A muffin is how many times as expensive as a banana?

Orvin went to the store with just enough money to buy 30 balloons. When he arrived, he discovered that the store had a special sale on balloons: buy 1 balloon at the regular price and get a second at $\frac{1}{3}$ off the regular price. What is the greatest number of balloons Orvin could buy?

Suppose $A>B>0$ and $A$ is $x \%$ greater than $B$. What is $x$ ? (A) $100\left(\frac{A-B}{B}\right)$ (B) $100\left(\frac{A+B}{B}\right)$ (C) $100\left(\frac{A+B}{A}\right)$ (D) $100\left(\frac{A-B}{A}\right)$ (E) $100\left(\frac{A}{B}\right)$

A truck travels $\frac{b}{6}$ feet every $t$ seconds. There are 3 feet in a yard. How many yards does the truck travel in 3 minutes? (A) $\frac{b}{1080 t}$ (B) $\frac{30 t}{b}$ (C) $\frac{30 b}{t}$ (D) $\frac{10 t}{b}$ (E) $\frac{10 b}{t}$

For real numbers $w$ and $z$, $\frac{\frac{1}{w}+\frac{1}{z}}{\frac{1}{w}-\frac{1}{z}}=2014$. What is $\frac{w+z}{w-z} ?$ (B) $\frac{-1}{2014}$ (C) $\frac{1}{2014}$ (E) $2014$

A taxi ride costs $\$ 1.50$ plus $\$ 0.25$ per mile traveled. How much does a 5-mile taxi ride cost? (A) $2.25$ (B) $2.50$ (C) $2.75$ (D) $3.00$ (E) $3.75$

Alice is making a batch of cookies and needs $2 \frac{1}{2}$ cups of sugar. Unfortunately, her measuring cup holds only $\frac{1}{4}$ cup of sugar. How many times must she fill that cup to get the correct amount of sugar?

Tom, Dorothy, and Sammy went on a vacation and agreed to split the costs evenly. During their trip Tom paid $\$ 105$, Dorothy paid $\$ 125$, and Sammy paid $\$ 175$. In order to share costs equally, Tom gave Sammy $t$ dollars, and Dorothy gave Sammy $d$ dollars. What is $t-d$ ?

Joey and his five brothers are ages $3,5,7,9,11$, and 13. One afternoon two of his brothers whose ages sum to 16 went to the movies, two brothers younger than 10 went to play baseball, and Joey and the 5-year-old stayed home. How old is Joey?

What is the value of $\frac{2^{2014}+2^{2012}}{2^{2014}-2^{2012}} ?$ (D) $2013$ (E) $2^{4024}$

In a recent basketball game, Shenille attempted only three-point shots and two-point shots. She was successful on $20 \%$ of her three-point shots and $30 \%$ of her two-point shots. Shenille attempted 30 shots. How many points did she score?

A flower bouquet contains pink roses, red roses, pink carnations, and red carnations. One third of the pink flowers are roses, three fourths of the red flowers are carnations, and six tenths of the flowers are pink. What percent of the flowers are carnations?

A solid cube of side length $1$ is removed from each corner of a solid cube of side length $3$. How many edges does the remaining solid have?

What is $\frac{2+4+6}{1+3+5}-\frac{1+3+5}{2+4+6} ?$ (B) $\frac{49}{20}$ (E) $-1$

Mr. Green measures his rectangular garden by walking two of the sides and finds that it is 15 steps by 20 steps. Each of Mr. Green's steps is 2 feet long. Mr. Green expects a half a pound of potatoes per square foot from his garden. How many pounds of potatoes does Mr. Green expect from his garden? (C) $1000$

A basketball team's players were successful on $50 \%$ of their two-point shots and $40 \%$ of their three-point shots, which resulted in 54 points. They attempted $50 \%$ more two-point shots than three-point shots. How many three-point shots did they attempt?

A wire is cut into two pieces, one of length $a$ and the other of length $b$. The piece of length $a$ is bent to form an equilateral triangle, and the piece of length $b$ is bent to form a regular hexagon. The triangle and the hexagon have equal area. What is $\frac{a}{b}$ ?
(C) $3\sqrt{2}$ Cagney can frost a cupcake every 20 seconds and Lacey can frost a cupcake every 30 seconds. Working together, how many cupcakes can they frost in 5 minutes? A bug crawls along a number line, starting at $-2$. It crawls to $-6$, then turns around and crawls to 5 . How many units does the bug crawl altogether? Last year $100$ adult cats, half of whom were female, were brought into the Smallville Animal Shelter. Half of the adult female cats were accompanied by a litter of kittens. The average number of kittens per litter was $4$ . What was the total number of cats and kittens received by the shelter last year? The product of two positive numbers is $9$ . The reciprocal of one of these numbers is $4$ times the reciprocal of the other number. What is the sum of the two numbers? (A) $\frac{10}{3}$ In a bag of marbles, $\frac{3}{5}$ of the marbles are blue and the rest are red. If the number of red marbles is doubled and the number of blue marbles stays the same, what fraction of the marbles will be red? The sums of three whole numbers taken in pairs are $12,17$, and $19 .$ What is the middle number? $(\mathbf{E}) 8$ Mary divides a circle into $12$ sectors. The central angles of these sectors, measured in degrees, are all integers and they form an arithmetic sequence. What is the degree measure of the smallest possible sector angle? An iterative average of the numbers $1,2,3,4$, and $5$ is computed the following way. Arrange the five numbers in some order. Find the mean of the first two numbers, then find the mean of that with the third number, then the mean of that with the fourth number, and finally the mean of that with the fifth number. What is the difference between the largest and smallest possible values that can be obtained using this procedure? Chubby makes nonstandard checkerboards that have $31$ squares on each side. The checkerboards have a black square in every corner and alternate red and black squares along every row and column. How many black squares are there on such a checkerboard? Three runners start running simultaneously from the same point on a 500-meter circular track. They each run clockwise around the course maintaining constant speeds of 4.4, 4.8, and $5.0$ meters per second. The runners stop once they are all together again somewhere on the circular course. How many seconds do the runners run? (A) $1,000$ (B) $1,250$ (C) $2,500$ (D) $5,000$ (E) $10,000$ Let $a$ and $b$ be relatively prime positive integers with $a>b>0$ and $\frac{a^{3}-b^{3}}{(a-b)^{3}}=\frac{73}{3}$. What is $a-b$ ? Paula the painter and her two helpers each paint at constant, but different, rates. They always start at $8: 00 \mathrm{AM}$, and all three always take the same amount of time to eat lunch. On Monday the three of them painted $50 \%$ of a house, quitting at $4: 00$ PM. On Tuesday, when Paula wasn't there, the two helpers painted only $24 \%$ of the house and quit at 2:12 PM. On Wednesday Paula worked by herself and finished the house by working until 7:12 P.M. How long, in minutes, was each day's lunch break? Each third-grade classroom at Pearl Creek Elementary has 18 students and 2 pet rabbits. How many more students than rabbits are there in all of the third-grade classrooms? When Ringo places his marbles into bags with 6 marbles per bag, he has 4 marbles left over. When Paul does the same with his marbles, he has 3 marbles left over. Ringo and Paul pool their marbles and place them into as many bags as possible, with 6 marbles per bag. How many marbles will be leftover? 
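A worked sketch for the last problem above (added for illustration; not part of the original post): Ringo's marble count is $\equiv 4 \pmod{6}$ and Paul's is $\equiv 3 \pmod{6}$, so the pooled total is $\equiv 4+3 \equiv 1 \pmod{6}$, leaving $1$ marble over.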
Anna enjoys dinner at a restaurant in Washington, D.C., where the sales tax on meals is $10 \%$. She leaves a $15 \%$ tip on the price of her meal before the sales tax is added, and the tax is calculated on the pre-tip amount. She spends a total of $27.50$ dollars for dinner. What is the cost of her dinner without tax or tip in dollars?

For a science project, Sammy observed a chipmunk and a squirrel stashing acorns in holes. The chipmunk hid $3$ acorns in each of the holes it dug. The squirrel hid $4$ acorns in each of the holes it dug. They each hid the same number of acorns, although the squirrel needed $4$ fewer holes. How many acorns did the chipmunk hide?

Two integers have a sum of 26. When two more integers are added to the first two integers the sum is 41. Finally when two more integers are added to the sum of the previous four integers the sum is $57$. What is the minimum number of odd integers among the 6 integers?

How many ordered pairs of positive integers $(M, N)$ satisfy the equation $\frac{M}{6}=\frac{6}{N}$ ?

It takes Clea 60 seconds to walk down an escalator when it is not operating, and only 24 seconds to walk down the escalator when it is operating. How many seconds does it take Clea to ride down the operating escalator when she just stands on it?

Jesse cuts a circular paper disk of radius 12 along two radii to form two sectors, the smaller having a central angle of 120 degrees. He makes two circular cones, using each sector to form the lateral surface of a cone. What is the ratio of the volume of the smaller cone to that of the larger? (C) $\frac{\sqrt{10}}{10}$ (D) $\frac{\sqrt{5}}{6}$ (E) $\frac{\sqrt{5}}{5}$

A cell phone plan costs $20$ dollars each month, plus $5$ cents per text message sent, plus $10$ cents for each minute used over $30$ hours. In January Michelle sent 100 text messages and talked for $30.5$ hours. How much did she have to pay? (A) $24.00$ (B) $24.50$ (C) $25.50$ (D) $28.00$ (E) $30.00$

A small bottle of shampoo can hold 35 milliliters of shampoo, whereas a large bottle can hold 500 milliliters of shampoo. Jasmine wants to buy the minimum number of small bottles necessary to completely fill a large bottle. How many bottles must she buy?

Suppose $[a\ b]$ denotes the average of $a$ and $b$, and $\{a\ b\ c\}$ denotes the average of $a, b$, and $c$. What is $\{\{1\ 1\ 0\}\ [0\ 1]\ 0\}$ ?

Let $X$ and $Y$ be the following sums of arithmetic sequences: $X=10+12+14+\cdots+100$ and $Y=12+14+16+\cdots+102$. What is the value of $Y-X ?$

Set $A$ has 20 elements, and set $B$ has 15 elements. What is the smallest possible number of elements in $A \cup B$ ?

Which of the following equations does NOT have a solution? (A) $(x+7)^{2}=0$ (B) $|-3 x|+5=0$ (C) $\sqrt{-x}-2=0$ (D) $\sqrt{x}-8=0$ (E) $|-3 x|-4=0$

Last summer $30 \%$ of the birds living on Town Lake were geese, $25 \%$ were swans, $10 \%$ were herons, and $35 \%$ were ducks. What percent of the birds that were not swans were geese?

A majority of the 30 students in Ms. Demeanor's class bought pencils at the school bookstore. Each of these students bought the same number of pencils, and this number was greater than 1. The cost of a pencil in cents was greater than the number of pencils each student bought, and the total cost of all the pencils was $\$ 17.71$. What was the cost of a pencil in cents?

Square $E F G H$ has one vertex on each side of square $A B C D$. Point $E$ is on $A B$ with $A E=7 \cdot E B$.
What is the ratio of the area of $E F G H$ to the area of $A B C D ?$ (D) $\frac{5 \sqrt{2}}{8}$ (E) $\frac{\sqrt{14}}{4}$

The players on a basketball team made some three-point shots, some two-point shots, and some one-point free throws. They scored as many points with two-point shots as with three-point shots. Their number of successful free throws was one more than their number of successful two-point shots. The team's total score was 61 points. How many free throws did they make?

Roy bought a new battery-gasoline hybrid car. On a trip the car ran exclusively on its battery for the first 40 miles, then ran exclusively on gasoline for the rest of the trip, using gasoline at a rate of $0.02$ gallons per mile. On the whole trip he averaged 55 miles per gallon. How long was the trip in miles?

Which of the following is equal to $\sqrt{9-6 \sqrt{2}}+\sqrt{9+6 \sqrt{2}}$ ? (A) $3 \sqrt{2}$ (B) $2 \sqrt{6}$ (C) $\frac{7 \sqrt{2}}{2}$

What is $\frac{2+4+6}{1+3+5}-\frac{1+3+5}{2+4+6} ?$ (D) $\frac{147}{60}$

Josanna's test scores to date are $90,80,70,60$, and 85. Her goal is to raise her test average at least 3 points with her next test. What is the minimum test score she would need to accomplish this goal?

LeRoy and Bernardo went on a week-long trip together and agreed to share the costs equally. Over the week, each of them paid for various joint expenses such as gasoline and car rental. At the end of the trip it turned out that LeRoy had paid $A$ dollars and Bernardo had paid $B$ dollars, where $A<B$. How many dollars must LeRoy give to Bernardo so that they share the costs equally? (A) $\frac{A+B}{2}$ (B) $\frac{A-B}{2}$ (C) $\frac{B-A}{2}$ (D) $B-A$ (E) $A+B$

In multiplying two positive integers $a$ and $b$, Ron reversed the digits of the two-digit number $a$. His erroneous product was 161. What is the correct value of the product of $a$ and $b$ ?

On Halloween Casper ate $\frac{1}{2}$ of his candies and then gave $2$ candies to his brother. The next day he ate $\frac{1}{2}$ of his remaining candies and then gave $4$ candies to his sister. On the third day he ate his final $8$ candies. How many candies did Casper have at the beginning?

Keiko walks once around a track at exactly the same constant speed every day. The sides of the track are straight, and the ends are semicircles. The track has a width of 6 meters, and it takes her 36 seconds longer to walk around the outside edge of the track than around the inside edge. What is Keiko's speed in meters per second? (A) $\frac{\pi}{3}$ (B) $\frac{2 \pi}{3}$ (C) $\pi$ (D) $\frac{4 \pi}{3}$ (E) $\frac{5 \pi}{3}$

A rectangular parking lot has a diagonal of 25 meters and an area of 168 square meters. In meters, what is the perimeter of the parking lot?

What is the product of all the roots of the equation $\sqrt{5|x|+8}=\sqrt{x^{2}-16}$ ? (B) $-24$ (C) $-9$

Mary's top book shelf holds five books with the following widths, in centimeters: $6, \frac{1}{2}, 1, 2.5$, and $10$. What is the average book width, in centimeters?

Tyrone had 97 marbles and Eric had 11 marbles. Tyrone then gave some of his marbles to Eric so that Tyrone ended with twice as many marbles as Eric. How many marbles did Tyrone give to Eric?

A book that is to be recorded onto compact discs takes 412 minutes to read aloud. Each disc can hold up to 56 minutes of reading. Assume that the smallest possible number of discs is used and that each disc contains the same length of reading. How many minutes of reading will each disc contain?
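A worked sketch for the last problem above (added for illustration; not part of the original post): $412 / 56 \approx 7.36$, so at least $8$ discs are needed, and spreading the reading evenly gives $412 / 8 = 51.5$ minutes per disc.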
Tony works $2$ hours a day and is paid $\$ 0.50$ per hour for each full year of his age. During a six month period Tony worked 50 days and earned $\$ 630$. How old was Tony at the end of the six month period?

A palindrome, such as 83438, is a number that remains the same when its digits are reversed. The numbers $x$ and $x+32$ are three-digit and four-digit palindromes, respectively. What is the sum of the digits of $x$ ?

Marvin had a birthday on Tuesday, May 27 in the leap year 2008. In what year will his birthday next fall on a Saturday? (A) $2011$ (B) $2012$

The length of the interval of solutions of the inequality $a \leq 2 x+3 \leq b$ is 10. What is $b-a$ ?

Logan is constructing a scaled model of his town. The city's water tower stands 40 meters high, and the top portion is a sphere that holds 100,000 liters of water. Logan's miniature water tower holds $0.1$ liters. How tall, in meters, should Logan make his tower? (B) $\frac{0.4}{\pi}$ (D) $\frac{4}{\pi}$

Angelina drove at an average rate of $80 \mathrm{kmh}$ and then stopped 20 minutes for gas. After the stop, she drove at an average rate of $100 \mathrm{kmh}$. Altogether she drove $250 \mathrm{~km}$ in a total trip time of 3 hours including the stop. Which equation could be used to solve for the time $t$ in hours that she drove before her stop? (A) $80 t+100\left(\frac{8}{3}-t\right)=250$ (B) $80 t=250$ (C) $100 t=250$ (D) $90 t=250$ (E) $80\left(\frac{8}{3}-t\right)+100 t=250$

The polynomial $x^{3}-a x^{2}+b x-2010$ has three positive integer roots. What is the smallest possible value of $a$ ?

What is $100(100-3)-(100 \cdot 100-3)$ ? (A) $-20,000$ (B) $-10,000$ (C) $-297$ (D) $-6$

Makarla attended two meetings during her 9-hour work day. The first meeting took 45 minutes and the second meeting took twice as long. What percent of her work day was spent attending meetings?

A month with 31 days has the same number of Mondays and Wednesdays. How many of the seven days of the week could be the first day of this month?

Shelby drives her scooter at a speed of 30 miles per hour if it is not raining, and 20 miles per hour if it is raining. Today she drove in the sun in the morning and in the rain in the evening, for a total of 16 miles in 40 minutes. How many minutes did she drive in the rain?

A ticket to a school play cost $x$ dollars, where $x$ is a whole number. A group of 9th graders buys tickets costing a total of $\$ 48$, and a group of 10th graders buys tickets costing a total of $\$ 64$. How many values for $x$ are possible?

Lucky Larry's teacher asked him to substitute numbers for $a, b, c, d$, and $e$ in the expression $a-(b-(c-(d+e)))$ and evaluate the result. Larry ignored the parentheses but added and subtracted correctly and obtained the correct result by coincidence. The numbers Larry substituted for $a, b, c$, and $d$ were $1,2,3$, and 4, respectively. What number did Larry substitute for $e$ ?

A shopper plans to purchase an item that has a listed price greater than $\$ 100$ and can use any one of the three coupons. Coupon A gives $15 \%$ off the listed price, Coupon B gives $\$ 30$ off the listed price, and Coupon C gives $25 \%$ off the amount by which the listed price exceeds $\$ 100$. Let $x$ and $y$ be the smallest and largest prices, respectively, for which Coupon A saves at least as many dollars as Coupon B or C. What is $y-x$ ?

At the beginning of the school year, $50 \%$ of all students in Mr. Well's class answered "Yes" to the question "Do you love math", and $50 \%$ answered "No."
At the end of the school year, $70 \%$ answered "Yes" and $30 \%$ answered "No." Altogether, $x \%$ of the students gave a different answer at the beginning and end of the school year. What is the difference between the maximum and the minimum possible values of $x$? What is the sum of all the solutions of $x=|2x-|60-2x||$? The average of the numbers $1, 2, 3, \cdots, 98, 99$, and $x$ is $100 x$. What is $x$? (A) $\frac{49}{101}$ (B) $\frac{50}{101}$ (D) $\frac{51}{101}$ On a 50-question multiple choice math contest, students receive 4 points for a correct answer, 0 points for an answer left blank, and $-1$ point for an incorrect answer. Jesse's total score on the contest was 99. What is the maximum number of questions that Jesse could have answered correctly? One can holds $12$ ounces of soda. What is the minimum number of cans needed to provide a gallon (128 ounces) of soda? Four coins are picked out of a piggy bank that contains a collection of pennies, nickels, dimes, and quarters. Which of the following could not be the total value of the four coins, in cents? Which of the following is equal to $1+\frac{1}{1+\frac{1}{1+1}}$ ? Eric plans to compete in a triathlon. He can average 2 miles per hour in the $\frac{1}{4}$-mile swim and 6 miles per hour in the 3-mile run. His goal is to finish the triathlon in 2 hours. To accomplish his goal, what must his average speed, in miles per hour, be for the 15-mile bicycle ride? (A) $\frac{120}{11}$ What is the sum of the digits of the square of 111111111? A carton contains milk that is $2 \%$ fat, an amount that is $40 \%$ less fat than the amount contained in a carton of whole milk. What is the percentage of fat in whole milk? Three generations of the Wen family are going to the movies, two from each generation. The two members of the youngest generation receive a $50 \%$ discount as children. The two members of the oldest generation receive a $25 \%$ discount as senior citizens. The two members of the middle generation receive no discount. Grandfather Wen, whose senior ticket costs $\$ 6.00$, is paying for everyone. How many dollars must he pay? Positive integers $a, b$, and 2009, with $a<b<2009$, form a geometric sequence with an integer ratio. What is $a$? Let $a, b, c$, and $d$ be real numbers with $|a-b|=2,|b-c|=3$, and $|c-d|=4$. What is the sum of all possible values of $|a-d|$? At Jefferson Summer Camp, $60 \%$ of the children play soccer, $30 \%$ of the children swim, and $40 \%$ of the soccer players swim. To the nearest whole percent, what percent of the non-swimmers play soccer? Each morning of her five-day workweek, Jane bought either a 50-cent muffin or a 75-cent bagel. Her total cost for the week was a whole number of dollars. How many bagels did she buy? Paula the painter had just enough paint for 30 identically sized rooms. Unfortunately, on the way to work, three cans of paint fell off her truck, so she had only enough paint for 25 rooms. How many cans of paint did she use for the 25 rooms? Twenty percent less than $60$ is one-third more than what number? Kiana has two older twin brothers. The product of their three ages is $128$. What is the sum of their three ages? In a certain year the price of gasoline rose by $20 \%$ during January, fell by $20 \%$ during February, rose by $25 \%$ during March, and fell by $x \%$ during April. The price of gasoline at the end of April was the same as it had been at the beginning of January.
To the nearest integer, what is $x$? When a bucket is two-thirds full of water, the bucket and water weigh $a$ kilograms. When the bucket is one-half full of water the total weight is $b$ kilograms. In terms of $a$ and $b$, what is the total weight in kilograms when the bucket is full of water? (A) $\frac{2}{3} a+\frac{1}{3} b$ (B) $\frac{3}{2} a-\frac{1}{2} b$ (C) $\frac{3}{2} a+b$ (D) $\frac{3}{2} a+2 b$ (E) $3 a-2 b$ A particular 12-hour digital clock displays the hour and minute of a day. Unfortunately, whenever it is supposed to display a 1, it mistakenly displays a 9. For example, when it is 1:16 PM the clock incorrectly shows 9:96 PM. What fraction of the day will the clock show the correct time? A bakery owner turns on his doughnut machine at 8:30 AM. At 11:10 AM the machine has completed one third of the day's job. At what time will the doughnut machine complete the job? (A) 1:50 PM (B) 3:00 PM (C) 3:30 PM (D) 4:30 PM (E) 5:50 PM A square is drawn inside a rectangle. The ratio of the width of the rectangle to a side of the square is $2: 1$. The ratio of the rectangle's length to its width is $2: 1$. What percent of the rectangle's area is inside the square? For the positive integer $n$, let $\langle n\rangle$ denote the sum of all the positive divisors of $n$ with the exception of $n$ itself. For example, $\langle 4\rangle=1+2=3$ and $\langle 12\rangle=1+2+3+4+6=16$. What is $\langle\langle\langle 6\rangle\rangle\rangle$ ? Suppose that $\frac{2}{3}$ of 10 bananas are worth as much as 8 oranges. How many oranges are worth as much as $\frac{1}{2}$ of 5 bananas? Which of the following is equal to the product $\frac{8}{4} \cdot \frac{12}{8} \cdot \frac{16}{12} \cdots \frac{4n+4}{4n} \cdots \frac{2008}{2004}$ ? A triathlete competes in a triathlon in which the swimming, biking, and running segments are all of the same length. The triathlete swims at a rate of 3 kilometers per hour, bikes at a rate of 20 kilometers per hour, and runs at a rate of 10 kilometers per hour. Which of the following is closest to the triathlete's average speed, in kilometers per hour, for the entire race? The fraction $\frac{\left(3^{2008}\right)^{2}-\left(3^{2006}\right)^{2}}{\left(3^{2007}\right)^{2}-\left(3^{2005}\right)^{2}}$ simplifies to which of the following? Heather compares the price of a new computer at two different stores. Store $A$ offers $15 \%$ off the sticker price followed by a $\$ 90$ rebate, and store $B$ offers $25 \%$ off the same sticker price with no rebate. Heather saves $\$ 15$ by buying the computer at store $A$ instead of store $B$. What is the sticker price of the computer, in dollars? $\frac{2 x}{3}-\frac{x}{6}$ is an integer. Which of the following statements must be true about $x$ ? (A) It is negative. (B) It is even, but not necessarily a multiple of 3. (C) It is a multiple of 3, but not necessarily even. (D) It is a multiple of 6, but not necessarily a multiple of 12. (E) It is a multiple of 12. In a collection of red, blue, and green marbles, there are $25 \%$ more red marbles than blue marbles, and there are $60 \%$ more green marbles than red marbles. Suppose that there are $r$ red marbles. What is the total number of marbles in the collection? (A) $2.85 r$ (B) $3 r$ (C) $3.4 r$ (D) $3.85 r$ (E) $4.25 r$ Doug can paint a room in 5 hours. Dave can paint the same room in 7 hours. Doug and Dave paint the room together and take a one-hour break for lunch. Let $t$ be the total time, in hours, required for them to complete the job working together, including lunch. Which of the following equations is satisfied by $t$ ?
(A) $\left(\frac{1}{5}+\frac{1}{7}\right)(t+1)=1$ (B) $\left(\frac{1}{5}+\frac{1}{7}\right) t+1=1$ (C) $\left(\frac{1}{5}+\frac{1}{7}\right) t=1$ (D) $\left(\frac{1}{5}+\frac{1}{7}\right)(t-1)=1$ (E) $(5+7) t=1$ Yesterday Han drove 1 hour longer than Ian at an average speed 5 miles per hour faster than Ian. Jan drove 2 hours longer than Ian at an average speed 10 miles per hour faster than Ian. Han drove 70 miles more than Ian. How many more miles did Jan drive than Ian? Assume that $x$ is a positive real number. Which is equivalent to $\sqrt[3]{x \sqrt{x}}$ ? (A) $x^{1 / 6}$ (B) $x^{1 / 4}$ (C) $x^{3 / 8}$ (D) $x^{1 / 2}$ (E) $x$ For real numbers $a$ and $b$, define $a * b=(a-b)^{2}$. What is $(x-y)^{2} *(y-x)^{2}$ ? (B) $x^{2}+y^{2}$ (C) $2 x^{2}$ (D) $2 y^{2}$ (E) $4 x y$ A quadratic equation $a x^{2}-2 a x+b=0$ has two real solutions. What is the average of these two solutions? (C) $\frac{b}{a}$ (D) $\frac{2 b}{a}$ (E) $\sqrt{2 b-a}$ Bricklayer Brenda would take nine hours to build a chimney alone, and bricklayer Brandon would take 10 hours to build it alone. When they work together, they talk a lot, and their combined output decreases by 10 bricks per hour. Working together, they build the chimney in 5 hours. How many bricks are in the chimney? One ticket to a show costs $\$ 20$ at full price. Susan buys 4 tickets using a coupon that gives her a $25 \%$ discount. Pam buys 5 tickets using a coupon that gives her a $30 \%$ discount. How many more dollars does Pam pay than Susan? The larger of two consecutive odd integers is three times the smaller. What is their sum? The school store sells $7$ pencils and $8$ notebooks for $\$ 4.15$. It also sells $5$ pencils and $3$ notebooks for $\$ 1.77$. How much do 16 pencils and 10 notebooks cost? (A) $\$ 1.76$ (B) $\$ 5.84$ (C) $\$ 6.00$ (D) $\$ 6.16$ (E) $\$ 6.32$ Last year Mr. Jon Q. Public received an inheritance. He paid $20 \%$ in federal taxes on the inheritance, and paid $10 \%$ of what he had left in state taxes. He paid a total of $\$ 10500$ for both taxes. How many dollars was his inheritance? (A) $30000$ (B) $32500$ (C) $35000$ (D) $37500$ (E) $40000$ Real numbers $a$ and $b$ satisfy the equations $3^{a}=81^{b+2}$ and $125^{b}=5^{a-3}$. What is $a b$ ? The Dunbar family consists of a mother, a father, and some children. The average age of the members of the family is 20, the father is 48 years old, and the average age of the mother and children is 16. How many children are in the family? Yan is somewhere between his home and the stadium. To get to the stadium he can walk directly to the stadium, or else he can walk home and then ride his bicycle to the stadium. He rides 7 times as fast as he walks, and both choices require the same amount of time. What is the ratio of Yan's distance from his home to his distance from the stadium? Suppose that the number $a$ satisfies the equation $4=a+a^{-1}$. What is the value of $a^{4}+a^{-4}$ ? Define the operation $\star$ by $a \star b=(a+b) b$. What is $(3 \star 5)-(5 \star 3) ?$ The 2007 AMC 10 will be scored by awarding 6 points for each correct response, 0 points for each incorrect response, and $1.5$ points for each problem left unanswered. After looking over the 25 problems, Sarah has decided to attempt the first 22 and leave only the last 3 unanswered. How many of the first 22 problems must she solve correctly in order to score at least 100 points? Tom's age is $T$ years, which is also the sum of the ages of his three children. His age $N$ years ago was twice the sum of their ages then.
What is $T / N ?$ Some boys and girls are having a car wash to raise money for a class trip to China. Initially $40 \%$ of the group are girls. Shortly thereafter two girls leave and two boys arrive, and then $30 \%$ of the group are girls. How many girls were initially in the group? A teacher gave a test to a class in which $10 \%$ of the students are juniors and $90 \%$ are seniors. The average score on the test was 84. The juniors all received the same score, and the average score of the seniors was 83. What score did each of the juniors receive on the test?
CommonCrawl
Brain Informatics. On the combination of two visual cognition systems using combinatorial fusion. Amy Batallones1, Kilby Sanchez1, Brian Mott1, Cameron Coffran2 and D. Frank Hsu1. Brain Informatics 2015, 2:8. Accepted: 8 January 2015. When combining decisions made by two separate visual cognition systems, statistical means such as simple average (M 1) and weighted average (M 2 and M 3), incorporating the confidence level of each of these systems, have been used. Although combination using these means can improve each of the individual systems, it is not known when and why this can happen. By extending a visual cognition system to become a scoring system based on each of the statistical means M 1, M 2, and M 3 respectively, the problem of combining visual cognition systems is transformed to the problem of combining multiple scoring systems. In this paper, we examine the combined results in terms of performance and diversity using combinatorial fusion, and study the issue of when and why a combined system can be better than individual systems. A data set from an experiment with twelve trials is analyzed. The findings demonstrated that combination of two visual cognition systems, based on weighted means M 2 or M 3, can improve each of the individual systems only when both of them have relatively good performance and they are diverse. Keywords: Combinatorial fusion analysis (CFA); Visual cognition; Rank-score characteristics (RSC) function; Cognitive diversity. Many decisions that humans have to make are partially, or even wholly, based on visual input. The split second nature of such decisions may make the process seem simple. However, there are many factors that are considered and combined during this short time frame. On a neurological level, there has been growing interest in understanding the factors that are combined within the visual aspect alone [1, 2], as well as how visual information is joined with information from other senses [3–7]. Combination of multiple visual decisions has also been explored [5, 8, 9]. Prior research into how pairs of people can interactively make decisions based on visual perception has been conducted by several researchers including Bahrami et al. [8], Ernst [5], and Kepecs et al. [9]. In Bahrami's work, four predictive models are used on experiments of varying degrees of noise, feedback, and communication: coin-flip (CF), behavioral feedback (BF), weighted confidence sharing (WCS), and direct signal sharing (DSS). Bahrami concludes that the WCS model is the only one that can be fit over the empirical data. His findings indicate that the accuracy of the decision-making is aided by communication between the pairs and can greatly improve the overall performance of the pair. Marc O. Ernst expands on the concept of WCS [5] between pairs by proposing a hypothetical soccer match during which two referees determine whether the ball falls behind a goal line. Similar to Bahrami's proposal, Ernst's findings indicate that simply taking the approach of BF or a CF omits information which could lead to an optimal joint decision between the pair. However, while Ernst agrees that the WCS model can lead to a beneficial joint determination, his findings also indicate that there are improvements that can be made to the WCS model to achieve a more optimal joint decision.
With Ernst's scenario, Bahrami's WCS model can be applied as the distance of the individual's decision (d i ) divided by the spread of the confidence distribution (σ), which is d i /σ i . A modified version of WCS (which closely resembles DSS) using sigma-square can produce a more accurate estimate through the joint opinion, which is represented as d i /σ i 2 . In an affirmation of Bahrami's research, Ernst also notes that joint decision-making comes with a cost when individuals with dissimilar judgments attempt to come to a consensus in such a manner. Bahrami and Ernst set forth very different experimental methods, but their aim is very much the same: to devise an algorithm for optimal decision-making between two people based on visual sensory input. In the other direction, neural bases for decision-making and combining sensory information within senses have been studied by Gold and Shadlin [10] and Hillis et al. [1]. Koriat [11] indicated that there is no need to combine two heads' decisions under a normal environment. His suggestion is to simply take the decision of the most confident person. Combinatorial Fusion Analysis (CFA), an emerging information fusion paradigm, was proposed for analyzing the combination of multiple scoring systems (MSS) (see Hsu et al. [12–14]). CFA has been shown to be useful in several research domains, including sensor feature selection and combination [15, 16], information retrieval, system selection and combination [12, 17], text categorization [18], protein structure prediction [19], image recognition [20], target tracking [21], ChIP-seq peak detection [22], and virtual screening [23]. These studies have shown in its respective domain that combination of MSS performs better than individual systems when the individual scoring systems perform relatively well and they are characteristically different [13, 14]. In a series of previous studies [24–26], a modified version of the soccer goal line decision proposed by Ernst is used as the data collection method. In this experiment, two subjects observe a small target being thrown into a grass field. The subjects are separately asked of their decision on their perceived landing point of the target and their respective confidences in their decisions. More recently, we conducted two sets of experiments with a total of 20 trials on two different days (12 trials and 8 trials) [27, 28]. In each of these trials, a small token was thrown into a grass field and landed at location A = (A x , A y ). Two subjects P and Q standing 40 feet away from the landing site would perceive the landing site as at location P = (P x , P y ) and Q = (Q x , Q y ) with confidence radius σ P and σ Q , respectively. In these works, each visual cognition system is treated as a scoring system which assigns a score to each of the partitioned intervals in the common visual space. Then the problem of combining visual cognition systems is transformed to the problem of combining multiple scoring systems. The combination is analyzed using the CFA framework. Results obtained showed that combination by rank as well as by score can improve individual systems. In this paper, we explore the issue of when and why a combination of two cognitive systems is better than each individual system using the CFA. In particular, we use the concept of "cognitive diversity" and the notion of "performance ratio" to analyze the outcome of the combination. 
Using the data set from the experiment with twelve trials [27], we demonstrate, as in other domain applications, that combination is positive (better than or equal to the best of the two individual systems) only if the two systems, based on weighted mean using confidence radius, are relatively good (higher performance ratio) and they are diverse (higher cognitive diversity). Section 2 of this paper discusses two methods of combining visual cognition systems: statistical mean and combinatorial fusion. In Sect. 2.1, three statistical means M 1, M 2, and M 3 are calculated as average or weighted mean using the confidence radius as the weight. Based on these means, scoring systems p and q are constructed from the two visual cognition systems P and Q, respectively, in Sect. 2.2. Section 2.3 gives the method to combine these two visual scoring systems using the CFA framework. Section 3 gives the definition of cognitive diversity and the notion of performance ratio. Section 4 consists of examples, in particular the data set of an experiment with twelve trials of pairs of visual cognition systems [27]. Combination of these two visual cognition systems and analysis of the combination for the data set is discussed in more detail in Sect. 4.2 and 4.3. A summary of the results and possible future works is discussed in Sect. 5. 2 The CFA framework for combining two visual cognition systems 2.1 Computing various statistical means When we make a decision based on visual input, we can consider this decision-making as a contemplation of various choices or candidates. Given two perceived locations P = (P x , P y ) and Q = (Q x , Q y ) (with confidence radius σ P and σ Q , respectively) of the actual landing site A = (A x , A y ), we wish to find a new location L (obtained by the joint decision of P and Q) so that L is better than P and Q (distance between L and A is smaller than those between P and A, and Q and A). When determining a joint decision, typically an average or a weighted average approach is used to determine a mean. Average mean M 1 = (M 1x , M 1y ) of the two locations P = (P x , P y ) and Q = (Q x , Q y ) is calculated as $$ M_{ 1} = \, \left( {P \, + \, Q} \right) \, /{ 2 }, $$ and weighted means are obtained by $$ M_{ 2} = \, \left( {P/\sigma_{P} + \, Q/ \, \sigma_{Q} } \right) \, / \, \left( { 1/\sigma_{P} + { 1}/ \, \sigma_{Q} } \right), $$ $$ M_{ 3} = \, \left( {P/\sigma_{P}^{ 2} + \, Q/ \, \sigma_{Q}^{ 2} } \right) \, / \, \left( { 1/\sigma_{P}^{ 2} + { 1}/ \, \sigma_{Q}^{ 2} } \right), $$ where P and Q are the perceived locations of the individual subjects P and Q, and σ P and σ Q are the confidence measurement of the two subjects, respectively. 2.2 Converting each visual cognition system to a scoring system In the experiments we conducted, each of the two subjects provides an individually determined decision on where they respectively perceived the same target has landed in a field. Each coordinate on the field can be considered as a candidate for the respective participants' decisions of the perceived landing point. We are able to obtain a weight for each decision and their combination by asking each subject of a radius measurement of confidence around his or her decision. The smaller the radius measure of confidence, the more confident is the participant. We use radius R to calculate the spread (i.e., standard deviation) of the distribution around the perceived landing point, or σ. In our research, we use $$ \sigma = \, 0. 5 {\text{R}}. 
$$ 2.2.1 Set common visual space The σ values are used in Formulas (1), (2), and (3) to determine the positions of the means and denoted as M 1, M 2, and M 3 respectively. The distance between M i and A, m i = d(M i, A), where A is the actual landing site, is used to evaluate the performance of M i. With the field used as a two-dimensional coordinate grid, P, Q, and A are represented as x- and y- coordinates. Three formulas are used to calculate the mean of P and Q, as M i , where i = 1, 2, or 3. M i falls somewhere in between points P and Q and is determined as a coordinate. The longer of either segment PM i or M i Q is extended 30 % to the left to point P′ or to the right to point Q′, respectively. The shorter side is extended more to create the widened observation area P′Q′ so that Mi is the midpoint of P′ and Q′. We refer to the line segment P′Q′ as the common visual space (Fig. 1). The extension of PQ to P′Q′ based on M i for i = 1, 2, or 3 We partition the length, d(P′,Q′), of line segment P′Q′ into 127 intervals with midpoint di in each interval i, i = 1, 2, …, 127, and with each interval length d(P′,Q′)/127. The midpoint of the center interval, in this case, d64, contains M i . 2.2.2 Treat P and Q as two scoring systems p and q Normal distribution probability curves for each participant are created with the point P and Q as the mean and using the confidence radii values, σ P 2 and σ Q 2 of P and Q as the variances of P and Q, respectively (see Fig. 2 in the case of 15 intervals). The following formula is used to determine normal distribution: Partition of P′Q′ into 15 intervals with center M i $$ Y \, = \, \left( { 1 { }/ \, \left( {\sigma \surd \left( { 2\pi } \right)} \right)} \right) * {\text{e}}^{{\left[ { - \, \left( {x \, - \, \mu } \right)** 2} \right]/{ 2}\sigma ** 2}} , $$ where x is a normal random variable, μ is the mean, and σ is the standard deviation. A normal distribution curve spans infinitely to the right and to the left. Therefore, our two scoring systems p and q create overlapping distributions that span the entire visual plane between P′ and Q′. Scoring system p and scoring system q, respectively, scores each of the 127 intervals on the common visual space. For normal distribution functions with point P and Q as the mean and σ P and σ Q as the standard deviation respectively, each of the scoring systems p and q assigns interval di a score between 0 and 1 according to formula (5) (see Fig. 2 in the case of 15 intervals). These are the score functions s p and s q . The values of the score function s are sorted from highest to lowest to obtain the rank functions r p and r q , respectively (see Fig. 3). The d i with the lowest integer as its rank has the highest score. Score and rank function for respective scoring systems p and q undergo CFA to produce score combination C and rank combination D 2.3 Combining scoring systems p and q using both score and rank combination Let D be a set of candidates with |D| = n. Let N = [1, n] be the set of integers from 1 to n and R be a set of real numbers. In the context of a CFA framework, a scoring system A consists of a score function s A and a rank function r A on the set D of possible n positions (in this paper, D = {d i | i = 1, 2, …, 127}). In the setting of this paper, the score function s C of the score combination of derived scoring systems p and q in our experiment is $$ s_{C} \left( {d_{i} } \right) = \left( {s_{p} \left( {d_{i} } \right) + s_{q} \left( {d_{i} } \right)} \right) \, / 2 { }. 
$$ The score function s D of the rank combination of the two scoring systems p and q in our experiment is $$ s_{D} \left( {d_{i} } \right) = \left( {r_{p} \left( {d_{i} } \right) + r_{q} \left( {d_{i} } \right)} \right) \, /{ 2}. $$ When we sort s C (d i ) in descending order, we obtain the rank function of the score combination, called r C (d i ). When we sort s D (d i ) in ascending order, we obtain the rank function of the rank combination, called r D (d i ). The top ranked interval in r C (d i ) is called C. The top ranked interval in r D (d i ) is called D (see Fig. 3). These points are considered the optimal score and rank combination, respectively, and are used for evaluation of the combination result. The performance of the points (P, Q, M i , C, and D) is determined by each respective point's distance from target A. A shorter distance indicates higher performance (Fig. 4). Layout of M i , i = 1, 2, or 3, C, and D in relation to P, Q, and their distance to A. The distances between the 5 estimated points and A are noted on each line [24] Score function s A , rank function r A , and RSC function f A of the scoring system A [13, 14] 3 Cognitive diversity and performance ratio 3.1 Cognitive diversity Given the score function s A of the system A and its derived rank function r A , rank-score characteristic (RSC) function f A, which is a composite function of s A and the inverse of r A , defined by Hsu et al. [13, 14] is a function from N to R and can be computed mathematically as (see Fig. 5). $$ f_{A} \left( i \right) = (s_{A} r_{A}^{ - 1} )\left( i \right) \, = \, s_{A} (r_{A}^{ - 1} \left( i \right)). $$ The cognitive diversity between two scoring systems p and q, d(p,q) is calculated using RSC functions f p and f q (also see [23]) as $$ {\text{d}}\left( {p,q} \right) = {\text{d}}(f_{p} ,f_{q} ) = \left( {\sum\limits_{i = 1}^{127} {( f_{p} \left( i \right) - f_{q} (i))^{2} } /127} \right)^{1/2} . $$ 3.2 Performance ratio The performances of each P and Q for all trials are used in calculating the performance ratio. Performance of P (or Q) is determined by the distance between P (or Q) and A, d(P, A) [or d(Q, A)], respectively. Shorter distance indicates high performances. Each distance is inverted and then multiplied by the maximum distance md = max{d(P i , A i ), d(Q i , A i ) | i = 1, 2,…, 12} for all trials. Let \( {\text{MAX = max}}\left\{ {\frac{md}{{{\text{d}}(P_{i} , A_{i} )}}, \frac{md}{{{\text{d}}(Q_{i} , A_{i} )}}\left| {i = 1,2, \ldots ,12} \right.} \right\} \). Then this set of numbers is each divided by MAX. In this way, the performance for each of the 12 P and Q is in the set (0, 1]. The smaller performance over the higher performance for P and Q is the performance ratio after it is normalized again among the twelve ratios to be in (0, 1]. 4.1 Data set We use the data set from an experiment of twelve trials conducted by the authors in [27]. Each trial consists of two volunteers P and Q with confidence radius σ P and σ Q . Each gives a visual cognitive estimate of the actual token landing site A as P and Q respectively. Table 1 lists coordinates of P (P x , P y ), Q (Q x , Q y ), and A (A x , A y ) as well as the confidence radius σ P and σ Q of P and Q respectively. 
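Before turning to the data of Table 1, the pipeline of Sects. 2.2 to 3.1 can be summarized in a short sketch. The Python fragment below is a minimal illustration, not the authors' code: it builds the two Gaussian score functions of formula (5) on the 127 interval midpoints, derives the rank functions, forms the score combination C and rank combination D of formulas (6) and (7), and evaluates the cognitive diversity of formula (9) through the RSC functions of formula (8). The interval geometry, the positions of P and Q, and the confidence values used here are hypothetical placeholders, and ties in the scores are ignored.

```python
import numpy as np

# Minimal sketch of the scoring/ranking pipeline of Sects. 2.2-3.1 (not the authors' code).
# Assumption: the common visual space P'Q' is already partitioned into 127 interval
# midpoints, expressed as signed distances from the chosen mean M_i, and P and Q are
# expressed in the same way.

N = 127

def gaussian_scores(d, mu, sigma):
    """Score each interval midpoint with the normal density of formula (5)."""
    return np.exp(-(d - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

def rank_function(scores):
    """Rank 1 = highest score, rank N = lowest score (Sect. 2.2.2)."""
    order = np.argsort(-scores)              # indices sorted by descending score
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks

def rsc_function(scores, ranks):
    """Rank-score characteristic function f_A(i) = s_A(r_A^{-1}(i)), formula (8)."""
    f = np.empty_like(scores)
    f[ranks - 1] = scores                    # slot i-1 holds the score of the rank-i interval
    return f

def cognitive_diversity(f_p, f_q):
    """d(p, q) of formula (9): root-mean-square difference of the two RSC functions."""
    return np.sqrt(np.mean((f_p - f_q) ** 2))

# Example with made-up geometry (distances from M_i along P'Q', arbitrary units).
d = np.linspace(-60.0, 60.0, N)              # interval midpoints d_1 .. d_127
P_pos, sigma_P = -20.0, 8.0                  # hypothetical values, not from Table 1
Q_pos, sigma_Q = 25.0, 14.0

s_p, s_q = gaussian_scores(d, P_pos, sigma_P), gaussian_scores(d, Q_pos, sigma_Q)
r_p, r_q = rank_function(s_p), rank_function(s_q)

s_C = (s_p + s_q) / 2.0                      # score combination, formula (6)
s_D = (r_p + r_q) / 2.0                      # rank combination, formula (7)
C = d[np.argmax(s_C)]                        # top-ranked interval of the score combination
D = d[np.argmin(s_D)]                        # top-ranked interval of the rank combination

div = cognitive_diversity(rsc_function(s_p, r_p), rsc_function(s_q, r_q))
print(C, D, div)
```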
Coordinates of P, Q, and A and confidence radius (σ) of P and Q for the 12 trials [27] (P x , P y ) σ P (Q x , Q y ) σ Q (A x , A y ) (11.5, 134.5) (78.5, 105) (94, 124) (23.5, 56) (112, 96.75) (105, 134.25) (78.5, 87.75) (229.25, 151.5) (256, 162.5) (216.25, 149.75) (125.5, 13.5) (112.75, 57.25) (113.75, 46) (184.5, 108.25) (22, 190.5) (17, 227.75) (14.75, 195) (98.75, 57) (71.25, 25.5) (16.5, 1) (205.5, 15) (204, 21.5) (203, 26) (100.5, 4.5) (127, 9.5) (98.5, −75.5) (99, 30) (96, 4) 4.2 Combination results and analysis The decision of Participant p, marked as P, and the decision of Participant q, marked as Q, are used to obtain line segment PQ. The radii of confidence are used to calculate the two σ values to locate the coordinates of points M 1, M 2, and M 3 along the extended P′Q′. To combine and compare the two visual decision systems of p and q, a common plane must be implemented to be evaluated by the different systems. The 127 intervals along the P′Q′ line serve as the common visual space to be scored. When P′Q′ has been partitioned into the 127 intervals mapped according to M i , the intervals are scored according to the normal distribution curves of P and Q using the standard deviation σ P and σ Q , respectively. Both systems assume the set of common interval midpoints d 1, d 2, d 3,…,d 127. Each scoring system, p and q, consists of a score function. We define score functions s P (d i ) and s Q (d i ) that map each interval, d i , to a score in systems p and q, respectively. The rank function of each of the systems p and q maps each element d i to a positive integer in N, where N = {x | 1 ≤ x ≤ 127}. We obtained the rank functions r P (d i ) and r Q (d i ) by sorting s P (d i ) and s Q (d i ) in descending order and assigning a rank value from 1 to 127 to each interval. C and D based on M i , for i = 1, 2, and 3, are calculated, and the distances to target A are computed. The point with the shorter distance from the target is considered the point with the better performance. Table 2 lists the performance of (P, Q), confidence radius of P, Q and performance of C and D based on M i , i = 1, 2, and 3. Table 3 lists performance for M i , i = 1, 2, and 3 in the twelve trials. Table 4 gives comparisons of the performance of C or D to that of P and Q, and to M i . We note that Koriat's criterion, taking the decision of the most confident system, gives a correct prediction of 7 out of the 12 trials (Trials 1, 2, 4, 6, 8, 9, and 11). The score combination C or rank combination D obtained by CFA improves P and Q in 8, 7, and 6 out of the 12 trials when the common visual space mean is M 1, M 2, and M 3 respectively. It is interesting to note that C or D improves P and Q in more trials based on M 1 than those based on M 2 or M 3 because M 1 does not take into consideration the confidence radius as weighted means (Table 4(a)). The same reason can be given to Table 4(b) where C or D can improve M 1 in more trials than M2 or M3. In addition, in the 4 trials (Trials 3, 5, 10, and 12) that Koriat's criterion fails to apply, they can all be improved using the CFA framework. Performance of combination: (a) Performance of P, Q, (b) Confidence radius of P, Q, (c) Performance of C and D based on M 1, M 2, and M 3, respectively (a) Per. (P,Q) (b) Confidence Radius (σ P , σ Q ) (C)(1) Per. 
of C, D; based on M 1 (20.41, 24.52) (7, 21.5) (14, 15.5) (0.5, 3) (8.53, 32.83) (7, 6) (32.68, 8.44) (17, 6.5) (4.86, 4.64) (4, 4.5) Bold numbers indicate C and/or D perform better than P and Q in (C)(1), (C)(2) and (C)(3). Bold numbers indicate better performance of the two systems in (a) and higher confidence in (b) Performance of M 1, M 2, M 3 in 12 trials Each bold number indicates the performance of M i in the Trial is better than P and Q. M 3 is best among M i 's in Trials 2, 4, 6, 8, 9, and 11 Comparisons of performance of C or D to that (a) of P and Q, (b) of M i, and (c) of P, Q, and M i (set of 36 cases in Table 2) (a) C or D ≥ P and Q (b) C or D ≥ M i (c) C or D ≥ P, Q,& M i 1, 3, 5, 6, 7, 10, 11, 12 (8/12) 2, 3, 4, 5, 7, 8, 9, 12 (8/12) 3, 5, 7, 12 (4/12) 1, 2, 4, 5, 6, 8, 11 (7/12) 2, 4, 5, 8, 9 (5/12) 2, 4, 5, 8 (4/12) 1, 2, 5, 6, 8, 9 (6/12) 2, 5 (2/12) Figures 6 and 7 illustrate the performances of P, C, D, M i and Q for i = 1, 2, and 3 in Trials 2 and 7 respectively. In Trial 2, P performs quite good and has a higher confidence radius than Q. When given weighted means M 2 and M 3, combinatorial fusion C or D performs better than P and Q. However, in Trial 7, P performs better but has a lower confidence radius than Q. In this case, C or D does not improve P and Q based on M 2 or M 3 when more weight is given to Q. Therefore, we observe that giving more weight to the better performer with a higher confidence leads to a combination which improves P and Q. We call such a case a positive case. In the following Sect. 4.3, we investigate in general when combination (either rank or score combination) can improve P and Q. Performance of P, C, D, and Q based on M 1 (a), M 2 (b), and M 3 (c) respectively for Trial 2, a Performance of P, Q, C, and D based on M 1 in Trial 2, b performance of P, Q, C, and D based on M 2 in Trial 2, c performance of P, Q, C, and D based on M 3 in Trial 2 Performance of P, C, D, and Q based on M 1 (a), M 2 (b), and M 3 (c) respectively for Trial 7, a Performance of P, Q, C, and D based on M 1 in Trial 7, b performance of P, Q, C, and D based on M 2 in Trial 7, c performance of P, C, D, and Q based on M 3 in Trial 7 4.3 Positive cases versus Negative cases We plot the result of a score or rank combination of P and Q, distinguishing positive cases as "□" or "◊" and negative cases as "×" or "+" on the two-dimensional coordinate plane with the y-axis as the cognitive diversity d(P, Q) and the x-axis as the performance ratio P l/P h (lower performance over higher performance) for all the trials for each M i , i = 1, 2, or 3. Each trial within each graph is noted as positive when rank or score combination performs better than both P and Q, and negative when it does not. The average for all positive cases and the average for all negative cases is also marked for each graph as "■" and "X" respectively. Cognitive diversity between P and Q, d(P, Q), is the diversity between two RSC functions f p and f q , d(f p , f q ), and is calculated using formula (9). Cognitive diversity values are normalized to (0, 1] in each case based on M i , i = 1, 2, and 3 (see Table 5). Figure 8 depicts the positive versus negative cases based on each M i , i = 1, 2, and 3 (Fig. 8a–c respectively) in terms of cognitive diversity (y-axis) and performance ratio (x-axis). 
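One way to reproduce this classification numerically is sketched below. The fragment follows the performance-ratio recipe of Sect. 3.2 (invert the distances to A, scale by the maximum distance, normalize by MAX, take the smaller-over-larger ratio per trial, and normalize the ratios once more) and labels a trial positive when the score or rank combination is at least as close to A as both P and Q. The distance arrays are placeholders, not the measured values of Table 2.

```python
import numpy as np

# Sketch of the performance ratio of Sect. 3.2 and the positive/negative labelling of
# Sect. 4.3. All distances below are placeholders for d(., A) in each trial.

d_P = np.array([22.0, 14.0, 9.0, 33.0, 5.0])   # d(P_i, A_i), hypothetical
d_Q = np.array([25.0, 16.0, 33.0, 8.0, 4.5])   # d(Q_i, A_i), hypothetical
d_C = np.array([18.0, 13.0, 10.0, 10.0, 4.6])  # d(C_i, A_i), hypothetical
d_D = np.array([19.0, 12.5, 8.5, 11.0, 4.8])   # d(D_i, A_i), hypothetical

md = max(d_P.max(), d_Q.max())                 # maximum distance over all trials
perf_P, perf_Q = md / d_P, md / d_Q            # shorter distance -> larger number
MAX = max(perf_P.max(), perf_Q.max())
perf_P, perf_Q = perf_P / MAX, perf_Q / MAX    # performances now lie in (0, 1]

ratio = np.minimum(perf_P, perf_Q) / np.maximum(perf_P, perf_Q)
ratio /= ratio.max()                           # normalized performance ratio P_l / P_h

# A trial is "positive" when the score or rank combination is no farther from A
# than the better of P and Q.
positive = np.minimum(d_C, d_D) <= np.minimum(d_P, d_Q)
print(ratio, positive)
```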
d(p, q) in M 1 Positive versus negative cases resulting from the 24 score and rank combinations in terms of cognitive diversity d(P, Q) (y-axis) and performance ratio P l/P h (x-axis) based on M 1 (a), M 2 (b), and M 3 (c) respectively, a Positive versus negative cases based on M 1, b positive versus negative cases based on M 2, c positive versus negative cases based on M 3 5 Summary and future work In our previous work [27, 28], it has been demonstrated that combination of two visual cognition system using the CFA framework can improve each of the individual systems. In this paper, we analyze outcomes of these combinations according to positive cases or negative cases using the notions of cognitive diversity and performance ratio on the data set of an experiment with 12 trials [27]. It is demonstrated that in the majority of the 72 cases of rank combinations and score combinations (12 × 2 × 3 = 72) (see Fig. 8a–c), combination of two visual systems, based on weighted means M 2 or M 3, can outperform each of the individual systems only if they each perform relatively well (with higher performance ratio) and they are diverse (with high cognitive diversity). In an earlier work by Hsu and Taksa [12], it was shown that under certain conditions, rank combination can be better than score combination. In the current study, each of the six trials (Trials 1, 2, 5, 6, 9, and 10) has higher diversity than the remaining six trials. Similar to the results in [12], the six trials do have better rank combination (D) than score combination (C). It is also interesting to note that improvement in the other six trials was carried out by rank combination only (Trial 3, 4, 7, 8, 11, and 12). In other cases, whenever score combination (C) improves P and Q, rank combination (D) can also improve. All these indicate that the CFA framework, which uses score and rank combination, is robust in analyzing combination and decision problems for visual cognition systems. In the combination of decisions or visual cognition systems, as well as the integration of signals from different sensors, statistical means or weighted means such as M 1, M 2, or M 3 are often used [1, 3, 4, 5, 8]. It has been observed in these previous studies that M 3, using 1/σ P 2 (or 1/σ Q 2 ) as the weight assigned to system P (or Q), provides better combination results. In our current study, when comparing M 1, M 2, and M 3 in each of the 12 trials, it is shown that M 3 is better than M 1 and M 2 in 6 of the 12 trials, while M 1 and M 2 are the best in 5 and 1 of the 12 trials respectively, independent of the performance of P and Q. So our current study supports that observation. However, when comparing improvements of M i over P and Q, it was shown in our study that the statistical means M 1, M 2, and M 3 can improve P and Q in 4, 3, and 3 trials, respectively (see Table 3). On the other hand, the CFA framework (C or D) based on M1, M2, or M3 can improve P and Q in 8, 7, or 6 trials. All these indicate that the CFA framework is a viable analytic method in combining visual cognition systems and can be generalized to analyze data in bioinformatics and neuroscience. In summary, our CFA framework provides two criteria: performance ratio and cognitive diversity to guide us to combine two visual cognition systems with confidence radii. 
In the case of unsupervised learning or when the performance cannot be evaluated (e.g., the location of A is not known), cognitive diversity itself can be used to direct us when to combine (when the cognitive diversity is big enough) or how to combine (use rank combination or score combination) (see [12, 14, 21, 22, and 23]). Our future work includes the following: apply the CFA framework to the combination of more than two visual systems; study the effect of the number of partition intervals in the common visual space defined by P′Q′; use other diversity measurements such as Pearson's correlation (between two score functions s A and s B ) and Kendall's tau (see [29]) or Spearman's rho (between two rank functions r A and r B ); and apply the CFA framework to the combination of multiple sensing systems or the combination of multi-modal physiological systems. Acknowledgments: We thank two anonymous referees for helpful comments and suggestions which led to improvement of the manuscript. DFH is partially supported by a travel fund provided by DIMACS and CCICADA at Rutgers University. Open Access: This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. Affiliations: Laboratory of Informatics and Data Mining, Department of Computer and Information Science, Fordham University, New York, NY, USA; Program for the Human Environment, The Rockefeller University, New York, NY, USA. References:
1. Hillis JM, Ernst MO, Banks MS, Landy MS (2002) Combining sensory information: mandatory fusion within, but not between, senses. Science 298(5598):1627–1630
2. Tong F, Meng M, Blake R (2006) Neural basis of binocular rivalry. Trends Cogn Sci 10(11):502–511
3. Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415:429–433
4. Ernst MO (2007) Learning to integrate arbitrary signals from vision and touch. J Vis 7(5):7
5. Ernst MO (2010) Decisions made better. Science 329(5995):1022–1023
6. Gepshtein S, Burge J, Ernst O, Banks S (2009) The combination of vision and touch depends on spatial proximity. J Vis 5(11):1013–1023
7. Lunghi C, Binda P, Morrone C (2010) Touch disambiguates rivalrous perception at early stages of visual analysis. Curr Biol 20(4):R143–R144
8. Bahrami B, Olsen K, Latham P, Roepstroff A, Rees G, Frith C (2010) Optimally interacting minds. Science 329(5995):1081–1085
9. Kepecs A, Uchida N, Zariwala H, Mainen Z (2008) Neural correlates, computation and behavioural impact of decision confidence. Nature 455:227–231
10. Gold JI, Shadlen N (2007) The neural basis of decision making. Annu Rev Neurosci 30:535–574
11. Koriat A (2012) When are two heads better than one. Science 336:360–362
12. Hsu DF, Taksa I (2005) Comparing rank and score combination methods for data fusion in information retrieval. Inf Retrieval 8(3):449–480
13. Hsu DF, Chung YS, Kristal BS (2006) Combinatorial fusion analysis: methods and practice of combining multiple scoring systems. In: Hsu HH (ed) Advanced data mining technologies in bioinformatics.
Idea Group Inc, Hershey, pp 1157–1181
14. Hsu DF, Kristal BS, Schweikert C (2010) Rank-score characteristics (RSC) function and cognitive diversity. Brain Inform 6334:42–54
15. Deng Y, Hsu DF, Wu Z, Chu CH (2012) Combining multiple sensor features for stress detection using combinatorial fusion. J Interconnect Netw 13(03n04)
16. Deng Y, Wu Z, Chu CH, Zhang Q, Hsu DF (2013) Sensor feature selection and combination for stress identification using combinatorial fusion. Int J Adv Rob Syst 10
17. Liu CY, Tang CY, Hsu DF (2013) Comparing system selection methods for the combinatorial fusion of multiple retrieval systems. J Interconnect Netw 14(01)
18. Li Y, Hsu DF, Chung SM (2013) Combination of multiple feature selection methods for text categorization by using combinatorial fusion analysis and rank-score characteristic. Int J Artif Intell Tools 22(02)
19. Lin K-L, Lin C-Y, Huang C-D, Chang H-M, Yang C-Y, Lin C-T, Tang CY, Hsu DF (2007) Feature selection and combination criteria for improving accuracy in protein structure prediction. IEEE Trans Nanobiosci 6(2):186–196
20. Liu H, Wu ZH, Zhang X, Hsu DF (2013) A skeleton pruning algorithm based on information fusion. Pattern Recogn Lett 34(10):1138–1145
21. Lyons DM, Hsu DF (2009) Combining multiple scoring systems for target tracking using rank–score characteristics. Inf Fusion 10(2):124–136
22. Schweikert C, Brown S, Tang Z, Smith PR, Hsu DF (2012) Combining multiple ChIP-seq peak detection systems using combinatorial fusion. BMC Genom 13(Suppl 8):S12
23. Yang JM, Chen YF, Shen TW, Kristal BS, Hsu DF (2005) Consensus scoring criteria for improving enrichment in virtual screening. J Chem Inf Model 45:1134–1146
24. McMunn-Coffran C, Paolercio E, Liu H, Tsai R, Hsu DF (2011) Joint decision making in visual cognition using Combinatorial Fusion Analysis. In: Proceedings of the 10th IEEE international conference on cognitive informatics and cognitive computing, pp 254–261
25. McMunn-Coffran C, Paolercio E, Fei Y, Hsu DF (2012) Combining visual cognition systems for joint decision making using combinatorial fusion. In: Proceedings of the 11th IEEE international conference on cognitive informatics and cognitive computing, pp 313–322
26. Paolercio E, McMunn-Coffran C, Mott B, Hsu DF, Schweikert C (2013) Fusion of two visual perception systems utilizing cognitive diversity. In: Proceedings of the 12th IEEE international conference on cognitive informatics and cognitive computing, pp 226–235
27. Batallones A, McMunn-Coffran C, Mott B, Sanchez K, Hsu DF (2012) Comparative study of joint decision-making on two visual cognition systems using combinatorial fusion. Active Media Technology. Lecture Notes in Computer Science, Series No. 7669, pp 215–225
28. Batallones A, McMunn-Coffran C, Mott B, Sanchez K, Hsu DF (2013) Combining two visual cognition systems using confidence radius and combinatorial fusion. Brain and Health Informatics. Lecture Notes in Computer Science, Series No. 8211, pp 72–81
29. Ng KB, Kantor PB (2000) Predicting the effectiveness of naive data fusion on the basis of system characteristics. J Am Soc Inform Sci 51(12):1177–1189
CommonCrawl
Calibration of imaging parameters for space-borne airglow photography using city light positions Yuta Hozumi1, Akinori Saito1 & Mitsumu K. Ejiri2,3 A new method for calibrating imaging parameters of photographs taken from the International Space Station (ISS) is presented in this report. Airglow in the mesosphere and the F-region ionosphere was captured on the limb of the Earth with a digital single-lens reflex camera from the ISS by astronauts. To utilize the photographs as scientific data, imaging parameters, such as the angle of view, exact position, and orientation of the camera, should be determined because they are not measured at the time of imaging. A new calibration method using city light positions shown in the photographs was developed to determine these imaging parameters with high accuracy suitable for airglow study. Applying the pinhole camera model, the apparent city light positions on the photograph are matched with the actual city light locations on Earth, which are derived from the global nighttime stable light map data obtained by the Defense Meteorological Satellite Program satellite. The correct imaging parameters are determined in an iterative process by matching the apparent positions on the image with the actual city light locations. We applied this calibration method to photographs taken on August 26, 2014, and confirmed that the result is correct. The precision of the calibration was evaluated by comparing the results from six different photographs with the same imaging parameters. The precisions in determining the camera position and orientation are estimated to be ±2.2 km and ±0.08°, respectively. The 0.08° difference in the orientation yields a 2.9-km difference at a tangential point of 90 km in altitude. The airglow structures in the photographs were mapped to geographical points using the calibrated imaging parameters and compared with a simultaneous observation by the Visible and near-Infrared Spectral Imager of the Ionosphere, Mesosphere, Upper Atmosphere, and Plasmasphere mapping mission installed on the ISS. The comparison shows good agreements and supports the validity of the calibration. This calibration technique makes it possible to utilize photographs taken on low-Earth-orbit satellites in the nighttime as a reference for the airglow and aurora structures. The International Space Station (ISS) is a unique facility, providing opportunities to conduct various observations of the Earth's upper atmosphere. With the recent development of digital photography technology, a number of photographs of the upper atmosphere that contain the airglow and aurora have been taken by astronauts on the ISS with digital single-lens reflex (DSLR) cameras. These photographs are open to the public and available on NASA's Web site ("The Gateway to Astronaut Photography of Earth 1995"). Some of the photographs captured the airglow in the mesosphere and the F-region ionosphere on the night side of the Earth. This space-borne imaging covers much wider geographical area than ground-based imaging observations. Inspired by these photographs, we carried out the Astronaut-Ionosphere, Mesosphere, upper Atmosphere, and Plasmasphere mapping (A-IMAP) campaign observation to investigate the airglow structure of the Earth's upper atmosphere. In this observation, an astronaut took sequential photographs of the airglow through the window of the cupola, an ISS module, while the ISS flew on the night side of the Earth. 
The A-IMAP campaign is associated with the Ionosphere, Mesosphere, upper Atmosphere, and Plasmasphere mapping (IMAP) mission. Two imagers of the IMAP mission, one is for the airglow in visible and infrared light and the other is for resonant scattering from ions in extreme ultraviolet light, are operated almost continuously from September 2012 to August 2015. During the operation period of the IMAP mission, ten experiments of the A-IMAP campaign were carried out in 2014 and 2015. To utilize the photographs as scientific data, the imaging parameters must be determined. The camera was fixed to the ISS during the experiments, but the orientation of the camera was different for each experiment because the camera was detached from the window of the cupola after the experiment. A zoom lens was used for the A-IMAP campaign, and its focus length and viewing direction were adjusted in each experiment to capture clear airglow structures while minimizing the obstruction by other ISS structures such as a window-frame, robot arms and solar panels. Therefore, the camera orientation and angle of view (AOV) are unknown parameters and must be determined to utilize the image for airglow study. The precise camera position along the ISS trajectory is also an unknown parameter because the clock of the camera is not precise. To determine these imaging parameters, we developed a new calibration technique using the city light positions shown in the photographs. Because observations were conducted at night, city light positions are better reference than topographic features. The location of the camera is an unknown parameter. Therefore, the positions of city lights were used in our method, despite that the positions of stars are generally used to determine the field of view (FOV) of images taken in the nighttime from spacecrafts, rockets, and ground-based instruments. The attitude of a spacecraft can be determined by the position of stars; however, the positions of the spacecraft cannot be determined by stars because the distant star does not have a detectable parallax. The distances between the spacecraft and cities are close; hence, the city light position in the image gives rise to a detectable parallax, which is necessary for calibration of the camera location. This report is organized as follows. The method of calibration is described in the next section. In Results and discussion section, we present the result of applying the method to photographs taken on August 26, 2014, and evaluate precision of the calibration. Then, we map the airglow structures in the photographs to the geographical coordinates using the calibrated imaging parameters and discuss their vertical and horizontal structures. A comparison with a simultaneous observation of the 630-nm airglow emission by another instrument of the ISS/IMAP mission will also be presented. The conclusion of this report is presented in the last section. We developed a new method for calibrating imaging parameters using the city light positions shown in the airglow photographs taken from a low-Earth-orbit (LEO) satellite. The apparent city light positions were compared with the actual city light positions on the Earth by applying the pinhole camera model. The actual positions were determined by the global nighttime stable light data obtained by the visible and infrared sensors (OLS) on the Defense Meteorological Satellite Program (DMSP) satellite (Version 4 DMSP-OLS Nighttime Lights Time Series 2015). The resolution of the light data is 1 km. 
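A possible way to turn the DMSP-OLS stable-lights grid into candidate reference points for the calibration is sketched below. The grid bounds, the brightness threshold, and the synthetic example array are assumptions for illustration only, and the conversion to Earth-centred coordinates uses the standard WGS84 relations rather than anything specified in the text.

```python
import numpy as np

# Sketch: pick bright, stable city-light cells from a nighttime-lights grid and convert
# their geodetic positions to Earth-centred Cartesian coordinates for later projection.
# Grid geometry and threshold are illustrative assumptions.

def city_light_candidates(lights, lat_max, lon_min, deg_per_px, threshold=50):
    """Return (lat, lon) of grid cells brighter than `threshold` (digital number 0-63)."""
    rows, cols = np.nonzero(lights >= threshold)
    lats = lat_max - (rows + 0.5) * deg_per_px
    lons = lon_min + (cols + 0.5) * deg_per_px
    return np.column_stack([lats, lons])

def geodetic_to_ecef(lat_deg, lon_deg, h_m=0.0, a=6378137.0, f=1 / 298.257223563):
    """Convert WGS84 geodetic coordinates to Earth-centred Cartesian coordinates (m)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    e2 = f * (2 - f)
    n = a / np.sqrt(1 - e2 * np.sin(lat) ** 2)
    x = (n + h_m) * np.cos(lat) * np.cos(lon)
    y = (n + h_m) * np.cos(lat) * np.sin(lon)
    z = (n * (1 - e2) + h_m) * np.sin(lat)
    return np.column_stack([x, y, z])

# Synthetic example: a 10 deg x 10 deg, roughly 1-km-resolution grid with one bright cell.
grid = np.zeros((1200, 1200), dtype=np.uint8)
grid[300, 450] = 63
pts = city_light_candidates(grid, lat_max=0.0, lon_min=110.0, deg_per_px=10.0 / 1200)
city_xyz = geodetic_to_ecef(pts[:, 0], pts[:, 1])
```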
In the pinhole camera model (e.g., Trucco and Verri 1998; Jaehne 1997), the relation between the city light position on Earth and that on the photograph is given by $$\left[ {\begin{array}{*{20}c} x \\ y \\ z \\ \end{array} } \right] = R\left[ {\begin{array}{*{20}c} X \\ Y \\ Z \\ \end{array} } \right] - t,$$ $$u = f\frac{x}{z} + C_{x} ,$$ $$v = - f\frac{y}{z} + C_{y} ,$$ where \(\left[ {\begin{array}{*{20}l} x \hfill & y \hfill & z \hfill \\ \end{array} } \right]^{T}\) is the coordinates of the city light in the camera coordinate system, which is the coordinate system fixed to the camera, and \(\left[ {\begin{array}{*{20}l} X \hfill & Y \hfill & Z \hfill \\ \end{array} } \right]^{T}\) is the coordinates of the city light in the world coordinate system. In Eq. (1), t is the camera position in the world coordinates and R is the rotation matrix corresponding to the orientation of the digital camera. The matrix R transforms the coordinates of a point to the coordinate system fixed to the camera. In Eqs. (2) and (3), (u, v) is the coordinates of the projection point in the photograph in units of pixel and f is the focal length expressed in units of pixel, which varies as a function of AOV as \(f = \frac{1}{{\tan \left( {\frac{\text{AOV}}{2}} \right)}}\frac{{N_{\text{pix}} }}{2}\), where \(N_{\text{pix}}\) is the number of horizontal pixels. Moreover, \((C_{x} ,C_{y} )\) is the coordinates of the center of the photograph in units of pixel. The Euler angles \(\theta\), \(\sigma\), and \(\varphi\) are used to describe the orientation of the camera relative to the ISS. The relation between R and the ISS orientation R ISS can be written in terms of the Euler angles as $$R = \left[ {\begin{array}{*{20}c} {\cos \varphi } & {\sin \varphi } & 0 \\ { - \sin \varphi } & {\cos \varphi } & 0 \\ 0 & 0 & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\cos \sigma } & 0 & { - \sin \sigma } \\ 0 & 1 & 0 \\ {\sin \sigma } & 0 & {\cos \sigma } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} 1 & 0 & 0 \\ 0 & {\cos \theta } & {\sin \theta } \\ 0 & { - \sin \theta } & {\cos \theta } \\ \end{array} } \right]R_{\text{ISS}} .$$ Following the formulation of Brown (1971), we assume the radial distortion of the lens is given by $$u_{\text{corrected}} = u\left( {1 + k_{1} r^{2} + k_{2} r^{4} } \right),$$ $$v_{\text{corrected}} = v\left( {1 + k_{1} r^{2} + k_{2} r^{4} } \right),$$ where r is the distance from the center of the image in units of pixel, and \(k_{1}\) and \(k_{2}\) are coefficients of the radial distortion. With these equations, we can convert the city light positions in the world coordinates to projection points on the image. The camera location was assumed to be the same as the ISS location. Although the location of the ISS is known as a function of time, the times at which the photographs are taken are not precise. The time when the photograph was captured was recorded in the metadata of the image file in the exchangeable image file format. However, the clock of the camera is not accurate and has a time lag with respect to Universal Time (UT) because the clock was set by hand. It is necessary to determine the time lag so as to obtain the exact location of the camera. Because the orientation of the ISS is also known as a function of time, the orientation of the camera can be expressed in terms of its orientation relative to the ISS. 
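Written out as code, the forward model of Eqs. (1) to (6) might look like the following sketch; it is an illustration, not the authors' implementation. The rotation follows Eq. (4), the translation of Eq. (1) and the radial distortion of Eqs. (5) and (6) are applied exactly as printed, and the default pixel counts match the 4256 × 2382 format quoted for the photographs. Angles are in radians, and the camera position, the ISS attitude matrix, and the city-light coordinates are assumed to be given in one common Earth-fixed world frame.

```python
import numpy as np

# Sketch of the imaging model of Eqs. (1)-(6). All quantities are assumed to be
# expressed in a common Earth-fixed world frame; angles are in radians.

def rotation_from_euler(theta, sigma, phi, R_ISS):
    """Camera orientation R of Eq. (4): Euler rotations applied to the ISS attitude."""
    ct, st = np.cos(theta), np.sin(theta)
    cs, ss = np.cos(sigma), np.sin(sigma)
    cp, sp = np.cos(phi), np.sin(phi)
    Rz = np.array([[cp, sp, 0.0], [-sp, cp, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cs, 0.0, -ss], [0.0, 1.0, 0.0], [ss, 0.0, cs]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, ct, st], [0.0, -st, ct]])
    return Rz @ Ry @ Rx @ R_ISS

def project(points_world, R, t, aov_deg, n_pix_x=4256, n_pix_y=2382, k1=0.0, k2=0.0):
    """Project world points to (distorted) pixel coordinates via Eqs. (1)-(3), (5)-(6)."""
    f = (n_pix_x / 2.0) / np.tan(np.radians(aov_deg) / 2.0)   # focal length in pixels
    cx, cy = n_pix_x / 2.0, n_pix_y / 2.0
    cam = (R @ points_world.T).T - t                          # Eq. (1), as printed
    u = f * cam[:, 0] / cam[:, 2] + cx                        # Eq. (2)
    v = -f * cam[:, 1] / cam[:, 2] + cy                       # Eq. (3)
    r = np.hypot(u - cx, v - cy)                              # distance from image centre
    d = 1.0 + k1 * r ** 2 + k2 * r ** 4                       # Eqs. (5)-(6), as printed
    return np.column_stack([u * d, v * d])
```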
Therefore, the unknown parameters are the time lag, orientation of the camera relative to the ISS, AOV, and the coefficients of lens distortion \(k_{1}\) and \(k_{2}\). The correct set of these imaging parameters was determined by applying the imaging model described above. Once the set of parameters is assumed, the actual city light location can be converted to a position on the photograph using Eqs. (1–6). To evaluate the correctness of the assumed parameters, the mean distance between the city light positions on the photograph and projected positions were calculated. This process was iterated over the parameter space to determine the best set of parameters that minimizes the mean distance. The total number of unknown parameters is seven, i.e., one for the time lag, three for the orientation, one for the AOV, and two for the lens distortion. Each city light position on the two-dimensional plane of a photograph provides two restrict conditions. Therefore, at least four city light positions are necessary for the calibration. Practically, more than four city light positions are used to determine the parameters with improved accuracy. Calibration results We applied the calibration method to the photographs from the A-IMAP mission taken on August 26, 2014. On the day, the ISS flew on the night side of the Earth at latitude 21.8° N and longitude 93.2° E over Myanmar at 13:38 UT and flew out of the night side at latitude 49.2° S and longitude 142.5° W over the east of New Zealand at 14:12 UT. During the night path of the ISS, photographs were taken with a DSLR camera (Nikon D3s) and a lens (AI AF-s Zoom-Nikkor 17–35 mm f/2.8D IF-ED). Four photographs were taken every 11 s with an exposure time of 2.5 s. The camera was operated in Manual mode with an f-number of 2.8 and ISO speed of 102,400. Each photograph has 4256 × 2382 pixels. An image of the photographs is shown in Fig. 1. The observation time recorded in the metadata is 13:47:00.08 UT. This photograph is hereafter referred to as photograph A. The Earth's limb was captured in the upper part of the image. Above the Earth's limb, two airglow layers are identified. The lower layer is caused by emission from the mesopause region, and the upper layer is caused by emission from the F-region ionosphere. On the Earth's surface, several city lights are seen in the image. City lights of the islands of Indonesia in the center of the image and those of the northwest coast of Australia on the right side of the image are recognized. The green circles indicate city light positions used for the calibration. The city lights in the middle and long distance ranges from the ISS were chosen. The city lights too close to the ISS had a large apparent size on the photograph, leading to a reduction in positioning accuracy. The positions of the selected city lights should be widely distributed in the image to improve accuracy of the calibration. Because of the ISS's motion during the exposure time, 2.5 s, the apparent images of the city lights are stretched and have line-shape along the ISS trajectory as seen in Fig. 1. This effect is apparent for the city lights #1, #2, #5, and #7 in Fig. 1. We used the low/near end of the line-shaped image of these city lights, which correspond to the positions of city lights at the time of end of exposure time. Figure 2 shows the nighttime stable lights map derived from the DMSP-OLS data for the year 2013 (2) with the city locations indicated by the red circles. 
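The parameter scan described in the previous section can be organized as a simple grid search, sketched below under the assumption that a caller-supplied forward_model function (for example, one built from the projection sketch above together with an interpolated ISS ephemeris) returns the projected pixel positions of the reference city lights for a candidate parameter set. An exhaustive scan of all seven parameters at the quoted resolutions (0.1 s, 0.01°, 10⁻¹⁰, 10⁻¹⁷) would be prohibitively large, so in practice one would refine coarse-to-fine or vary one parameter at a time about the current best set, consistent with the one-dimensional cuts shown in Fig. 3.

```python
import numpy as np
from itertools import product

# Sketch of the iterative search: evaluate every node of a seven-parameter grid and
# keep the set minimizing the mean distance between observed and projected city lights.
# `forward_model(params)` is an assumed caller-supplied function returning an
# (n_city, 2) array of projected pixel coordinates for params = (dt, theta, sigma,
# phi, aov, k1, k2).

def calibrate(forward_model, obs_px, grids):
    """grids: dict of 1-D candidate arrays for the seven imaging parameters."""
    best, best_err = None, np.inf
    for params in product(grids["dt"], grids["theta"], grids["sigma"], grids["phi"],
                          grids["aov"], grids["k1"], grids["k2"]):
        proj = forward_model(params)
        err = np.mean(np.linalg.norm(proj - obs_px, axis=1))   # mean pixel distance
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```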
We utilized the data for the year 2013 to identify the city light positions, ignoring the difference between the stable lights for 2013 and the actual city lights on August 26, 2014. City lights covered by clouds are obscured in images taken from the ISS and should not be used for the calibration. The cloud coverage is easily identified by comparing the A-IMAP photographs with the DMSP-OLS data. One of the photographs taken on August 26, 2014 (photograph A). The recorded time is 13:47:00.08 UT. The green circles indicate the city light positions used for the calibration. The green numbers correspond to the red numbers shown in Fig. 2. City lights map observed by the DMSP satellite. The red circles indicate the city light positions used for the calibration. The red numbers correspond to the green numbers shown in Fig. 1. The DMSP image and data were processed by NOAA's National Geophysical Data Center, and the DMSP data were collected by the US Air Force Weather Agency. The calibration method was applied to photograph A. The mean distance between the city light positions on the photograph and the projected positions was calculated for every node of the seven-dimensional parameter space, with resolutions of 0.1 s for the time lag, 0.01° for the Euler angles and the AOV, \(10^{ - 10}\) for \(k_{1}\), and \(10^{ - 17}\) for \(k_{2}\). The parameter set minimizing the mean distance is the best set. The derived parameters are listed in the first row of Table 1. Figure 3 shows the parameter dependence of the mean distance. The variation of the mean distance as a function of each assumed parameter is calculated while the other parameters are kept at their best values. Because each parameter shows a smooth curve with a sharp minimum, as shown in Fig. 3, the search resolution of each parameter in the iteration process is adequate. Table 1 Imaging parameters determined by the calibration. Parameter dependence of the mean distance. Panels a–g show the dependences on the time lag, \(\theta\), \(\sigma\), \(\varphi\), AOV, \(k_{1}\), and \(k_{2}\), respectively. Each curve has a clear minimum. To evaluate the precision and stability of the calibration method, we applied the method to five other photographs with different city light conditions. These five photographs were taken in the same night path of the ISS at 2-min intervals from 13:49 to 13:57 UT using the same camera and the same settings as for photograph A. The photographing time recorded in the metadata of each photograph is presented in Table 1 (photographs B–F). During this period, the focal length of the lens was not changed and the camera orientation was stable relative to the ISS, because the camera was fixed to the ISS. The time lag of the camera's clock can be regarded as stable within the 0.1-s resolution over this 10-min period. Therefore, the imaging parameters derived from these five photographs should be the same as those derived from photograph A. The distance between adjacent photographing positions for these six photographs was about 880 km, and the cities used in each calibration process were not used in the calibrations for the other photographs. The calibration results and their standard deviations are presented in Table 1. The time lag was estimated in the range from −15.5 to −16.3 s with a standard deviation of 0.3 s. The camera position along the ISS orbit was derived from the time lag. A standard deviation of 0.3 s corresponds to 2.2 km of horizontal distance along the ISS trajectory, given the ISS speed of 7.4 km/s.
Therefore, the calibration precision for the camera position was ±2.2 km. The standard deviations of the derived Euler angles \(\theta\), \(\sigma\), and \(\varphi\) for the camera orientation were within 0.08°, yielding a 2.9-km difference in the plane perpendicular to the line of sight (LOS) at a tangential point of 90 km in altitude. This 2.9-km distance is the calibration precision for the determination of the altitude profiles of the airglow on the limb. The AOV was derived in the range from 90.53° to 91.93°. The distortion coefficients \(k_{1}\) and \(k_{2}\) were estimated in the ranges from −4.26 \(\times\,10^{ - 8}\) to −0.97 \(\times\,10^{ - 8}\) and from −0.26 \(\times\,10^{ - 15}\) to \(10.55 \times\,10^{ - 15}\), respectively. The AOV, \(k_{1}\), and \(k_{2}\) are not independent parameters. They are related to each other, and together they determine the FOV and its distortion. To evaluate their combined effect, we calculated the angle between the LOS of the pixel at the center of the image and that of a pixel at a distance of 1500 pixels from the center. We denote this angle by \(\alpha\), and the derived values of \(\alpha\) are listed in the rightmost column of Table 1. The derived values of \(\alpha\) were in the range from 36.71° to 36.99° with a standard deviation of 0.12°. The calibration precision for the FOV was therefore ±0.12°. The calibration errors are mainly attributed to errors in the determination of the city light positions on the photograph. Because each city light has a finite size on the photographs, the determination of its position introduces some error. Stars are also captured in the image, and the orientation of the camera can in principle be determined from star positions. For the current case, however, stars are captured only in a small upper portion of the image, whereas city lights are captured much more widely. The calibration precision evaluated above seems sufficient for most studies of airglow. Therefore, we use only city light positions for the calibration of A-IMAP observations. However, the additional use of a star map for the determination of the camera orientation would provide better calibration accuracy for future observations in which stars are distributed widely in the image. The calibration results are stable along the ISS path, over which the imaging parameters do not change. Therefore, only one photograph is needed when applying the method to determine the imaging parameters for sequential photographs. Even though most photographs taken along the ISS path did not contain clear city lights, because of clouds or the land–sea distribution under the ISS path, calibration of the imaging parameters is possible as long as at least one photograph contains a sufficient number of city lights.
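The quantity \(\alpha\) introduced above can be reproduced from the AOV and the distortion coefficients. The sketch below is our own reading: it assumes the Brown factor maps the ideal (pinhole) radius to the observed image radius, so the 1500-pixel image radius is first mapped back to an ideal radius before the angle is taken. With the values in Table 1 this yields angles of roughly 36–37°, consistent with the reported range, but the convention should be checked against the actual pipeline.

```python
import numpy as np
from scipy.optimize import brentq

def alpha_deg(aov_deg, k1, k2, n_pix=4256, r_image=1500.0):
    """Angle between the LOS of the center pixel and that of a pixel r_image pixels off-center."""
    f = (n_pix / 2.0) / np.tan(np.radians(aov_deg) / 2.0)
    # Invert r_image = r * (1 + k1 r^2 + k2 r^4) numerically to recover the ideal radius r.
    r_ideal = brentq(lambda r: r * (1.0 + k1 * r**2 + k2 * r**4) - r_image, 0.0, 2.0 * r_image)
    return np.degrees(np.arctan2(r_ideal, f))
```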
Vertical airglow structures The airglow structures on the photographs were mapped to geographical locations with the derived imaging parameters and compared with previous studies so as to evaluate the validity of the calibration results. The camera has a CMOS image sensor with red, green, and blue color filters in the Bayer arrangement (Bayer 1976). The number of pixels for the green channel was 2142 in width and 2844 in height, and that for the red and blue channels was 2142 in width and 1422 in height. A laboratory experiment was conducted to obtain the sensitivity spectrum for each color filter. Thirty monochromatic lines from 400 to 700 nm with a bandpass of 2 nm were used to measure the spectral response of the sensor. The camera was operated in manual mode with a 2.5-s exposure time and an ISO speed of 102,400. The mean raw count per pixel per cd/m2 for each monochromatic line is shown in Fig. 4. The counts were normalized by dividing by the peak mean count of the green channel. The result showed that the sensor is sensitive to light of wavelengths from 410 to 680 nm. Sensitivity spectrum of the camera obtained from the laboratory experiment. The response of the sensor to monochromatic light was examined in 10-nm intervals. The mean raw count per pixel per cd/m2 is normalized by dividing by the peak mean count of the green channel. The wavelengths of the major emission lines are also plotted. The spectra of the airglow have been observed from the ground (e.g., Krassovsky et al. 1962; Broadfoot and Kendall 1968) and from space (Broadfoot and Bellaire 1999). The responsivity of each color channel to the airglow can be estimated by comparing the camera sensitivity spectra with the airglow spectra. The red channel has sensitivity in the range from 570 to 680 nm. In this wavelength range, the intense emissions of the airglow are the atomic oxygen emission (OI) at 630 nm, the sodium emissions (NaD) in the range of 588.9–589.6 nm, and the hydroxyl (OH) emissions. Therefore, the red channel has the capability to measure these three emissions. The green channel has sensitivity in the range from 460 to 620 nm with a peak near the OI 557.7-nm green line emission. The OI green line dominates the green channel, although OH and NaD emissions also contribute to it. In the blue channel sensitivity range from 410 to 540 nm, no intense emission line of the airglow was identified. However, the emissions of the O2 Herzberg I, II, and III and the Chamberlain system (Chamberlain 1958; Khomich et al. 2008) could be measured in the blue channel. The limb profile of the raw count for photograph A was derived using the imaging parameters determined by the calibration method. The imaging data were averaged over 20 pixels along the horizontal axis of the image so as to increase the signal-to-noise ratio. Although the horizontal axis of the image is not perfectly identical to the Earth's horizontal axis, the two axes are roughly parallel, as shown in Fig. 1. Because of the Earth's curvature, there is a 0.9-km difference in tangential altitude between the rightmost and leftmost of the 20 pixels around 90 km. We ignored this discrepancy between the two axes over the 20 horizontal pixels and averaged the data along the horizontal axis of the image to reduce the computational cost. Pixels with counts exceeding the average count by more than one standard deviation were removed so as to avoid contamination from stars. We used the imaging parameters determined from photograph A (Table 1). The limb profile as a function of tangential altitude is plotted in Fig. 5. The coordinates of the tangential point at 90 km in altitude were 13.1° S and 130.3° E in geographical coordinates. The green channel has 102 pixels in the range from 0 to 100 km in tangential altitude, and the red and blue channels have 51 pixels in the same range. Therefore, the height resolution for the green channel was 1.0 km and that for the red and blue channels was 2.0 km. The green and red channels had a broad peak in the range from 180 to 300 km.
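A sketch of the horizontal averaging with the star rejection described above is shown below. The exact rejection rule (here: drop pixels more than one standard deviation above the row average before averaging) is our interpretation of the text.

```python
import numpy as np

def limb_profile(channel, col_start, width=20):
    """Average `width` adjacent image columns of one color channel into a limb profile.

    channel: 2-D array (image rows x image columns) of raw counts."""
    block = channel[:, col_start:col_start + width].astype(float)
    row_mean = block.mean(axis=1, keepdims=True)
    row_std = block.std(axis=1, keepdims=True)
    # Reject star-contaminated pixels: counts exceeding the row average by more than one sigma.
    clean = np.where(block > row_mean + row_std, np.nan, block)
    return np.nanmean(clean, axis=1), np.nanstd(clean, axis=1)
```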
The 557.7- and 630-nm OI emissions from the F-region ionosphere were considered to be the causes of the peaks in the green and red channels, respectively. Previous studies showed that the 630-nm OI airglow emission occurs in the altitude range of 180–350 km at low and middle latitudes, based on rocket data (Takahashi et al. 1990) and satellite data (Adachi et al. 2010). The altitude of the 630-nm emission derived from a ground-based imaging network with the triangulation technique was reported to be 260 ± 10 km at middle latitudes (Kubota et al. 2000). The 557.7-nm emission in the F-region has an intensity that is 30% of that of the 630-nm emission (Khomich et al. 2008). The vertical profiles of the F-region emission obtained from the red and green channels of the A-IMAP observation are in good agreement with these studies. All of the channels had a clear peak around an altitude of 90 km. The red channel had a peak at an altitude of 86 km. The NaD and OH emissions in the mesopause region are possible causes of the red channel peak. The green channel had a peak at an altitude of 94 km. It is dominated by the 557.7-nm OI airglow emission in the mesopause region, although the NaD and OH airglow would also contribute to the green channel profile in this altitude range to some extent. The peak altitudes of the emissions observed at the mesopause are in good agreement with previous observations by rockets (e.g., Packer 1961; Baker and Stair 1988) and satellites (e.g., Liu and Shepherd 2006). According to those studies, the typical peak altitudes are 95 km for the 557.7-nm OI emission, 90 km for the NaD emissions, and 86 km for the OH airglow. The emission layer in the blue channel, with a peak altitude of 94 km, could be attributed to the O2 emissions of Herzberg I, II, and III, and/or the Chamberlain system (Khomich et al. 2008). The peak altitudes of Herzberg I and II and the Chamberlain system were reported to be in the range from 94 to 100 km (López-González et al. 1992). The vertical structures of the airglow captured in the three channels are thus consistent with previous studies. Limb profiles of the three channels derived from photograph A. The coordinates of the tangential point at 90 km were 12.6° S and 130.5° E. The recorded time is 13:47:00.08 UT. The raw counts of the pixels along the vertical line of the image are averaged to 10 pixels horizontally so as to increase the signal-to-noise ratio and are plotted as a function of tangential altitude. The limb profile has its tangential point at 32.3° S and 124.2° E and 90 km in altitude. The error bars show the standard deviation of the averaged pixels. The green, red, and blue lines indicate the green, red, and blue channels, respectively. Horizontal airglow structures We derived the horizontal structure of the Equatorial Ionization Anomaly (EIA) in the 630-nm OI emission from the red channel of the sequential photographs of the A-IMAP campaign and compared it with a simultaneous observation of the 630-nm emission by the Visible and near-Infrared Spectral Imager (VISI), one of the instruments of the ISS-IMAP mission. In order to increase the signal-to-noise ratio and reduce the computational cost, we compressed the horizontal axis of the A-IMAP image from 2142 to 214 pixels by averaging data: the raw counts of the red channel of the photographs were averaged over 20 pixels for every 10 pixels along the horizontal axis of the image.
Because the discrepancy between the horizontal axis of the image and the Earth's horizontal axis is small over a range of 20 horizontal pixels, we can safely average the red channel data over 20 pixels along the horizontal axis of the image. In the averaging process, the contamination by stars was removed with the same method as mentioned above. For each vertical profile along the vertical axis of the A-IMAP image, the total raw count of the red channel in the range from 180 to 300 km in tangential altitude was calculated and mapped to its tangential point at 250 km in altitude. This process was applied to 570 photographs taken from 13:38:59 to 14:05:09 UT on August 26, 2014. The result is presented in Fig. 6a with a black–red color scale. At that time, the ISS flew toward the southeast and the center LOS of the camera pointed southeast with an AOV of 92° in the low-latitude region. The width of the FOV across the track was about 2100 km at 250 km in altitude. The horizontal resolution along the track was 21 km, which is determined by the 2.75-s imaging interval of the photographs. The horizontal resolution perpendicular to the track was 19 km, corresponding to the size of 10 pixels of the image. The robot arms of the ISS obstructed the airglow imaging on the right side of the FOV, as can be seen in Fig. 1. They cause the two line-shaped data gaps on the south side of the mapping image along the ISS orbit. The solar panels of the ISS also obstructed the observation on the left side of the FOV until 13:49:36 UT. The data gap caused by the solar panels can be seen on the north side of the mapping image. An enhancement of the total count of the red channel, caused by an enhancement of the 630-nm emission, can be seen around a latitude of 5° S. This enhancement was interpreted to be caused by the EIA, because the magnetic latitude of the enhancement is consistent with the typical magnetic latitude of the EIA (e.g., Watthanasangmechai et al. 2014, 2015). The total counts of the enhancement on the east side of the FOV were larger than those on the west side. This is because of the alignment of the LOS with the EIA, which has a zonally extended structure. Considering that the center LOS of the camera pointed southeast with an AOV of 92° in the low-latitude region, the LOS on the east side of the FOV pointed east, so that integration along the LOS yielded a large count from the EIA. The LOS on the west side of the FOV pointed south and resulted in a smaller count. Mapped total counts and the IMAP/VISI data. In a, total counts of the red channel of the photographs in the range between tangential altitudes of 180 and 300 km are mapped to the tangential points at 250 km with a black–red color scale. The 630-nm emission observed by IMAP/VISI is plotted in units of Rayleigh/nm with a blue–white color scale. In b, data of A-IMAP and IMAP/VISI along the green line in a are presented as functions of latitude. The red solid line is for the total counts of the red channel with the summation range from 180 to 300 km and the red dashed line is for that with the summation range from 230 to 300 km. The blue line is for the 630-nm emission observed by IMAP/VISI. IMAP/VISI is a visible and near-infrared spectral imager installed on the Exposure Facility of the Japanese Experiment Module on the ISS (Sakanoi et al. 2011). It made a simultaneous observation on August 26, 2014, together with the A-IMAP observation.
IMAP/VISI observed several emission lines of the airglow, including the 630-nm OI emission, with two slit-line FOVs pointing 45° forward and 45° backward of nadir. From 13:40:42 to 14:08:57 UT, it observed the 630-nm OI emission with the backward FOV and an exposure time of 4.0 s in "the spectral mode." In the spectral mode, it recorded the spectral shape in the wavelength range around 630 ± 6 nm with a resolution of 1.02 pixel/nm. We assumed that the maximum count is in the wavelength range containing both the 630-nm OI emission and the background emission and that the minimum count is in the range containing only the background emission. By subtracting the minimum count from the maximum count, the 630-nm airglow emission peak could be retrieved from the observed spectral shape. The 630-nm OI emission in units of Rayleigh/nm was mapped to an altitude of 250 km. The results are presented in Fig. 6a with a blue–white color scale. The width of the FOV across the track was 350 km at 250 km in altitude, and the resolutions across and along the track were 14 and 33 km, respectively. An enhancement of the 630-nm emission caused by the EIA was observed by IMAP/VISI in the latitude range from 0° to 10° S. For a more detailed comparison of the data, we plotted the data as functions of latitude in Fig. 6b. The total counts of the A-IMAP red channel and the IMAP/VISI data along the green line shown in Fig. 6a are plotted with the red and blue lines, respectively. To examine the effect of the vertical summation range, we show the latitudinal profile of the A-IMAP data with the summation range from 230 to 300 km with the red dashed line, in addition to that with the summation range from 180 to 300 km. The center of the EIA was located around 5.5° S for both observations. Although the enhancement peak locations showed a good coincidence, the enhancement observed by A-IMAP was wider than that observed by IMAP/VISI. We computed a nonlinear least-squares fit of each latitudinal profile to a function $$f\left( x \right) = A_{0} e^{ - \frac{z^{2}}{2}} + A_{3} + A_{4} x,$$ where \(z = \frac{{x - A_{1} }}{{A_{2} }}\). Assuming the width of the EIA is twice as large as the full width at half maximum, we determined the edge of the EIA for each profile. The southern edge for each profile is indicated with the vertical arrow of the corresponding line type in Fig. 6. The southern edge of the EIA in the red channel with the summation range from 180 to 300 km was displaced by about 3.3° in latitude, or 450 km in distance, from that in the IMAP/VISI observation, while the edge in the red channel with the summation range from 230 to 300 km was displaced by about 2.2° in latitude, or 300 km in distance, from that in the IMAP/VISI observation. The wider enhancement structure in the limb count profile of the photographs can be attributed to the horizontally integrated observation path. There was about 520 km of horizontal distance between the tangential points in the altitude range from 180 to 300 km in each image. For the summation range from 230 to 300 km, the horizontal distance between the tangential points is about 330 km. This means that the limb observation integrated the structure horizontally over about 520 or 330 km, depending on the summation range. As a result, a horizontally extended shape of the EIA was observed in the A-IMAP data. The difference in the EIA width derived from the different summation altitude ranges can be explained by the difference in horizontal distance between the tangential points in each summation range. On the other hand, IMAP/VISI has a relatively vertical integration path and can observe the true horizontal edge of the EIA. Therefore, it is reasonable that the results from the different observation geometries differ in the EIA edge. Although there is a difference in the width of the structure caused by the observation geometry, the EIA enhancement locations from the two observations are in good agreement. This agreement supports the validity of the proposed calibration method.
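The EIA edge determination can be sketched as below, using the reconstructed fit function (a Gaussian on a linear background) and the stated convention that the EIA width is twice the FWHM, which we read as placing each edge one FWHM away from the fitted center. The initial guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def eia_model(x, a0, a1, a2, a3, a4):
    """f(x) = A0 * exp(-z^2 / 2) + A3 + A4 * x, with z = (x - A1) / A2."""
    z = (x - a1) / a2
    return a0 * np.exp(-0.5 * z**2) + a3 + a4 * x

def eia_edges(lat, counts):
    """Fit a latitudinal profile and return the (southern, northern) EIA edges in latitude."""
    p0 = [counts.max() - counts.min(), lat[np.argmax(counts)], 3.0, counts.min(), 0.0]
    popt, _ = curve_fit(eia_model, lat, counts, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM of the Gaussian part
    return popt[1] - fwhm, popt[1] + fwhm                    # edges at center +/- one FWHM
```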
A new calibration technique for space-borne airglow photography was developed and confirmed using other optical measurements. By matching the apparent city light positions on a photograph with the actual city light positions derived from the DMSP-OLS stable night lights map in the pinhole camera model, the imaging parameters can be determined. We applied the method to the photographs taken by astronauts on the ISS on August 26, 2014, and evaluated the precision and stability of the calibration. The precision of the derived time lag for the camera clock was 0.3 s, and that of the camera orientation was 0.08°. The precision of the FOV was 0.12° for a pixel at a distance of 1500 pixels from the center of the image. The calibration result is precise enough for airglow and aurora studies. This calibration technique makes it possible to utilize photographs taken on LEO satellites as a reference for airglow and aurora structures. The raw count data of the photographs were mapped to their tangential points in geographical coordinates using the calibrated imaging parameters. The peak heights of the F-region airglow, e.g., the 630- and 557.7-nm OI emissions, and of the mesospheric airglow, such as the OI, NaD, and OH emissions, were in good agreement with those of previous studies. The EIA structures of the 630-nm OI emission obtained from the red channel of the photographs and from the IMAP/VISI data were also in good agreement. These agreements support the validity of the calibration method.
Abbreviations
AOV: angle of view
DMSP-OLS: the visible and infrared sensors on the Defense Meteorological Satellite Program satellite
DSLR: digital single-lens reflex
EIA: equatorial ionization anomaly
EXIF format: exchangeable image file format
FOV: field of view
ISS: International Space Station
ISS-IMAP: International Space Station-Ionosphere, Mesosphere, upper Atmosphere, and Plasmasphere mapping mission
LEO: low Earth orbit
VISI: Visible and near-Infrared Spectral Imager
References
Adachi T, Yamaoka M, Yamamoto M et al (2010) Midnight latitude-altitude distribution of 630 nm airglow in the Asian sector measured with FORMOSAT-2/ISUAL. J Geophys Res 115:1–11. doi:10.1029/2009JA015147 Baker DJ, Stair AT (1988) Rocket measurements of the altitude distributions of the hydroxyl airglow. Phys Scr 37:611–622. doi:10.1088/0031-8949/37/4/021 Bayer B (1976) Color imaging array. US Patent 3971065, 20 July 1976 Broadfoot AL, Bellaire PJ (1999) Bridging the gap between ground-based and space-based observations of the night airglow. J Geophys Res 104:17127. doi:10.1029/1999JA900135 Broadfoot AL, Kendall KR (1968) The airglow spectrum, 3100–10,000 A. J Geophys Res 73:426. doi:10.1029/JA073i001p0042 Brown DC (1971) Close-range camera calibration. Photogramm Eng 37:855–866 Chamberlain JW (1958) The blue airglow spectrum. Astrophys J 128:713. doi:10.1086/146582 Jaehne B (1997) Practical handbook on image processing for scientific applications. CRC Press Inc, Boca Raton Khomich VY, Semenov AI, Shefov NN (2008) Airglow as an indicator of upper atmospheric structure and dynamics.
Springer, Berlin Krassovsky VI, Shefov NN, Yarin VI (1962) Atlas of the airglow spectrum 3000–12400 Å. Planet Space Sci 9:883–915. doi:10.1016/0032-0633(62)90008-9 Kubota M, Shiokawa K, Ejiri MK et al (2000) Traveling ionospheric disturbances observed in the OI 630-nm nightglow images over Japan by using a Multipoint Imager Network during the FRONT Campaign. Geophys Res Lett 27:4037–4040. doi:10.1029/2000GL011858 Liu G, Shepherd GG (2006) An empirical model for the altitude of the OH nightglow emission. Geophys Res Lett 33:L09805. doi:10.1029/2005GL025297 López-González MJ, López-Moreno JJ, Rodrigo R (1992) Altitude and vibrational distribution of the O2 ultraviolet nightglow emissions. Planet Space Sci 40:913–928. doi:10.1016/0032-0633(92)90132-8 Packer DM (1961) Altitude of the night airglow radiations. Ann Geophys 17:67–75 Sakanoi T, Akiya Y, Yamazaki A et al (2011) Imaging observation of the earth's mesosphere, thermosphere and ionosphere by VISI of ISS-IMAP on the International Space Station. IEEJ Trans Fundam Mater 131:983–988. doi:10.1541/ieejfms.131.983 Takahashi H, Clemesha BR, Batista PP et al (1990) Equatorial F-region OI 6300 Å and OI 5577 Å emission profiles observed by rocket-borne airglow photometers. Planet Space Sci 38:547–554. doi:10.1016/0032-0633(90)90148-J The Gateway to Astronaut Photography of Earth (1995). http://eol.jsc.nasa.gov/. Accessed 6 Apr 2016 Trucco E, Verri A (1998) Introductory techniques for 3-D computer vision. Prentice Hall, Englewood Cliffs Version 4 DMSP-OLS Nighttime Lights Time Series (2015). http://ngdc.noaa.gov/eog/dmsp/downloadV4composites.html. Accessed 6 Apr 2016 Watthanasangmechai K, Yamamoto M, Saito A et al (2014) Latitudinal GRBR-TEC estimation in Southeast Asia region based on the two-station method. Radio Sci 49:910–920. doi:10.1002/2013RS005347 Watthanasangmechai K, Yamamoto M, Saito A et al (2015) Temporal change of EIA asymmetry revealed by a beacon receiver network in Southeast Asia. Earth Planets Space 67:75. doi:10.1186/s40623-015-0252-9
YH developed the calibration method, applied the method to data, interpreted the calibration results, and wrote the first draft of the manuscript. AS conceived the airglow observation with photographs taken on the International Space Station, coordinated the observations, and contributed to the improvement of the calibration method. ME led the sensitivity calibration of the camera. All authors revised and improved the manuscript. All authors read and approved the final manuscript. Data that support this report are from the Astronaut-Ionosphere, Mesosphere, upper Atmosphere, and Plasmasphere mapping (A-IMAP) mission and the Ionosphere, Mesosphere, upper Atmosphere, and Plasmasphere mapping mission from the ISS (ISS-IMAP mission). We thank all the members of the A-IMAP and ISS-IMAP missions. We are also thankful to NOAA's National Geophysical Data Center and the US Air Force Weather Agency for the provision of the global nighttime stable lights data obtained from the visible and infrared sensors (OLS) on the Defense Meteorological Satellite Program (DMSP) satellite. This work was supported by JSPS KAKENHI Grant Number JP26610156.
Graduate School of Science, Kyoto University, Kyoto, Japan: Yuta Hozumi & Akinori Saito
Department of Polar Science, Graduate University for Advanced Studies (SOKENDAI), Tachikawa, Japan: Mitsumu K. Ejiri
National Institute of Polar Research, Tachikawa, Japan
Correspondence to Yuta Hozumi.
Hozumi, Y., Saito, A. & Ejiri, M.K. Calibration of imaging parameters for space-borne airglow photography using city light positions. Earth Planets Space 68, 155 (2016). doi:10.1186/s40623-016-0532-z
Keywords: Camera calibration; Space-borne imaging
Machine learning analysis of motor evoked potential time series to predict disability progression in multiple sclerosis Jan Yperman1,2,3, Thijs Becker2, Dirk Valkenborg2, Veronica Popescu4,5, Niels Hellings3, Bart Van Wijmeersch4,5 & Liesbet M. Peeters2,3 Evoked potentials (EPs) are a measure of the conductivity of the central nervous system. They are used to monitor disease progression of multiple sclerosis patients. Previous studies only extracted a few variables from the EPs, which are often further condensed into a single variable: the EP score. We perform a machine learning analysis of motor EP that uses the whole time series, instead of a few variables, to predict disability progression after two years. Obtaining realistic performance estimates of this task has been difficult because of small data set sizes. We recently extracted a dataset of EPs from the Rehabilitation & MS Center in Overpelt, Belgium. Our data set is large enough to obtain, for the first time, a performance estimate on an independent test set containing different patients. We extracted a large number of time series features from the motor EPs with the highly comparative time series analysis software package. Mutual information with the target and the Boruta method are used to find features which contain information not included in the features studied in the literature. We use random forests (RF) and logistic regression (LR) classifiers to predict disability progression after two years. Statistical significance of the performance increase when adding extra features is checked. Including extra time series features in motor EPs leads to a statistically significant improvement compared to using only the known features, although the effect is limited in magnitude (ΔAUC = 0.02 for RF and ΔAUC = 0.05 for LR). RF with extra time series features obtains the best performance (AUC = 0.75±0.07 (mean and standard deviation)), which is good considering the limited number of biomarkers in the model. RF (a nonlinear classifier) outperforms LR (a linear classifier). Using machine learning methods on EPs shows promising predictive performance. Using additional EP time series features beyond those already in use leads to a modest increase in performance. Larger datasets, preferably multi-center, are needed for further research. Given a large enough dataset, these models may be used to support clinicians in their decision making process regarding future treatment. Multiple sclerosis (MS) is an incurable chronic disease of the central nervous system (CNS). Because of inflammation, there is demyelination and degeneration of the CNS, and patients acquire symptoms which depend on the site of the lesions. Typical MS symptoms include sensation deficits and motor, autonomic and neurocognitive dysfunction. The clinical course of MS can span decades, and varies greatly between individuals [1]. Although there is currently no cure, there are numerous disease-modifying treatments (DMTs) that alter the natural disease course, with more on the way [2]. For the time being, it remains impossible to accurately predict the disease course of an individual patient. This unpredictability causes anxiety and frustration for patients, families and health-care professionals [3]. Ideally, MS care should involve an individualized clinical follow-up and treatment strategy. A fast and sensitive detection of non-response to the current treatment could trigger a treatment switch, which would optimize the individual treatment trajectory.
While there are numerous biomarkers available for MS, there is still discussion on their relative usefulness. Besides magnetic resonance imaging (MRI) scans, which visualize lesions in the CNS, other clinical parameters such as the expanded disability status scale (EDSS) [4] are used in the assessment of MS disease progression [5–9]. Several research groups have shown that evoked potentials (EPs) allow monitoring of MS disability and prediction of disability progression [10–32]; see [33, 34] for reviews. However, the precise value of EPs as a biomarker for monitoring MS is still under discussion [35–37]. EPs provide quantitative information on the functional integrity of well-defined pathways of the central nervous system, and reveal early infra-clinical lesions. They are able to detect the reduction in electrical conduction caused by damage (demyelination) along these pathways even when the change is too subtle to be noticed by the person or to translate into clinical symptoms. EPs measure the electrical activity of the brain in response to stimulation of specific nerve pathways or, conversely, the electrical activity in specific nerve pathways in response to stimulation of the brain. Different types of EP are available, corresponding to different parts of the nervous system [38]. For visual EP (VEP) the visual system is excited and conductivity is measured in the optic nerve; for motor EP (MEP) the motor cortex is excited and conductivity is measured in the feet or hands; for somatosensory EP (SEP) the somatosensory system (touch) is excited and conductivity is measured in the brain; and for brainstem auditory EP (BAEP) the auditory system (ears) is excited and conductivity is measured at the auditory cortex. If several types of EP are available for the same patient, this is referred to as a multimodal EP (mmEP). Considerable community effort has been devoted to summarizing mmEP with a one-dimensional statistic, called the EP score (EPS), by applying different scoring methods [14, 15, 18, 19, 23, 31, 32]. The scoring methods described in the literature use a limited number of features from these EP time series (EPTS). The latency (i.e. the time for the signal to arrive) is always included. Besides latency, the amplitude and dispersion pattern may also be included in the EPS [23]. By using only two or three variables extracted from the EPTS, potentially useful information is lost. In this study, we investigate whether a machine learning approach that includes extra features from the EPTS can increase the predictive performance of EP in MS. Predicting disability progression is often translated to a binary problem, where a certain increase in EDSS is taken to indicate a deteriorated patient. In the literature, the main modeling techniques are linear correlation of latency or EPS with EDSS, and linear or logistic regression models. Except for one study with 30 patients [13], no study has used an independent test set to assess model performance. Some studies use cross-validation to estimate model performance [17, 19, 20, 22, 27]. The Akaike information criterion (AIC) or Bayesian information criterion (BIC) is sometimes included to encourage model parsimony. While such models are statistically rigorous, insightful, and often used in practice [39], a realistic performance estimate is obtained by training on a large dataset (part of which is used as a validation set to tweak any hyperparameters), and testing on an independent large dataset containing different patients.
This study provides, for the first time, such a performance estimate. We recently extracted a large number of EPTS from the Rehabilitation & MS Center in Overpelt, Belgium. This patient cohort consists of individuals undergoing treatment. This is the most relevant scenario, since in a clinical setting the majority of patients will have had some form of treatment prior to these types of measurements. The resulting dataset, containing the full time series of mmEP with longitudinal information for most patients, is the first of its kind. We perform a disability prediction analysis on the MEP from this dataset, as this EP modality is the most abundant in the dataset. A machine learning approach is used to see if there is extra information in the MEP for predicting disability progression after 2 years, besides latency and amplitude. In total, 419 patients have at least one measurement point, whereas previous studies had between 22 and 221 patients. Including extra EP features leads to a statistically significant increase in performance in predicting disability progression, although the absolute effect is small. Our results suggest that this effect will become more stable on a larger dataset. We show that a nonlinear model (random forests) achieves significantly better performance compared to a linear one (logistic regression). The best model for predicting disability progression after 2 years achieves an average area under the curve (AUC) of the receiver-operating characteristic (ROC) of 0.75. In the literature, AUC ROC values for this task range from 0.74 to 0.89, with prediction windows between 6 months and 20 years [20, 23, 24, 26, 27]. The predictive performance of the MEP in this work, achieved on an independent test set and measured in a real-world setting, shows that MEP can be a valuable biomarker in clinical practice for disease monitoring and the prediction of disability progression. If our identified extra features are confirmed in larger, multi-center studies, they can be used to give additional feedback to caregivers on the disease evolution. Measurement protocol Motor evoked potentials were recorded from the preinnervated abductor pollicis brevis (APB) and abductor hallucis (AH) muscles bilaterally. Magnetic stimuli were delivered to the hand and leg areas of the motor cortex with a Magstim 2002 or Bistim device (The Magstim Company Ltd., Whitland, UK) via a round coil with an inner diameter of 9 cm; the maximal output of the stimulator is 2.2 T. Recording is done with two different machines. The signal is recorded for 100 ms, starting from the moment the stimulus is applied. The resulting signal is digitized at a sampling rate of 20 kHz or 19.2 kHz (depending on which machine was used), resulting in 2000 or 1920 data points per measurement, respectively. One such measurement is illustrated in Fig. 1. The 20 kHz signals are down-sampled to 19.2 kHz. Signals from one machine are filtered between 0.6 Hz and 10 kHz, while the other machine has a high-pass filter of 100 Hz. Single MEP An example of a single motor evoked potential (MEP) measurement. The annotations indicate the following points: 1: the point where the measurement starts, i.e., the moment the motor cortex is stimulated. 2: as the first 70 points of the measurement often contain artifacts, we discard all points up to this point. 3: the latency of the signal, as annotated by specialized nurses.
The time series consists of 1920 values. For the hands, electrodes are placed at three places: on top of the hand (ground), on the APB muscle, and on the proximal phalanx of the thumb. The first excitation is at 45% of the maximal stimulator output. New stimuli are presented with increments of 5 percentage points. The measurement ends if the amplitude reaches 1 millivolt, or if the amplitude stops increasing for stronger stimuli. If the signal is of bad quality, as judged by the nurse, it is discarded. For the feet, electrodes are placed at three places: on top of the foot (ground), on the big toe, and on the AH muscle. The first excitation is at 50% of the maximal stimulator output. New stimuli are presented with increments of 5 percentage points. The measurement ends if the amplitude reaches 1 millivolt, or if the amplitude stops increasing for stronger stimuli. If the signal is of bad quality, as judged by the nurse, it is discarded. An example of all the EPTS of the MEP for a single visit is shown in Fig. 2. For each limb, each excitation strength gives one EPTS. After discussion with the neurologists, we decided to use only the EPTS with the maximal peak-to-peak amplitude, as this is likely to be the most informative measurement. EPTS example Example of the EPTS measured during a single patient visit. The titles indicate the anatomy (APB for the hands, AH for the feet) and the respective sides (L for left, R for right). The full evoked potential dataset consists of 642 patients and has SEP (528), BAEP (1526), VEP (2482), and MEP (6219) visits (dataset paper to be published). We only study the MEP, because they are most frequently measured. Each MEP visit contains 4 measurements: two for the hands (APB muscle), and two for the feet (AH muscle). Visits that do not contain all 4 EPTS are discarded. We use the standard definition of disability progression [40], where the patient has progressed if \(\text {EDSS}_{T_{1}} - \text {EDSS}_{T_{0}} \geq 1.0\) for \(\text {EDSS}_{T_{0}} \leq 5.5\), or if \(\text {EDSS}_{T_{1}} - \text {EDSS}_{T_{0}} \geq 0.5\) for \(\text {EDSS}_{T_{0}} > 5.5\). T0 is the time of the first measurement, and T1 is the time of the EDSS measurement between 1.5 and 3 years which is closest to the 2-year mark. The MEP visit has to occur within 1 year before or after T0. Visits without two-year follow-up are discarded. We do not perform confirmation of disability progression, where one confirms progression with one or a few EDSS measurements after \(\text {EDSS}_{T_{1}}\) [40]. This makes it more straightforward to compare our results with the literature on predicting disability progression from EPs, where this is also not done. It furthermore gives us more input-output pairs for training the model, where the input is a collection of measurement variables (e.g. latency, peak-to-peak amplitude, age), and the output is the disability progression target (yes or no). From now on we refer to input-output pairs as samples. The downside is that our target is likely noisier, as some positive targets do not reflect true disability progression, but rather fluctuations in the measurement or disease process. Measurements with a duration differing from 100 ms are discarded. Around 97% of the data has a duration of 100 ms, so this has little influence on the dataset size, and it keeps the data homogeneous. The majority of EPTS consist of 1920 data points. Due to a slight difference in the sampling rate (cfr. "Measurement protocol" section), some EPTS consist of 2000 data points. These EPTS were downsampled to 1920 data points.
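For clarity, the progression target defined above can be written as a small helper function (the function name is ours):

```python
def progressed(edss_t0, edss_t1):
    """Binary disability-progression target after ~2 years, following the definition in [40]."""
    if edss_t0 <= 5.5:
        return edss_t1 - edss_t0 >= 1.0
    return edss_t1 - edss_t0 >= 0.5
```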
The latencies (as illustrated in Fig. 1) were manually annotated by specialized nurses during routine clinical follow-up, and are included in the extracted dataset. Their test-retest reliability has not been calculated for this dataset, but the literature indicates that they are reliable [41–43], and can be used as a biomarker to study MS disease progression [44]. The peak-to-peak amplitude was calculated by taking the difference between the minimum and maximum value of the whole EPTS. In case no spontaneous response or MEP at rest is obtainable, the patient is asked to lightly contract the muscle in question in order to activate the motor cortex and increase the possibility of obtaining a motor response. This so-called facilitation method usually yields very noisy signals due to the baseline contraction of the measured muscle, so we decided to drop these measurements from the dataset altogether. Facilitated measurements are characterized by a non-flat signal right from the start of the measurement. We drop any EPTS that have a spectral power above an empirically determined threshold in the starting segment of the measurement. This segment is determined by the latency values of a healthy patient, which we set to 17 ms, as this is the lower bound for the hands. We use the same threshold for the feet, which is not a problem since the lower bound there is higher. The type of MS was inferred from the diagnosis date and the date of onset, both of which have missing values, making the type of MS field somewhat unreliable. After all these steps, we are left with a dataset of 10 008 EPTS from 2502 visits of 419 patients. Note that one patient can have several visits that satisfy the conditions for two-year follow-up. We have one target (worsened after 2 years or not) for each visit, so the total number of samples is 2502. Some of the characteristics of the dataset are summarized in Table 1. Table 1 Characteristics of the dataset
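Before moving to the pipeline, a sketch of the facilitation screening described above is shown below. The 17-ms quiet segment and the 19.2-kHz sampling rate follow the text, but the precise definition of "spectral power" and the threshold value are not given in the paper, so both are assumptions here.

```python
import numpy as np

FS_HZ = 19200.0      # sampling rate after resampling
QUIET_S = 0.017      # 17 ms: lower bound on hand latency, used to define the pre-response segment

def is_facilitated(epts, power_threshold):
    """Flag an EPTS as facilitated if the pre-latency segment is not flat (assumed criterion)."""
    n = int(QUIET_S * FS_HZ)                       # ~326 samples
    segment = epts[:n] - np.mean(epts[:n])
    power = np.sum(np.abs(np.fft.rfft(segment)) ** 2) / n
    return power > power_threshold
```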
Data analysis pipeline We start with a simple model that uses a subset of the features proposed in the literature. As other EPS require neurologist interpretation, and are therefore difficult to automate, we use latencies in our baseline. The fact that this is a fair baseline is supported by [23], where it was shown that different EPS have similar predictive performance, with short-term change or baseline values in (z-scored) latencies being more predictive than changes in other EPS. It is furthermore supported by the results from [17], where the central motor conduction time of the MEP was more informative for disability progression than the MEP EPS. Despite the increased size of the dataset, the disability progression classification task remains a challenging problem. Challenging aspects are the limited sensitivity to change of the EDSS measure, its dependence on neurologist interpretation, and the heterogeneity of disease development. Therefore, our data analysis pipeline is mainly focused on minimizing overfitting. As our dataset includes the full EPTS, we wish to find one or more time series features that provide supplemental information on disability progression, on top of the features already used in the literature. A schematic overview of the data analysis pipeline is shown in Fig. 3. The various steps in the data analysis pipeline are detailed below. The data analysis pipeline was implemented in Python using the scikit-learn library [45], with the exception of the Boruta processing step, for which we used the Boruta package in R [46], and the feature extraction, for which we used the highly comparative time-series analysis (HCTSA) package [47, 48], which is implemented in Matlab and is available on github (https://github.com/benfulcher/hctsa). To determine sensible values for the hyperparameters of the model, we performed grouped 4-fold cross-validation on the training set. For more details on this, along with an analysis of the robustness of the performance of the model to these choices of hyperparameters, we refer the reader to Additional file 1. Pipeline Schematic overview of the data analysis pipeline. By grouped stratified shufflesplit we mean splits of train/test sets generated by assigning samples to the train or test set at random, but keeping the patients in the test and training sets separated, and making sure the ratio of positive targets is roughly the same for the test and training sets. To assess the impact of the size of the training set, we run this pipeline for 4 different ratios of train/test set size. Feature extraction: Because each EPTS starts with a large peak, an uninformative artifact of the electrophysiological stimulation, the first 70 data points of each EPTS are discarded. A diverse and large set of time series features is extracted from the rest of the EPTS (1850 data points) with the HCTSA package, which automatically calculates around 7700 features from different TS analysis methodologies. The motivation for this approach is to perform a wide variety of time series analysis types, and draw general conclusions on which approaches are useful. It makes the analysis less subjective, since one does not have to choose a priori the type of hand-engineered features that are extracted. Given the large size of this feature set, one expects that almost all useful statistical information contained in the EPTS is encoded in it. A detailed discussion of the HCTSA library and its included features can be found in the manual of its git repository (https://hctsa-users.gitbook.io/hctsa-manual/) and in the supplementary information of [47]. There are several highly performant time series classification libraries available (e.g. [49]). The advantage of using HCTSA is interpretability: after feature selection, the final model contains one to a few extra features, whose content can be investigated. This is in contrast to more black-box type classifiers that take the whole signal as input and return an output. The underlying philosophy is very similar to, e.g., radiomics for the analysis of MRI images [50]. To our knowledge, HCTSA is the only library that computes such a comprehensive set of time series features. The feature matrix Fij has rows i for each EPTS and columns j for each feature. If a column fj contains an error or NaN value it is discarded. Normalization is performed by applying the following transformation to each column: $$ \hat{\mathbf{f}}_{j} = \left\{ 1 + \exp \left[ - \frac{\mathbf{f}_{j} - \text{median}(\mathbf{f}_{j})}{\text{iqr}(\mathbf{f}_{j}) / 1.35} \right] \right\}^{-1}, $$ with iqr the interquartile range. Because the median and iqr are used, this normalization is robust to outliers. All normalized columns \(\hat {\mathbf {f}}_{j}\) that contain an error or NaN are discarded. To exploit the symmetry between the measurements performed on the left and the right limb, we sum the features of both sides. This reduces the number of features we need to consider, which is helpful against overfitting. The final normalized feature matrices \(\hat {\mathbf {F}}_{ij}\) of AH and APB both have size 5004 × 5885.
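The robust sigmoidal normalization above translates directly into code; a minimal version is:

```python
import numpy as np
from scipy.stats import iqr

def robust_sigmoid(column):
    """Outlier-robust sigmoidal normalization of one feature column (equation above).

    Columns with zero interquartile range produce NaN/inf and are discarded, as in the text."""
    scale = iqr(column) / 1.35
    return 1.0 / (1.0 + np.exp(-(column - np.median(column)) / scale))
```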
Mutual Information: Our goal is to use a feature selection algorithm in order to determine the most important features. The ratio of the number of samples to the number of features is quite small (≈ 1). The feature selection algorithm we use, Boruta [46], was expected to work well for such a ratio [51]. However, we found it to perform poorly for our problem. The performance of Boruta was tested by adding the latency, which is known to be relevant, to the list of candidate features; it was subsequently not marked as relevant by Boruta. We therefore reduce the number of features using mutual information with the target as a measure of feature importance. We select the top ten percent of features based on this metric. We performed an analysis of the impact of this choice of preselection method, for which we refer the reader to Additional file 1. Hierarchical clustering: In this step we seek to reduce redundancy in our choice of features. We estimate this redundancy using the correlation distance, which we define here as $$ \text{correlation distance} = 1 - \left|\frac{\left(\mathbf{u} - \bar{\mathbf{u}}\right)\cdot\left(\mathbf{v} - \bar{\mathbf{v}}\right)}{\left\lVert \mathbf{u} - \bar{\mathbf{u}} \right\rVert_{2} \left\lVert \mathbf{v} - \bar{\mathbf{v}} \right\rVert_{2}}\right| $$ where u and v are the feature vectors we wish to compare, and ∥·∥2 the Euclidean norm. Note that we take the absolute value of the correlation, so that highly anti-correlated features are filtered out as well. Features which are highly correlated have a distance close to zero, and conversely features which are not correlated have a distance close to 1. We cluster all features at a cutoff of 0.1 and keep only one feature for each cluster. This step roughly halves the number of features that remained after the mutual information selection step. We performed an analysis of the impact of this step on the final result, for which we refer the reader to Additional file 1. Boruta: With the number of features now reduced to a more manageable count, we run the Boruta algorithm [46] to estimate the importance of the remaining features. In a nutshell, the Boruta algorithm compares the importance (as determined by a z-scored mean decrease accuracy measure in a random forest) of a given feature with a set of shuffled versions of that feature (called shadow features). If a feature's importance is significantly higher than the maximal importance of its set of shadow features, it is marked as important. Conversely, any feature with importance significantly lower than the maximal importance of its shadow features is marked as unimportant, and is removed from further iterations. This procedure is repeated until all features have an importance assigned to them, or until a maximal number of iterations is reached. Because the Boruta method is based on random forests, and because we use random forests as our classifier, we expect it to be well suited for the feature selection task. We add a few literature features to the set of TS features as well (latencies, EDSS at T0 and age). There are multiple reasons for doing this. First off, it allows us to check the performance of the Boruta algorithm, as these features are known to be important. Secondly, some of the TS features may only be informative in conjunction with a given literature feature.
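The hierarchical clustering step above (pruning features at a correlation-distance cutoff of 0.1) could look as follows. The linkage criterion is not stated in the paper; complete linkage is an assumption here.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def prune_correlated(X, cutoff=0.1):
    """Keep one representative column per cluster of features with distance 1 - |corr| < cutoff.

    X: 2-D array (samples x features). Returns the reduced matrix and the kept column indices."""
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(squareform(dist, checks=False), method="complete"),
                      t=cutoff, criterion="distance")
    keep = [np.flatnonzero(labels == lab)[0] for lab in np.unique(labels)]
    return X[:, keep], keep
```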
Boruta returns a numerical measure of feature importance, which allows us to assign an ordering to the features. On average, some 80 features are confirmed to be relevant. From these we select the 6 most important ones, based on their importance score. This cutoff was chosen empirically using cross-validation (cfr. Additional file 1), as more features leads to overfitting of the classifier. Classifier For the final classification we use a random forest, with 100 decision trees and balanced class weights. Using more trees led to no improvement in cross-validation (cfr. Additional file 1). We opted for a random forest classifier because it is a non-linear classifier, which is known to be robust against overfitting [52]. It is a popular choice for machine learning tasks involving relatively small datasets. We regularize the model further by setting the minimal number of samples required for a split to be 10% of the total number of samples. This value was obtained using cross-validation on the training set (cfr. Additional file 1). The maximum depth of the resulting decision trees averages around 8. As linear models are most often used in the literature, we use logistic regression for comparison. Furthermore, logistic regression is often used as a baseline in these types of machine learning tasks. As discussed earlier, we have 4 time series per visit: 2 of the hands (left and right), and 2 of the feet (left and right). We run the pipeline for the hands and feet separately and average the predictions of the resulting classifiers to get the final prediction. This approach was chosen for two reasons: the time series resulting from the measurements are quite disparate, therefore the same time series features may not work well for both. The other reason is that adding too many features to the model causes the classifier to overfit. Splitting up the task like this reduces the number of features per model. We found that the performance of the algorithm is greatly influenced by the choice of training and test set. To get a measure for how much this factors in, we run this data analysis pipeline 1000 times, each time with a different choice of train/test split. That way we can get a better understanding of the usefulness of this process, rather than focusing on a single split. It also drives down the standard error on the mean performance estimate, allowing for a more accurate quantification of the performance increase we get by adding additional time series features to the model. We ensure that patients do not occur in both the training and the test set, and that the balance of the targets is roughly the same for the train and the test set. We refer to this as grouped (by patients) stratified shuffle splits in Fig. 3; the shuffle pertains to the fact that the samples are distributed randomly across the train and test set for each split, subject to the aforementioned conditions. To illustrate, for some splits we actually obtain AUC values of 0.97, whereas others are random at 0.5. Of course, these are just the extreme values; the performance turns out to be normally distributed around the reported results. At this point, we have the ranking of the 10 most important features as determined by the Boruta algorithm. For the final prediction, we will add the top-n features. The value of n is determined on a validation set (as illustrated in Fig. 3).
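A single grouped train/test split with the random forest settings described above could be sketched as follows. scikit-learn's GroupShuffleSplit keeps patients separated but does not stratify on the target, so the stratification used in the paper would need an additional check; the names below are otherwise illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupShuffleSplit

def evaluate_one_split(X, y, patient_ids, train_size=0.8, seed=0):
    """Train the random forest on one grouped split and return the test-set AUC."""
    splitter = GroupShuffleSplit(n_splits=1, train_size=train_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))
    clf = RandomForestClassifier(n_estimators=100,
                                 class_weight="balanced",
                                 min_samples_split=0.1,   # 10% of the samples, as in the text
                                 random_state=seed)
    clf.fit(X[train_idx], y[train_idx])
    return roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1])
```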
For each train-test split, we use half of the test set as a validation set. This split into validation and test set satisfies the same conditions as before, and we evaluate the model for 100 such splits. So in total 1000 models are trained (on the training set), and are subsequently evaluated 100 times each, leading to 100 000 test set performances. There is a trade-off to be taken into account. On the one hand we want as much data as possible to fit our model, which would require allocating as much data as possible to the training set. On the other hand, however, we want to accurately measure the performance of said model on an independent test set, which for a heterogeneous dataset also requires a large amount of data to minimize the variance. To get an idea of both extremes we evaluate the pipeline at various splits of the dataset. We run the entire pipeline for 4 different sizes of the training set, composed of 20, 30, 50 and 80 percent of the dataset. This method also gives information on the necessary dataset size to achieve a certain performance [53, 54] and on how much room for improvement the algorithm has when given more data [55, 56]. Disability progression task Here we present the results of the disability progression prediction task. In the literature, the main features that are considered are: Latency, EDSS at T0, peak-to-peak amplitude, age, gender and type of MS. We note that not all of these are found to be significant in the literature (see, e.g., [26]). Using cross-validation (cfr. Additional file 1) we determined that using the latencies of the left and right side separately, the EDSS at T0 and the age worked best for this prediction task. Adding additional literature features leads to a negligible performance increase. We assess the performance of the literature features as well as the performance when we add additional time series features. The main results are shown graphically in Fig. 4, and numerically in Table 2. As is to be expected, we see that the overall performance of the pipeline increases as the size of the training set increases, while the variance of the result also increases due to the smaller size of the test set. The general trend we see is that adding the extra time series features improves the performance on the independent test set, but only marginally. RF performs better than LR both with and without the additional TS features, with the difference being especially evident when not adding them. The figure indicates that increasing the dataset size further would improve the performance. Results of the disability progression task Results are shown for different sizes of training set. Each point represents an average over 100 000 test sets, with the error bar indicating the standard deviation. Results are shown for the baseline model which uses a subset of known features (Latency, EDSS at T0 and age), as well as a model where we add additional TS features. Abbreviations used: RF Random Forest, LR Logistic Regression, TS Time series. These results are represented numerically in Table 2 Table 2 Results of the disability progression task As a check for our assumption of using only a subset of the literature features, we also checked the performance when adding additional literature features to the classifier (peak-to-peak amplitude, gender and type of MS). The resulting model performed worse than the model using just 4 literature features in almost every case, and in the cases where it does increase it does so by a negligible margin. 
Significance test of performance increase

To check whether the increase in performance from adding TS features is significant, we employ the DeLong test [57], which tests the hypothesis that the true difference in AUC between the model with and without TS features is greater than zero. For each split we compare the ROC curves of the classifier with and without the additional TS features. The results are shown in Fig. 5. We observe that the percentage of splits with significantly improved performance increases with the size of the test set, reaching a maximum when 80% of the dataset is used for the test set. We argue that the low fraction of significant improvements is mainly due to the power of the test. To support this further, we show the significance percentages for a single model (the one trained on 20% of the dataset), tested on subsets of the remaining 80% of increasing size. The results are shown in Fig. 6, from which we see that the fraction of significant splits increases steadily with the number of samples in the test set.

Fig. 5 Significance results. The results of the DeLong test on the improvement from adding TS features. We show both the fraction of splits that show an improvement and the fraction of splits that show significant improvement according to the DeLong test.

Fig. 6 Significance results, single model. Results of the significance tests, using a single model (trained on 20% of the dataset) and tested on various sizes of test set. Both the fraction of improved splits and the fraction of significantly improved splits are shown. The trend suggests more data would likely increase the fraction of splits that show significant improvement.

Selected features

It is interesting to see which TS features are often found to be important according to our feature selection method. As the pipeline is run independently 1000 times, we have 1000 ranked sets of features deemed important by the feature selection. We consider only the train/test splits where the training set consists of 80% of the dataset, as the feature selection is most stable in this case. We consider the anatomies separately, as the selected TS features are different for each. Here we give only a brief overview of the features that we found to be most important. For a ranked list of the 20 most important features for both APB and AH, we refer the reader to the additional files [see Additional file 1]. There we also provide a way of obtaining the code used to generate these features.

APB: The feature most often found to be important ranks in the final 10 features for 83.9% of splits. In 74.7% of splits it ranks in the top 3. The feature in question is calculated by sliding a window of half the length of the TS across the TS in steps of 25% of the TS length (so a total of 3 windows is considered). For each window, the mean is calculated. Finally, the standard deviation of these means, divided by the standard deviation of the entire TS, is calculated. In practice this feature seems to characterize how fast the TS returns to an average of zero after the initial peak. The other high-ranking features are mostly other sliding-window calculations or features that compute characteristics of the spectral power of the TS. The prominence of these features drops off quickly; e.g., the second highest ranking feature occurs in the top 4 for 39% of splits.
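As an illustration, the top-ranking APB feature just described could be computed roughly as follows. This is a reconstruction from the verbal description, not the HCTSA code, and the exact window arithmetic used there may differ.

```python
# Rough reconstruction of the top-ranking APB feature: slide a window of half the
# series length across the series in steps of 25% of its length (three windows),
# take the mean of each window, and divide the standard deviation of those means
# by the standard deviation of the whole series.
import numpy as np

def sliding_mean_std_ratio(ts):
    ts = np.asarray(ts, dtype=float)
    n = len(ts)
    win = n // 2                      # window of half the series length
    step = max(1, n // 4)             # step of 25% -> windows start at 0, n/4, n/2
    starts = range(0, n - win + 1, step)
    window_means = [ts[s:s + win].mean() for s in starts]
    return np.std(window_means) / np.std(ts)
```

The three window starts (at 0%, 25% and 50% of the series) reproduce the "total of 3 windows" mentioned above.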
AH: For AH, one feature in particular stands out. It is included in the final 10 features for 97.5% of splits. In 90.6% of the splits, it is in the top 3 most important features. In fact, it is very consistently marked as being important by various methods of feature selection (cfr. Additional file 1), more consistently even than all the other literature features (latency, EDSS and age). Using just this feature extracted from the AH measurements, a prediction model can achieve a performance of 0.7±0.07 AUC (mean and standard deviation). This makes it a very interesting candidate for further research. Unfortunately, it is not very interpretable. The feature is calculated by fitting an autoregressive model to the time series and evaluating its performance on 25 uniformly selected subsets of the time series, each 10% of the total length. The evaluation is based on 1-step-ahead prediction. The difference between the real and predicted values forms a new TS, of length 192 in our case. The autocorrelation at lag 1 is calculated for each of these 25 TS. Finally, we take the absolute value of the mean of these 25 autocorrelation values. Further research could be done to determine why this particular feature is found to be this important. Other high-ranking features include those that quantify the level of surprise of a data point, given its recent memory. The remaining features show no clear pattern of type. As was the case for APB, we find that these lower-ranked features' prominence drops off rapidly.

The distributions of the most important features for each anatomy are shown in Fig. 7. These figures were generated using kernel density estimation with a Gaussian kernel. Despite significant overlap in the distributions for the two classes, there is a definite difference between the two. For APB, the distributions suggest that patients who are going to worsen have a more rapid return to an average of zero after the initial peak than patients who will not worsen. For AH, such an intuitive interpretation is difficult due to the oblique nature of its most important TS feature. The features that were found have a few hyperparameters associated with them that could be optimized to further boost the performance of the classifier. We do not do this here, as these features were selected by looking at all splits at once, which covers the complete dataset. Their performance should be evaluated on another independent test set.

Fig. 7 TS feature distributions. The distributions of the most important additional TS feature for APB (left) and AH (right). The dashed lines represent the distributions of the TS feature of patients that show progression after 2 years, whereas the solid lines are for those that do not progress. The distributions are normalized separately.
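For concreteness, the AH feature described above could be approximated as in the sketch below. This is a reconstruction from the verbal description, not the HCTSA implementation; the autoregressive order and the placement of the 25 subsets are assumptions.

```python
# Rough, assumption-laden reconstruction of the top-ranking AH feature: fit an AR
# model to the series, compute 1-step-ahead prediction errors on 25 uniformly
# placed subsets of 10% of the series length, take the lag-1 autocorrelation of
# each residual series, and return the absolute value of the mean autocorrelation.
import numpy as np

def lag1_autocorr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def ah_ar_residual_feature(ts, order=2, n_subsets=25, subset_frac=0.10):
    ts = np.asarray(ts, dtype=float)
    n = len(ts)
    # Fit AR(order) by least squares on the full series: x[t] ~ c + sum_i a_i x[t-i].
    y = ts[order:]
    X = np.column_stack([np.ones(n - order)] +
                        [ts[order - i:n - i] for i in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    seg_len = max(order + 2, int(round(subset_frac * n)))
    starts = np.linspace(order, n - seg_len, n_subsets).astype(int)
    autocorrs = []
    for s in starts:
        t = np.arange(s, s + seg_len)
        # 1-step-ahead predictions from the fitted coefficients, then residuals.
        pred = coef[0] + sum(coef[i] * ts[t - i] for i in range(1, order + 1))
        resid = ts[t] - pred
        autocorrs.append(lag1_autocorr(resid))
    return abs(np.mean(autocorrs))
```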
Discussion

This paper presents the first analysis of a new dataset containing the full time series of several EP types. The idea was to extract a large number of features from the MEP using different time series analysis methods, and to use a machine learning approach to see which ones are relevant. Improving the prediction of disability progression compared to using only the latencies, age, and EDSS at T0 was quite difficult, despite the dataset's larger size compared with those in the literature. The main problem was overfitting. We expect the algorithm to deal quite well with noisy input features, caused by generic measurement noise and variations in the manual latency annotation. In fact, small input noise could help to avoid overfitting. The main reason for overfitting seems to us to be the noise in the binary target, caused mainly by the inherent unreliability of the EDSS measurement itself and the lack of confirmation of the disability progression. Furthermore, there is an unknown upper limit for this task: even with an infinitely large cohort, the task will not be perfectly solvable with the biomarkers in our model. Our results shown in Fig. 4 suggest that this upper limit is not yet reached, although we could be close to it.

On average, one in four TS features that remained after the mutual information and hierarchical clustering steps was found to contain at least some information relevant to the prediction task, though only a small subset contained a strong enough signal to be consistently marked important across multiple train/test splits. Nevertheless, a significant improvement was found by adding extra features that showed high importance. The usefulness of non-linear methods is also clearly demonstrated.

Much more remains to be investigated on this dataset. Given the large amount of literature on the usefulness of mmEP (as discussed in the introduction), the largest performance improvement is most likely to be achieved by including the VEP and SEP. Given the large differences in measurement times and frequencies of the different EP modalities, one has to decide between discarding a lot of data or using more elaborate techniques that robustly handle missing data. A second option with potential for significant improvement is analyzing the whole longitudinal trajectory of the patient [58], in contrast to our current analysis, where a single visit is used for predicting progression over 2 years. Inclusion of all (sparsely measured) mmEP and longitudinal modeling can be combined, and this is an active research area [59, 60]. An obvious extension is to use TS algorithms not included in HCTSA; for example, another library with qualitatively different TS analysis methods is HIVE-COTE [49].

We have restricted ourselves to predicting progression over 2 years. This choice was made because it occurs frequently in the literature and it leads to many training samples. Longer or shorter time differences are also of interest. It is, furthermore, believed by some clinicians that EPTS pick up disease progression faster than EDSS. One could check this by using short time-scale EPTS changes (e.g., 6 months) to predict EDSS changes on longer time-scales [23], or to detect non-response to treatment. The obvious left-right symmetry of the limb measurements is taken into account in only a rudimentary way; incorporating this symmetry in a more advanced way could boost performance. Data augmentation could be used to expand the size of the training set, which could stabilize the performance estimate. We note that even small neural networks are difficult to train on the current dataset; data augmentation could make them competitive.

While the achieved AUC of 0.75±0.07 is impressive for a model with only MEP latency, EDSS at T0, age, and a few additional MEP TS features, there is surely an upper limit to what mmEP can predict. Other variables such as MRI, cerebrospinal fluid, and genomic data could boost performance [5]. A very important variable that is currently not included is the type of DMT the patient is on. In the absence of a single, highly predictive marker, personalization will depend on a combination of markers. Indeed, several studies show that a multi-parametric approach may improve our prognostic ability in MS [61, 62].
This involves the development of predictive models that integrate clinical and biological data with an understanding of the impact of the disease on the lives of individual patients [63]. Besides the inclusion of extra biomarkers, another step of great practical importance is to move towards multi-center study designs. How well mmEP data from different centers can be combined remains an open and very important question [64, 65].

Our results contribute to the long-term goal of improving clinical care of people with MS in several ways. We add evidence to the hypothesis that EP are a valuable biomarker in personalized prediction models. This is important, because the precise value of EPs for monitoring MS is still under discussion [35–37]. Our evidence is stronger than that of previous work, because the performance is tested on an independent test set and the EPs were measured in routine clinical follow-up. If a model predicts that a patient is likely to progress in disability, this could be a sign of treatment inefficacy, and a switch to a different or more aggressive DMT could be made. Improving the performance of predictive models can therefore lead to a faster optimal treatment choice and result in slower disability progression. Finally, if our identified extra features are confirmed in larger, multi-center studies, their evolution over time can be used to give additional feedback to caregivers on the disease evolution.

Conclusions

Multiple sclerosis is a chronic disease affecting millions of people worldwide. Gaining insight into its progression in patients is an important step towards a better understanding of this condition. Evoked potential time series (EPTS) are one of the tools clinicians use to estimate progression. The prediction of disability progression from EPTS can be used to support a clinician's decision-making process regarding further treatment, and to reduce uncertainty for patients about their disease course. We presented a prediction model for disability progression after 2 years, trained on a dataset containing all available motor EP measurements from the Rehabilitation & MS Center in Overpelt, Belgium. Any patient with two-year follow-up is included. The dataset is an order of magnitude larger than most datasets used in previous works, and for the first time includes the raw time series, as opposed to just the high-level features extracted from them (i.e., latencies, peak-to-peak amplitude, and dispersion pattern). The dataset consists of individuals undergoing treatment, which is clinically the most relevant scenario. We plan to make this dataset publicly available in the near future.

We found that adding additional features extracted from the raw time series improves performance, albeit marginally (ΔAUC = 0.02 for the best performing classifier). Results suggest that the model would benefit from an increased dataset size. We found that linear models, often used in previous works, are significantly outperformed by the random forest classifier, especially when extra TS features are not added (ΔAUC = 0.06). Given the limited number of biomarkers in the model (EDSS at T0, MEP, and age) and the heterogeneity of the cohort, the reported performance (AUC 0.75±0.07) is quite good. We took an initial look at the features that were found to boost predictive power and found a few candidates that might be a good starting point for further research.
The feature found to be important for the feet (AH) (see "Selected features" section) is particularly robust across all feature selection methods (cfr. Additional file 1), even more so than the features currently considered by clinicians. If its importance is confirmed in larger, multi-center studies, further investigation into what this feature measures could potentially lead to new physiological insights and could guide clinicians in their interpretation of the measurements.

Availability of data and materials

The dataset analyzed during the current study is currently not publicly available due to privacy concerns but is available from the corresponding author on reasonable request. We aim to release the dataset as well as the code used to generate the results publicly after taking steps to ensure the full anonymity of the patients, in accordance with local data protection laws.

Abbreviations

A list of the abbreviations used throughout this work:
AH: Abductor Hallucis
AIC: Akaike information criterion
APB: Abductor Pollicis Brevis
AUC: Area under curve (of the ROC, see below)
BAEP: Brainstem auditory EP (EP, see below)
BIC: Bayesian information criterion
EDSS: Expanded disability status scale
EP: Evoked potentials
EPS: EP score
EPTS: EP time series
FWO: Fonds Wetenschappelijk Onderzoek
HCTSA: Highly comparative time-series analysis
LR: Logistic regression
MEP: Motor EP
mmEP: Multimodal EP
MRI: Magnetic resonance imaging
MSE: Mean-squared error
PPMS: Primary progressive MS
ROC: Receiver operating characteristic
SEP: Somatosensory EP
SPMS: Secondary progressive MS
TS: Time series
VEP: Visual EP

References

Sospedra M, Martin R. Immunology of multiple sclerosis. Ann Rev Immunol. 2004; 23(1):683–747. https://doi.org/10.1146/annurev.immunol.23.021704.115707. Montalban X, Gold R, Thompson AJ, Otero-Romero S, Amato MP, Chandraratna D, Clanet M, Comi G, Derfuss T, Fazekas F, et al. ECTRIMS/EAN guideline on the pharmacological treatment of people with multiple sclerosis. Mult Scler J. 2018; 24(2):96–120. Tilling K, Lawton M, Robertson N, Tremlett H, Zhu F, Harding K, Oger J, Ben-Shlomo Y. Modelling disease progression in relapsing-remitting onset multiple sclerosis using multilevel models applied to longitudinal data from two natural history cohorts and one treated cohort. Health Technol Assess. 2016; 20(81):1–48. https://doi.org/10.3310/hta20810. Kurtzke JF. Rating neurologic impairment in multiple sclerosis: an expanded disability status scale (EDSS). Neurology. 1983; 33(11):1444–52. Gajofatto A, Calabrese M, Benedetti MD, Monaco S. Clinical, MRI, and CSF markers of disability progression in multiple sclerosis. Dis Mark. 2013; 35(6):13. https://doi.org/10.1155/2013/484959. Tintoré M, Rovira A, Río J, Tur C, Pelayo R, Nos C, Téllez N, Perkal H, Comabella M, Sastre-Garriga J, Montalban X. Do oligoclonal bands add information to MRI in first attacks of multiple sclerosis? Neurology. 2008; 70(13 Part 2):1079. https://doi.org/10.1212/01.wnl.0000280576.73609.c6. Pelayo R, Montalban X, Minoves T, Moncho D, Rio J, Nos C, Tur C, Castillo J, Horga A, Comabella M, Perkal H, Rovira A, Tintoré M. Do multimodal evoked potentials add information to MRI in clinically isolated syndromes? Mult Scler J. 2009; 16(1):55–61. https://doi.org/10.1177/1352458509352666. Martinelli V, Dalla Costa G, Messina MJ, Di Maggio G, Sangalli F, Moiola L, Rodegher M, Colombo B, Furlan R, Leocani L, Falini A, Comi G. Multiple biomarkers improve the prediction of multiple sclerosis in clinically isolated syndromes. Acta Neurol Scand. 2017; 136(5):454–61. https://doi.org/10.1111/ane.12761. Leocani L, Guerrieri S, Comi G. Visual evoked potentials as a biomarker in multiple sclerosis and associated optic neuritis.
J Neuro-Ophthalmol. 2018; 38(3):350–7. https://doi.org/10.1097/wno.0000000000000704. Nuwer MR, Packwood JW, Myers LW, Ellison GW. Evoked potentials predict the clinical changes in a multiple sclerosis drug study. Neurology. 1987; 37(11):1754. https://doi.org/10.1212/WNL.37.11.1754. O'Connor P, Marchetti P, Lee L, Perera M. Evoked potential abnormality scores are a useful measure of disease burden in relapsing–remitting multiple sclerosis. Ann Neurol. 1998; 44(3):404–7. https://doi.org/10.1002/ana.410440320. Fuhr P, Kappos L. Evoked potentials for evaluation of multiple sclerosis. Clin Neurophysiol. 2001; 112(12):2185–9. https://doi.org/10.1016/S1388-2457(01)00687-3. Fuhr P, Borggrefe-Chappuis A, Schindler C, Kappos L. Visual and motor evoked potentials in the course of multiple sclerosis. Brain. 2001; 124(11):2162–8. https://doi.org/10.1093/brain/124.11.2162. Kallmann BA, Fackelmann S, Toyka KV, Rieckmann P, Reiners K. Early abnormalities of evoked potentials and future disability in patients with multiple sclerosis. Mult Scler J. 2006; 12(1):58–65. https://doi.org/10.1191/135248506ms1244oa. Leocani L, Rovaris M, Boneschi FM, Medaglini S, Rossi P, Martinelli V, Amadio S, Comi G. Multimodal evoked potentials to assess the evolution of multiple sclerosis: a longitudinal study. J Neurol Neurosurg Psychiatry. 2006; 77(9):1030. https://doi.org/10.1136/jnnp.2005.086280. Jung P, Beyerle A, Ziemann U. Multimodal evoked potentials measure and predict disability progression in early relapsing–remitting multiple sclerosis. Mult Scler J. 2008; 14(4):553–6. https://doi.org/10.1177/1352458507085758. Bejarano B, Bianco M, Gonzalez-Moron D, Sepulcre J, Goñi J, Arcocha J, Soto O, Del Carro U, Comi G, Leocani L, et al.Computational classifiers for predicting the short-term course of multiple sclerosis. BMC Neurol. 2011; 11(1):67. Invernizzi P, Bertolasi L, Bianchi MR, Turatti M, Gajofatto A, Benedetti MD. Prognostic value of multimodal evoked potentials in multiple sclerosis: the ep score. J Neurol. 2011; 258(11):1933–9. https://doi.org/10.1007/s00415-011-6033-x. Schlaeger R, D'Souza M, Schindler C, Grize L, Kappos L, Fuhr P. Combined evoked potentials as markers and predictors of disability in early multiple sclerosis. Clin Neurophysiol. 2012; 123(2):406–10. https://doi.org/10.1016/j.clinph.2011.06.021. Schlaeger R, D'Souza M, Schindler C, Grize L, Dellas S, Radue EW, Kappos L, Fuhr P. Prediction of long-term disability in multiple sclerosis. Mult Scler J. 2011; 18(1):31–8. https://doi.org/10.1177/1352458511416836. Schlaeger R, D'Souza M, Schindler C, Grize L, Kappos L, Fuhr P. Prediction of ms disability by multimodal evoked potentials: Investigation during relapse or in the relapse-free interval?Clin Neurophysiol. 2014; 125(9):1889–92. https://doi.org/10.1016/j.clinph.2013.12.117. Schlaeger R, D'Souza M, Schindler C, Grize L, Kappos L, Fuhr P. Electrophysiological markers and predictors of the disease course in primary progressive multiple sclerosis. Mult Scler J. 2013; 20(1):51–6. https://doi.org/10.1177/1352458513490543. Schlaeger R, Hardmeier M, D'Souza M, Grize L, Schindler C, Kappos L, Fuhr P. Monitoring multiple sclerosis by multimodal evoked potentials: Numerically versus ordinally scaled scoring systems. Clin Neurophysiol. 2016; 127(3):1864–71. https://doi.org/10.1016/j.clinph.2015.11.041. Giffroy X, Maes N, Albert A, Maquet P, Crielaard J-M, Dive D. Multimodal evoked potentials for functional quantification and prognosis in multiple sclerosis. BMC Neurol. 2016; 16:83–3. 
https://doi.org/10.1186/s12883-016-0608-1. Hardmeier M, Hatz F, Naegelin Y, Hight D, Schindler C, Kappos L, Seeck M, Michel CM, Fuhr P. Improved characterization of visual evoked potentials in multiple sclerosis by topographic analysis. Brain Topogr. 2014; 27(2):318–27. https://doi.org/10.1007/s10548-013-0318-6. Giffroy X, Maes N, Albert A, Maquet P, Crielaard JM, Dive D. Do evoked potentials contribute to the functional follow-up and clinical prognosis of multiple sclerosis?Acta Neurol Belg. 2017; 117(1):53–9. https://doi.org/10.1007/s13760-016-0650-1. Schlaeger R, Schindler C, Grize L, Dellas S, Radue EW, Kappos L, Fuhr P. Combined visual and motor evoked potentials predict multiple sclerosis disability after 20 years. Mult Scler J. 2014; 20(10):1348–54. https://doi.org/10.1177/1352458514525867. Margaritella N, Mendozzi L, Garegnani M, Colicino E, Gilardi E, DeLeonardis L, Tronci F, Pugnetti L. Sensory evoked potentials to predict short-term progression of disability in multiple sclerosis. Neurol Sci. 2012; 33(4):887–92. https://doi.org/10.1007/s10072-011-0862-3. Margaritella N, Mendozzi L, Tronci F, Colicino E, Garegnani M, Nemni R, Gilardi E, Pugnetti L. The evoked potentials score improves the identification of benign ms without cognitive impairment. Eur J Neurol. 2013; 20(10):1423–5. https://doi.org/10.1111/ene.12071. Ramanathan S, Lenton K, Burke T, Gomes L, Storchenegger K, Yiannikas C, Vucic S. The utility of multimodal evoked potentials in multiple sclerosis prognostication. J Clin Neurosci. 2013; 20(11):1576–81. https://doi.org/10.1016/j.jocn.2013.01.020. Canham LJW, Kane N, Oware A, Walsh P, Blake K, Inglis K, Homewood J, Witherick J, Faulkner H, White P, Lewis A, Furse-Roberts C, Cottrell DA. Multimodal neurophysiological evaluation of primary progressive multiple sclerosis – an increasingly valid biomarker, with limits. Mult Scler Relat Disord. 2015; 4(6):607–13. https://doi.org/10.1016/j.msard.2015.07.009. London F, El Sankari S, van Pesch V. Early disturbances in multimodal evoked potentials as a prognostic factor for long-term disability in relapsing-remitting multiple sclerosis patients. Clin Neurophysiol. 2017; 128(4):561–9. https://doi.org/10.1016/j.clinph.2016.12.029. Comi G, Leocani L, Medaglini S, Locatelli T, Martinelli V, Santuccio G, Rossi P. Measuring evoked responses in multiple sclerosis. Mult Scler J. 1999; 5(4):263–7. https://doi.org/10.1177/135245859900500412. Hardmeier M, Leocani L, Fuhr P. A new role for evoked potentials in ms? repurposing evoked potentials as biomarkers for clinical trials in ms. Mult Scler J. 2017; 23(10):1309–19. https://doi.org/10.1177/1352458517707265. Fernández O, Fernández V. Evoked potentials are of little use in the diagnosis or monitoring of ms: No. Mult Scler J. 2013; 19(14):1822–3. McGuigan C. Evoked potentials are of little use in the diagnosis or monitoring of ms: Yes. Mult Scler J. 2013; 19(14):1820–1. Hutchinson M. Evoked potentials are of little use in the diagnosis or monitoring of ms: Commentary. Mult Scler J. 2013; 19(14):1824–5. Walsh P, Kane N, Butler S. The clinical role of evoked potentials. J Neurol Neurosurg Psychiatr. 2005; 76(suppl 2):16–22. https://doi.org/10.1136/jnnp.2005.068130. Neter J, Kutner MH, Nachtsheim CJ, Wasserman W. Applied Linear Statistical Models vol. 4: Irwin Chicago; 1996. 
Spelman T, Jokubaitis V, Kalincik T, Butzkueven H, Grammond P, Hupperts R, Oreja-Guevara C, Boz C, Pucci E, Bergamaschi R, Lechner-Scott J, Alroughani R, Van Pesch V, Iuliano G, Fernandez-Bolaños R, Ramo C, Terzi M, Slee M, Spitaleri D, Verheul F, Cristiano E, Sánchez-Menoyo JL, Fiol M, Gray O, Cutter G, Cabrera-Gomez JA, Barnett M, Horakova D, Havrdova E, Trojano M, Izquierdo G, Prat A, Girard M, Duquette P, Lugaresi A, Grand'Maison F. Defining reliable disability outcomes in multiple sclerosis. Brain. 2015; 138(11):3287–98. https://doi.org/10.1093/brain/awv258. http://oup.prod.sis.lan/brain/article-pdf/138/11/3287/13798678/awv258.pdf. Livingston SC, Ingersoll CD. Intra-rater reliability of a transcranial magnetic stimulation technique to obtain motor evoked potentials. Int J Neurosci. 2008; 118(2):239–56. https://doi.org/10.1080/00207450701668020. http://arxiv.org/abs/https://doi.org/10.1080/00207450701668020. Cacchio A, Paoloni M, Cimini N, Mangone M, Liris G, Aloisi P, Santilli V, Marrelli A. Reliability of TMS-related measures of tibialis anterior muscle in patients with chronic stroke and healthy subjects. J Neurol Sci. 2011; 303(1):90–4. https://doi.org/10.1016/j.jns.2011.01.004. Accessed 25 Sept 2019. Hoonhorst MH, Kollen BJ, Van Den Berg PS, Emmelot CH, Kwakkel G. How reproducible are transcranial magnetic stimulation–induced meps in subacute stroke?J Clin Neurophysiol. 2014; 31(6):556–62. Hardmeier M, Jacques F, Albrecht P, Bousleiman H, Schindler C, Leocani L, Fuhr P. Multicentre assessment of motor and sensory evoked potentials in multiple sclerosis: reliability and implications for clinical trials. Mult Scler J Exp Transl Clin. 2019; 5(2):2055217319844796. https://doi.org/10.1177/2055217319844796. http://arxiv.org/abs/https://doi.org/10.1177/2055217319844796. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E. Scikit-learn: Machine learning in Python. J Mach Learn Res. 2011; 12:2825–30. Kursa M, Rudnicki W. Feature selection with the boruta package. J Stat Softw Artic. 2010; 36(11):1–13. https://doi.org/10.18637/jss.v036.i11. Fulcher BD, Little MA, Jones NS. Highly comparative time-series analysis: the empirical structure of time series and their methods. J R Soc Interface. 2013; 10(83):20130048. Fulcher BD, Jones NS. A computational framework for automated time-series phenotyping using massive feature extraction. Cell Syst. 2017; 5(5):527–5313. https://doi.org/10.1016/j.cels.2017.10.001. Lines J, Taylor S, Bagnall A. Time series classification with hive-cote: The hierarchical vote collective of transformation-based ensembles. ACM Trans Knowl Discov Data. 2018; 12(5):52–15235. https://doi.org/10.1145/3182382. Aerts HJ, Velazquez ER, Leijenaar RT, Parmar C, Grossmann P, Carvalho S, Bussink J, Monshouwer R, Haibe-Kains B, Rietveld D, et al.Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun. 2014; 5:4006. Degenhardt F, Seifert S, Szymczak S. Evaluation of variable selection methods for random forests and omics data sets. Brief Bioinf. 2017. https://doi.org/10.1093/bib/bbx124. http://oup.prod.sis.lan/bib/advance-article-pdf/doi/10.1093/bib/bbx124/21301018/bbx124.pdf. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition, Springer Series in Statistics: Springer; 2009. https://books.google.be/books?id=tVIjmNS3Ob8C. 
Mukherjee S, Tamayo P, Rogers S, Rifkin R, Engle A, Campbell C, Mesirov J. Estimating dataset size requirements for classifying dna microarray data. J Comput Biol. 2003; 10:119–42. https://doi.org/10.1089/106652703321825928. Cho J, Lee K, Shin E, Choy G, Do S. How much data is needed to train a medical image deep learning system to achieve necessary high accuracy?arXiv preprint. 2015. arXiv:1511.06348. Zhu X, Vondrick C, Fowlkes CC, Ramanan D. Do we need more training data?Int J Comput Vis. 2016; 119(1):76–92. Sun C, Shrivastava A, Singh S, Gupta A. Revisiting unreasonable effectiveness of data in deep learning era. In: Proceedings of the IEEE International Conference on Computer Vision: 2017. p. 843–52. https://doi.org/10.1109/iccv.2017.97. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988; 44(3):837–45. De Brouwer E, Peeters L, Becker T, Altintas A, Soysal A, Van Wijmeersch B, Boz C, Oreja-Guevara C, Gobbi C, Solaro C, et al.Introducing machine learning for full ms patient trajectories improves predictions for disability score progression. Mult Scler J. 2019; 25:63–5. Lipton ZC, Kale D, Wetzel R. Directly modeling missing data in sequences with rnns: Improved classification of clinical time series In: Doshi-Velez F, Fackler J, Kale D, Wallace B, Wiens J, editors. Proceedings of the 1st Machine Learning for Healthcare Conference, Proceedings of Machine Learning Research, vol. 56. Children's Hospital LA, Los Angeles: PMLR: 2016. p. 253–70. http://proceedings.mlr.press/v56/Lipton16.html. Che Z, Purushotham S, Cho K, Sontag D, Liu Y. Recurrent neural networks for multivariate time series with missing values. Sci Rep. 2018; 8(1):6085. https://doi.org/10.1038/s41598-018-24271-9. Trojano M, Tintore M, Montalban X, Hillert J, Kalincik T, Iaffaldano P, Spelman T, Sormani MP, Butzkueven H. Treatment decisions in multiple sclerosis - insights from real-world observational studies. Nat Rev Neurol. 2017; 13(2):105–18. https://doi.org/10.1038/nrneurol.2016.188. Kalincik T, Brown JWL, Robertson N, Willis M, Scolding N, Rice CM, Wilkins A, Pearson O, Ziemssen T, Hutchinson M, McGuigan C, Jokubaitis V, Spelman T, Horakova D, Havrdova E, Trojano M, Izquierdo G, Lugaresi A, Prat A, Girard M, Duquette P, Grammond P, Alroughani R, Pucci E, Sola P, Hupperts R, Lechner-Scott J, Terzi M, Van Pesch V, Rozsa C, Grand'Maison F, Boz C, Granella F, Slee M, Spitaleri D, Olascoaga J, Bergamaschi R, Verheul F, Vucic S, McCombe P, Hodgkinson S, Sanchez-Menoyo JL, Ampapa R, Simo M, Csepany T, Ramo C, Cristiano E, Barnett M, Butzkueven H, Coles A, Group MSS. Treatment effectiveness of alemtuzumab compared with natalizumab, fingolimod, and interferon beta in relapsing-remitting multiple sclerosis: a cohort study. Lancet Neurol. 2017; 16(4):271–81. https://doi.org/10.1016/S1474-4422(17)30007-8. Gafson A, Craner MJ, Matthews PM. Personalised medicine for multiple sclerosis care. Mult Scler. 2017; 23(3):362–9. https://doi.org/10.1177/1352458516672017. Hardmeier M, Jacques F, Albrecht P, Bousleiman H, Schindler C, Leocani L, Fuhr P. F107. sensory and motor evoked potentials in a multicenter setting: Estimation of detectable group differences at varying sample sizes. Clin Neurophysiol. 2018; 129:106–7. https://doi.org/10.1016/j.clinph.2018.04.270. Hardmeier M, Jacques F, Albrecht P, Bousleiman H, Schindler C, Leocani L, Fuhr P. T85. 
Sensory and motor evoked potentials in a multicenter setting: Definition of significant change in repeated measurements in healthy subjects on individual level. Clin Neurophysiol. 2018; 129:34–5. https://doi.org/10.1016/j.clinph.2018.04.086.

JY and TB are very grateful to the late Christian Van den Broeck for giving us the opportunity to work on this problem. The authors would like to thank Jori Liesenborgs and Geert Jan Bex for their valuable suggestions regarding the practical implementation of the data analysis pipeline, as well as Henny Strackx for his help during the data extraction phase at Overpelt. TB is supported by the Fonds voor Wetenschappelijk Onderzoek (FWO), project R4859. The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government - department EWI. This research received funding from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme. The funding bodies played no role in the design of the study, in writing the manuscript, nor in the collection, analysis, or interpretation of the data.

Theoretical Physics, Hasselt University, Diepenbeek, Belgium: Jan Yperman
I-Biostat, Data Science Institute, Hasselt University, Diepenbeek, Belgium: Jan Yperman, Thijs Becker, Dirk Valkenborg & Liesbet M. Peeters
Department of Immunology, Biomedical Research Institute, Hasselt University, Diepenbeek, 3590, Belgium: Jan Yperman, Niels Hellings & Liesbet M. Peeters
Rehabilitation and MS-Center, Pelt, 3900, Belgium: Veronica Popescu & Bart Van Wijmeersch
REVAL Rehabilitation Research Center, BIOMED, Faculty of Rehabilitation Sciences, Hasselt University, Hasselt, Belgium: Thijs Becker

JY performed the data analysis. JY, TB, DV, and LMP decided on the data analysis methodology. NH, VP and BVW provided clinical feedback for the data analysis. LMP coordinated the study. JY, TB, and LMP wrote the original manuscript. All authors read and approved the final manuscript. Correspondence to Jan Yperman. This study was approved by the ethical commission of the University of Hasselt (CME2017/729). No consent to participate/publish was necessary since this study uses retrospective data only.

Additional file 1: Most important TS features / alternative feature preselection algorithms. This file contains tables of the 20 most prominent TS features across the 1000 train/test splits, for both APB and AH anatomies. It also provides a way of obtaining the code used to generate these features. There is also a section on a few alternatives for the feature preselection step. Lastly, the process of choosing hyperparameters is discussed and motivated.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Yperman, J., Becker, T., Valkenborg, D. et al. Machine learning analysis of motor evoked potential time series to predict disability progression in multiple sclerosis. BMC Neurol 20, 105 (2020). https://doi.org/10.1186/s12883-020-01672-w
Importance of the intellectual property system in attempting compulsory licensing of pharmaceuticals: a cross-sectional analysis Kyung-Bok Son ORCID: orcid.org/0000-0002-6343-58081 Globalization and Health volume 15, Article number: 42 (2019) Cite this article Recently, interest in compulsory licensing of pharmaceuticals has been growing regardless of a country's income- level. We aim to investigate the use of compulsory licensing as a legitimate part of the patent system and tool for the government to utilize by demonstrating that countries with a mature patent system were more likely to utilize compulsory licensing of pharmaceuticals. We used a multivariate logistic model to regress attempts to issue compulsory licensing on the characteristics of the intellectual property system, controlling for macro context variables and other explanatory variables at a country level. A total 139 countries, selected from members of the World Trade Organization, were divided into a CL-attempted group (N = 24) and a non-CL-attempted group (N = 115). An attempt to issue compulsory licensing was associated with population (+) and a dummy variable for other regions, including Europe and North America (−). After controlling for macro context variables, mature intellectual property system was positively associated with attempting compulsory licensing. Our study provided evidence of an association between attempting compulsory licensing and matured patent systems. This finding contradicts our current understanding of compulsory licensing, such as compulsory licensing as a measure to usurp traditional patent systems and sometimes diametrically opposed to the patent system. The findings also suggest a new role of compulsory licensing in current patent systems: compulsory licensing could be a potential alternative or complement to achieve access to medicines in health systems through manufacturing and exporting patented pharmaceuticals. Compulsory licensing, primarily attempted in low- and middle-income countries (LMICs) for HIV/AIDS treatments [1, 2], occurs when a government grants a license to regulate the enforcement of intellectual property, including patents and copyrighted works. Compulsory licensing of patented pharmaceuticals is not only deemed essential but also perceived as an available limited governmental measure that can be used to intervene in the case of market failure. Many studies have reported that the use of compulsory licensing is limited and sporadic. However, the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) stated that a government can authorize the use of a patent of pharmaceuticals for its own purposes, which is commonly referred to as government use, to address public health problems [1, 3]. Therefore, member countries of the World Trade Organization (WTO) could implement the TRIPS, including government use, through national patent legislation [4]. A government-use license could be assigned either to a government entity or non-government entities. Furthermore, compulsory licensing could occur following the request by several stakeholders after failure to obtain a voluntary license. These various forms of compulsory licensing could be utilized for the domestic market and foreign market, including exporting pharmaceuticals to low- and middle-income countries (LMICs) [1, 3]. Notably, adequate remuneration is required in these cases [3, 5]. Recently, interest in the compulsory licensing of pharmaceuticals has been growing regardless of a country's income level. 
The UN High-Level Panel on Access to Medicines recommended the implementation of legislation and use of compulsory licensing, which is one of the notable flexibilities of TRIPS, for legitimate public health needs [6]. Furthermore, the Lancet Commission on Essential Medicines Policies recommended that national patent legislation allow for effective licensing of essential medicines in the absence of voluntary licensing [7]. Finally, the European Parliament, notably including several high-income countries, adopted a resolution on options for issuing compulsory licensing for European Union Member States [8]. At the same time, the pledge not to use compulsory licensing to lower domestic drug prices was taken by many high-income countries after the Declaration on the TRIPS Agreement and Public Health (the Doha Declaration) [9, 10]. Although compulsory licensing is recognized as a legal measure against patent abuse and public health problems, there has been continuous debate regarding compulsory licensing between high-income countries and LMICs. The basic argument in favour of compulsory licensing stems from the availability of affordable and essential medicines for improved public health [11]. Public health and greater humanitarian approaches to access to medicines were the driving forces behind the Doha Declaration that clearly reaffirmed compulsory licensing as a right of member countries of the WTO [12]. Additionaly, some argue that compulsory licensing is needed to counter high-income countries' obstinacy for intellectual property rights as new norms of international trade agreements [11]. Furthermore, they argue that too many intellectual property rights could result in less innovation [13]. On the other hand, others argued that the rights behind patents are historically and statutorily granted with the issuance of a patent and should be protected through a limited time period [11, 14]. For instance, the patent system in the United States grants the right of the patent holder to exclude others from making, using, offering for sale, or selling the intervention throughout the United States or importing the invention into the United States. Furthermore, some insist that compulsory licensing does not allow the patent holder to recoup any investment incurred through research and development and to make a sufficient profit for them to remain in business [14]. They argue that patents are granted to inventors to exclude others to provide these incentives. Therefore, opponents deem compulsory licensing as a measure to usurp traditional patent systems, sometimes diametrically opposed to the patent system, and diminishing the incentives for innovative medicines for all humanity. These normative arguments provide clues to partially understand the validity of compulsory licensing and reason for its existence in the patent system. In line with these debates, evidence, specifically from empirical literature, is still needed to understand the association between compulsory licensing and the intellectual property system, particularly the patent system. However, few literature sources are available, even non-empirical literature that provide associations between compulsory licensing and the patent system at the country level. 
For instance, a recent study summarized notable factors significantly influencing the issuance of compulsory licensing based on previous literature: local manufacturing capacity or importing possibilities to supply medicines and pressure from patent holders' threat of market withdrawal [15]. Additionally, Ford et al. (2007) suggested three factors sustaining access to antiretroviral therapy in Brazil and Thailand: legislation for access to medicines, public sector capacity to manufacture medicines, and strong civil societies to support government [16]. Paradoxically, the patent system itself has not been treated importantly to understand the compulsory licensing of patents. However, we should note that the compulsory licensing of patents was originally discussed and legislated in high-income countries, including the United States, Germany, and England with the development of the patent system [17, 18]. For instance, the state of South Carolina in the United States enacted the Act for the Encouragement of Arts and Science in 1784. The Act provided exclusive privileges of the machine (patents) to be subject to the same privileges and restrictions imposed on the author (copyrights) [17]. Additionally, it could be argued that compulsory licensing positively influenced the survival of certain patents or patent systems because compulsory licensing was granted as an alternative to the abolition of patents. In other words, compulsory licensing could be initiated in countries where suitable patent systems were established. However, there is a lack of empirical literature, even case studies, presenting associations between compulsory licensing and the patent system. Given this, we aimed to empirically investigate the associations between attempts to issue compulsory licensing of pharmaceuticals and the intellectual property system and to demonstrate the positive role and function of the matured intellectual property system in attempting compulsory licensing. Not surprisingly, a close relationship exists between the patent system and compulsory licensing of patents. Issuing compulsory licensing of patents requires a granted patent, and the grant of the patent requires a functioning patent system. In this study, we investigated the use of compulsory licensing as a legitimate part of the patent system and tool for the government to utilize by demonstrating that countries with a mature patent system were more likely to utilize compulsory licensing of pharmaceuticals. Variables and measurement Given the availability of the data, we selected 139 member countries of the WTO, including observers. We examined the presence of attempted compulsory licensing of pharmaceuticals between 1995 and 2014. Attempted compulsory licensing data were obtained from a previous study [1], in which attempts based on various sources were collected from the following: documents on the webpage of civil society entities; peer-reviewed articles and grey literature; and news sources such as the Access World News Collection. Additionally, we retrieved data from the publicly available TRIPS Flexibilities database, which is available at the website of Medicines Law & Policy, to supplement our data source [19]. The TRIPS Flexibilities database notably contains various instances, including a government announcement of the intent to initiate compulsory licensing and a request or application by a third party to invoke compulsory licensing. The database was constructed by several primary sources, including non-public documents [3]. 
However, we excluded cases based on non-public documents in this study. After we thoroughly reviewed the instances, we included one additional case from Kenya that was confirmed in other sources.

Based on the existing literature and theory [1, 2, 15, 16, 20,21,22,23,24], we chose a set of explanatory variables, specifically the geographical, economic, and political preconditions of a country, to control for their effects (Table 1). In summary, we included region, income, population, and polity in our model as macro context variables.

Table 1 Description of variables and sources used in the study

First, compulsory licensing has mostly been attempted in Africa, Asia, and Latin America [1, 2]. Thus, we included region as a dummy variable for the country location. Because only one European country was included in the database, we combined North America and Europe into the group termed 'others'. It has also been reported that the approach to compulsory licensing might vary with the country income level [1]. For instance, compulsory licensing in high-income countries might be used as a price negotiation tool. On the other hand, compulsory licensing in low-income countries might not be merely a negotiation tool but also an actual instrument to guarantee access to medicines. In line with this speculation, we included the income variable in the model.

Second, the number of patients is an important part of the context in which limited access to medicines is perceived as a public health problem, and might be connected to attempting compulsory licensing [25]. In line with this assumption, we added the total population variable, which approximates the number of patients, to the model. Furthermore, patent holders might employ various strategies against the issuance of compulsory licensing to retain their profit margins in different markets [25]. For instance, patent holders might cooperate with the government and discount the price or agree to voluntary licensing to leave the threat to their potential profit unrealized in big markets comprising many patients with purchasing power [20], while patent holders may forgo the profits derived from a small market with a few patients without purchasing power but still have an incentive to prevent compulsory licensing, which could possibly induce other cases in other markets [25].

Third, the political system might matter in attempting compulsory licensing. For instance, the political system might induce an increased supply of public goods and public policy [26]. Additionally, politicians might attempt compulsory licensing to legitimize their political party or regime [25]. Given this, we added polity as a dummy variable in the model. The Polity data series measures a state's level of democracy. Specifically, the series evaluates competitiveness, openness, and participation in a state's political system, including civil participation [27]. Based on these criteria, a "polity score", ranging from −10 to 10, was determined for each year and country. In the series, there are three types of political systems: democracy, with a polity score of 6 to 10; anocracy, with a score of −5 to 5; and autocracy, with a score of −10 to −6. Specifically, anocracy means a regime with mixed democratic and autocratic features. In this study, we used the latest version, Polity IV, to measure polity scores in 2014.
Finally, the Pharmaceutical Intellectual Property Protection (PIPP) index was used to identify associations between attempted compulsory licensing and the characteristics of intellectual property systems. The PIPP index summarizes the presence, term, and strength of various types of patents that can be claimed for pharmaceutical innovations [28]. Specifically, the PIPP index comprises three components: the Pharmaceutical Patent Rent Appropriation (PPRA) index; the Pharmaceutical Patent International Agreements (PPIA) index; and the Pharmaceutical Patent Enforcement (PPE) index. The PPRA index measures the presence of various types of pharmaceutical patents that provide protection for various pharmaceutical inventions. The PPIA index presents country membership in international agreements. The PPE index measures various statutory measures that either enhance or detract from public and private enforcement of patents [28]. The PIPP index has been used in various econometric literature sources regarding corporate strategies, foreign direct investment, and trade [29, 30].

Model specification and statistical analysis

Descriptive analyses were used to present the differences between the two groups, the CL-attempted group and the non-CL-attempted group. Specifically, the chi-squared test or Fisher's exact test was applied for the categorical variables, and the t test was conducted for the continuous variables, to examine whether the variables of interest differed between the groups. Next, two multivariate logistic regressions were conducted to elucidate factors that affected attempting compulsory licensing. Variables of interest such as region, population, income, and polity were added to the model to adjust for the macro context. Population and income were used after log transformation to normalize their distributions. The PIPP index was then added to answer our research questions. Furthermore, we conducted an additional analysis assigning weights to the income variable to reflect the real-world situation. Data management and analysis were performed using R statistical software (version 3.4.1). Significance was considered for p-values less than 0.05.

$$ \mathrm{CL\ attempted}_i = \beta_0 + \beta_1\,\mathrm{region}_i + \beta_2 \log(\mathrm{population}_i) + \beta_3 \log(\mathrm{income}_i) + \beta_4\,\mathrm{polity}_i + \beta_5\,\mathrm{PIPP}_i + \varepsilon_i $$
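The study fits these models in R (version 3.4.1); purely as an illustration of the specification above, an equivalent logistic regression could be set up as follows in Python/statsmodels. The data frame and its column names (cl_attempted, region, population, income, polity, pipp) are hypothetical.

```python
# Illustrative sketch of the logistic regression specified above; not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_model1(df: pd.DataFrame):
    data = df.copy()
    # Log-transform population and income to normalize their distributions.
    data["log_population"] = np.log(data["population"])
    data["log_income"] = np.log(data["income"])
    # Region and polity enter as categorical (dummy) variables; 'others' is the
    # reference region, as in the paper. Model 2 would additionally apply weights
    # related to income level (not shown here).
    formula = ("cl_attempted ~ C(region, Treatment('others')) + log_population"
               " + log_income + C(polity) + pipp")
    result = smf.logit(formula, data=data).fit()
    print(result.summary())  # coefficients and p-values, assessed at alpha = 0.05
    return result
```

The fitted coefficients correspond to β1 through β5 in the equation above.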
Results

Among the 139 countries, 24 attempted compulsory licensing, while 115 did not. In the CL-attempted group, the number of compulsory licensing attempts ranged from 1 to 16; the mean, median, and mode were 3.9, 3 and 1, respectively. Table 2 presents the descriptive statistics of the dependent and explanatory variables in the analysis. Significant differences were found in region, population, and the PIPP index between the groups. First, a significant difference was found in the distribution of countries across regions. In the CL-attempted group, 8, 7, 6, and 3 countries belonged to Africa, Asia, Latin America, and others, respectively, while 45, 16, 15, and 39 countries belonged to the same categories in the non-CL-attempted group. Second, a significant difference was noted in the population between the groups. The mean of the log-transformed population was 17.82 in the CL-attempted group and 16.24 in the non-CL-attempted group. Finally, the PIPP index, which is the main interest of this study, was significantly different: it was 2.13 and 1.61 for the CL-attempted group and the non-CL-attempted group, respectively. However, no difference was found in the other variables, such as income and polity.

Table 2 Descriptive statistics of variables used in the study

Table 3 presents the multivariate logistic regression for attempted compulsory licensing in the 139 countries. Note that we have two models, each run with two different databases, for the analysis. Specifically, model 1 does not apply weights to the income variable, while model 2 does. We used the database constructed by Son and Lee (2018) to detect the presence of attempted compulsory licensing of pharmaceuticals between 1995 and 2014. To complement the database, we used the TRIPS flexibilities database, which is publicly accessible at the website of Medicines Law & Policy.

Table 3 Factors affecting the attempt to issue compulsory licensing

In model 1, with the two different data sources, we found consistent results. Specifically, the regions of Africa and Latin America (reference: others), population, and the PIPP index were significant factors in attempting compulsory licensing, whereas the income and polity variables were not. There was a higher likelihood of attempting compulsory licensing in African and Latin American countries than in other regions such as Europe and North America. Population was positively associated with attempting compulsory licensing. Interestingly, we found that the PIPP index was positively associated with attempting compulsory licensing. In other words, countries with mature patent systems had a higher likelihood of attempting compulsory licensing in our model. In model 2, in which weights were applied to the income variable, we found similar results. The significant variables in model 1, namely region, population and the PIPP index, were also significant in model 2. Additionally, a higher likelihood exists for attempting compulsory licensing in Asian countries than in other regions such as Europe and North America.

Discussion

Compulsory licensing could be issued by a government in situations where the patent holder is either not using the patent or is not using it adequately, including for pharmaceuticals in demand that are not fully available. When governments attempt to issue compulsory licensing, or non-governmental entities request that a government issue compulsory licensing, the outcome is often a decrease in prices, similar to the case of the introduction of generics in the market [16, 31]. Although compulsory licensing is not a new concept, it has generated considerable debate as patent holders seek to advance their political stances favouring intellectual property rights over access to medicines. Specifically, opponents of compulsory licensing argue that compulsory licensing would be diametrically opposed to the function of the patent system. We designed this study to demonstrate the positive role of a mature intellectual property system in attempting compulsory licensing. The issuance of a compulsory license on a patent requires a granted patent, which is the outcome of a functioning patent system. Therefore, a link exists between the compulsory licensing of patents and the patent system.
In this study, we presented that the use of compulsory licensing is an essential part of the patent system and a legitimate tool for the government to utilize by demonstrating that compulsory licensing of pharmaceuticals is utilized in particular countries with a mature patent system. To our knowledge, this study is the first to empirically analyse the relationship between compulsory licensing and patent systems. We used a multivariate logistic model to regress the attempt to issue compulsory licensing on the characteristics of the intellectual property system (i.e., the PIPP index), controlling for macro context variables (i.e., region, population, income, and polity) at a country level. Interestingly, mature patent systems were positively associated with attempting compulsory licensing. How then might mature patent systems affect the issuance of compulsory licensing? The history of compulsory licensing might help to explain these interesting results because they provide a reason for the existence of compulsory licensing in a patent system. We will trace the origins of the compulsory licensing of patents in high-income countries and discuss interesting cases from Canada in the late 1980s. History of compulsory licensing of patents The following legislative cases in the 18-nineteenth century demonstrate that high-income countries actively reviewed and introduced compulsory licensing to complement the intellectual property system and develop their own industrial policy. The compulsory licensing of patents was followed by the compulsory licensing of copyrights [11]. In England, excessive price and an insufficient supply by monopolies were recognized as an economic disease [32]. The Statute of Anne in 1710, known as the world's first copyright law, reflected this thought. Someone other than the owner of the copyright who determined that a book was over-priced could file a complaint against the owner of the copyright with the court, and a fine was imposed to the owner of the copyright if it violated the price set by the court. Following the Statute of Anne, the United States, specifically the state of Connecticut, established the Copyright Act in 1783. The Copyright Act required copyrighted books to be sold at a reasonable price in sufficient quantities; otherwise, it would be possible to file a complaint with the court, similar to the situation of the Statute of Anne. Furthermore, the court had the authority to determine the quantity and price for the copyright owner; if the copyright owner did not accept or rejected such a request, the court had the authority to have the complainant print it. Therefore, the Copyright Act was evaluated as a legislative example of the world's first system of compulsory licensing of copyrights [17]. The United States, specifically the state of South Carolina, established the Act for the Encouragement of Arts and Science, providing privileges and restrictions imposed to authors (copyrights) as well as to inventors (patents) in 1784. The Act first devised compulsory licensing as a new remedy against the abuse of the patents when only invalidation of the patents existed. It should also be noted that the United States had been more aggressive in dealing with patent abuse during the development of new industry. Similarly, England and Germany discussed compulsory licensing as measures to minimize the adverse effects of the patent system and complement the shortcomings of the patent system in 1851 and 1853, respectively [18]. 
These trends favouring compulsory licensing in various fields persisted in the 1900s [5, 33]. The United States invoked government use on a regular basis to import pharmaceuticals in the 1960s. Furthermore, Canada utilized the compulsory licensing of pharmaceuticals not only for importation but also for local production to manage health expenditure [5, 34, 35]. For instance, the Canadian government enacted compulsory licensing by amending the Patent Act to encourage local manufacturing of the pharmaceutical and market competition in 1923 [5]. Furthermore, there was public concern on the price of pharmaceuticals that was deemed as excessive by the 1960s. Given the concern, the government made several amendments to the Patent Act, which would have significant implications on patenting activities. Additionally, the amendments allowed the use of compulsory licensing to manufacture or import pharmaceuticals that were restricted to manufacturing before amendments [5]. The benefits of compulsory licensing are palpable in Canada, and a number of follow-on drugs have entered the market. However, the positive stance regarding compulsory licensing of high-income countries notably changed in the twentieth century, specifically during the negotiations for TRIPS [36]. LMICs argued for the right to issue compulsory licensing for patented pharmaceuticals that were expensive for their citizens, while high-income countries, usually influenced by the pharmaceutical industry, argued for the limited use of compulsory licensing [37]. High-income countries feared that massive issuance of compulsory licensing would adversely impact the profits of the industry and would harm the ability of these companies to research and develop in the long term [36, 38]. Therefore, the use of compulsory licensing of pharmaceuticals was restricted to highly infectious diseases such as HIV/AIDS in LMICs [39], and compulsory licensing other than this restricted area was deemed unjustifiable [38]. Meanwhile, WHO adopted the Doha Declaration at the fourth Ministerial Conference, and affirmed that compulsory licensing is the right of the member countries of the WTO. The Doha declaration confirmed that compulsory licensing is intended as an efficient and straightforward instrument for LMICs to improve access to needed medicines [34]. The Doha declaration was quite effective in issuance of compulsory licensing in the short run [1]. A few attempts to issue compulsory licensing, including cases from low-income countries, occurred after the declaration. However, the benefits of the declaration in issuing compulsory licensing was not sustained in the long term [1]. Role of compulsory licensing in patent systems and health systems Our findings suggested a new role of compulsory licensing in current patent systems. Compulsory licensing is alternatively or complementally used in countries where mature patent systems were established. For instance, China released "Measures for Compulsory Licensing of Patent Implementation" that integrated and updated intellectual property laws to allow compulsory licensing in 2012 [40]. Under the new measures, China's State Intellectual Property Office (SIPO) may issue and terminate compulsory licensing for patents [41]. Furthermore, compulsory licensing in the patent system could be harmonized with the health system to secure access to essential medicines. In 2016, for the first time, the German Federal Patent Court notably ordered a provisional compulsory license under Section 24 of the Patent Act. 
The Court allowed a license seeker to continue to market an HIV drug that was made and sold since 2008 in Germany. Prior to that, the patent holder had requested a preliminary injunction against a license seeker in 2015 based on an active patent granted in 2011, partially covering the drug. The license seeker offered a voluntary license on the patent to the patent holder, but failed. The license seeker then responded by requesting a compulsory license under the Patent Act [42]. Specifically, the Court concluded that the license seeker was entitled to emergency relief and granted provisional compulsory licensing based on the need of patients to have the drug available continuously during the course of treatment [38]. Finally, compulsory licensing could be utilized to export pharmaceuticals to other countries according to paragraph 6 of the Doha declaration. However, cases on the compulsory licensing of pharmaceuticals for export are rare [33]. Meanwhile, an amendment to the TRIPS was enacted on January 23, 2017. This amendment allowed countries to grant compulsory licensing to generic suppliers exclusively to manufacture and export medicines to countries lacking production capacity [43]. It provides a secure legal basis for both potential importers and exporters to adopt domestic legislation, and establishes the means to allow countries to import affordable follow-on drugs from countries where pharmaceuticals are patented. Limitations of the study This study possesses several limitations. First, this study used the presence of the attempted compulsory licensing of pharmaceuticals between 1995 and 2014. The number of countries that attempted compulsory licensing during the study period was 24, comprising approximately 20% of the total of 139 countries analysed. Therefore, some may argue that compulsory licensing is not suitable for quantitative research. However, it should be noted that empirical research might provide more practical implications on issuing compulsory licensing in the real world. This research will contribute to expanding the current understanding of compulsory licensing. Second, because of scant available data and fewer cases on compulsory licensing, a cross-sectional analysis was applied to investigate the associations between attempting compulsory licensing and intellectual property systems. However, panel analysis might be a more accurate method to understand these associations. Third, this study used the database constructed by Son and Lee (2018) to detect the presence of the attempted compulsory licensing of pharmaceuticals occurring between 1995 and 2014 [1]. Meanwhile, a more comprehensive database regarding TRIPS flexibilities, including compulsory licensing, was publicly available [19]. We supplemented the database for the analysis. However, we excluded cases from Mongolia, Pakistan, Papua New Guinea, and the Philippines that were based on non-publicly available documents such as patent letters held by procurement agencies. Our study provided evidence of an association between attempting compulsory licensing and matured patent systems. Specifically, we demonstrated that the use of compulsory licensing is an essential part of the patent system and a legitimate tool for the government to utilize. This finding contradicts our current understanding of compulsory licensing, such as compulsory licensing as a measure to usurp traditional patent systems and sometimes diametrically opposed to the patent system. 
Furthermore, the findings suggest a new role of compulsory licensing in current patent systems: compulsory licensing could be a potential alternative or complement to achieve access to medicines in health systems through manufacturing and exporting patented pharmaceuticals. Son K-B, Lee T-J. Compulsory licensing of pharmaceuticals reconsidered: current situation and implications for access to medicines. Global Public Health. 2017:1–11. Beall R, Kuhn R. Trends in compulsory licensing of pharmaceuticals since the Doha declaration: a database analysis. PLoS Med. 2012;9(1):e1001154. FM't Hoen E, Veraldi J, Toebes B, Hogerzeil HV. Medicine procurement and the use of flexibilities in the Agreement on Trade-Related Aspects of Intellectual Property Rights, 2001–2016. Bulletin of the World Health Organization. 2018;96(3):185. Son K-B, Lee T-J. The trends and constructive ambiguity in international agreements on intellectual property and pharmaceutical affairs: implications for domestic legislations in low-and middle-income countries. Global Public Health. 2018;13(9):1169–78. Kuek V, Phillips K, Kohler JC. Access to medicines and domestic compulsory licensing: learning from Canada and Thailand. Global public health. 2011;6(2):111–24. United Nations. Report of the United Nations secretary General's high-level panel on access to medicines. 2016. Wirtz VJ, Hogerzeil HV, Gray AL, Bigdeli M, De Joncheere CP, Ewen MA, et al. Essential medicines for universal health coverage. Lancet. 2017;389(10067):403–76. Parliament E. EU options for improving access to medicines; 2017. Feldman J. Compulsory licenses: the dangers behind the current practice. J Int'l Bus & L. 2009;8:137. Taylor J. Compulsory licensing: a misused and abused international trade law; 2017. Monte WN. Compulsory licensing of patents. Inf Commun Technol Law. 2016;25(3):247–71. Maybarduk P, Rimmington S. Compulsory licenses: a tool to improve global access to the HPV vaccine? American J Law Med. 2009;35(2–3):323–50. Heller MA, Eisenberg RS. Can patents deter innovation? The anticommons in biomedical research. Science. 1998;280(5364):698–701. Saroha S, Kaushik D, Nanda A. Compulsory licensing of drug products in developing countries. J Generic Med. 2015;12(3–4):89–94. Ramani SV, Urias E. Access to critical medicines: when are compulsory licenses effective in price negotiations? Soc Sci Med. 2015;135:75–83. Ford N, Wilson D, Chaves GC, Lotrowska M, Kijtiwatchakul K. Sustaining access to antiretroviral therapy in the less-developed world: lessons from Brazil and Thailand. Aids. 2007;21:S21–S9. Brand O. The dawn of compulsory patent licensing. Intellect Prop Q. 2007;2:216. Penrose ET. The economics of the international patent system: Baltimore. Md: Johns Hopkins Press. 1951. Medicines law & Policy. The TRIPS flexibilities database. 2019. Abbott FM, Reichman JH. The Doha Round's public health legacy: strategies for the production and diffusion of patented medicines under the amended TRIPS provisions. J Int Econ Law. 2007;10(4):921–87. Bird RC. Developing nations and the compulsory license: maximizing access to essential medicines while minimizing investment side effects. J Law, Med Ethics. 2009;37(2):209–21. Cohen-Kohler JC, Forman L, Lipkus N. Addressing legal and political barriers to global pharmaceutical access: options for remedying the impact of the agreement on trade-related aspects of intellectual property rights (TRIPS) and the imposition of TRIPS-plus standards. Health Econ, Policy Law. 2008;3(3):229–56. Ravvin M. 
Incentivizing access and innovation for essential medicines: a survey of the problem and proposed solutions. Public Health Ethics. 2008;1(2):110–23. Flynn M. Origins and limitations of state-based advocacy: Brazil's AIDS treatment program and global power dynamics. Polit Soc. 2013;41(1):3–28. Son K-B, Kim C-Y, Lee T-J. Understanding of for whom, under what conditions and how the compulsory licensing of pharmaceuticals works in Brazil and Thailand: a realist synthesis. Global public health. 2018. Persson T, Tabellini G. Political economics and macroeconomic policy. Handb Macroecon. 1999;1:1397–482. Marshall MG, Jaggers K. Dataset users' manual: political regime characteristics and transitions, 1800–2006. POLITY IV PROJECT. Liu M, La Croix S. A cross-country index of intellectual property rights in pharmaceutical inventions. Res Policy. 2015;44(1):206–16. La Croix S, Liu M. The effect of GDP growth on pharmaceutical patent protection, 1945-2005. Brussels Econ Rev 2009;52(3/4):355–375. Liu M, La Croix S. The impact of stronger property rights in pharmaceuticals on innovation in developed and developing countries. Honolulu: University of Hawaii at Mānoa; 2014. Luo J, Oliveira MA, Ramos MB, Maia A, Osorio-de-Castro CG. Antiretroviral drug expenditure, pricing and judicial demand: an analysis of federal procurement data in Brazil from 2004–2011. BMC Public Health. 2014;14(1):367. Bracha O. The adventures of the statute of Anne in the land of unlimited possibilities: the life of a legal transplant. Berkeley Technol Law J. 2010;25(3):1427–73. FM't Hoen E. Private patents and public health: changing intellectual property rules for access to medicines: health action international; 2016. Lybecker KM, Fowler E. Compulsory licensing in Canada and Thailand: comparing regimes to ensure legitimate use of the WTO rules. J Law, Med Ethics. 2009;37(2):222–39. Gorecki PK, Henderson I. Compulsory patent licensing of drugs in Canada: a comment on the debate. Canadian Public Policy/Analyse de Politiques. 1981:559–68. Ford S. Compulsory licensing provisions under the TRIPS agreement: balancing pills and patents. Am U Int'l L Rev. 1999;15:941. McCabe KW. The January 1999 review of article 27 of the TRIPS agreement: diverging views of developed and developing countries toward the patentability of biotechnology. J Intell Prop L. 1998;6:41. von Falck A. Compulsory licenses as a defense in pharmaceutical and biotech patent litigation. Pharmaceutical patent analyst. 2016;5(6):351–3. t Hoen EFM, Boulet P, Baker BK. Data exclusivity exceptions and compulsory licensing to promote generic medicines in the European Union: a proposal for greater coherence in European pharmaceutical legislation. J Pharmaceutical Policy Pract. 2017;10(1):19. Francisco M. Compulsory license bandwagon gains momentum. Nat Publ Group. 2012. Miller Canfield PLC. China allows compulsory licensing. 2012. Germany CMS. German Federal Court of justice upholds provisional compulsory license for HIV drugs; 2017. World Trade Organization. WTO IP rules amended to ease poor countries' access to affordable medicines 2017 [Available from: https://www.wto.org/english/news_e/news17_e/trip_23jan17_e.htm. This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2018S1A5A2A02065850). 
College of Pharmacy, Ewha Womans University, 52Ewhayeodae-gil, Seodaemun-gu, Seoul, 03760, South Korea Kyung-Bok Son Search for Kyung-Bok Son in: KS designed the study, collected and analyzed data, and wrote the final manuscript. The author read and approved the final manuscript. Correspondence to Kyung-Bok Son. The author declares that they have no competing interests. Son, K. Importance of the intellectual property system in attempting compulsory licensing of pharmaceuticals: a cross-sectional analysis. Global Health 15, 42 (2019) doi:10.1186/s12992-019-0485-7 Accepted: 05 June 2019 Compulsory licensing Access to medicines Patent system Economics and trade
Propagation of long-crested water waves DCDS Home On the elliptic equation Δu+K up = 0 in $\mathbb{R}$n February 2013, 33(2): 579-597. doi: 10.3934/dcds.2013.33.579 Pure discrete spectrum in substitution tiling spaces Marcy Barge 1, , Sonja Štimac 2, and R. F. Williams 3, Department of Mathematics, Montana State University, Bozeman, MT 59717, United States Department of Mathematics, University of Zagreb, Bijenička 30, 10 000 Zagreb Department of Mathematics, University of Texas, Austin, TX 78712, United States Received July 2011 Revised April 2012 Published September 2012 We introduce a technique for establishing pure discrete spectrum for substitution tiling systems of Pisot family type and illustrate with several examples. Keywords: Pure discrete spectrum, substitution, maximal equicontinuous factor, coincidence., tiling space. Mathematics Subject Classification: Primary: 37B50, 52C22, 52C23; Secondary: 37B05, 11R0. Citation: Marcy Barge, Sonja Štimac, R. F. Williams. Pure discrete spectrum in substitution tiling spaces. Discrete & Continuous Dynamical Systems, 2013, 33 (2) : 579-597. doi: 10.3934/dcds.2013.33.579 S. Akiyama and J. Y. Lee, Algorithm for determining pure pointedness of self- affine tilings, Adv. Math., 226 (2011), 2855-2883. doi: 10.1016/j.aim.2010.07.019. Google Scholar J. E. Anderson and I. F. Putnam, Topological invariants for substitution tilings and their associated $c^*$-algebras, Ergodic Theory & Dynamical Systems, 18 (1998), 509-537. doi: 10.1017/S0143385798100457. Google Scholar P. Arnoux and S. Ito, Pisot substitutions and Rauzy fractals, Bull. Belg. Math Soc., 8 (2001), 18-2007. Google Scholar J. Auslander, "Minimal Flows and Their Extensions," North-Holland Mathematical Studies, 153, North-Holland, Amsterdam, New York, Oxford, and Tokyo, 1988. Google Scholar M. Baake and R. V. Moody, Weighted Dirac combs with pure point diffraction, J. Reine Angew. Math., 573 (2004), 61-94. . doi: 10.1515/crll.2004.064. Google Scholar V. Baker, M. Barge and J. Kwapisz, Geometric realization and coincidence for reducible non-unimodular Pisot tiling spaces with an application to $\beta$-shifts, J. Instit. Fourier, 56 (2006), 2213-2248. doi: 10.5802/aif.2238. Google Scholar M. Barge, H. Bruin, L. Jones and L. Sadun, Homological Pisot substitutions and exact regularity,, To appear in Israel J. Math., (). Google Scholar M. Barge and J. Kellendonk, Proximality and pure point spectrum for tiling dynamical systems,, preprint, (). Google Scholar M. Barge, J. Kellendonk and S. Schmeiding, Maximal equicontinuous factors and cohomology of tiling spaces,, To appear in Fund. Math., (). Google Scholar M. Barge and J. Kwapisz, Geometric theory of unimodular Pisot substitutions, Amer J. Math., 128 (2006), 1219-1282. doi: 10.1353/ajm.2006.0037. Google Scholar M. Barge and C. Olimb, Asymptotic structure in substitution tiling spaces,, To appear in Ergodic Theory & Dynamical Systems, (). Google Scholar V. Berthé, T. Jolivet and A. Siegel, Substitutive Arnoux-Rauzy substitutions have pure discrete spectrum,, preprint, (). Google Scholar V. Berthé and A. Siegel, Tilings associated with beta-numeration and substitutions, Integers: Electronic Journal of Combinatorial Number Theory, 5 (2005), A02.arXiv:1108.5574. Google Scholar F. M. Dekking, The spectrum of dynamical systems arising from substitutions of constant length, Z. Wahrscheinlichkeitstheorie verw. Gebiete, 41 (1978), 221-239. Google Scholar S. Dworkin, Spectral theory and X-ray diffraction, J. Math. Phys., 34 (1993), 2965-2967. 
doi: 10.1063/1.530108. Google Scholar N. P. Fogg, "Substitutions in Dynamics, Arithmetics and Combinatorics," Lecture notes in mathematics, (eds. V. Berthé, S. Ferenczi, C. Mauduit and A. Siegel), Springer-Verlag, 2002. Google Scholar D. Fretlöh and B. Sing, Computing modular coincidences for substitution tilings and point sets, Discrete Comput. Geom., 37 (2007), 381-407. doi: 10.1007/s00454-006-1280-9. Google Scholar S. Ito and H. Rao, Atomic surfaces, tiling and coincidence I. Irreducible case, Israel J. Math., 153 (2006), 129-156. doi: 10.1007/BF02771781. Google Scholar R. Kenyon, Ph. D. Thesis, Princeton University, 1990. Google Scholar R. Kenyon and B. Solomyak, On the characterization of expansion maps for self-affine tilings, Discrete Comput. Geom., 43 (2010), 577-593. Google Scholar J. Y. Lee, Substitution Delone multisets with pure point spectrum are inter-model sets, Journal of Geometry and Physics, 57 (2007), 2263-2285. doi: 10.1016/j.geomphys.2007.07.003. Google Scholar J. Y. Lee and R. Moody, Lattice substitution systems and model sets, Discrete Comput. Geom., 25 (2001), 173-201. doi: 10.1007/s004540010083. Google Scholar J. Y. Lee, R. Moody and B. Solomyak, Consequences of Pure Point Diffraction Spectra for Multiset Substitution Systems, Discrete Comp. Geom., 29 (2003), 525-560. doi: 10.1007/s00454-003-0781-z. Google Scholar J. Y. Lee and B. Solomyak, Pure point diffractive substitution Delone sets have the Meyer property, Discrete Comp. Geom., 34 (2008), 319-338. doi: 10.1007/s00454-008-9054-1. Google Scholar J. Y. Lee and B. Solomyak, Pisot family self-affine tilings, discrete spectrum, and the Meyer property,, preprint, (). Google Scholar A. N. Livshits, Some examples of adic transformations and substitutions, Selecta Math. Sovietica, 11 (1992), 83-104. Google Scholar P. Michel, Coincidence values and spectra of substitutions, Zeit. Wahr., 42 (1978), 205-227. doi: 10.1007/BF00641410. Google Scholar A. Siegel and J. Thuswaldner, Topological properties of Rauzy fractals,, preprint., (). Google Scholar V. F. Sirivent and B. Solomyak, Pure discrete spectrum for one-dimensional substitution systems of Pisot type, Canad. Math. Bull., 45 (2002), 697-710. Dedicated to Robert V. Moody. doi: 10.4153/CMB-2002-062-3. Google Scholar B. Solomyak, Nonperiodicity implies unique composition for self-similar translationally finite tilings, Discrete Comput. Geometry, 20 (1998), 265-279. doi: 10.1007/PL00009386. Google Scholar B. Solomyak, Eigenfunctions for substitution tiling systems, Advanced Studies in Pure Mathematics, 49 (2007), 433-454. Google Scholar B. Solomyak, Dynamics of self-similar tilings, Ergodic Theory & Dynamical Systems, 17 (1997), 695-738. doi: 10.1017/S0143385797084988. Google Scholar W. A. Veech, The equicontinuous structure relation for minimal Abelian transformation groups, Amer. J. of Math. 90 (1968), 723-732. Google Scholar Marcy Barge. Pure discrete spectrum for a class of one-dimensional substitution tiling systems. Discrete & Continuous Dynamical Systems, 2016, 36 (3) : 1159-1173. doi: 10.3934/dcds.2016.36.1159 Jeanette Olli. Endomorphisms of Sturmian systems and the discrete chair substitution tiling system. Discrete & Continuous Dynamical Systems, 2013, 33 (9) : 4173-4186. doi: 10.3934/dcds.2013.33.4173 Rui Pacheco, Helder Vilarinho. Statistical stability for multi-substitution tiling spaces. Discrete & Continuous Dynamical Systems, 2013, 33 (10) : 4579-4594. doi: 10.3934/dcds.2013.33.4579 Noriaki Kawaguchi. Maximal chain continuous factor. 
StudyDaddy, Geometry (answered question)
Q: Find the angles between the diagonals of a cube.
A tutor (Prowriters) has posted a paid answer, of which only an obscured preview is visible. The recoverable part of the preview's working (the hidden vectors are implied by the visible norms and dot product, so they appear to be (1,1,1) and (1,1,0)) reads:
$$\cos\theta \;=\; \frac{\langle (1,1,1),(1,1,0)\rangle}{\sqrt{1^{2}+1^{2}+1^{2}}\cdot\sqrt{1^{2}+1^{2}+0^{2}}} \;=\; \frac{1+1+0}{\sqrt{3}\cdot\sqrt{2}}.$$
The concluding value of \(\theta\) in the preview is not legible.
Related questions on the same site:
Q: What is the perimeter of triangle with vertices of ##(1,2) (3,-4)## and ##(-4,5)##?
Q: draw a plane containing four coplanar points A, B, C, and D with exactly 3 collinear points A, B, and...
Q: In a 30-60-90 triangle, where the long leg is 12, what is the length of the short leg?
Q: How do you evaluate ##sin^ -1(sqrt(2)/2)##?
Q: What does it mean to have a negative angle?
Q: What is the relationship between corresponding sides, altitudes, and medians in similar triangles?
Q: The opposite angles of a parallelogram have measures of ##3x-20## and ##x+15##. What is ##x##?
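For reference, and as an illustrative addition that is not part of the tutor's paywalled answer, the standard computation for the angle between two space (body) diagonals of a cube places the cube at unit coordinates and takes the diagonals along the vectors (1,1,1) and (1,1,-1):
$$\cos\theta = \frac{(1,1,1)\cdot(1,1,-1)}{\lVert(1,1,1)\rVert\,\lVert(1,1,-1)\rVert} = \frac{1+1-1}{\sqrt{3}\cdot\sqrt{3}} = \frac{1}{3}, \qquad \theta = \arccos\tfrac{1}{3} \approx 70.5^{\circ},$$
with the supplementary angle \(180^{\circ}-70.5^{\circ}\approx 109.5^{\circ}\). The visible fragment of the posted answer instead pairs a space diagonal with a face diagonal, (1,1,1) and (1,1,0), which gives \(\cos\theta = 2/\sqrt{6}\) and \(\theta \approx 35.3^{\circ}\).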
Hemodynamic effects of perfusion level of peripheral ECMO on cardiovascular system Kaiyun Gu1,2, Zhe Zhang1, Bin Gao3, Yu Chang3 & Feng Wan1,2 BioMedical Engineering OnLine volume 17, Article number: 59 (2018) Cite this article Peripheral ECMO is an effective cardiopulmonary support in clinical. The perfusion level could directly influence the performances and complications. However, there are few studies on the effects of the perfusion level on hemodynamics of peripheral ECMO. The geometric model of cardiovascular system with peripheral ECMO was established. The blood assist index was used to classify the perfusion level of the ECMO. The flow pattern from the aorta to the femoral artery and their branches, blood flow rate from aorta to brain and limbs, flow interface, harmonic index of blood flow, wall shear stress and oscillatory shear index were chosen to evaluate the hemodynamic effects of peripheral ECMO. The results demonstrated that the flow rate of aorta outlets increased and perfusion condition had been improved. And the average flow to the upper limbs and brain has a positive correlation with BAI (r = 0.037, p < 0.05), while there is a negative correlation with lower limbs (r = − 0.054, p < 0.05). The HI has negative correlation with BAI (p < 0.05, r < 0). The blood interface is further from the heart with the BAI decrease. And the average WSS has negative correlation with BAI (p < 0.05, r = − 0.983) at the bifurcation of femoral aorta and has positive correlation with BAI (p < 0.05, r = 0.99) at the inner aorta. The OSI under different BAI is higher (reaching 0.4) at the inner wall of the aortic arch, the descending aorta and the femoral access. The pathogenesis of peripheral ECMO with different perfusion levels varies; its further research will be thorough and extensive. Veno-arterial extra corporeal membrane oxygenation (VA ECMO) is a common treatment for respiratory failure and heart failure clinically [1,2,3,4]. VA-ECMO drains the blood from the venous vessels and then returns the processed blood to the arterial vessels and this has an advantage of pulmonary as well as cardiac support. Peripheral ECMO is one of the VA ECMO mode which usually has an arterial cannula placed into the right femoral artery for infusion and a venous cannula placed in the right common femoral vein for extraction [5,6,7,8]. Peripheral ECMO is usually used in cardiogenic shock and cardiac arrest and leaves small surgical wound. Lower extremity ischemia, amputation and vascular complication are common complications of ECMO treatment [9]. High blood flow velocity from the ECMO cannula flow may result in high wall shear stress of local vascular and high blood pressure which may result in vascular complication [10]. The retrograde ECMO flow also increases the left ventricular afterload [11], even restrict the opening of the aortic valve [12, 13]. The flow interface caused by ECMO jet flow and cardiac jet flow may also lead to severe flow conditions resulting in platelet activation or hemolysis. Coronary arteries, cerebral blood vessels and upper limbs may be also under threat of hypoxaemia for proximal branches of the aorta receive predominantly deoxygenated blood ejected from the heart [14]. Limb ischemic is a common complication with peripheral ECMO and more than half of our patients developed it [15,16,17]. 
Peripheral ECMO may not adequately supply the coronary arteries and the proximal branches of the aortic arch, and these vessels may be filled with poorly oxygenated blood ejected by the heart if the patient has respiratory failure. The non-pulsatile blood flow of ECMO is another risk, because it alters the pulsatile flow patterns and cerebral autoregulation [18]. Thus, the likelihood of ECMO complications is related to the perfusion condition of ECMO. In clinical practice, the perfusion level is chosen largely on the basis of physicians' own experience, and the hemodynamic performance under different perfusion levels of ECMO has received little study. Computational fluid dynamics has been widely used in hemodynamic studies of the cardiovascular system and related assist devices. The resulting hemodynamic velocity vectors, wall shear stress, pressure gradients and other factors were obtained and analyzed to investigate the effects of ECMO on the blood and the vasculature. Avrahami et al. [19, 20] analyzed the hemodynamic characteristics of aortic cannulae and the associated risk of developing cerebral embolism and hemolysis. Menon et al. [21] also studied the jet wake of aortic cannulae and the related disease risk. This article uses a computational fluid dynamic method [22,23,24,25] to clarify the hemodynamic performance of peripheral ECMO under different perfusion levels, and thereby to put its empirical clinical use on a more scientific footing. Specifically, the hemodynamic analyses were compared under different blood perfusion conditions. That is, a blood assist index (BAI) was defined to represent the ratio of ECMO energy to total energy. Four numerical simulations were performed under different BAI values (80, 60, 40, 0%) and the results were compared. Definition of the blood assist index (BAI) In order to evaluate the support level of ECMO, the blood assist index (BAI) was defined to represent the ratio of ECMO energy to total energy, denoted as Eq. 1. $$BAI = \frac{1}{T_{c}}\int_{0}^{T_{c}} \frac{F_{E}(t)}{F_{E}(t) + F_{C}(t)}\, dt,$$ where \(F_{E}(t)\) is the waveform of the ECMO cannula outlet blood flow, \(F_{C}(t)\) is the waveform of the cardiac output blood flow, and \(T_{c}\) is the cardiac cycle. BAI is expressed in %. When BAI equals 1 (i.e., 100%), the ECMO provides full support and the cardiac output is zero. When BAI is lower than 1, the ECMO provides partial support. When BAI is zero, only the native heart supplies the body. In this study, to meet the physiological requirement, the total blood perfusion was set to about 5 L/min, and four perfusion conditions were assumed, corresponding to BAI values of 80, 60, 40 and 0%. That is, the lower the BAI, the weaker the contribution of the ECMO. Computational fluid dynamic method In order to investigate the hemodynamic performance of peripheral ECMO under different perfusion conditions, several numerical simulations were conducted. The three-dimensional geometry of the peripheral ECMO cannulation was created as shown in Fig. 1. The vessel sizes were taken from the literature [26] and are listed in Table 1. Fig. 1 The ideal three-dimensional geometry of peripheral ECMO cannulation and related blood vessels Table 1 The vessel sizes of the geometry To determine a mesh with good quality and little influence on the results of the CFD simulation, a grid independence test was conducted. Five meshes with different numbers of elements, from 1.4 to 8.1 million, were generated. A steady-flow numerical study, with a constant-velocity inlet boundary condition and a 70 mmHg pressure condition at the outlets, was used for this analysis.
The pressure at the inlet plane of the aorta and the average wall shear stress over the whole vessel were chosen as the indicators, and their relative errors were used to evaluate the accuracy of the simulation. The test results are listed in Table 2. Both relative errors, of pressure and of wall shear stress, decreased to less than 1% when the number of elements exceeded 2,358,058. Hence the mesh containing 2,358,058 elements was used in this work. Table 2 The results of the grid independence analysis Four simulation cases were conducted in this work: cases 1–4 simulate peripheral ECMO under BAI values of 80, 60, 40 and 0%, respectively. The boundary conditions were set as a pulsatile velocity inlet and fully developed constant-pressure outlets. The transient inlet flow rate waveforms of the aorta were obtained from a lumped parameter model (LPM) [27,28,29], as shown in Fig. 2. The period of the pulsatile flow was set to 0.8 s, equal to the cardiac period. The sum of the average ECMO cannula flow rate and the cardiac output was kept constant, so that the total blood perfusion was 5 L/min, meeting the physiological requirement. The outlet pressure was set to 70 mmHg for all vessel outlets. Fig. 2 The inlet flow rate waveforms of the aorta under different BAI In all simulation cases, the blood was assumed to be a homogeneous, incompressible, Newtonian fluid in order to reduce the computational cost, and the density and viscosity of blood were set to 1050 kg/m3 and 0.0035 kg/(m·s), respectively [25]. Blood flow was therefore modeled using the incompressible Navier–Stokes equations with moving boundaries. Definition of indicators of hemodynamic performance To assess the perfusion condition of the main vessels under ECMO, blood flow rate ratios (R) at the arterial bifurcations were defined. R_up is the ratio of the upper limb and brain blood supply to the total blood supply, and R_down is the ratio of the lower limb blood supply to the total blood supply. They were defined as Eqs. 2 and 3. $$R_{up} = \frac{Q_{\mathrm{innominate\ artery}} + Q_{\mathrm{left\ common\ carotid\ artery}} + Q_{\mathrm{left\ subclavian\ artery}}}{Q_{\mathrm{aorta}}}$$ $$R_{down} = \frac{Q_{\mathrm{left\ femoral\ artery}} + Q_{\mathrm{right\ femoral\ artery}}}{Q_{\mathrm{aorta}}}.$$ To evaluate the pulsatility of the flow rate, the harmonic index (HI) was proposed. HI is a measure of the relative contribution of the non-static intensity to the overall signal intensity, and this parameter ranges from zero (for a steady nonzero flow rate signal) to one (for a purely oscillatory signal with a time average of zero) [19]. The harmonic index (HI) is defined as Eq. 4: $$HI = \frac{\sum\nolimits_{n = 1}^{+\infty} T[n\omega_{0}]}{\sum\nolimits_{n = 0}^{+\infty} T[n\omega_{0}]},$$ where \(T[n\omega_{0}]\) is the magnitude of the Fourier-transformed flow rate signal at the n-th harmonic of the fundamental frequency \(\omega_{0}\). To characterize the flow oscillation during the cardiac cycle and quantify the change in direction and magnitude of the WSS, the oscillatory shear index (OSI) was calculated as Eq. 5 [20]: $$OSI = \frac{1}{2}\left( 1 - \frac{\left| \int_{0}^{T} \tau_{w}\, dt \right|}{\int_{0}^{T} \left| \tau_{w} \right| dt} \right),$$ where \(\tau_{w}\) is the wall shear stress and T is one cardiac cycle. The OSI value can vary from 0 to 0.5, where 0 describes totally unidirectional WSS and 0.5 a purely oscillatory shear flow with a net WSS of zero. Areas of high OSI are predisposed to endothelial dysfunction and atherogenesis [21, 22]. A minimal computational sketch of how BAI, HI and OSI can be evaluated from sampled waveforms is given below. Figure 3 shows the flow rate of all the outlets of the cases.
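As a concrete illustration only, and not part of the original study, the following minimal Python sketch shows one way the three indicators defined above (BAI, HI and OSI) could be evaluated from uniformly sampled waveforms over a single cardiac cycle; the synthetic waveforms, amplitudes and function names are assumptions made for this example.

import numpy as np

def bai(f_ecmo, f_cardiac):
    # Eq. 1: time average of the ECMO share of the total flow over one cycle
    # (samples are assumed uniform over exactly one cardiac cycle)
    ratio = f_ecmo / (f_ecmo + f_cardiac)
    return ratio.mean()          # multiply by 100 to express in %

def harmonic_index(flow):
    # Eq. 4: sum of the non-static harmonic magnitudes over the sum of all magnitudes;
    # 0 for a steady nonzero signal, 1 for a zero-mean purely oscillatory signal
    mags = np.abs(np.fft.rfft(flow))
    return mags[1:].sum() / mags.sum()

def osi(tau_w):
    # Eq. 5: 0 for unidirectional WSS, 0.5 for purely oscillatory WSS
    return 0.5 * (1.0 - np.abs(tau_w.mean()) / np.abs(tau_w).mean())

# Synthetic one-cycle signals (illustrative values, not the study's data)
t_c, n = 0.8, 800                                                    # cardiac period [s], samples per cycle
t = np.arange(n) * (t_c / n)
f_cardiac = 9.0 * np.clip(np.sin(2 * np.pi * t / t_c), 0.0, None)    # pulsatile ejection [L/min]
f_ecmo = np.full(n, 3.0)                                             # continuous ECMO cannula flow [L/min]
tau_w = 1.0 + 0.8 * np.sin(2 * np.pi * t / t_c)                      # wall shear stress trace [Pa]

print("BAI :", bai(f_ecmo, f_cardiac))
print("HI  :", harmonic_index(f_cardiac))
print("OSI :", osi(tau_w))

In a CFD post-processing workflow, the OSI computation would typically be repeated for every wall element of the mesh using the stored WSS time history.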
Figure 3 is the mass flow rate of the peripheral ECMO under different BAI. The flow rate of the outlets is changing with time and the changing trend is related with the inlet flow of the aorta. There is descend value around 0.4 s due to the end of cardiac ejection. The peak values of flow rate of all the outlets increase with the decrease of BAI. The peak value of innominate artery (IA) is much higher than other outlets. Without ECMO (0% BAI), there exists backflow moment of IA, left common carotid artery (LCCA) and left subclavian artery (LSA). While for peripheral ECMO, along with the increase of BAI, the backflow situation gets better with the increase of BAI, and under 80% BAI, there is no backflow moment of all the outlets. With ECMO assistance, the backflow condition had been improved. The flow rate of all the outlets under different BAI. a The flow rate of innominate artery, b the flow rate of left common carotid artery, c the flow rate of left subclavian artery, d the flow rate of left femoral artery Figure 4 is the average mass flow rate under different BAI. The blood flow of the IA, LCCA and LSA supply the upper limbs and brain. The blood flowing into the left femoral artery (LFA) and right femoral artery (RFA) perfuse the lower limbs. And the average flow to the upper limbs and brain has a positive correlation with BAI of ECMO (r = 0.037, p < 0.05), while there is a negative correlation with lower limbs (r = − 0.054, p < 0.05). The average mass flow rate under different BAI Figure 5 is the HI of all the flow rate waves under different BAI. The HI decreases with the increase of BAI. The HI of all the flow rate waves under different BAI Figure 6 is the velocity vector of aorta arch and femoral bifurcation at peak moment. With the BAI increases, the energy of ECMO becomes weaker and the energy of local heart becomes stronger. There is backflow at the inside of the aorta bent. With the BAI gets lower, the vortex in the aorta arch becomes stronger. At the bifurcation of femoral, there exists vortex near the jet flow of ECMO. Seen from Fig. 7, the blood interface changes with the BAI, and with the BAI decrease, the location is further from the heart, the backflow inside the aorta bent is stronger. The velocity vector of aorta arch and femoral bifurcation at peak moment under different BAI The wall shear stress (WSS) contours of aortic arch at feature moments under different BAI Figures 7 and 8 illustrate the wall shear stress (WSS) contours at feature moments under different BAI. The inside aorta bent (Fig. 7) and side of femoral (Fig. 8) are high WSS region of femoral ECMO. The WSS distribution also changes with BAI. For aorta arch, the WSS of the inner aorta arch and arterial bifurcation increase with the BAI decreases. The WSS of femoral bifurcation decrease with the BAI decreases and it is higher than aorta arch. Table 3 is the average WSS of the both regions. The wall shear stress (WSS) contours at the bifurcation of femoral artery under different BAI. a The WSS under 0% BAI, b is the WSS under 40% BAI, c the WSS under 60% BAI, d the WSS under 80% BAI Table 3 The average WSS under different BAI (pa) Figure 9 is the OSI under different BAI. With the BAI decrease, for the inner wall of the aortic arch, the OSI decrease, and for the descending aorta, the OSI increase, and for the femoral access, the OSI increase. The OSI under different BAI VA ECMO is an effective treatment for severe cardiopulmonary failure, but the complications of ECMO threaten the life of patients. 
Except bleeding and infection which are mainly caused by the operation, other complications, such as ischemia, hypoxaemia, hyperperfusion and vascular complication, have relationship with the hemodynamic flow field. This article focuses on the hemodynamic difference of the peripheral ECMO under different BAI. CFD was used to solve these problems obtaining the hemodynamic results of the two modes and BAI was defined to represent the perfusion condition of ECMO. Hemodynamic factors different BAI were compared. Although the effect of ECMO on cardiovascular system has attracted many interesting, the relationship between support level of ECMO, blood distribution and hemodynamic states are not clear. This is the first paper revealing these relationships by using numerical method. Limb ischemia is a common complication of ECMO and it may lead to limb loss even death in more serious cases. Slottosch et al. [30] reported that 20.8% of the patients undergoing peripheral ECMO required treatment of lower limb ischemia. Cheng et al. [31] stated that for peripheral ECMO 16.9% of patients meet lower extremity ischemia and 4.7% of patients have lower extremity amputation. Distal perfusion catheters were proposed to improve this situation [32], but there is still a 3.2% rate of limb ischemia even though distal perfusion catheters were implanted [33]. Cerebral blood vessels and upper limbs are also under the threat of hypoxaemia undergoing peripheral ECMO [34] for proximal branches of the aorta receive predominantly deoxygenated blood ejected from the left heart and this situation persists as long as the return cannula is placed centrally [35]. Our results suggest that blood perfusion to limbs is correlation with BAI (p < 0.05), and for lower limbs there is negative correlation between BAI and flow rate while for upper limbs and brain it is positive correlated (Fig. 4). These results are consistent with clinical events listed above. The results of this study show that the support level could directly affect the distribution of blood between upper limbs and lower limbs. That means, the output of ECMO should be regulated in response to the changes in both perfusion requirements and cardiac function to achieve an optimal clinical outcome. For peripheral ECMO, there exists blood interface due to the different direction of blood supply from heart and peripheral ECMO. After cardiac injection, the perfusion from the heart becomes weaker, retrograde flow from peripheral ECMO reaches aorta arch meeting the antegrade flow from heart forming the interface. The peripheral ECMO mainly supply the blood to the bifurcation of the arch. The flow interface caused by ECMO jet flow and cardiac output flow is another factor that influences the perfusion under peripheral ECMO. Seen from Fig. 6, the closer the location of the interface is to the heart, the higher the BAI and the interface reaches the branches of aorta when BAI exceeded 40%. It may be used to explain the growth of the flow rate to upper limbs and brain under peripheral ECMO slows down when the BAI is greater than 40%. In addition, when BAI is set as 40%, a vortex is found under the inlet of left subclavian artery, which has generated regurgitation flow. Hence, the support level of peripheral ECMO should be regulated to avoid the vortex closing to the inlet of upper vessels. The human heart produces pulse by contraction, stroke volume ejection, and then relaxation, with one-way valves. Given this mechanism, a pulsatile circulation is obligatory [36]. 
The non-pulsatile blood flow from the ECMO may generate negative effects on heart and aorta. Short et al. [37] have shown that VA ECMO alters pulsatile blood flow patterns and cerebral autoregulation in animal models of VA ECMO for the effect on endothelial reactivity. HI is an index evaluating the pulsatility of the flow rate. The higher the HI, the stronger the pulsatility. HI has negative correlation with BAI (p < 0.05, r < 0). Wall shear stress has relationship with vascular remodeling, which is an implication on atherosclerosis, coronary stents, VAD and ECMO [38]. Mean and maximum values of WSS together with WSS amplitude are major determinants of endothelial pathology [39, 40] and intimal disease [41]. Adel et al. [42] indicated that arterial-level shear stress (> 1.5 pa) induces endothelial quiescence and an atheroprotective gene expression profile, while low shear stress (< 0.4 pa), which is prevalent at atherosclerosis-prone sites, stimulates an atherogenic phenotype. Seen from Table 3, our results shows that under the average WSS at region 2 during one cardiac cycle under 0% of BAI was all higher than 1.5 pa. That is without ECMO, the WSS was in the safe range for endothelial cell. For peripheral ECMO, the average WSS of region 1 is lower than that without ECMO while the average WSS of region 2 is much higher. And the average WSS has negative correlation with BAI (p < 0.05, r < 0). For peripheral ECMO, the OSI of the aorta arch and access of femoral artery is higher than normal state. And it shows that the areas with high values of OSI are usually located in the regions where wall shear stress is low. It is constant with other studies [43]. With the BAI decrease, the OSI in the inner aorta decrease while the WSS increase. The inlet flow of peripheral ECMO canula causes high OSI compared with 0% of BAI. Areas of high OSI are predisposed to endothelial dysfunction and atherogenesis. Flow-imaging techniques such as phase contrast magnetic resonance imaging is performed to produce flow fields of blood. The state of change in swirling blood flow within cardiac chambers, flow information by overlaying velocity fields, and to quantify it for clinical analysis was studied by Wong [44, 45]. It was establish a framework to produce flow information and set of reference data to compare with unusual flow patterns due to cardiac abnormalities. In addition, Du propose a regression segmentation framework (Bi-DBN) to automatically segment bi-ventricle by establishing a boundary regression model that implicates the nonlinear mapping relationship between cardiac MR images and desired object boundaries [46]. Zhang propose meshfree particle computational method for cardiac image analysis with the energy minimization formulations to solve the fundamental problem about the optimal mathematical description in cardiac image analysis on a digital computer [47]. In this paper, computational fluid method was used to research the hemodynamic effects of perfusion level of peripheral ECMO. In the future, we will try to use bi-ventricle segmentation and meshfree particle computational method in my research. Combine the flow-imaging techniques and computational fluid method to measurement and analysis the flow will be used in this field. The idealistic geometric can capture most of important characteristic of the problem. For computational fluid dynamic, idealistic geometric is also simple and easier to implement [48, 49]. 
In this work, the ideal 3-dimensional geometry model is established, consisting the ascending aorta, the innominate artery (IA), left common carotid artery (LCCA), left subclavian artery (LSA), left femoral artery (LFA), right femoral artery (RFA). Hence, the accuracy of results may be limited by this choice. However, this work is focused on the common relationship between perfusion level and cardiovascular system, rather than the hemodynamic effect on specific patients. Hence, these results also could reveal the mechanisms on hemodynamic status under peripheral ECMO support in some degree. In the future, the MRI data will be used to obtain motion of coronary wall and the fluid structure interaction method (FSI) will be used to study the hemodynamic states of coronary artery. Cardiovascular disease is still the leading cause of death over the world. There are significant challenges including the real-time monitoring of physiological states, imaging technologies and personalized predication [50]. According to literatures, the hemodynamic states have strong effects on function and structure of cardiovascular system, such as the auto-regulation system, the function of aortic valve and the brain perfusion. These effects are very important for the long-term prognosis outcome of patients. Then, computer modeling supplies an opportunity for parameters of hemodynamic states to provide a quantitative assessment [51]. This work is mainly focused on the hemodynamic effect caused by different support level, hence the physiological effects was not studied in this work. In the future, other study on the physiological effects of different support level of ECMO on cardiovascular structure and function will be conducted. Moreover, the results is derived from numerical study (CFD). Although the CFD method could reveal many kinds of very useful information, the PIV method is still needed to be conducted to verify their accuracy and strengthen these results. Hence, the PIV study on the hemodynamic change under different support level of ECMO at aorta and femoral artery will be conducted. Peripheral ECMO is an effective cardiopulmonary support in clinical. In this paper, the effects of the perfusion level on hemodynamics of peripheral ECMO was studied. The geometric model of cardiovascular system with peripheral ECMO was established. The blood assist index was used to classify the perfusion level of the ECMO. The flow pattern from the aorta to the femoral artery and their branches, blood flow rate from aorta to brain and limbs, flow interface, harmonic index of blood flow, wall shear stress and oscillatory shear index were chosen to evaluate the hemodynamic effects of peripheral ECMO. The results revealed the mechanisms on hemodynamic status under peripheral ECMO support in some degree, its further research will be thorough and extensive. Marasco SF, Esmore DS, Negri J, et al. Early institution of mechanical support improves outcomes in primary cardiac allograft failure. J Heart Lung Transplant. 2005;24(12):2037–42. Francesco R, Maria CA, Domenico S, et al. Percutaneous assist devices in acute myocardial infarction with cardiogenic shock: review, meta-analysis. World J Cardiol. 2016;8(1):98–111. Sebastian N, Karl W. IABP plus ECMO—is one and one more than two? J Thorac Dis. 2017;9(4):961–4. Bartlett RH, Roloff DW, Custer JR, et al. Extracorporeal life support: the University of Michigan experience. JAMA. 2000;283(7):904–8. Marasco SF, Lukas G, McDonald M, et al. 
Review of ECMO (extra corporeal membrane oxygenation) support in critically ill adult patients. Heart Lung Circ. 2008;17(4):41–7. Madershahian N, Nagib R, Wippermann J, et al. A simple technique of distal limb perfusion during prolonged femoro-femoral cannulation. J Cardiac Surg. 2006;21(2):168–9. Hung M, Vuylsteke A, Valchanov K. Extracorporeal membrane oxygenation: coming to an ICU near you. J Intensive Care Soc. 2012;13:31–8. Klein MD, Andrews AF, Wesley JR, et al. Venovenous perfusion in ECMO for newborn respiratory insufficiency. A clinical comparison with venoarterial perfusion. Ann Surg. 1985;201(4):520. Cheng R, Hachamovitch R, Kittleson M, et al. Complications of extracorporeal membrane oxygenation for treatment of cardiogenic shock and cardiac arrest: a meta-analysis of 1866 adult patients. Ann Thorac Surg. 2014;97(2):610–6. Bisdas T, Beutel G, Warnecke G, et al. Vascular complications in patients undergoing femoral cannulation for extracorporeal membrane oxygenation support. Ann Thorac Surg. 2011;92(2):626–31. Auzinger G, Best T, Vercueil A, et al. Computed tomographic imaging in peripheral VA-ECMO: where has all the contrast gone? J Cardiothorac Vasc Anesth. 2014;28(5):1307–9. Bahekar A, Singh M, Singh S, et al. Cardiovascular outcomes using intra-aortic balloon pump in high-risk acute myocardial infarction with or without cardiogenic shock:a meta-analysis. J Cardiovasc Pharmacol Ther. 2012;17(1):44–56. Vohra HA, Dimitri WR. Elective intraaortic balloon counterpulsation in high-risk off-pump coronary artery bypass grafting. J Card Surg. 2006;2006(21):1–5. Lafçı G, Budak AB, Yener AÜ, et al. Use of extracorporeal membrane oxygenation in adults. Heart Lung Circ. 2014;23(1):10–23. Hines MH. ECMO and congenital heart disease. Semin Perinatol. 2005;29(1):34–9. Gander JW, Fisher JC, Reichstein AR, et al. Limb ischemia after common femoral artery cannulation for venoarterial extracorporeal membrane oxygenation: an unresolved problem. J Pediatr Surg. 2010;45(11):2136–40. Chamogeorgakis T, Lima B, Shafii AE, et al. Outcomes of axillary artery side graft cannulation for extracorporeal membrane oxygenation. J Thorac Cardiovasc Surg. 2013;145(4):1088–92. Rais Bahrami K, Van Meurs KP. ECMO for neonatal respiratory failure. Semin Perinatol. 2005;29(1):15–23. Avrahami I, Dilmoney B, Azuri A, et al. Investigation of risks for cerebral embolism associated with the hemodynamics of cardiopulmonary bypass cannula: a numerical model. Artif Organs. 2013;37(10):857–65. Avrahami I, Dilmoney B, Hirshorn O, et al. Numerical investigation of a novel aortic cannula aimed at reducing cerebral embolism during cardiovascular bypass surgery. J Biomech. 2013;46(2):354–61. Menon PG, Teslovich N, Chen CY, et al. Characterization of neonatal aortic cannula jet flow regimes for improved cardiopulmonary bypass. J Biomech. 2013;46(2):362–72. Gao B, Chang Y, Xuan Y, et al. The hemodynamic effect of the support mode for the intra-aorta pump on the cardiovascular system. Artif Organs. 2013;37(2):157–65. Gu K, Zhang Y, Gao B, et al. Hemodynamic differences between central ECMO and peripheral ECMO: a primary CFD Study. Med Sci Monit. 2016;22:717–26. Xuan Y, Chang Y, Gu K, et al. Hemodynamic simulation study of a novel intra-aorta left ventricular assist device. ASAIO J. 2012;58(5):462–9. Xuan YJ, Chang Y, Gao B, et al. Effect of continuous arterial blood flow of intra-aorta pump on the aorta: a computational study. Appl Mech Mater. 2013;275:672–6. Assmann A, Benim AC, Gül F, et al. 
Pulsatile extracorporeal circulation during on-pump cardiac surgery enhances aortic wall shear stress. J Biomech. 2012;45(1):156–63. Gu K, Chang Y, Gao B, et al. Lumped parameter model for heart failure with novel regulating mechanisms of peripheral resistance and vascular compliance. ASAIO J. 2012;58(3):223–31. Gu K, Gao B, Chang Y, et al. Research on lumped parameter model based on intra-aorta pump. J Med Biomech. 2011;4:020. Gao B, Gu KY, Zeng Y, Liu JY, Chang Y. A blood assist index control by intraaorta pump: a control strategy for ventricular recovery. ASAIO J. 2011;57(5):358–62. Slottosch Ingo, Liakopoulos Oliver, Kuhn Elmar, et al. Outcomes after peripheral extracorporeal membrane oxygenation therapy for postcardiotomy cardiogenic shock: a single-center experience. J Surg Res. 2013;181(2):e47–55. Huang SC, Yu HY, Ko WJ, et al. Pressure criterion for placement of distal perfusion catheter to prevent limb ischemia during adult extracorporeal life support. J Thorac Cardiovasc Surg. 2004;128:776–7. Ganslmeier P, Philipp A, Rupprecht L, et al. Percutaneous cannulation for extracorporeal life support. Thorac Cardiovasc Surg. 2011;59:103–7. Yoda Masataka, Hata Mitsumasa, Sezai Akira, et al. A case report of central extracorporeal membrane oxygenation after implantation of a left ventricular assist system: femoral vein and left atrium cannulation for ECMO. Ann Thorac Cardiovasc Surg. 2009;15(6):408–11. Lafç G, Budak AB, Yener AÜ, Cicek OF. Use of extracorporeal membrane oxygenation in adults heart. Lung Circ. 2014;23:10–23. Saito S, Nishinaka T, Westaby S. Hemodynamics of chronic nonpulsatile flow: implications for LVAD development. Commun Numer Methods Eng. 2009;25:1097–106. Short BL, Walker LK, Bender KS, et al. Impairment of cerebral autoregulation during extracorporeal membrane oxygenation in newbornlambs. Pediatr Res. 1993;33:289–94. Wong KK, Wang D, Ko JK, et al. Computational medical imaging and hemodynamics framework for functional analysis and assessment of cardiovascular structures. Biomed Eng Online. 2017;16(1):35. Qiu Y, Tarbell JM. Numerical simulation of pulsatile flow in a compliant curved tube model of a coronary artery. J Biomech Eng. 2000;122:77–85. Fung YC. Biomechanics: circulation. 2nd ed. New York: Springer; 1996. Zarins CK, Giddens DP, Bharadvaj BK, et al. Carotid bifurcation atherosclerosis. Quantitative correlation of plaque localization with flow velocity profiles and wall shear stress. Circ Res. 1983;53(4):502–14. Malek AM, Alper SL, Izumo S. Hemodynamic shear stress and its role in atherosclerosis. JAMA. 1999;282:2035–42. Nordgaard H, Swillens A, Nordhaug D, et al. Impact of competitive flow on wall shear stress in coronary surgery: computational fluid dynamics of a LIMA–LAD model. Cardiovasc Res. 2010;88(3):512–9. Wong KKL, Kelso RM, Worthley SG, et al. Cardiac flow analysis applied to phase contrast magnetic resonance imaging of the heart. Ann Biomed Eng. 2009;37(8):1495–515. Wong KKL, Kelso RM, Worthley SG, et al. Medical imaging and processing methods for cardiac flow reconstruction. J Mech Med Biol. 2009;9(1):1–20. Du X, Zhang W, Zhang H, et al. Deep regression segmentation for cardiac bi-ventricle MR images. IEEE Access. 2018;6(99):3828–38. Zhang H, Gao Z, Xu L, et al. A meshfree representation for cardiac medical image computing. IEEE J Transl Eng Health Med. 2018;99:1–12. Wong KKL, Thavornpattanapong P, Cheung SCP, et al. Biomechanical investigation of pulsatile flow in a three-dimensional atherosclerotic carotid bifurcation model. J Mech Med Biol. 
2013;13(1):1–21. Liu G, Wu J, Huang W, et al. Numerical simulation of flow in curved coronary arteries with progressive amounts of stenosis using fluid-structure interaction modelling. J Med Imaging Health Inform. 2014;4(4):605–11. Zhang YT, Zheng YL, Lin WH, et al. Challenges and opportunities in cardiovascular health informatics. IEEE Trans Biomed Eng. 2013;60(3):633–42. Lin WH, Zhang H, Zhang YT, et al. Investigation on cardiovascular risk prediction using physiological parameters. Comput Math Methods Med. 2013;343:180–4.
FW and YC participated in the design and overall investigation. KG and BG participated in the computational modeling and performed the statistical analysis. KG, BG and ZZ conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript. This work was partly sponsored by the National Natural Science Foundation of China (Grant No. 11572014). We thank Ya Zhang for the 3D construction model. Data and materials presented in this paper can be shared upon request. Publication of this article was funded by the National Natural Science Foundation of China (Grant No. 11572014).
Affiliations: Peking University Third Hospital, 49 North Garden Rd., Haidian District, Beijing, 100191, China (Kaiyun Gu, Zhe Zhang & Feng Wan); Peking University Health Science Center, Xueyuan Rd, Haidian District, Beijing, 100083, China; College of Life Science & Bio-Engineering, Beijing University of Technology, Beijing, 100124, China (Bin Gao & Yu Chang). Correspondence to Zhe Zhang or Yu Chang.
Gu, K., Zhang, Z., Gao, B. et al. Hemodynamic effects of perfusion level of peripheral ECMO on cardiovascular system. BioMed Eng OnLine 17, 59 (2018). doi:10.1186/s12938-018-0493-5. Keywords: peripheral ECMO; blood assist index; oscillatory shear index.
CommonCrawl
Electronic Communications in Probability (Electron. Commun. Probab.), Volume 18 (2013), paper no. 27, 13 pp.
Random pure quantum states via unitary Brownian motion
Ion Nechita and Clément Pellegrini
We introduce a new family of probability distributions on the set of pure states of a finite dimensional quantum system. Without any a priori assumptions, the most natural measure on the set of pure states is the uniform (or Haar) measure. Our family of measures is indexed by a time parameter $t$ and interpolates between a deterministic measure ($t=0$) and the uniform measure ($t=\infty$). The measures are constructed using a Brownian motion on the unitary group $\mathcal U_N$. Remarkably, these measures have a $\mathcal U_{N-1}$ invariance, whereas the usual uniform measure has a $\mathcal U_N$ invariance. We compute several averages with respect to these measures using as a tool the Laplace transform of the coordinates.
First available in Project Euclid: 7 June 2016. https://projecteuclid.org/euclid.ecp/1465315566 doi:10.1214/ECP.v18-2426
MSC: Primary 39A50 (Stochastic difference equations); Secondary 81P45 (Quantum information, communication, networks [See also 94A15, 94A17])
Keywords: quantum states; unitary Brownian motion
Citation: Nechita, Ion; Pellegrini, Clément. Random pure quantum states via unitary Brownian motion. Electron. Commun. Probab. 18 (2013), paper no. 27, 13 pp. doi:10.1214/ECP.v18-2426.
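The construction summarized in the abstract above invites a small numerical illustration. The sketch below (Python/NumPy) is not taken from the paper; it assumes a standard Euler-type discretization of Brownian motion on the unitary group, multiplying independent increments $\exp(i\sqrt{dt}\,H_k)$ with Hermitian Gaussian $H_k$, and applies the resulting unitary to a fixed basis vector, so that $t=0$ gives a deterministic state while large $t$ approaches a Haar-distributed pure state. The increment normalization and all function names are assumptions made for illustration only.

```python
import numpy as np

def random_pure_state_ubm(N, t, steps=200, seed=0):
    """Sample psi_t = U_t e_1, where U_t approximates a Brownian motion on
    the unitary group U(N) started at the identity (Euler-type scheme:
    U_{t+dt} = exp(i*sqrt(dt)*H) U_t with H a Hermitian Gaussian matrix)."""
    rng = np.random.default_rng(seed)
    dt = t / steps
    psi = np.zeros(N, dtype=complex)
    psi[0] = 1.0                      # deterministic state at t = 0
    for _ in range(steps):
        a = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        h = (a + a.conj().T) / 2      # Hermitian increment generator
        w, v = np.linalg.eigh(h)
        u_inc = (v * np.exp(1j * np.sqrt(dt) * w)) @ v.conj().T
        psi = u_inc @ psi             # apply the unitary increment
    return psi

# Small t stays close to e_1; large t looks Haar-distributed.
print(np.abs(random_pure_state_ubm(4, t=0.01))**2)
print(np.abs(random_pure_state_ubm(4, t=50.0))**2)
```

As a sanity check, the large-$t$ output can be compared against direct Haar sampling, i.e., normalizing a vector of i.i.d. complex Gaussians.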
CommonCrawl
Farberov, S. ; Meidan, R. Fibroblast growth factor-2 and transforming growth factor-beta1 oppositely regulate miR-221 that targets thrombospondin-1 in bovine luteal endothelial cells. Biology of Reproduction 2017, 98, 366 - 375. Publisher's VersionAbstract Thrombospondin-1 (THBS1) affects corpus luteum (CL) regression. Highly induced during luteolysis, it acts as a natural anti-angiogenic, proapoptotic compound. THBS1 expression is regulated in bovine luteal endothelial cells (LECs) by fibroblast growth factor-2 (FGF2) and transforming growth factor-beta1 (TGFB1) acting in an opposite manner. Here we sought to identify specific microRNAs (miRNAs) targeting THBS1 and investigate their possible involvement in FGF2 and TGFB1-mediated THBS1 expression. Several miRNAs predicted to target THBS1 mRNA (miR-1, miR-18a, miR-144, miR-194, and miR-221) were experimentally tested. Of these, miR-221 was shown to efficiently target THBS1 expression and function in LECs. We found that this miRNA is highly expressed in luteal cells and in mid-cycle CL. Consistent with the inhibition of THBS1 function, miR-221 also reduced Serpin Family E Member 1 [SERPINE1] in LECs and promoted angiogenic characteristics of LECs. Plasminogen activator inhibitor-1 (PAI-1), the gene product of SERPINE1, inhibited cell adhesion, suggesting that PAI-1, like THBS1, has anti-angiogenic properties. Importantly, FGF2, which negatively regulates THBS1, elevates miR-221. Conversely, TGFB1 that stimulates THBS1, significantly reduces miR-221. Furthermore, FGF2 enhances the suppression of THBS1 caused by miR-221 mimic, and prevents the increase in THBS1 induced by miR-221 inhibitor. In contrast, TGFB1 reverses the inhibitory effect of miR-221 mimic on THBS1, and enhances the upregulation of THBS1 induced by miR-221 inhibitor. These data support the contention that FGF2 and TGFB1 modulate THBS1 via miR-221. These in vitro data propose that dynamic regulation of miR-221 throughout the cycle, affecting THBS1 and SERPINE1, can modulate vascular function in the CL. Ochoa, J. C. ; Peñagaricano, F. ; Baez, G. M. ; Melo, L. F. ; Motta, J. C. L. ; Garcia-Guerra, A. ; Meidan, R. ; Pinheiro Ferreira, J. C. ; Sartori, R. ; Wiltbank, M. C. Mechanisms for rescue of corpus luteum during pregnancy: gene expression in bovine corpus luteum following intrauterine pulses of prostaglandins E1 and F2α†. Biology of Reproductionbiolreprod 2017, 98, 465 - 479. Publisher's VersionAbstract In ruminants, uterine pulses of prostaglandin (PG) F2α characterize luteolysis, while increased PGE2/PGE1 distinguish early pregnancy. This study evaluated intrauterine (IU) infusions of PGF2α and PGE1 pulses on corpus luteum (CL) function and gene expression. Cows on day 10 of estrous cycle received 4 IU infusions (every 6 h; n = 5/treatment) of saline, PGE1 (2 mg PGE1), PGF2α (0.25 mg PGF2α), or PGE1 + PGF2α. A luteal biopsy was collected at 30 min after third infusion for determination of gene expression by RNA-Seq. As expected, IU pulses of PGF2α decreased (P < 0.01) P4 luteal volume. However, there were no differences in circulating P4 or luteal volume between saline, PGE1, and PGE1 + PGF2α, indicating inhibition of PGF2α-induced luteolysis by IU pulses of PGE1. After third pulse of PGF2α, luteal expression of 955 genes were altered (false discovery rate [FDR] < 0.01), representing both typical and novel luteolytic transcriptomic changes. 
Surprisingly, after third pulse of PGE1 or PGE1 + PGF2α, there were no significant changes in luteal gene expression (FDR > 0.10) compared to saline cows. Increased circulating concentrations of the metabolite of PGF2α (PGFM; after PGF2α and PGE1 + PGF2α) and the metabolite PGE (PGEM; after PGE1 and PGE1 + PGF2α) demonstrated that PGF2α and PGE1 are entering bloodstream after IU infusions. Thus, IU pulses of PGF2α and PGE1 allow determination of changes in luteal gene expression that could be relevant to understanding luteolysis and pregnancy. Unexpectedly, by third pulse of PGE1, there is complete blockade of either PGF2α transport to the CL or PGF2α action by PGE1 resulting in complete inhibition of transcriptomic changes following IU PGF2α pulses. Levavi-Sivan, B. ; G, D. ; A, H. Vitellogenin Level in the Plasma of Russian Sturgeon (Acipenser gueldenstaedtii) Northern Israel. Journal of Marine Science: Research & Development 2017, 7 244. Publisher's VersionAbstract In the present study, we examined the vitellogenin (Vg) level of Russian sturgeon maintained in a relatively constant aquaculture at a water temperature of 18-22°C during growth and maturation. An increase in Vg in the blood plasma from oocytes was found in the yellow oocytes stage to the gray oocytes stage. However, no Vg was found in the pre-vitellogenic stage. Based on the present study and previous studies on hormone control reproduction and growth, we proposed a quality model that correlated between egg development and the hormones involved in controlling vitelogenesis (VTL). Aizen, J. ; Hollander-Cohen, L. ; Shpilman, M. ; Levavi-Sivan, B. Biologically active recombinant carp LH as a spawning-inducing agent for carp. Journal of Endocrinology 2017, 232. Publisher's Version Sanchís-Benlloch, P. J. ; Nocillado, J. ; Ladisa, C. ; Aizen, J. ; Miller, A. ; Shpilman, M. ; Levavi-Sivan, B. ; Ventura, T. ; Elizur, A. In-vitro and in-vivo biological activity of recombinant yellowtail kingfish (Seriola lalandi) follicle stimulating hormone. General and Comparative Endocrinology 2017, 241, 41 - 49. Publisher's VersionAbstract Biologically active recombinant yellowtail kingfish follicle stimulating hormone (rytkFsh) was produced in yeast Pichia pastoris and its biological activity was demonstrated by both in-vitro and in-vivo bioassays. Incubation of ovarian and testicular fragments with the recombinant hormone stimulated E2 and 11-KT secretion, respectively. In-vivo trial in immature female YTK resulted in a significant increase of plasma E2 levels and development of oocytes. In males at the early stages of puberty, advancement of spermatogenesis was observed, however plasma 11-KT levels were reduced when administered with rytkFsh. Spicer, O. S. ; Zmora, N. ; Wong, T. - T. ; Golan, M. ; Levavi-Sivan, B. ; Gothilf, Y. ; Zohar, Y. The gonadotropin-inhibitory hormone (Lpxrfa) system's regulation of reproduction in the brain–pituitary axis of the zebrafish (Danio rerio)†. Biology of Reproduction 2017, 96, 1031-1042. Publisher's VersionAbstract Gonadotropin-inhibitory hormone (GNIH) was discovered in quail with the ability to reduce gonadotropin expression/secretion in the pituitary. There have been few studies on GNIH orthologs in teleosts (LPXRFamide (Lpxrfa) peptides), which have provided inconsistent results. Therefore, the goal of this study was to determine the roles and modes of action by which Lpxrfa exerts its functions in the brain–pituitary axis of zebrafish (Danio rerio). 
We localized Lpxrfa soma to the ventral hypothalamus, with fibers extending throughout the brain and to the pituitary. In the preoptic area, Lpxrfa fibers interact with gonadotropin-releasing hormone 3 (Gnrh3) soma. In pituitary explants, zebrafish peptide Lpxrfa-3 downregulated luteinizing hormone beta subunit and common alpha subunit expression. In addition, Lpxrfa-3 reduced gnrh3 expression in brain slices, offering another pathway for Lpxrfa to exert its effects on reproduction. Receptor activation studies, in a heterologous cell-based system, revealed that all three zebrafish Lpxrfa peptides activate Lpxrf-R2 and Lpxrf-R3 via the PKA/cAMP pathway. Receptor activation studies demonstrated that, in addition to activating Lpxrf receptors, zebrafish Lpxrfa-2 and Lpxrfa-3 antagonize Kisspeptin-2 (Kiss2) activation of Kisspeptin receptor-1a (Kiss1ra). The fact that kiss1ra-expressing neurons in the preoptic area are innervated by Lpxrfa-ir fibers suggests an additional pathway for Lpxrfa action. Therefore, our results suggest that Lpxrfa may act as a reproductive inhibitory neuropeptide in the zebrafish that interacts with Gnrh3 neurons in the brain and with gonadotropes in the pituitary, while also potentially utilizing the Kiss2/Kiss1ra pathway. Simon, Y. ; Levavi-Sivan, B. ; Cahaner, A. ; Hulata, G. ; Antler, A. ; Rozenfeld, L. ; Halachmi, I. A behavioural sensor for fish stress. Aquacultural Engineering 2017, 77, 107 - 111. Publisher's VersionAbstract Due to water turbidity, fish stress might be difficult to observe. Evaluation of fish stress by blood sampling requires removing a fish from the water, which is in itself a stressful event. Therefore, we designed and built a sensor to detect fish behaviour that reflects stress. The electronic sensor detected early signs of fish stress by scoring the fish's inactivity. LEDs and detectors are embedded on a steel wand that is held underwater by an operator. In this preliminary (feasibility) study, the new sensor was validated for Tilapia (Cichlidae) and Hybrid Striped Bass (Morone). We induced stressful situations in the fish tanks by manipulating oxygen and temperature levels. Results Lowering the temperature and oxygen levels both significantly increased the average number of signals identified by the sensor, which indicate stress. The effect of reducing water temperature from 24°C to 15°C was three times stronger than was the effect of lowering the oxygen saturation level from 85% to 50%. The difference in the number of signals between the good and stressful conditions was statistically significant, amounting to approximately eight sensor signals, 10.57 compared to 2.49 respectively. Lowering the temperature increased the mean number of signals by 5.85 and 6.06 at 85% and 50% oxygen saturation respectively, whereas lowering oxygen levels increased the mean number of signals by 2.02 and 2.23 at 24°C and 15°C, respectively. The results indicate that the stress status of cultured fish can be evaluated using the proposed behavioural sensor. The new sensor may provide an earlier indication of a problem in a fish tank or pond than was heretofore possible. This early warning can enable the fish farmer to take action before many fish are harmed. Zmora, N. ; Wong, T. - T. ; Stubblefield, J. ; Levavi-Sivan, B. ; Zohar, Y. Neurokinin B regulates reproduction via inhibition of kisspeptin in a teleost, the striped bass. Journal of Endocrinology 2017, 233. Publisher's Version Pines, M. ; Levi, O. ; Genin, O. ; Lavy, A. ; Angelini, C. ; Allamand, V. 
; Halevy, O. Elevated Expression of Moesin in Muscular Dystrophies. The American Journal of Pathology 2017, 187, 654 - 664. Publisher's VersionAbstract Fibrosis is the main complication of muscular dystrophies. We identified moesin, a member of the ezrin-radixin-moesin family, in dystrophic muscles of mice representing Duchenne and congenital muscular dystrophies (DMD and CMD, respectively) and dysferlinopathy, but not in the wild type. High levels of moesin were also observed in muscle biopsy specimens from DMD, Ullrich CMD, and merosin-deficient CMD patients, all of which present high levels of fibrosis. The myofibroblasts, responsible for extracellular matrix protein synthesis, and the macrophages infiltrating the dystrophic muscles were the source of moesin. Moesin-positive cells were embedded within the fibrotic areas between the myofibers adjacent to the collagen type I fibers. Radixin was also synthesized by the myofibroblasts, whereas ezrin colocalized with the myofiber membranes. In animal models and patients' muscles, part of the moesin was in its active phosphorylated form. Inhibition of fibrosis by halofuginone, an antifibrotic agent, resulted in a major decrease in moesin levels in the muscles of DMD and CMD mice. In summary, the results of this study may pave the way for exploiting moesin as a novel target for intervention in MDs, and as part of a battery of biomarkers to evaluate treatment success in preclinical studies and clinical trials. Piestun, Y. ; Patael, T. ; Yahav, S. ; Velleman, S. G. ; Halevy, O. Early posthatch thermal stress affects breast muscle development and satellite cell growth and characteristics in broilers. Poultry Science 2017, 96, 2877 - 2888. Publisher's VersionAbstract ABSTRACT Heat or cold stress, can disrupt well-being and physiological responses in birds. This study aimed to elucidate the effects of continuous heat exposure in the first 2 wk of age on muscle development in broilers, with an emphasis on the pectoralis muscle satellite cell population. Chicks were reared for 13 d under either commercial conditions or a temperature regime that was 5°C higher. Body and muscle weights, as well as absolute muscle growth were lower in heat-exposed chicks from d 6 onward. The number of satellite cells derived from the experimental chicks was higher in the heat-treated group on d 3 but lower on d 8 and 13 compared to controls. This was reflected in a lower number of myonuclei expressing proliferating nuclear cell antigen in cross sections of pectoralis major muscle sampled on d 8. However, a TUNEL assay revealed similar cell survival in both groups. Mean myofiber diameter and distribution were lower in muscle sections sampled on d 8 and 13 in heat-treated versus control group, suggesting that the lower muscle growth is due to changes in muscle hypertrophy. Oil-Red O staining showed a higher number of satellite cells with lipids in the heat-treated compared to the control group on these days. Moreover, lipid deposition was observed in pectoralis muscle cross sections derived from the heat-treated chicks on d 13, whereas the controls barely exhibited any lipid staining. The gene and protein expression levels of CCAAT/enhancer binding protein β in pectoralis muscle from the heat-treated group were significantly higher on d 13 than in controls, while myogenin levels were similar. 
The results suggest high sensitivity of muscle progenitor cells in the early posthatch period at a time when they are highly active, to chronic heat exposure, leading to impaired myogenicity of the satellite cells and increased fat deposition. Wein, Y. ; Geva, Z. ; Bar-Shira, E. ; Friedman, A. Transport-related stress and its resolution in turkey pullets: activation of a pro-inflammatory response in peripheral blood leukocytes. Poultry Science 2017, 96, 2601 - 2613. Publisher's VersionAbstract The transportation process is one of the most stressful practices in poultry and livestock management. Extensive knowledge is available on the impact of transport on stress and animal welfare; however, little is known on the impact of transport on the physiology of turkey pullets, their welfare and health, and even less on the process of homeostatic recovery in the post-transport new environment. The main focus of this manuscript was to focus on trauma, stress, and recovery following transport of turkey pullets from nurseries to pullet farms. Specifically, we determined the physiological consequences of transport, the temporal restoration of homeostasis and its effects on immune system function. We hypothesized that stress signaling by stress hormones would directly activate circulating turkey blood leukocytes (TBL), thus inducing a pro-inflammatory response directed towards tissue repair and recovery. Extensive blood analyses prior to transit and during the collecting, transit, and post-transit stages revealed extensive stress (elevated heat shock protein 70) and blunt-force trauma (internal bleeding and muscle damage as well as limb fractures). TBL were shown to increase mRNA expression of cortisol and adrenergic receptors during transit, thus indicating a possible direct response to circulating stress hormones. Consequently, TBL were shown to increase mRNA expression of pro-inflammatory cytokines, as well as that of serum inflammatory proteins (lysozyme and transferrin) partaking in reducing oxygen radicals as demonstrated by consumption of these proteins. The flare-up due to transit related stress diminished with time until 10 d post-transit, a time at which most parameters returned to resting levels. Though general and vaccine-specific antibody levels were not altered by transport-related stress, the physical and physiological injury caused during transport may explain the susceptibility of turkey pullets to opportunist pathogens in the immediate post-transit period. Tadmor-Levi, R. ; Asoulin, E. ; Hulata, G. ; David, L. Studying the Genetics of Resistance to CyHV-3 Disease Using Introgression from Feral to Cultured Common Carp Strains. Frontiers in Genetics 2017, 8 24. Publisher's VersionAbstract Sustainability and further development of aquaculture production are constantly challenged by outbreaks of fish diseases, which are difficult to prevent or control. Developing fish strains that are genetically resistant to a disease is a cost-effective and a sustainable solution to address this challenge. To do so, heritable genetic variation in disease resistance should be identified and combined together with other desirable production traits. Aquaculture of common carp has suffered substantial losses from the infectious disease caused by the cyprinid herpes virus type 3 (CyHV-3) virus and the global spread of outbreaks indicates that many cultured strains are susceptible. 
In this research, CyHV-3 resistance from the feral strain "Amur Sassan" was successfully introgressed into two susceptible cultured strains up to the first backcross (BC1) generation. Variation in resistance of families from F1 and BC1 generations was significantly greater compared to that among families of any of the susceptible parental lines, a good starting point for a family selection program. Considerable additive genetic variation was found for CyHV-3 resistance. This phenotype was transferable between generations with contributions to resistance from both the resistant feral and the susceptible cultured strains. Reduced scale coverage (mirror phenotype) is desirable and common in cultured strains, but so far, cultured mirror carp strains were found to be susceptible. Here, using BC1 families ranging from susceptible to resistant, no differences in resistance levels between fully scaled and mirror full-sib groups were found, indicating that CyHV-3 resistance was successfully combined with the desirable mirror phenotype. In addition, the CyHV-3 viral load in tissues throughout the infection of susceptible and resistant fish was followed. Although resistant fish get infected, viral loads in tissues of these fish are significantly lesser than in those of susceptible fish, allowing them to survive the disease. Taken together, in this study we have laid the foundation for breeding CyHV-3-resistant strains and started to address the mechanisms underlying the phenotypic differences in resistance to this disease. Yasur-Landau, D. ; Jaffe, C. L. ; Doron-Faigenboim, A. ; David, L. ; Baneth, G. Induction of allopurinol resistance in Leishmania infantum isolated from dogs. PLOS Neglected Tropical Diseases 2017, 11, 1-10. Publisher's VersionAbstract Author summary Visceral leishmaniasis caused by the parasite Leishmania infantum is a neglected tropical disease transmitted from animal hosts to humans by sand fly bites. This potentially fatal disease affects thousands of people annually and threatens millions who live in disease risk areas. Domestic dogs are considered as the main reservoir of this parasite which can also cause a severe chronic canine disease. Allopurinol is the main drug used for long term treatment of this disease but it often does not eliminate infection in dogs. We have recently demonstrated that allopurinol resistant parasites can be isolated from naturally infected dogs that have developed clinical recurrence of disease during allopurinol treatment. In this study we aimed to see if resistance can be induced in susceptible parasite strains isolated from sick dogs by growing them in increasing drug concentrations under laboratory conditions. The changes in allopurinol susceptibility were measured and the impact of drug on parasite growth was monitored over 23 weeks. Induction of resistance was successful producing parasites 20-folds less susceptible to the drug. The pattern of change in drug susceptibility suggests that a genetic change is responsible for the increased resistance which is likely to mimic the formation of resistance in dogs. Petit, J. ; David, L. ; Dirks, R. ; Wiegertjes, G. F. Genomic and transcriptomic approaches to study immunology in cyprinids: What is next?. Developmental & Comparative Immunology 2017, 75, 48 - 62. 
Publisher's VersionAbstract Accelerated by the introduction of Next-Generation Sequencing (NGS), a number of genomes of cyprinid fish species have been drafted, leading to a highly valuable collective resource of comparative genome information on cyprinids (Cyprinidae). In addition, NGS-based transcriptome analyses of different developmental stages, organs, or cell types, increasingly contribute to the understanding of complex physiological processes, including immune responses. Cyprinids are a highly interesting family because they comprise one of the most-diversified families of teleosts and because of their variation in ploidy level, with diploid, triploid, tetraploid, hexaploid and sometimes even octoploid species. The wealth of data obtained from NGS technologies provides both challenges and opportunities for immunological research, which will be discussed here. Correct interpretation of ploidy effects on immune responses requires knowledge of the degree of functional divergence between duplicated genes, which can differ even between closely-related cyprinid fish species. We summarize NGS-based progress in analysing immune responses and discuss the importance of respecting the presence of (multiple) duplicated gene sequences when performing transcriptome analyses for detailed understanding of complex physiological processes. Progressively, advances in NGS technology are providing workable methods to further elucidate the implications of gene duplication events and functional divergence of duplicates genes and proteins involved in immune responses in cyprinids. We conclude with discussing how future applications of NGS technologies and analysis methods could enhance immunological research and understanding. Argov-Argaman, N. ; Mandel, D. ; Lubetzky, R. ; Kedem, M. H. ; Cohen, B. - C. ; Berkovitz, Z. ; Reifen, R. Human milk fatty acids composition is affected by maternal age. The Journal of Maternal-Fetal & Neonatal Medicine 2017, 30, 34-37. Publisher's VersionAbstract AbstractHuman colostrums and transition milk were collected from women under the age of 37 years and women aged 37 years and older. Transition milk of the younger group had lower fat content and 10-fold higher concentrations of omega 6 FA, eicosadecanoic, and arachdonic acids. Gestational age affected the colostrum concentration of total fat and omega 3 and omega 6 FA composition only in the older group. We concluded that age may be a factor in the FA composition of human milk. This should be taken into account when planning diets for pregnant women of different ages. Shefer-Weinberg, D. ; Sasson, S. ; Schwartz, B. ; Argov-Argaman, N. ; Tirosh, O. Deleterious effect of n-3 polyunsaturated fatty acids in non-alcoholic steatohepatitis in the fat-1 mouse model. Clinical Nutrition Experimental 2017, 12, 37 - 49. Publisher's VersionAbstract Summary Non-alcoholic fatty liver disease (NAFLD) represents a spectrum of pathologies, ranging from hepatocellular steatosis to non-alcoholic steatohepatitis (NASH), fibrosis and cirrhosis. It has been suggested that fish oil containing n-3 polyunsaturated fatty acids (n-3 PUFA) induce beneficial effects in NAFLD. However, n-3 PUFA are sensitive to peroxidation that generate free radicals and reactive aldehydes. We aimed at determining whether changing the tissue ratio of n-3 to n-6 PUFA may be beneficial or alternatively harmful to the etiology of NAFLD. The transgenic Fat-1 mouse model was used to determine whether n-3 PUFA positively or negatively affect the development of NAFLD. 
fat-1mice express the fat-1 gene of Caenorhabditis elegans, which encodes an n-3 fatty-acid desaturase that converts n-6 to n-3 fatty acids. Wild-type C57BL/6 mice served as the control group. Both groups of mice were fed methionine and choline deficient (MCD) diet, which induces NASH within 4 weeks. The study shows that NASH developed faster and was more severe in mice from the fat-1 group when compared to control C57BL/6 mice. This was due to enhanced lipid peroxidation of PUFA in the liver of the fat-1 mice as compared to the control group. Results of our mice study suggest that supplementing the diet of individuals who develop or have fatty livers with n-3 PUFA should be carefully considered and if recommended adequate antioxidants should be added to the diet in order to reduce such risk. Chertok, I. R. A. ; Haile, Z. T. ; Eventov-Friedman, S. ; Silanikove, N. ; Argov-Argaman, N. Influence of gestational diabetes mellitus on fatty acid concentrations in human colostrum. Nutrition 2017, 36, 17 - 21. Publisher's VersionAbstract Objective The aim of this study was to examine differences in fatty acid concentrations in colostrum of women with and without gestational diabetes mellitus (GDM). The effect of GDM on fatty acid composition of colostrum is not fully understood, although rates of GDM are increasing globally. Methods A prospective case–control study was conducted of postpartum women with and without GDM. Gas chromatographic analysis was conducted to examine differences in colostral fatty acids of the colostrum samples of 29 women with and 34 without GDM. Results Analyses of the fatty acid composition revealed significantly higher concentrations of four essential ω-6 polyunsaturated fatty acids—γ-linolenic, eicosatrienoic, arachidonic, and docosatetraenoic—in the colostrum of GDM women compared with non-GDM women. Timing of collection influenced saturated medium chain fatty acid and monounsaturated fatty acid levels. Conclusions Differences in concentrations of ω-6 fatty acids but not in dietary linoleic fatty acid or ω-3 fatty acids suggest that altered concentrations are attributed to changes in specific endogenous metabolic pathways. Implications of higher concentrations of ω-6 fatty acids in the colostrum of women with GDM have yet to be determined. Timing of colostrum collection is critical in determining colostral fatty acid and metabolite concentrations. Hadaya, O. ; Landau, S. Y. ; Glasser, T. ; Muklada, H. ; Dvash, L. ; Mesilati-Stahy, R. ; Argov-Argaman, N. Milk composition in Damascus, Mamber and F1 Alpine crossbred goats under grazing or confinement management. Small Ruminant Research 2017, 153, 31 - 40. Publisher's VersionAbstract The interactive effect of breed and feeding management on milk composition was established in local goats (Damascus, Mamber) and their F1 Alpine crossbreeds, half of which grazed daily for 4h in Mediterranean brushland (Pasture – P) and half were fed clover hay (Hay – H) indoors, in addition to concentrate fed individually. Milk composition and fatty acid profile were measured, and individual nutritional composition was estimated by fecal NIRS; DM intake was calculated from the proportion of dietary concentrate. Milk and feces were collected at 65 (pretreatment), 110, 135 and 170 days of lactation. DM intake was lower in the H vs. P group (P<0.0001) in Damascus and Damascus crossbreed (P<0.01), but not in the other breeds. The Alpine crossbreeds yielded 0.6kg more milk (P<0.001) than their local counterparts. 
P group yielded milk that was richer in protein (P<0.01) and fat (P<0.0001), especially in the Damascus breed. Urea concentration in milk was 66% higher in H-group of all breeds throughout the experiment (P<0.001). H goats produced milk richer in medium-chain fatty acids (P<0.001) and monounsaturated fatty acids (P<0.01) than P goats. Omega 6 was higher for P goats with a strong breed×diet interaction effect (P<0.01) in Mamber goats. The P group produced milk that was 20% richer in omega 3 than the H group (P<0.0001). In the P group of Damascus goats, low omega 6/3 ratio was found compared with H group. This study shows that breed and management interact to affect milk composition and fatty acid profile. Therefore both factors and their interaction should be considered when industry pursues means to enrich milk with bioactive, essential lipid components which can turn milk into health promoting commodity. Cohen, B. - C. ; Raz, C. ; Shamay, A. ; Argov-Argaman, N. Lipid Droplet Fusion in Mammary Epithelial Cells is Regulated by Phosphatidylethanolamine Metabolism. J Mammary Gland Biol Neoplasia 2017, 22, 235-249.Abstract Mammary epithelial cells (MEC) secrete fat in the form of milk fat globules (MFG) which are found in milk in diverse sizes. MFG originate from intracellular lipid droplets, and the mechanism underlying their size regulation is still elusive. Two main mechanisms have been suggested to control lipid droplet size. The first is a well-documented pathway, which involves regulation of cellular triglyceride content. The second is the fusion pathway, which is less-documented, especially in mammalian cells, and its importance in the regulation of droplet size is still unclear. Using biochemical and molecular inhibitors, we provide evidence that in MEC, lipid droplet size is determined by fusion, independent of cellular triglyceride content. The extent of fusion is determined by the cell membrane's phospholipid composition. In particular, increasing phosphatidylethanolamine (PE) content enhances fusion between lipid droplets and hence increases lipid droplet size. We further identified the underlying biochemical mechanism that controls this content as the mitochondrial enzyme phosphatidylserine decarboxylase; siRNA knockdown of this enzyme reduced the number of large lipid droplets threefold. Further, inhibition of phosphatidylserine transfer to the mitochondria, where its conversion to PE occurs, diminished the large lipid droplet phenotype in these cells. These results reveal, for the first time to our knowledge in mammalian cells and specifically in mammary epithelium, the missing biochemical link between the metabolism of cellular complex lipids and lipid-droplet fusion, which ultimately defines lipid droplet size. Meidan, R. ; Girsh, E. ; Mamluk, R. ; Levy, N. ; Farberov, S. Luteolysis in Ruminants: Past Concepts, New Insights, and Persisting Challenges. In The Life Cycle of the Corpus Luteum; Meidan, R., Ed. The Life Cycle of the Corpus Luteum; Springer International Publishing: Cham, 2017; pp. 159–182. Publisher's VersionAbstract It is well established that in ruminants, and in other species with estrous cycles, luteal regression is stimulated by the episodic release of prostaglandin F2$\alpha$ (PGF2$\alpha$) from the uterus, which reaches the corpus luteum (CL) through a countercurrent system between the uterine vein and the ovarian artery. 
Because of their luteolytic properties, PGF2$\alpha$ and its analogues are routinely administered to induce CL regression and synchronization of estrus, and as such, it is the basis of protocols for synchronizing ovulation. Luteal regression is defined as the loss of steroidogenic function (functional luteolysis) and the subsequent involution of the CL (structural luteolysis). During luteolysis, the CL undergoes dramatic changes in its steroidogenic capacity, vascularization, immune cell activation, ECM composition, and cell viability. Functional genomics and many other studies during the past 20 years elucidated the mechanism underlying PGF2$\alpha$ actions, substantially revising old concepts. PGF2$\alpha$ acts directly on luteal steroidogenic and endothelial cells, which express PGF2$\alpha$ receptors (PTGFR), or indirectly on immune cells lacking PTGFR, which can be activated by other cells within the CL. Accumulating evidence now indicates that the diverse processes initiated by uterine or exogenous PGF2$\alpha$, ranging from reduction of steroid production to apoptotic cell death, are mediated by locally produced factors. Data summarized here show that PGF2$\alpha$ stimulates luteal steroidogenic and endothelial cells to produce factors such as endothelin-1, angiopoietins, nitric oxide, fibroblast growth factor 2, thrombospondins, transforming growth factor-B1, and plasminogen activator inhibitor-B1, which act sequentially to inhibit progesterone production, angiogenic support, cell survival, and ECM remodeling to accomplish CL regression.
CommonCrawl
Mathematics Educators Stack Exchange
Determining sets to show sufficiency of a condition?
$p \to q$ means (among other things) that $p$ is a sufficient condition for $q$. To show sufficiency, I teach my students to first determine the set for $p$ and the set for $q$, and then to compare their cardinal numbers. If the former has the lower cardinal number, then $p \to q$ is the correct proposition rather than $q \to p$.
p: I am in Tokyo, q: I am in Japan. The set for $p$ contains just a single city, Tokyo, but the set for $q$ contains many cities such as Tokyo, Osaka, Sapporo, etc. As the former set has the smaller cardinal number, "I am in Tokyo" is a sufficient condition for "I am in Japan", i.e., $p\to q$.
p: $x=2$, q: $x^2=4$. The set for $p$ contains just a single element, 2, and the set for $q$ contains 2 elements (2 and -2). Therefore, "$x=2$" is a sufficient condition for "$x^2=4$", i.e., $p\to q$.
Now consider the following: p: I am a vegetarian. q: I don't eat pork. The students are asked to determine the correct implication, whether "$p \to q$" or "$q \to p$".
My attempt: the set for p is {vegetarian}; the set for q is the set of people not eating pork = {vegetarian, Moslem, people who are allergic to pork, etc.}. As the cardinal number of p is lower than that of q, $p\to q$ is the correct implication.
My student's attempt: the set for p is the set of meats a vegetarian doesn't eat = {pork, beef, fish, etc.}; the set for q is {pork}. As the cardinal number of q is lower than that of p, "$q \to p$" is the correct implication.
I realize that my attempt is correct and the student's attempt is wrong. As a teacher, how should I explain their fallacy in determining the sets?
logic teaching set-theory
Money Oriented Programmer
I don't see how this can have an answer, because I don't see how you can determine the 'right' sets in this case. The student has come up with two suitable sets and applied the rule you told them. I think the problem is that you've tried to teach them a method that can't be taught explicitly. It seems to me the only way to determine the sets is to already understand the implication. – Jessica B Jul 18 '16 at 5:45
I must confess that I find what you are trying to do very confusing, and possibly not even correct (but the ambiguities of your explanation prevent me from judging correctness). Also, I think this is making a simple idea more complex than it needs to be. $p \rightarrow q$ means that you can logically go (or "drive" if students like cars) from $p$ to $q$. It also means that $p$ is stronger (has more information, etc.) than $q$, and that $q$ is weaker (has less information, etc.) than $p$. Of course, "stronger" and "weaker" here are used in their non-strict sense. – Dave L Renfro Jul 18 '16 at 20:45
One problem is that this approach seems to assume that either p => q or q => p, but of course that is typically not true. The most that can be said is that if p, q are predicates which define finite sets and if it is known that either p => q or q => p is true, then looking at the cardinalities of the corresponding finite sets can determine which. But -- this is clearly not a robust approach to teaching implication. At best, it can help explain the difference between a statement and its converse.
– John Coleman Jul 21 '16 at 11:33
First, although you talk a bunch about cardinality, I don't see how that makes sense, so I'm going to assume you mean that you have them determine whether the set corresponding to p is a subset of the set corresponding to q. (Otherwise, in your second example, you'd also have $x=2$ implies $x^2=9$, for instance.) In formal terms, your method requires translating the sentences into predicates with a free variable and then comparing the sets defined by those predicates. With the math example, this is easy because there is a free variable. With the English examples, though, it's less obvious. In the implication "I am in Tokyo"/"I am in Japan" there are really two potential variables: the underlying form is "X is in Y". The actual comparison you want is the one with a single free variable and two predicates: you want to compare "X is in Tokyo" to "X is in Japan"; that is, your sets are "the set of people in Tokyo" versus "the set of people in Japan". Instead you've chosen to work with the single predicate "I am in Y" for two different values of Y. There's no way to make that work in pure logic: the reason it works in your example is that "X is in Y" is monotone in the predicate Y. Which means that it happens to work in that example, but for a completely different reason than the method you're trying to teach. Your students are doing exactly the same thing with "I am a vegetarian" and "I don't eat pork". They follow you in parsing this as "I don't eat {pork, chicken, ...}" and "I don't eat pork". Again, this is a two-place predicate "X doesn't eat Y". If you think your first example is right, you have to think their parsing of "I don't eat Y" is correct. The problem is that "X doesn't eat Y" is antimonotone in Y. If you aren't going to insist that you compare on the common free variable, there's no way to make this work without getting into questions of monotonicity of predicates, which is actually (while quite interesting, and potentially accessible to children - I recently saw a Dr. Seuss themed talk on the subject by Larry Moss) rather complicated.
Henry Towsner
I would not recommend teaching this method, since there are some downsides. Take
$A(x) \iff x \text{ is divisible by } 2$
$B(x) \iff x \text{ is divisible by } 42$
Is $A(x) \implies B(x)$ or $B(x) \implies A(x)$? Since there are infinitely many $x\in\mathbb Z$ fulfilling $A(x)$ and $B(x)$, this cannot be answered unless your students already know about the cardinality of infinite sets. Actually, the cardinality of $\{x\in \mathbb Z: A(x)\}$ and $\{x\in \mathbb Z: B(x)\}$ is the same. Does this mean that $A(x) \iff B(x)$? Here is another example:
$C(x) \iff x = 23$
$D(x) \iff x = 42 \text{ or } x = 102$
There are two objects fulfilling $D(x)$ and one object for which $C(x)$ is true. So we have $C(x) \implies D(x)$, right?! The set-theoretic counterpart of implication is inclusion. You have $A(x)\implies B(x)$ whenever $\{x:A(x)\} \subseteq \{x:B(x)\}$. However, the fact that the cardinality of $\{x:A(x)\}$ is less than or equal to the cardinality of $\{x:B(x)\}$ does not imply $\{x:A(x)\} \subseteq \{x:B(x)\}$. That's the reason why your method does not work in the above examples.
Stephan Kulla
"Some downsides" is a rather mild way of putting it: the method is simply wrong, and very often leads to false conclusions. – Daniel Hast Jul 18 '16 at 20:07
The elements of the set describing a statement are things that "make the statement true."
The statement "I am in Japan" is described by the set {Tokyo, Osaka, Sapporo, ...} because "I am in Osaka" makes "I am in Japan" true. The statement "$x^2=4$" is described by the set {-2, 2} because "$x=-2$" makes "$x^2=4$" true. The statement "I don't eat pork" is described by the set {vegetarian, Moslem, a person who is allergic to pork, ...} because "I am a Moslem" makes "I don't eat pork" true. The student claims that the statement "I am a vegetarian" is described by the set {pork, beef, fish, ...}. This is incorrect: "I don't eat beef" does not (necessarily) make "I am a vegetarian" true, because, for example, I could be allergic to beef but still not be a vegetarian.
Joel Reyes Noche
Unlike other answers, this answers the question instead of dismissing the method. – Amy B Jul 21 '16 at 6:45
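All of the answers above reduce the teacher's question to testing set inclusion rather than comparing cardinalities. The following is a minimal illustrative sketch (Python), not part of any answer in the thread; the finite domain and the example sets are assumptions chosen to mirror the examples discussed above.

```python
def implies(p_set, q_set):
    # Over a fixed domain, p -> q holds exactly when every object
    # satisfying p also satisfies q, i.e. the p-set is a subset of the q-set.
    return p_set <= q_set  # "<=" on Python sets is the subset test

# Tokyo/Japan example: one free variable (the person), two predicates.
in_tokyo = {"Aiko"}
in_japan = {"Aiko", "Haruto", "Yui"}
print(implies(in_tokyo, in_japan))   # True: "in Tokyo" -> "in Japan"

# Cardinality alone does not decide the direction (cf. the x = 23 versus
# x = 42 or 102 example): the smaller set need not sit inside the larger one.
c_set = {23}
d_set = {42, 102}
print(len(c_set) <= len(d_set))      # True ...
print(implies(c_set, d_set))         # ... but False: no implication

# Divisibility example restricted to a finite domain, where inclusion
# (not cardinality) gives the correct direction.
domain = range(1, 1000)
div2 = {x for x in domain if x % 2 == 0}
div42 = {x for x in domain if x % 42 == 0}
print(implies(div42, div2))          # True: divisible by 42 -> divisible by 2
print(implies(div2, div42))          # False
```

Restricting to a finite domain sidesteps the infinite-cardinality issue raised in the second answer while keeping the subset test intact.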
CommonCrawl
AMS Invited Address Prestrained Elasticity: Curvature Constraints and Differential Geometry with Low Regularity Wednesday January 6, 2016, 10:05 a.m.-10:55 a.m., Ballroom 6BC, Washington State Convention Center Marta Lewicka, University of Pittsburgh See Slides for Her Talk Here This lecture is concerned with the analysis of thin elastic films exhibiting residual stress at free equilibria. Examples of such structures and their actuations include: plastically strained sheets; specifically engineered swelling or shrinking gels; growing tissues; atomically thin graphene layers, etc. These and other phenomena can be studied through a variational model, pertaining to the non-Euclidean version of nonlinear elasticity, which postulates formation of a target Riemannian metric, resulting in the morphogenesis of the tissue which attains an orientation-preserving configuration closest to being the metric's isometric immersion. In this context, analysis of scaling of the energy minimizers in terms of the film's thickness leads to the rigorous derivation of a hierarchy of limiting theories, differentiated by the embeddability properties of the target metrics and, a-posteriori, by the emergence of isometry constraints with low regularity. This leads to questions of rigidity and flexibility of solutions to the weak formulations of the related PDEs, including the Monge-Ampere equation. In particular, we observe that the set of $C^{1,\alpha}$ solutions to the Monge-Ampere is dense in $C^0$ provided that $\alpha<1/7$, whereas rigidity holds when $\alpha>2/3$. AMS-MAA Invited Address Statistical Paradises and Paradoxes in Big Data Wednesday January 6, 2016, 11:10 a.m.-12:00 p.m., Ballroom 6BC, Washington State Convention Center Xiao-Li Meng, Harvard University Statisticians are increasingly posed with thought-provoking and often paradoxical questions, challenging our qualifications for entering the statistical paradises created by Big Data. Questions addressed in this talk include 1) Which one should I trust: a 1% survey with 60% response rate or a self-reported administrative dataset covering 80% of the population? 2) With all the big data, is sampling or randomization still relevant? 3) Personalized treatments---that sounds heavenly, but where on earth did they find the right guinea pig for me? The proper responses are respectively 1) "It depends!," because we need data-quality indexes, not merely quantitative sizes, to determine; 2) "Absolutely!," and indeed Big Data has inspired methods such as counterbalancing sampling to combat inherent selection bias in big data; and 3) "They didn't!," but the question has led to a multi-resolution framework for studying statistical evidence for predicting individual outcomes. All proposals highlight the need, as we get deeper into this era of Big Data, to reaffirm some time-honored statistical themes (e.g., bias-variance trade-off), and to remodel some others (e.g., approximating individuals from proxy populations verses inferring populations from samples). AMS Colloquium Lectures Lecture I Quasirandom Sets, Quasirandom Graphs, and Applications Wednesday January 6, 2016, 1:00 p.m.-2:00 p.m., Ballroom 6BC, Washington State Convention Center W. Timothy Gowers, University of Cambridge, UK Listen as Mike Breen, AMS Public Awareness Officer, speaks with W. Timothy Gowers about his upcoming lectures. See lecture notes here. In this lecture I shall discuss a few applications of discrete Fourier analysis on finite Abelian groups. 
I shall also talk about quasirandom graphs, explaining what they are and why they are useful. The two topics are closely related, and I shall explain why. Finally, as a way of motivating certain generalizations of Fourier analysis to be discussed in the second and third lectures, I shall give examples of problems that do not yield to the basic technique discussed here. Lecture II Arithmetic Progressions of Length 4, Quadratic Fourier Analysis, and 3-Uniform Hypergraphs Thursday January 7, 2016, 1:00 p.m.-2:00 p.m., Ballroom 6BC, Washington State Convention Center In this lecture I shall say something about quadratic (and higher-order) Fourier analysis, which relates to notable results such as Szemer\'edi's theorem and the Green-Tao theorem. I shall also discuss a notion of quasirandomness for hypergraphs and show that it relates to quadratic Fourier analysis in a similar way to the way that quasirandom graphs relate to conventional Fourier analysis. I shall also discuss the more general question of what one would ideally like from a generalization of Fourier analysis. Quadratic Fourier analysis has enough of the desired properties to be a useful technique, but there are certain properties that it lacks, at least in its current form, and there are therefore interesting challenges for future research. Some parts of this lecture will be hard to understand by people who have not attended the first lecture, but I will try to recap the most important ideas. This lecture will, however, not be necessary for following the third. Lecture III Fourier Analysis on General Finite Groups Friday January 8, 2016, 1:00 p.m.-2:00 p.m., Ballroom 6BC, Washington State Convention Center The first two lectures in this series will be about Fourier analysis and generalizations that apply to scalar-valued functions on finite Abelian groups. This one will be about how it can be generalized in two further directions: to non-Abelian groups and to matrix-valued functions. An obvious example of a matrix-valued function on a group is a representation, and indeed basic representation theory plays a central part in these generalizations. I shall give examples of how non-Abelian Fourier analysis can be used to solve interesting problems at the intersection of combinatorics and group theory. I shall also mention connections between some of these problems and the notion of quasirandom graphs from the first lecture. MAA Invited Address Singing Along with Math: The Mathematical Work of the Opera Singer Jerome Hines T. Christine Stevens, American Mathematical Society For over forty years, Jerome Hines (1921-2003) sang principal bass roles at the Metropolitan Opera in New York and in opera houses around the world. He was also a math major who retained a lifelong interest in mathematics. During the 1950's Hines published five papers in Mathematics Magazine that were based on work that he had done as a student, and he later produced several lengthy mathematical manuscripts about cardinality and infinite sets. I will discuss some of Hines' mathematical work, as well as the way in which his undergraduate experience at UCLA converted him from a student with no particular liking for mathematics into an aspiring mathematician. I also hope to explore the question of what mathematics meant to Hines and why, in the midst of demanding musical career, he felt it important for him to develop and publish his mathematical ideas. Mathematics and Policy: Strategies for Effective Advocacy Katherine D. 
Crowley One day in the United States Senate, a team of political staffers took a spontaneous break from writing legislation to request combinatorial proofs on demand of their favorite mathematical identities from their mathematician colleague (me). As the barrage of job demands implored us to disperse moments later, our legislative director chided me for sneaking in the final answer by induction. What is the level of understanding of mathematics among those who craft our national policies? What impact does a mathematician have in a seat at the table of debate over our country's most pressing challenges? How can mathematicians inform policy, and how can policy support mathematics? I will discuss the elements of effective advocacy for our discipline. AMS Josiah Willard Gibbs Lecture Graphs, Vectors, and Matrices Daniel Alan Spielman, Yale University See Slides for His Talk Here I will explain how we use linear algebra to understand graphs and how recently developed ideas in graph theory have inspired progress in linear algebra. Graphs can take many forms, from social networks to road networks, and from protein interaction networks to scientific meshes. One of the most effective ways to understand the large-scale structure of a graph is to study algebraic properties of matrices we associate with it. I will give examples of what we can learn from the Laplacian matrix of a graph. We will use the graph Laplacian to define a notion of what it means for one graph to approximate another, and we will see that every graph can be well-approximated by a graph having few edges. For example, the best sparse approximations of complete graphs are provided by the famous Ramanujan graphs. As the Laplacian matrix of a graph is a sum of outer products of vectors, one for each edge, the problem of sparsifying a general graph can be recast as a problem of approximating a collection of vectors by a small subset of those vectors. The resulting problem appears similar to the problem of Kadison and Singer in Operator Theory. We will sketch how research on the sparsification of graphs led to its solution. MAA Invited Address - SPEAKER unable to speak. In his place, Francis Su (Harvey Mudd College) will give a talk on the same topic. Fair Division Thursday January 7, 2016, 9:00 a.m.-9:50 a.m., Ballroom 6BC, Washington State Convention Center Steven Brams, New York University Ideas about fair division, including "I cut, you choose," can be traced back to the Bible. But since the discovery 20 years ago of an $n$-person algorithm for the envy-free division of a heterogeneous divisible good, such as cake or land, interest in fair division has burgeoned. Besides envy-freeness, properties such as equitability, efficiency, and strategy-proofness have been studied, and both existence results and algorithms to implement them will be discussed (some implementations will be shown to be impossible). More recent work on algorithms for the fair allocation of indivisible items, and trades among properties, will be presented. Applications, including those to dispute resolution, will be discussed. NEW ABSTRACT BY FRANCIS SU: I'll give an overview of the problem of "fair division", whose ideas trace back to antiquity but was perhaps first posed as a mathematical challenge by Steinhaus in 1948. How do you "cut" a "cake" "fairly"? All these words must be made precise and that is where mathematics comes in. 
I'll show how the problem has attracted ideas from many areas: measure theory, graph theory, game theory, and combinatorial topology, and---a sign that this is an interesting problem---just plain old ingenuity. AWM-AMS Noether Lecture The Power of Noether's Ring Theory in Understanding Singularities of Complex Algebraic Varieties Thursday January 7, 2016, 10:05 a.m.-10:55 a.m., Ballroom 6BC, Washington State Convention Center Karen E. Smith, University of Michigan In one of the tremendous innovations of twentieth century mathematics, Emmy Noether introduced the rigorous definition of commutative rings and their homomorphisms. One of her main motivating examples was the ring of polynomial functions on a complex algebraic variety. The algebraic study of these rings can have deep geometric consequences for the corresponding variety. In this talk, I hope to explain one example of this phenomenon: namely, how reduction to prime characteristic can give us insight into the singularities of the corresponding algebraic variety. Of course, I will need to convince you that we gain something powerful in reducing modulo $p$, since we have given up all the tools of analysis in doing so. What we gain is the Frobenius operator on the ring, which raises elements to their $p$-th powers, and is a ring homomorphism in characteristic $p$. I hope to explain how the Frobenius operator is helpful in understanding the singularities. As an application, I will describe some work with Angelica Benito, Jenna Rajchgot and Greg Muller on the singularities of varieties that arise in the theory of cluster algebras in combinatorics. Chaotic Billiards and Vibrations of Drums Steve Zelditch, Northwestern University There are two ways to `play' on a drum, which we allow to be shaped in any way, for instance as a standard circular drum-head, or as a stadium-shaped drum-head. First, one may play billiards on it, shooting a ball in a straight line that bounces off the sides by the law of equal angles. For a circular drum-head the billiard trajectories are completely predictable, but for the stadium-shaped drum they are chaotic and unpredictable. Second, the drum-head may vibrate in one if its normal modes. To visualize these modes, one sprinkles sand on the drum and watches the sand accumulate on the nodal set, where the drum is not vibrating (Chladni). My talk is concerned with the question, how are billiard trajectories related to nodal lines? What do the nodal lines look like as the frequency of vibration tends to infinity? In particular, what happens if the billiards are `chaotic'? AMS Retiring Presidential Address Conjugacy Classes and Group Representations David Vogan, Massachusetts Institute of Technology The conjugacy classes in a group carry a lot of nice information in an easy-to-understand package: conjugacy classes of permutations are classified by their cycle decomposition, and conjugacy classes of matrices by (more or less!) their eigenvalues. The sizes of conjugacy classes measure how noncommutative the group is. The representations of a group offer much more information, but in less agreeable packaging: it is not so easy to say even what the representations of a permutation group are, for example. An idea of Kirillov and Kostant from the 1960s seeks to describe (abstract and mysterious) representations in terms of (concrete and geometric) conjugacy classes. I'll recall what their idea looks like; some of its classical successes; and some ways that it fits into modern mathematics. 
What Makes for Powerful Classrooms---and What Can We Do, Now That We Know? Friday January 8, 2016, 9:00 a.m.-9:50 a.m., Ballroom 6BC, Washington State Convention Center Alan Schoenfeld, University of California, Berkeley Listen as Alexandra Branscombe, MAA Staff Writer, speaks with Alan Schoenfeld about his upcoming lecture. We now understand the properties of classrooms that produce powerful mathematical thinkers and problem solvers. The evidence comes mostly but not exclusively from K-12. The question for us: What are the implications for the ways we teach post-secondary mathematics? The $SL(2,\mathbb{R})$ Action on Moduli Space Friday January 8, 2016, 10:05 a.m.-10:55 a.m., Ballroom 6BC, Washington State Convention Center Alex Eskin, University of Chicago I will discuss ergodic theory over the moduli space of compact Riemann surfaces and its applications to the study of polygonal billiard tables. There is an analogy between this subject and the theory of flows on homogeneous spaces; I will talk about some successes and limitations of this viewpoint. This is joint work with Maryam Mirzakhani and Amir Mohammadi. How to Keep Your Genome Secret Friday January 8, 2016, 11:10 a.m.-12:00 p.m., Ballroom 6BC, Washington State Convention Center Kristin Estella Lauter, Microsoft Research Listen as Alexandra Branscombe, MAA Staff Writer, speaks with Kristin Lauter about her upcoming lecture. Over the last 10 years, the cost of sequencing the human genome has come down to around \$1,000 per person. Human genomic data is a gold-mine of information, potentially unlocking the secrets to human health and longevity. As a society, we face ethical and privacy questions related to how to handle human genomic data. Should it be aggregated and made available for medical research? What are the risks to individual's privacy? This talk will describe a mathematical solution for securely handling computation on genomic data, and highlight the results of a recent international contest in this area. The solution uses "Homomorphic Encryption", based on hard problems in number theory related to lattices. This application highlights the importance of a new class of hard problems in number theory to be solved. MAA Lecture for Students The Fractal Geometry of the Mandelbrot Set Friday January 8, 2016, 1:00 p.m.-1:50 p.m., Ballroom 6A, Washington State Convention Center Robert Devaney, Boston University In this lecture we describe several folk theorems concerning the Mandelbrot set. While this set is extremely complicated from a geometric point of view, we will show that, as long as you know how to add and how to count, you can understand this geometry completely. We will encounter many famous mathematical objects in the Mandelbrot set, like the Farey tree and the Fibonacci sequence. And we will and many soon-to-be-famous objects as well, like the "Devaney" sequence. There might even be a joke or two in the talk. Ancient Solutions to Parabolic Partial Differential Equations Saturday January 9, 2016, 9:00 a.m.-9:50 a.m., Ballroom 6BC , Washington State Convention Center Panagiota Daskalopoulos, Columbia University Some of the most important problems in geometric partial differential equations are related to the understanding of singularities. This usually happens through a blow up procedure near the potential singularity which uses the scaling properties of the equation. 
In the case of a parabolic equation the blow up analysis often leads to special solutions which are defined for all time $-\infty<t\le T$, for some $T\le+\infty$. We refer to them as ancient if $T<+\infty$. The classification of such solutions, when possible, often sheds new insight to the singularity analysis. We will give a survey of recent research progress on ancient solutions to geometric flows such as the Ricci flow, the Mean Curvature flow and the Yamabe flow. Our discussion will also include other models of nonlinear parabolic partial differential equations. We will address the classification of ancient solutions to parabolic equations as well as the construction of new ancient solutions from the gluing of two or more solitons. A Mathematical Tour Through a Collapsing World Saturday January 9, 2016, 10:05 a.m.-10:55 a.m., Ballroom 6BC, Washington State Convention Center Charles R. Hadlock, Bentley University If you search the word "collapse" on Google News on any given day, you are sure to get thousands of hits, as well as a healthy reminder that we do live in a world where a very wide variety of things are collapsing every day. When assessing the risk of collapse, one's initial mindset about its source can lead to insufficient attention being paid to alternative sources. That's why financial auditors, accident investigators, and similar professionals follow systematic protocols that attempt to assure that a wide field of issues are addressed, even in the presence of strong evidence pointing in a particular direction. This same mentality is important in more general and less structured treatments of risk and possible collapse, whether to companies, currencies, species, governments, facilities, diseases, societies, or almost anything else. Mathematics provides an ideal framework for capturing the essence of a wide range of common collapse dynamics that permeate many areas of application. After all, we customarily discuss subjects like probabilities, extrema, stability, nonlinearity, games, networks, and others, all of which are closely related to possible collapses. But beyond capturing the concepts, which itself should not be understated as an important contribution to workers from diverse disciplines, we also offer powerful tools for going deeper to mine important insights, resolve specific uncertainties, and guide future actions. I will expand upon these ideas with examples from the real world and with some mathematical gems that many of us might not ordinarily encounter in our mathematical training or reading. I will also mention how this work grew out of an exhilarating interdisciplinary undergraduate seminar course. MAA-AMS-SIAM Gerald and Judith Porter Public Lecture Network Science: From the Online World to Cancer Genomics Saturday January 9, 2016, 3:00 p.m.-4:00 p.m., Ballroom 6BC, Washington State Convention Center Jennifer Chayes, Microsoft Research Listen as Mike Breen, AMS Public Awareness Officer, speaks with Jennifer Chayes about her upcoming lecture. Everywhere we turn these days, we find that networks can be used to describe relevant interactions. In the high tech world, we see the Internet, the World Wide Web, mobile phone networks, and a variety of online social networks. In economics, we are increasingly experiencing both the positive and negative effects of a global networked economy. In epidemiology, we find disease spreading over our ever-growing social networks, complicated by mutation of the disease agents. 
In biomedical research, we are beginning to understand the structure of gene regulatory networks, with the prospect of using this understanding to manage many human diseases. In this talk, I look quite generally at some of the models we are using to describe these networks, processes we are studying on the networks, algorithms we have devised for the networks, and finally, methods we are developing to indirectly infer network structure from measured data. I'll discuss in some detail particular applications to cancer genomics, applying network algorithms to suggest possible drug targets for certain kinds of cancer.
A Novel Method for Quality Assurance of the Cyberknife Iris Variable Aperture Collimator Sarah-Charlotta Heidorn, Nikolaus Kremer, Christoph Fürweger cyberknife, variable circular aperture collimator, iris, large-area parallel-plate ionisation chamber, quality assurance, field-size determination Published: May 21, 2016 DOI: 10.7759/cureus.618 Cite this article as: Heidorn S, Kremer N, Fürweger C (May 21, 2016) A Novel Method for Quality Assurance of the Cyberknife Iris Variable Aperture Collimator. Cureus 8(5): e618. doi:10.7759/cureus.618 Objective: To characterize a novel method for field-size quality assurance of a variable approximately circular aperture collimator by means of dose-area product measurements and to validate its practical use over two years of clinical application. Methods: To assess methodical limitations, we analyze measurement errors due to change in linac output, beam tuning, uncertainty in MU delivery, daily factors, inherent uncertainty of the large-area parallel-plate ionisation chamber, and misalignment of the large-area parallel-plate ionisation chamber relative to the primary beam axis. To establish a baseline for quality assurance, the dose-area product is measured with the large-area parallel-plate ionisation chamber for all 12 clinical iris apertures in relation to the 60 mm fixed reference aperture. To evaluate the long-term stability of the Iris collimation system, deviation from baseline data is assessed monthly and compared to a priori derived tolerance levels. Results: Only chamber misalignment, variation in output, and uncertainty in MU delivery contribute to a combined error that is estimated at 0.2% of the nominal field size. This is equivalent to a resolution of 0.005 mm for the 5 mm, and 0.012 mm for the 60 mm field. The method offers ease of use, small measurement time commitment, and is independent of most error sources. Over the observed period, the Iris accuracy is within the tolerance levels. Conclusions: The method is an advantageous alternative to film quality assurance with a high reliability, short measurement time, and superior accuracy in field-size determination. The CyberKnife (CK) system (Accuray Inc., Sunnyvale, CA) can be equipped with an optional Iris Variable Aperture Collimator (Iris) containing two stacked hexagonal banks of tungsten segments. Together they produce a 12-sided aperture with an accuracy of ±0.2 mm at a nominal distance of 800 mm [1]. The Iris collimator offers multiple aperture sizes and hence benefits plan quality and time efficiency [1]. The current manufacturer recommendation (Accuray Physics Essentials Guide 2012, P/N 1023868-ENG A, Accuray Inc. (Sunnyvale, CA)) for quality assurance (QA) suggests monthly film measurements of all 12 field sizes. In order to achieve sufficient accuracy, several hours per measurement series are required [2]. A less time-consuming method that achieves the same precision is preferable. The requirements are accurate field-size determination, stable and reproducible results, ease of use (clinical utility), and reasonable (small) measurement time commitment. Possible alternatives are scanning water phantom measurements, Iris camera direct imaging [3], Iris beam aperture caliper [4-5], and large-area parallel-plate ionization chamber (LAC) measurements [6]. The LAC is originally intended for proton measurements, and has also been proposed to measure the dose-area product (DAP) in small-field photon beams [7].
Unfortunately, water phantom measurements are time consuming, and for other suggested methods such as Iris camera direct imaging [3], Iris beam aperture caliper [4-5], and LAC measurements [6], available data is still limited. We present results from LAC measurements for Iris QA (LAC method), an analysis of limits and influencing factors of the LAC method, and data from the clinical application of the LAC method over 22 months. Further, the long-term Iris performance is investigated and discussed. The Iris collimator contains two stacked hexagonal banks of tungsten segments that together produce a 12-sided aperture that can be continuously varied [1]. The use in the CyberKnife system is restricted to a set of 12 different field sizes (with a diameter d of 5, 7.5, 10, 12.5, 15, 20, 25, 30, 35, 40, 50, and 60 mm specified at a nominal distance of 800 mm). According to the manufacturer, the Iris aperture reproducibility specification is ±0.2 mm at the nominal distance [1]. A large-area parallel-plate ionization chamber (TM34070-2,5 Bragg peak chamber, PTW Freiburg, diameter of the active area 81.6 mm, thickness of entrance window 3.47 mm) is placed on top of a hardware accessory that fits into the birdcage assembly (Figure 1). The birdcage is a frame that can be fastened to the collimator assembly where the ionization chamber is arranged at a reproducible position along the central beam axis (SAD 79.1 cm). Figure 1: Experimental set-up The LAC is positioned SAD 79.1 cm by means of a hardware accessory and aligned along the central beam axis. For 100 MU each, the uncorrected readings of the 12 Iris apertures dose area products DAPIris(d) and the fixed 60 mm aperture DAPFixed(60mm) are measured three to five times (Unidos Webline, PTW, 10021). The arithmetic mean values for both DAPIris(d) and DAPFixed(60mm) are calculated, and its quotient is determined: $$\theta (d) = \frac{DAP_{Iris}(d)}{DAP_{Fixed}(60mm)}$$ In a similar way, the quotient of baseline data θbaseline(d), that have been acquired during commissioning, is calculated. We analyze different error sources. To assess the change in linac output Δoutput, multiple measurements (60 exposures, 100 MU, 60 mm fixed collimator) are acquired over the course of 30 minutes, and the standard deviation is determined. The dependency of θ(d) on primary beam changes is investigated by deliberate detuning of beam symmetry and homogeneity to a level that is clinically not acceptable (parameters: gun voltage from 10.90 kV to 11.85 kV, grid bias cuttoff voltage from 167 eV to 164 eV). This change corresponds to the worst case scenario encountered in three years of use which does not trip an interlock. The consequence of the detuning is a decrease of the dose in the shoulder area of the profile by approximately 4%. We derive θbeamchange(d) and analyze the deviation from baseline results by: $$\Delta_{beam change}(d) = (\frac{\theta_{beam change}(d)}{\theta_{unmodified}(d)} - 1) * 100$$ We investigate the impact of misalignments during experimental setup. Such a misalignment is possible when exchanging the collimator head from fixed to Iris (or vice versa) because the birdcage and LAC must be removed from the Linac head for exchange. We analyze the influence of different misalignments and check for size dependence. The influence of the positioning of the LAC on θ(d) is derived by misaligning the LAC relative to the central axis (2 mm, 5 mm and 10 mm). 
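To make the ratio analysis above concrete, the following short Python sketch computes θ(d) from a set of repeated chamber readings and the percentage deviation from a commissioning baseline used for QA later on. The reading values, the baseline value, and the aperture chosen here are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the LAC ratio analysis: theta(d) = mean DAP_Iris(d) / mean DAP_Fixed(60 mm).
from statistics import mean

def theta(dap_iris_readings, dap_fixed_60_readings):
    """Quotient of the mean Iris DAP reading to the mean fixed 60 mm DAP reading."""
    return mean(dap_iris_readings) / mean(dap_fixed_60_readings)

def delta_percent(theta_current, theta_baseline):
    """Percentage deviation of the current quotient from the commissioning baseline."""
    return (theta_current / theta_baseline - 1.0) * 100.0

# Hypothetical uncorrected readings (nC) for three exposures of 100 MU each.
dap_iris_30mm = [35.91, 35.88, 35.86]
dap_fixed_60mm = [143.4, 143.5, 143.3]
theta_30 = theta(dap_iris_30mm, dap_fixed_60mm)

theta_30_baseline = 0.2503  # placeholder baseline value from commissioning
print(f"theta(30 mm) = {theta_30:.4f}, delta = {delta_percent(theta_30, theta_30_baseline):+.2f}%")
```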
The analysis of Δmisalign(d) is in analogy to the previous equation: $$\Delta_{misalign}(d) = (\frac{\theta _{misalign}(d)}{\theta _{aligned}(d)} - 1) * 100$$ For QA, θ(d) is compared to θbaseline(d) and given as its percentage deviation via: $$\delta (d) = (\frac{\theta (d)}{\theta _{baseline}(d)} - 1) * 100$$ In order to define action levels for QA, the Iris aperture reproducibility specification of ±0.2 mm is converted into percentage difference limits of δ(d). The maximal percentage deviations δ±0.2(d) that are within the specifications are the field-size dependent positive and negative limits, respectively. The limits δ±0.2(d) are both calculated and measured. For the measurement, the field size is changed by ±0.2 mm three to five times for all 12 apertures each, 100 MUs are irradiated, θmeasurement, ±0,2mm(d) is measured, and the arithmetic mean value is calculated. In analogy to equation (2), the limit δmeasurement,±0.2(d) is: $$\delta _{measurement, \pm0.2}(d) = (\frac{\theta _{measurement ,\pm0.2mm}(d)}{\theta _{baseline}(d)} - 1) * 100$$ In an analytical approximation and in analogy to equation (5), the limits δcalculation,±0.2(d) are derived from water tank commissioning data by calculating the dose-area product (obtained by radial integration of off-center ratios (OCR) over the chamber area) weighted with the output factor (OF) (Figure 2): $$\theta _{calculation,baseline} = OF * \int_{0}^{r_{max}} OCR (r,d)2\pi r dr$$ with rmax the radius of the LAC sensitive area. This is compared to the DAP calculated for altered beam profiles θ(r ±2) with a modified radius of ±0.2 mm for the nominal field size and a corrected output factor (OF') (Figure 2): $$\theta _{calculation,+0.2mm} = OF' * \int_{0}^{r_{max}} OCR (r,d)2\pi r dr$$ The corrected output factors (OF') are derived analytically by interpolation between adjacent OFs measured during commissioning. Figure 2: Output factors Measured factors OF (black square) and calculated factors OF' (grey circle: -0.2 mm; blue diamond: +0.2 mm) with respect to Iris aperture size. An error analysis is performed to validate the LAC method. Different errors may contribute to the quotient θ(d). They can originate from intrinsic linac and Iris characteristics, and the measurement technique (Figure 3). Linac-specific errors Δlinac may arise from daily factors, changes in linac output, primary beam changes, and the uncertainty in MU delivery. Measurement-specific errors Δmeasurement can originate from the measurement setup and the inherent uncertainty of the LAC. Iris specific errors ΔIris may consist of the Iris reproducibility and calibration drift over time. Iris specific characteristics are covered in a separate section, and we now analyze linac- and measurement-specific errors. Therefore, we use fixed collimators to exclude the inherent accuracy of the Iris collimator. Figure 3: Overview of error sources First, linac-specific errors are investigated. Daily factors like temperature, air-pressure, and dose per MU can be neglected. But, the error Δoutput originating from output changes over the course of a measurement series influences DAP and must be taken into account. Sixty consecutive measurements over the course of 30 minutes show that the error Δoutput is 0.04%. To investigate the impact of changes in the primary beam, we measure DAP with a detuned primary-beam profile, and calculate θ(d) and the deviation to data from an unmodified beam profile. 
As a result, DAP, its standard deviation σDAP, and the quotient θ(d) derived for both the detuned and the normal beam profile agree within the error. Since we used a beam status that corresponds to the worst case encountered since installation, this is an indication that typical beam changes have no effect on measurements with the LAC method. To estimate the error resulting from the uncertainty in MU delivery (i.e. the output variation when requesting 100 MU), we calculate the mean value of σDAP in percent for 31 measurement series obtained in 22 months for both a 12.5 mm and a 60 mm fixed collimator. With very similar values of 0.046 ±0.025% (60 mm) and 0.036 ±0.020% (12.5 mm), it is size-independent, and the overall error Δoutput of DAP due to the MU uncertainty can be estimated as 0.04%. Next, measurement-specific errors are derived. The relative error ΔLAC from measurements with the LAC is negligible. We investigate the impact on θ(d) originating from the setup error Δmisalign due to a change in position of the LAC with respect to the central beam axis (misalignment) in the measurement setup. The error Δmisalign is determined for both a small (12.5 mm) and a large (60 mm) fixed collimator. Table 1 shows the mean value of three measurements of θ(d) with a LAC aligned along the central beam axis and of θmisalign(d) where the LAC is misaligned by 2 mm, 5 mm, and 10 mm with respect to the central axis.
Table 1: Misalignment
Deviation (in mm) | Δmisalign (60 mm) | Δmisalign (12.5 mm)
2 | 0.02 ±0.08% | 0.10 ±0.02%
5 | 0.21 ±0.08% | 0.19 ±0.06%
10 | 1.26 ±0.08% | 0.41 ±0.04%
Deviation between measurements of θ(d) with a LAC aligned along the central beam axis and of θmisalign(d) where the LAC is misaligned by 2 mm, 5 mm, and 10 mm with respect to the central axis.
The discrepancy is 0.02 ±0.08% (60 mm) and 0.10 ±0.02% (12.5 mm) for a misalignment of 2 mm. A misalignment of 5 mm results in a deviation of 0.21 ±0.08% (60 mm) and 0.19 ±0.06% (12.5 mm), respectively. For a 2 mm and a 5 mm shift, the error Δmisalign is size-independent. A shift of 10 mm results in a deviation of 1.26 ±0.08% (60 mm) and 0.41 ±0.04% (12.5 mm), and is thus size-dependent. For the error estimation, we assumed a misalignment of 2 mm. To conclude, the combined linac- and measurement-specific errors that contribute to θ(d) are approximately 0.2%. Characterization of DAPIris(d) and the quotient θ(d) with the LAC method We characterize the relationship for one measurement series between Iris aperture and DAPIris(d) (Figure 4a) and its associated quotient θ(d) (Figure 4b). The arithmetic mean values of DAPIris(d) are 2.41 ±0.013 nC, 35.88 ±0.006 nC, and 136.33 ±0.058 nC for Iris apertures of 7.5 mm, 30 mm, and 60 mm, respectively (Figure 4a). The fit is parabolic with an exponent of 1.935 ±0.004 (blue dotted line in Figure 4a), as expected due to the circular surface area of the LAC's sensitive volume. The arithmetic mean values of the corresponding quotients θ(d) are 0.0168 ±0.00010 at an area of 1.77 cm² (7.5 mm), 0.2503 ±0.0007 at 28.27 cm² (30 mm), and 0.9509 ±0.00079 at 113.1 cm² (60 mm) (Figure 4b). As expected, the relationship between the aperture area and the quotient θ(d) is linear (Figure 4b). Figure 4: Characterization Relationship between DAP and Iris aperture radius (a) and quotient θ(d) and Iris aperture area (b). The blue dashed line is a fit of the form y = b*x^c with an exponent c = 1.935 ±0.004 and b = 0.0493 (a), the grey dashed line is a linear fit y = b*x with b = 0.00846 (b).
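Before turning to the specification limits, the analytical limit calculation described in the methods, θ_calculation = OF · ∫ OCR(r,d) 2πr dr over the chamber area, can be sketched numerically as follows. The off-center-ratio profile, the output factors, and the penumbra width used below are hypothetical placeholders rather than commissioning data; only the chamber radius (half of the 81.6 mm active-area diameter) is taken from the setup description.

```python
# Sketch of the analytical DAP used for the calculated tolerance limits:
# theta_calc = OF * integral_0^rmax OCR(r, d) * 2*pi*r dr (radial integration over the LAC area).
import numpy as np

def dap_integral(ocr, radii_mm, output_factor):
    """Numerically integrate OCR(r) * 2*pi*r over the chamber radius (trapezoidal rule)."""
    return output_factor * np.trapz(ocr * 2.0 * np.pi * radii_mm, radii_mm)

def ocr_profile(radii_mm, field_radius_mm, penumbra_mm=2.0):
    """Hypothetical flat profile with a smooth penumbra (placeholder for measured OCR data)."""
    return 1.0 / (1.0 + np.exp((radii_mm - field_radius_mm) / penumbra_mm))

r_max = 40.8                              # mm, radius of the LAC sensitive area (81.6 mm / 2)
radii = np.linspace(0.0, r_max, 500)

# Nominal 30 mm field versus a field whose diameter is enlarged by 0.2 mm (radius by 0.1 mm).
theta_nominal = dap_integral(ocr_profile(radii, 15.0), radii, output_factor=0.950)
theta_plus02 = dap_integral(ocr_profile(radii, 15.1), radii, output_factor=0.951)  # OF' interpolated placeholder

limit_percent = (theta_plus02 / theta_nominal - 1.0) * 100.0
print(f"calculated +0.2 mm limit for the 30 mm field: {limit_percent:+.2f}%")
```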
Specification limits Figure 5 shows the measured (LAC method, grey dotted lines and crosses) and calculated (black circles) specification limits as a percentage deviation from baseline data (for details about δ(d), see the methods section). The measured specification limits are +10.00 ±1.47% and -9.51 ±1.47% (5 mm), 1.21 ±0.211% (30 mm) and -1.41 ±0.211% (60 mm). The calculated specification limits are, e.g., +11.67% and -13.72% (5 mm), +1.33% and -1.34% (30 mm), and ±0.67% (60 mm). Figure 5: Specification limits Measured (LAC method, grey dotted lines and crosses) and calculated (black circles) specification limits as a percentage deviation from baseline data (for details about δ(d), see the methods section). Iris characteristics: reproducibility and stability In this section, we account for the combined linac- and method-specific errors estimated in the first section. To derive the reproducibility of the Iris, we calculate DAP's median standard deviation DAPmed for 31 measurements (Figure 6a). It decreases with aperture size, from 1.64% for a 5 mm aperture to 0.01% for a 60 mm aperture (Figure 6a). Calculating the absolute reproducibility in millimeters (Figure 6b), we find that the reproducibility for all 12 Iris apertures is equal within the error. The overall Iris reproducibility is below 0.05 mm. In comparison, median standard deviations for fixed cones are minimal because there is no change in field size. Figure 6: Reproducibility Median standard deviation of 31 DAP measurements for all 12 Iris apertures over 22 months in percent (a) and in mm (b). Error bars are first and third quartiles. For comparison, values for the fixed 12.5 mm and 60 mm apertures are shown (right side of the x-axis). The calibration drift over time is derived by investigating the quotient θ(d) for 31 consecutive QA measurements over a period of 22 months. No trend over time is recognizable (not shown). When pooling all 31 datasets, the mean value of the standard deviation of the quotient θ(d) is between 1.5% (5 mm) and 0.6% (60 mm), with larger values for smaller Iris apertures (Figure 7a). Translated to absolute variation of the beam diameter (Figure 7b), this corresponds to 0.037 mm (5 mm) and 0.13 mm (60 mm). Figure 7: Stability Mean value of the standard deviation of the quotient θ(d) from 31 DAP measurements for all 12 Iris apertures over 22 months in percent (a) and in mm (b). For comparison, the value for the fixed 12.5 mm aperture is shown (right side of the x-axis). Long-term QA To interpret the same dataset in terms of clinical acceptability of the Iris collimator, the deviation δ(d) from baseline data is analyzed (black dots in Figure 8). For all 12 apertures, the deviations δ(d) are well within the specification (measured specification, grey dotted lines in Figure 8). The standard deviation of δ(d) from all measurements (inset in Figure 8) is between 1.2% (5 mm) and 0.27% (60 mm). Figure 8: Long-term QA measurements QA measurements for all 12 Iris apertures over 22 months (grey dashed lines: measured tolerance limits; inset: standard deviation of δ(d)). The error for the worst-case measurement series is 3.63 ±0.63% for a 5 mm collimator. This corresponds to a geometric difference of 0.090 ±0.002 mm. Larger apertures of 20 mm and 60 mm have an error (worst-case measurement series) of -0.82 ±0.56% and 0.63 ±0.41%, respectively. This is equal to a geometric discrepancy of -0.082 ±0.006 mm and 0.189 ±0.057 mm.
To evaluate the LAC method, we discuss its accuracy in field size determination, the value of the method for stability and reproducibility, and its clinical utility including expenditure of time. A linear (parabolic) relationship is expected between θ(d) and Iris aperture area (size), which is confirmed by our data. The residual deviation from linearity (parabolic form) may have its origin in various factors, e.g., the different measurement depths of OF and DAP, backscatter from the plastic support on the birdcage assembly, and the deviation of the real Iris aperture from an ideal radial aperture that is assumed for calculation. Various factors influence the accuracy of the LAC method. Main contributions come from changes in linac output, the uncertainty in MU delivery, and a misalignment in the setup. The influence from a modification of the primary beam can be neglected, and the LAC method is insensitive to primary beam changes. The error Δmisalign is size-dependent for a 10 mm shift (Table 1, lower row). The reason is that a 10 mm shift moves the penumbra of the 60 mm field very close to the edge of the sensitive volume, which causes a larger difference in the chamber reading. As a conclusion, Δmisalign is size-dependent for large misalignments. It is advisable to minimize any misalignments and achieve a precision in every setup below 2 mm. The validation of a measurement series is done by comparison to baseline data and calculating the derivation δ(d). It is important to keep in mind that baseline data represent a snapshot in time at commissioning. Errors like misalignment, change in output, and uncertainty in MU delivery also will contribute to baseline data. Within this limitation, tolerance values (action levels for QA) in line with Iris technical specifications are established by means of analytical calculation and measurements. Both approaches are in good agreement. Small differences are found for small collimators of 5 mm, 7.5 mm, and 10 mm. This is due to the fact that the OFs are not measured but calculated by interpolation between adjacent Iris apertures sizes. For smaller collimators this has a larger effect because of the increasing gradient of the OF function (Figure 2). In measurements with the LAC, the Iris collimator displays stable performance, with Iris aperture sizes well within the tolerance limits and high stability over 22 months. Noteworthy, especially small apertures (5 mm, 7.5 mm, and 10 mm) have a much higher precision/repeatability than indicated by the manufacturer. Regarding the clinical use of the smallest apertures, one must take into account two other Iris characteristics beyond basic field size QA: first, the same absolute deviation in aperture size means a higher uncertainty in total dose to the target or patient, which is better represented by percentage deviations in our measurement series, e.g., 0.1 mm corresponds to 0.34% for a 60 mm field but 2.7% for a 7.5 mm field, second, the treatment planning system assumes circular fields and for small collimators, the deviation between the circular field and the real 12-sided field is larger [1]. Keeping all these arguments in mind, small Iris collimators can be clinically used in a moderate and adequate way. 
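As a back-of-the-envelope check of the percentage figures quoted above (our own illustration, assuming that the measured dose-area product scales approximately with the square of the field diameter, so that a small diameter change Δd produces a relative DAP change of about 2Δd/d):

$$\delta \approx 2\,\frac{\Delta d}{d}:\qquad 2\cdot\frac{0.1\ \mathrm{mm}}{60\ \mathrm{mm}} \approx 0.33\,\%,\qquad 2\cdot\frac{0.1\ \mathrm{mm}}{7.5\ \mathrm{mm}} \approx 2.7\,\%,$$

in line with the 0.34% and 2.7% quoted above.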
As a visual summary of both the QA data and the impact of key uncertainty factors, Figure 9 compares data acquired with methodical errors to our QA results (red open squares: misalignment of 10 mm; yellow open circles: detuned beam), measured tolerance levels (grey dashed lines), and the QA data aquired during 22 months (black dots). All modified data are within tolerance levels and agree with maximal deviations of the long-term QA. In this manner, the LAC method is demonstrated to be robust against minor errors of the operator and important technical disturbances. Figure 9: Comparison Misalignment (10 mm, red open squares), beam tune (yellow open circles), and long-term QA measurements (black dots). The grey dashed lines are the measured tolerance levels. Both setup and measurement with the LAC are straightforward and take less than an hour; so the method can easily be implemented in clinical daily life. The informative value is high because several measurement values are obtained per aperture size, and a mean value is calculated. As a comparison, the film-based standard technique takes several hours, and only one film measurement per aperture size is acquired. Due to these characteristics, the LAC can be considered superior. To conclude, the LAC method is capable for accurate determination of field size changes by measuring DAP and comparing with reference data acquired at time of commissioning. Characteristics of the LAC method are stable and reproducible results, ease of use, and reasonable measurement time commitment of less than one hour. The methodical error is as low as 0.2%. Major error contributions originate from a variation in linac output, uncertainty in MU delivery, and misalignment of the LAC relative to the primary beam axis. As a further result, the Iris has a high reproducibility with a reliable and stable functionality over 22 months. Echner G G, Kilby W, Lee M, et al: The design, physical properties and clinical utility of an iris collimator for robotic radiosurgery. Phys Med Biol. 2009, 54:5359–5380. 10.1088/0031-9155/54/18/001 Validation and use of the "Iris Quality Assurance Tool". (2013). Accessed: April 5 2016: http://therss.org/document/docdownload.aspx?docid=1312. Evaluation of a prototype optical image-based measurement tool for routine quality assurance of field size for the CyberKnifeTM IRIS collimation system. (2012). Accessed: April 5 2016: http://www.therss.org/document/docdownload.aspx?docid=1066. CyberKnife iris beam QA using fluence divergence. (2012). Accessed: April 5 2016: http://logosvisionsystem.com/downloads/IBACBeamDivergenceReport.pdf. IBAC fluence to film dose FWHM comparison. (2013). Accessed: April 5 2016: http://logosvisionsystem.com/downloads/FluenceDoseFWHMComparison.pdf. Dose-area product as a method for small field geometric QA. (2013). Accessed: April 5 2016: https://www.aapm.org/meetings/2013AM/PRAbs.asp?mid=77&aid=22378. Djouguela A, Harder D, Kollhoff R, Rühmann A, Willborn KC, Poppe B: The dose-area product, a new parameter for the dosimetry of narrow photon beams. Z Med Phys. 2006, 16:217-227. 10.1078/0939-3889-00317 Sarah-Charlotta Heidorn Corresponding Author Medical Physicist, European CyberKnife Center Munich Nikolaus Kremer Christoph Fürweger Chief Medical Physicist, European CyberKnife Center Munich Clinic for Stereotaxy and Neurosurgery, University Hospital Cologne 10.7759/cureus.618 Heidorn S, Kremer N, Fürweger C (May 21, 2016) A Novel Method for Quality Assurance of the Cyberknife Iris Variable Aperture Collimator. 
Cureus 8(5): e618. doi:10.7759/cureus.618 Received by Cureus: April 03, 2016 Peer review began: April 06, 2016 Peer review concluded: May 20, 2016 Heidorn et al. This is an open access article distributed under the terms of the Creative Commons Attribution License CC-BY 3.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization Jiaoyun Yang1, Haipeng Wang1, Huitong Ding1, Ning An1 & Gil Alterovitz2 BMC Bioinformatics volume 18, Article number: 47 (2017) Cite this article Visualizing data by dimensionality reduction is an important strategy in Bioinformatics, which could help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods could not be directly applied on biobricks datasets. Hereby, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. Clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracy for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated to discriminate biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobircks are successfully visualized in two dimensional space. Various types of biobricks could be discriminated and inappropriately labeled biobricks could be determined, which could help to assess crowdsourcing-based synthetic biology databases' quality, and make biobricks selection. In synthetic biology, one of the most important tasks is to assemble various standardized gene segments, i.e. biobricks, to form artificial biological devices with specific functions [1, 2]. Therefore, the establishment of biobricks database appears to be particularly important. Due to the rapid development of this area, numerous amount of biobricks have been generated, e.g. about 30,000 biobricks in Registry of Standard Biological Parts (http://parts.igem.org/Main_Page) [3]. This brings several challenges for this domain. One is that the database is constructed by crowdsourcing strategy [4], which means anyone could contribute to the database, hence it could not guarantee biobricks' quality. Another one is that so many biobricks make it hard to choose one for constructing devices. In order to overcome above issues, analytical methodology is urgently needed to make quality assessment and interpretation. An efficient method is to reduce the dimensions of biobricks and visualize them in two or three dimensional spaces, then some patterns may emerge, e.g. similar data would flock together to become clusters, and they could be easily observed in the graph. This could help to get a first impression of properties or the quality of the database. Dimensionality reduction for visualization has been successfully applied in many areas, e.g. microarray data analysis, etc. [5, 6]. In the visualized graph, a point denotes an item, e.g. gene, biorbrick, etc. The distance between points usually represents the similarity. Hence, the closer two biobricks are in the graph, the more similarity they have. It has been showed that similar genes have similar functions or structures [7]. There are various types of biobricks, corresponding to different functions, e.g. 
promoters initiate transcription of a particular gene, primers are used as a starting point for PCR amplification or sequencing, etc. If a biobrick is visualized among some biobricks with different types, it may be marked with inappropriate types. Besides, when users select a biobrick in the graph, they could also find other biobricks with similar functions, which could help to determine an appropriate biobrick to use. There are mainly two categories of methods for dimensionality reduction. One is feature selection, which is to select a subset of features to represent the whole samples [8, 9]. If applied here, it would be choosing two or three gene sites as representative of the biobricks. As biobricks are gene segments with length ranging from several hundred to several thousand, only using two or three gene sites to denote the whole segments would lose most of the information and is unreliable. The other category of methods for dimensionality reduction is feature extraction, which builds derived features by mapping features from high dimensional space to low dimensional space. There are essential difference between feature selection and feature extraction. The former one just selects a subset of original features, while the latter one needs to generate new features, which are totally different from original features. Therefore, feature extraction is more suitable for biobricks' dimensionality reduction than feature selection. Feature extraction methods could be grouped into two categories, linear dimensionality reduction and nonlinear dimensionality reduction [10]. The features derived by linear dimensionality reduction could be regarded as linear combinations of original features. A classical linear dimensionality reduction method is principal component analysis (PCA), which first constructs data covariance (or correlation) matrix, and then applies eigenvalue decomposition to obtain mapped results [11, 12]. As an unsupervised learning method, PCA is widely used to deal with large scale unlabeled data. However an issue emerges when applying PCA. Biobricks are gene segments with various lengths, while data covariance matrix consists of covariance of two samples and requires the identical dimension of various samples. Therefore, it is impractical to construct covariance matrix based on these biobricks. Multi-Dimensional Scaling (MDS) and its improved linear methods first construct a distance matrix on the dataset and then embed the data in low dimensions by eigendecomposition [13]. Current distance matrix is evaluated in Euclidean space, which requires to conduct numerical operations on data with identical dimension. Biobricks are represented as sequences with various lengths in computer, besides numerical operation on the characters in biobricks could not represent the similarity between biobricks, therefore current distance matrix could not be applied on biobricks. Nonlinear dimensionality reduction is mainly based on manifold learning and could handle data's nonlinear property. One kind of these methods are established on the extension of linear methods. For example, kernel PCA extends PCA by applying a kernel function to the original data and then performing PCA process [14]. Isomap is an extension of MDS and tries to maintain the intrinsic geometry of by adopting an approximation of the geodesic distance on the manifold, where the geodesic distance is calculated by summing the Euclidean distances along the shortest path between two nodes [15]. 
Since linear methods are not suitable for processing biobricks and this kind of methods still involve linear methods, they are also not the right choice for handling biobricks. Another kind of nonlinear dimensionality reduction methods adopt various strategies to capture the geometry structure and apply eigendecomposition to maintain the structure in a lower-dimensional embedding of the data. The classical methods include Local Linear Embedding (LLE), Laplacian Eigenmaps, etc. LLE assumes each sample could be represented as the linear combination of its local neighbor samples, and tries to find an embedding that could preserve the local geometry in the neighborhood of each data point [16]. Some methods are proposed to improve LLE's quality, such as Hessian Locally-Linear Embedding (HLLE) [17], Modified Locally-Linear Embedding (MLLE) [18], etc. However, when applying these methods here, an issue emerges that it is usually hard to use a combination of gene segments to denote another segment, especially when the lengths are not identical. Laplacian Eigenmaps is according to the assumption that the Laplacian of the graph obtained from the data points may be reviewed as an approximation to the Laplace-Beltrami operator defined on the manifold [19, 20]. This method regards each data point as a node in a graph, and the connection of nodes is based on k-nearest neighbor strategy. It needs to calculate the Euclidean distance to construct the graph, therefore it faces the issue that Euclidean distance is not applicable for gene segments. From the above analysis, we can see that current dimensionality reduction methods could not be directly applied to biobricks, and it is mostly because of the coordinate calculation for various purposes. Among these purposes, there is a specific one that coordinate calculation is used to measure the similarities of samples and help to find the neighborhood, including MDS, Isomap, Laplacian Eigenmaps. We could find alternative methods for biobricks' similarity calculation. Actually edit distance is a widely used measurement for gene similarity, and it is equal to the minimum number of operations required to transform one gene sequence into the other. Therefore, edit distance could be combined with this specific group of method to reduce biobricks' dimensionality. In this paper, we propose to combine edit distance with dimensionality reduction methods for biobricks' visualization. By adopting normalized edit distance to construct similarity matrix, both Isomap and Laplacian Eigenmaps successfully accomplish biobricks' dimensionality reduction, and visualize the dataset derived from Registry of Standard Biological Parts. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated to discriminate biobricks. Furthermore, clustering algorithm K-means is applied on the dimensionality reduction results to quantify the dimensionality reduction performance. The average clustering accuracy for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively, which indicate that the proposed dimensionality reduction methods could preserve the underlying structure of biobricks, and the visualization results could reflect the relationships among biorbicks. The rest of this paper is organized as follows. We first formulate the dimensionality reduction problem for synthetic biology, and then describe the edit distance and how to combine edit distance with Isomap and Laplacian Eigenmaps. 
After that, the dataset used in this paper will be introduced and the visualization and clustering results will be illustrated in the results and discussion section. Finally, we summarize the paper. In this section, we first formulate the dimensionality reduction problem. Then we introduce the normalized edit distance and how to combine it with dimensionality reduction methods. Problem formulation Let X = {x_1, x_2, …, x_n} be a set of DNA sequences defined on a finite alphabet Σ = {A, T, C, G}, where x_i = s_1 s_2 … s_{|x_i|} represents a biobrick with length |x_i|. The dimensionality reduction problem for synthetic biology is to find a vector set Y = {y_1, y_2, …, y_n}, where y_i is the reduction result of x_i, and these vectors satisfy: 1) ∀i (1≤i≤n), y_i is a k-dimensional vector that could be represented in Euclidean space; 2) Vectors in Y should maintain the underlying structure among the biobricks in X. For the first constraint, the value of k is usually 2 or 3, so that the vector could be visualized in a 2-D or 3-D space. For the second constraint, the most common underlying structure among the original dataset is a manifold. In order to capture the structure, various algorithms set different optimization functions, and convert the problem into an optimization problem to achieve the reduction results. Another important difference among these algorithms is the way of constructing the similarity matrix. In this paper, we focus on Isomap and Laplacian Eigenmaps, and the detailed process will be discussed in the next section. Normalized edit distance Assume x_i and x_j are two biobricks in X, and their lengths are |x_i| and |x_j|, respectively. The edit distance d(x_i, x_j) is defined as the minimum total weight of edit operations that transform x_i into x_j, where the edit operation could be insertion, deletion, substitution, etc. The classical algorithm for edit distance calculation is dynamic programming, which recursively constructs a score matrix T with size (|x_i|+1) × (|x_j|+1). In matrix T, T[p_i, p_j] contains the edit distance of the prefixes x_i[1…p_i] and x_j[1…p_j]. If we let w_ins, w_del, w_sub, and w_mat denote the weights of the insertion, deletion, substitution, and match operations, the recursive formula is as follows: $$ T\left[p_{i},p_{j}\right]=\min \left\{\begin{array}{ll} T\left[p_{i}-1,p_{j}\right]+w_{del} & \\ T\left[p_{i},p_{j}-1\right]+w_{ins} & \\ T\left[p_{i}-1,p_{j}-1\right]+w_{sub} & \textrm{if } x_{i}\left[p_{i}\right]\neq x_{j}\left[p_{j}\right]\\ T\left[p_{i}-1,p_{j}-1\right]+w_{mat} & \textrm{if } x_{i}\left[p_{i}\right]= x_{j}\left[p_{j}\right]\\ \end{array} \right. $$ For example, let w_del, w_ins, and w_sub be equal to 1, and w_mat be equal to 0. Figure 1 illustrates the dynamic table T for the DNA sequences ATCAGTA and TCGACTA, where the values are calculated based on Eq. 1. The edit distance of these two sequences is 3, i.e. the value in cell T[7,7]. It denotes that at least 3 edit operations are needed to transform ATCAGTA into TCGACTA. Dynamic table T for calculating the edit distance between the DNA sequences ATCAGTA and TCGACTA. The optimal edit distance is 3, i.e. the value in cell T[7,7]. The length of biobricks varies a lot, ranging from several hundred to several thousand. Thus there are huge differences between edit distances of various biobricks. For example, the edit distance of two biobricks with lengths 1000 and 800 is at least 200, while the edit distance of two biobricks with lengths 200 and 100 is at most 200.
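A minimal Python sketch of the dynamic-programming recursion in Eq. (1), using unit weights for insertion, deletion, and substitution and zero weight for a match as in the ATCAGTA/TCGACTA example above, together with the length normalization defined formally in the next paragraph (an illustrative reimplementation, not the code used in this study):

```python
# Edit distance by dynamic programming (unit weights, match weight 0), cf. Eq. (1).
def edit_distance(x, y, w_ins=1, w_del=1, w_sub=1, w_mat=0):
    n, m = len(x), len(y)
    # T has size (n+1) x (m+1); T[i][j] is the distance between x[:i] and y[:j].
    T = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        T[i][0] = T[i - 1][0] + w_del
    for j in range(1, m + 1):
        T[0][j] = T[0][j - 1] + w_ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = w_mat if x[i - 1] == y[j - 1] else w_sub
            T[i][j] = min(T[i - 1][j] + w_del,        # deletion
                          T[i][j - 1] + w_ins,        # insertion
                          T[i - 1][j - 1] + diag)     # substitution or match
    return T[n][m]

def normalized_edit_distance(x, y):
    """Normalize by the length of the longer sequence, as defined in the next paragraph."""
    return edit_distance(x, y) / max(len(x), len(y))

print(edit_distance("ATCAGTA", "TCGACTA"))                        # 3, as in Figure 1
print(round(normalized_edit_distance("ATCAGTA", "TCGACTA"), 3))   # 0.429
```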
The former distance (between the two long biobricks) is larger than the latter one (between the two short biobricks). However, we could not draw the conclusion that the former two biobricks are less similar than the latter two biobricks. Therefore, we adopt a normalized distance to represent the edit distance, which is defined as follows: $$ n\_d(x_{i},x_{j})=\frac{d(x_{i},x_{j})}{\max(length(x_{i}), length(x_{j}))} $$ Where d(x_i, x_j) represents the edit distance of x_i and x_j, and max(length(x_i), length(x_j)) denotes the maximum of the lengths of x_i and x_j. Based on n_d(x_i, x_j), a matrix M could be constructed as Eq. 3, where M_ij represents the normalized edit distance of x_i and x_j, i.e. M_ij = n_d(x_i, x_j). $$ M= \left(\begin{array}{cccc} M_{11} & M_{12} &\dots & M_{1n} \\ M_{21} & M_{22} &\dots & M_{2n} \\ \vdots & \vdots &\ddots & \vdots \\ M_{n1} & M_{n2} &\dots & M_{nn} \\ \end{array} \right) $$ Isomap with normalized edit distance The Isomap algorithm maintains the manifold structure by optimizing the following function: $$ \min \left(\sum_{i=1}^{n}\sum_{j=1}^{n} \left({S_{ij}} - ||y_{i}-y_{j}||\right)^{2}\right)^{\frac{1}{2}} $$ Where S_ij is the similarity of x_i and x_j, and y_i and y_j denote the reduction results of x_i and x_j, respectively. The dimensionality reduction problem is converted to an optimization problem, and Isomap solves it through three main steps. The first step is to establish the neighborhood graph, which could be constructed based on matrix M. Each node in the graph represents a biobrick, and if x_j is one of the K nearest neighbors of x_i, there is an edge connecting x_i and x_j with weight M_ij. Otherwise, the weight between x_i and x_j is set to infinity. In other words, Isomap reconstructs matrix M by replacing the value M_ij by infinity if x_j is not one of the K nearest neighbors of x_i. The second step is to calculate the shortest path between x_i and x_j to approximate the geodesic distance, and the shortest-path distance is used to represent the similarity of x_i and x_j. There have been many successful algorithms to find the shortest path, among which Floyd's algorithm is a classical one. It performs the following process: for each value k=1,2,…,n in turn, replace the value of S_ij by min{S_ij, S_ik + S_kj}, where the initial value of S_ij is the same as the reconstructed matrix in the first step. After obtaining matrix S, each value in S is squared before proceeding to the next step. The third step is to construct the low-dimensional embedding, which is done by the eigendecomposition of matrix G. G is constructed based on Eq. 5. $$ {G_{ij}}=-\frac{1}{2}\left({S_{ij}}-\frac{1}{n}D_{i}-\frac{1}{n}D_{j}+\frac{1}{n^{2}}\sum_{k=1}^{n}D_{k}\right) $$ Where D_i is computed according to Eq. 6. $$ D_{i}=\sum_{1\leq j\leq n} {S_{ij}} $$ Assume λ_p is the p-th eigenvalue (in decreasing order) of matrix G, and v_p^i is the i-th component of the p-th eigenvector. Then the p-th component of the embedding result y_i for sample x_i is equal to √(λ_p)·v_p^i. Laplacian eigenmaps with normalized edit distance Different from the Isomap algorithm, Laplacian Eigenmaps employs the following function (Eq. 7) as the optimization objective. $$ \min \sum_{i=1}^{n}\sum_{j=1}^{n} ||y_{i} - y_{j}||^{2}{S_{ij}} $$ Where S_ij is the similarity of x_i and x_j, and y_i and y_j represent the reduction results of x_i and x_j, respectively. Note that the similarity matrix S here is different from Isomap's similarity matrix S.
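Before describing how Laplacian Eigenmaps constructs its similarity matrix, the Isomap procedure described above can be summarized in a short sketch. This is an illustrative reimplementation operating on a normalized edit-distance matrix M; the toy matrix, the neighborhood size, and the two-dimensional target are assumptions for demonstration, not the study's Matlab implementation.

```python
# Sketch of Isomap on a normalized edit-distance matrix M (n x n, symmetric):
# kNN graph -> Floyd shortest paths (geodesic approximation) -> squared distances
# -> double centering -> eigendecomposition (classical MDS embedding).
import numpy as np

def isomap_from_distances(M, k_neighbors, n_components=2):
    n = M.shape[0]
    # 1) kNN graph: keep the k smallest distances per row, set the rest to infinity.
    S = np.full((n, n), np.inf)
    for i in range(n):
        idx = np.argsort(M[i])[:k_neighbors + 1]      # includes i itself (distance 0)
        S[i, idx] = M[i, idx]
    S = np.minimum(S, S.T)                            # symmetrize the graph
    # 2) Floyd-Warshall shortest paths approximate geodesic distances
    #    (assumes the kNN graph is connected).
    for k in range(n):
        S = np.minimum(S, S[:, k:k + 1] + S[k:k + 1, :])
    # 3) Double centering of the squared distances, then eigendecomposition.
    S2 = S ** 2
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ S2 @ J
    eigvals, eigvecs = np.linalg.eigh(G)
    order = np.argsort(eigvals)[::-1][:n_components]  # largest eigenvalues first
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# Usage with a toy 4 x 4 normalized edit-distance matrix (placeholder values).
M_toy = np.array([[0.0, 0.2, 0.8, 0.9],
                  [0.2, 0.0, 0.7, 0.8],
                  [0.8, 0.7, 0.0, 0.1],
                  [0.9, 0.8, 0.1, 0.0]])
Y = isomap_from_distances(M_toy, k_neighbors=2)
print(Y.shape)   # (4, 2)
```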
Laplacian Eigenmaps applies a kernel function on matrix M to achieve S. A widely used kernel function is the Gaussian kernel, which is defined as Eq. 8. $$ f(M_{ij})= e^{-\frac{M_{ij}^{2}}{2\sigma^{2}}} $$ After constructing the similarity matrix S, Laplacian Eigenmaps solves the problem by applying eigendecomposition to matrix G, where G=D−S and D is a diagonal matrix with the values D_1, D_2, …, D_n on the diagonal. D_i could be calculated based on Eq. 6. The final embedding result y_i consists of the i-th components of the first k eigenvectors. Figure 2 illustrates the comparison of Isomap and Laplacian Eigenmaps in terms of optimization function, procedures and reduction results. Both algorithms share some steps, such as calculating the normalized edit-distance matrix M, computing the diagonal matrix D, and applying eigendecomposition to matrix G. The main differences are the way of constructing the similarity matrix S, the matrix G, and the way of obtaining the reduction results. The first difference is because Isomap employs the geodesic distance to denote the similarity, and the latter two differences are due to the different optimization functions. Comparison of Isomap and Laplacian Eigenmaps. The comparison is performed in terms of optimization functions, procedures and reduction results, where y_i[p] denotes the p-th component of y_i. In this section, the synthetic biology dataset is first introduced, and then we illustrate the dimensionality reduction results obtained by applying Isomap and Laplacian Eigenmaps to the dataset. In addition, a clustering algorithm is applied to the dimensionality reduction results to validate the performance. Datasets and implementation details The dataset is obtained from the 'Registry of Standard Biological Parts' (http://parts.igem.org/), which is a publicly available synthetic biology database for storing biobricks. There are mainly 26 categories of biobricks. The category information is obtained from each part's XML files provided by the registry. According to the official site description, these various categories of biobricks belong to 11 types, which means some types have several subtypes. Without loss of generality, four different types of biobricks are selected to validate the algorithms, including protein generators, protein domains, ribosomal binding sites (RBS), and primers. The numbers of these four types of biobricks are 500, 500, 300, and 500, respectively. There are only 300 RBS in the database, so these numbers are not equal. More experiments about other types of biobricks are included in Additional file 1. These four different types of biobricks correspond to various functions. Protein generators are parts or devices used for generating proteins. Protein domains are conserved parts of given protein sequences and could make up a protein coding sequence with the rest of protein chains. An RBS is a sequence of nucleotides upstream of the start codon of an mRNA transcript. A primer is a short single-stranded DNA sequence used as a starting point for PCR amplification or sequencing. The presented methods were implemented in Python 2.7.11 and Matlab 8.4.0 (R2014b). The Python package levenshtein was used to compute the edit distance between each pair of genes. The dimensionality reduction algorithms Isomap and Laplacian Eigenmaps were implemented in Matlab. At the end of the pipeline, we evaluated the accuracy using the results of K-means, which was implemented in Matlab. Isomap needs to adopt the kNN algorithm; the parameter K is set to 30% of the dataset size.
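For completeness, the Laplacian Eigenmaps stage described above can be sketched in the same style (again an illustrative reimplementation on a toy matrix; the kernel width below is a placeholder, and the σ actually used in the study is given next):

```python
# Sketch of Laplacian Eigenmaps on a normalized edit-distance matrix M:
# Gaussian kernel -> degree matrix D -> graph Laplacian G = D - S -> eigendecomposition.
import numpy as np

def laplacian_eigenmaps_from_distances(M, sigma, n_components=2):
    S = np.exp(-(M ** 2) / (2.0 * sigma ** 2))   # similarity via the Gaussian kernel of Eq. (8)
    D = np.diag(S.sum(axis=1))                   # diagonal degree matrix
    G = D - S                                    # symmetric graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(G)         # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue 0); keep the next n_components.
    return eigvecs[:, 1:n_components + 1]

# Usage with the same toy distance matrix as before (placeholder values and kernel width).
M_toy = np.array([[0.0, 0.2, 0.8, 0.9],
                  [0.2, 0.0, 0.7, 0.8],
                  [0.8, 0.7, 0.0, 0.1],
                  [0.9, 0.8, 0.1, 0.0]])
Y = laplacian_eigenmaps_from_distances(M_toy, sigma=0.3)
print(Y.shape)   # (4, 2)
```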
For Laplacian Eigenmaps, which applies the Gaussian kernel, the parameter σ is set to 0.3. Dimensionality reduction results We first conduct dimensionality reduction on various combinations of these four types of biobricks with the Isomap and Laplacian Eigenmaps algorithms. Thus, these various types of biobricks with different lengths are reduced to two-dimensional vectors. Then, these reduction results are visualized in graphs. Figures 3 and 4 illustrate the 2-D visualization results for Isomap and Laplacian Eigenmaps, respectively. Each subfigure shows the visualization of two different types of biobricks, where protein generators, primers, protein domains, and RBS are marked with red color, green color, blue color, and yellow color, respectively. 2-D visualization for the dimensionality reduction results by applying Isomap algorithm. The datasets are composed of various combinations of Protein generators, Primers, Protein domains, and RBS, where R, G, B, Y denote red color, green color, blue color, and yellow color, respectively. a Protein generators (R) and Primers (G), b Protein generators (R) and Protein domains (B), c Protein generators (R) and RBS (Y), d Primers (G) and Protein domains (B), e Protein domains (B) and RBS (Y), f Primers (G) and RBS (Y) 2-D visualization for the dimensionality reduction results by applying Laplacian Eigenmaps algorithm. The datasets are composed of various combinations of Protein generators, Primers, Protein domains, and RBS, where R, G, B, Y denote red color, green color, blue color, and yellow color, respectively. a Protein generators (R) and Primers (G), b Protein generators (R) and Protein domains (B), c Protein generators (R) and RBS (Y), d Primers (G) and Protein domains (B), e Protein domains (B) and RBS (Y), f Primers (G) and RBS (Y) The visualization results demonstrate that various combinations of these biobricks can be separated after dimensionality reduction. The distribution of these combinations varies a lot. Generally speaking, the results generated by Laplacian Eigenmaps are more concentrated than those of Isomap. This may be because Isomap adopts a shortest path algorithm to calculate the similarity, while Laplacian Eigenmaps applies a Gaussian kernel to the similarity matrix, and some similarities may become zero after this process, which makes the points in the graph more concentrated. Similar findings can be observed for other types of biobricks [see Additional file 1: Figures S1 and S2]. Among all the combinations, the combination of protein generators and primers achieves the best discrimination for both Isomap and Laplacian Eigenmaps, which means protein generators and primers are the most dissimilar among these six combinations. Combinations of protein generators and RBS, and of primers and protein domains, also obtain promising discrimination, while combinations of protein generators and protein domains, and of primers and RBS, are not easy to distinguish, which means many of these biobricks share some similarities. In addition, we find that even for a single type of biobrick, the visualization may present several clusters. For example, there are three obvious clusters for primers in subfigures (a), (d) and (f) of Figs. 3 and 4 (marked with green color). These clusters denote different types of primers. One type consists of inter-strain nested primers. Another type is used for genomic integration and expression of Green fluorescent protein under the control of various promoters.
Actually these biobricks might not be appropriately labeled as primers according to their function. In Figs. 3(a) and 4(a), they are closer to protein generators and are even mixed in with them. Apart from inappropriately labeled primers, there are also some other inappropriately labeled biobricks. Figures 3 and 4 show that some protein generators are closer to primers or RBS. Actually these protein generators are only composed of a promoter and an RBS, e.g. BBa_K143050, etc., and they do not contain any coding sequences. Therefore, they might not be suitable to be labeled as protein generators. In Figs. 3(c) and 4(c), some RBS are mixed with protein generators. When checking the biobricks' documents, some of them have coding sequences, e.g. BBa_K079013, etc., and some of them do not have any explanations, e.g. BBa_K294120, etc. This demonstrates that the visualization can help to determine whether the biobricks are appropriately labeled. Besides, similar biobricks are close to each other in the graph. Users can find a set of biobricks for a specific function in the graph and choose the best one for their purpose. Furthermore, we tested the 3-D visualization results of Isomap and Laplacian Eigenmaps by mixing any three types of biobricks together. Figures 5 and 6 demonstrate the results. 3-D graphs can be viewed from any angle; however, we can only show them from a particular angle in the paper. The different types of biobricks still emerge as discriminated groups. Besides, there are also clusters, as in the 2-D graphs. For example, there are still three clusters of primers corresponding to different functions. Moreover, the distribution of inappropriately labeled biobricks is similar to that in the 2-D graphs. 3-D visualization for the dimensionality reduction results by applying Isomap algorithm. The datasets are composed of various combinations of Protein generators, Primers, Protein domains, and RBS, where R, G, B, Y denote red color, green color, blue color, and yellow color, respectively. a Protein generators (R), Primers (G) and Protein domains (B). b Protein generators (R), Primers (G) and RBS (Y). c Protein generators (R), Protein domains (B) and RBS (Y) 3-D visualization for the dimensionality reduction results by applying Laplacian Eigenmaps algorithm. The datasets are composed of various combinations of Protein generators, Primers, Protein domains, and RBS, where R, G, B, Y denote red color, green color, blue color, and yellow color, respectively. a Protein generators (R), Primers (G) and Protein domains (B), b Protein generators (R), Primers (G) and RBS (Y), c Protein generators (R), Protein domains (B) and RBS (Y) Time performance We also test the time performance of the Isomap and Laplacian Eigenmaps algorithms. Both algorithms need to calculate the edit distance, so this calculation is performed independently and is not included in this time comparison. Table 1 shows the results. Laplacian Eigenmaps consumes much less time than Isomap, being at least 5 times faster. According to Fig. 2, the step that differs most between these two algorithms is the calculation of the similarity matrix. Isomap needs to apply the knn algorithm and the shortest path algorithm to obtain the similarity matrix, while Laplacian Eigenmaps only applies the Gaussian kernel to the edit distance matrix. The knn and shortest path algorithms have larger time complexity than calculating the Gaussian kernel, and this results in much larger time consumption for Isomap than for Laplacian Eigenmaps.
For the combinations containing RBS, both algorithms take less time than for the other combinations; this is because the RBS set (300 samples) is smaller than those of the other biobricks. The running time of Isomap decreases more than that of Laplacian Eigenmaps, which means the time performance of Isomap is more sensitive to the data size than that of Laplacian Eigenmaps. Table 1 Comparison of Isomap and Laplacian Eigenmaps in terms of time consumption Clustering validation In order to quantify the dimensionality reduction results, we apply a classical clustering algorithm, K-means, to the results to determine how well the combinations of various types of biobricks can be discriminated. K-means is an unsupervised clustering algorithm that groups samples into different clusters based on distances between samples [21]. It performs the following procedure. (1) Randomly select K samples as the initial centroids. (2) For each sample i, compute the distance between sample i and all the centroids, find the centroid k with the smallest distance, and assign sample i to cluster k. (3) Recompute the centroid of each cluster by averaging all the samples in that cluster. (4) If the centroids changed compared with the previous centroids, go to step 2; otherwise, end the algorithm. Since the dimensionality reduction results are numerical vectors, we adopt the Euclidean distance to measure the distance. The parameter K is set to 2 since there are two types of biobricks in each combination. Clustering accuracy is defined as Eq. 9, where N and $c_i$ denote the number of samples and the number of samples that are correctly assigned to the i-th cluster, respectively. $$ Accuracy =\frac{\sum_{i=1}^{k} c_{i}}{N} $$ Table 2 shows the clustering accuracies obtained by applying the K-means algorithm to the 2-D dimensionality reduction results generated by Isomap and Laplacian Eigenmaps. Protein generators and Primers achieve the best clustering accuracy, while Protein generators and Protein domains obtain the worst clustering accuracy, which is consistent with the visualization results. The clustering accuracy for Isomap is better than that of Laplacian Eigenmaps except for Protein domains and RBS; this may be because Laplacian Eigenmaps applies the Gaussian kernel to the distance matrix, and some of the resulting similarities become 0. Actually, this property also makes the visualization results more concentrated. The average accuracies of these six datasets are 0.857 and 0.844 for Isomap and Laplacian Eigenmaps, respectively. Table 2 Clustering accuracy comparison of 2-D dimensionality reduction results by Isomap and Laplacian Eigenmaps Table 3 demonstrates the clustering accuracies on the 3-D dimensionality reduction results obtained by Isomap and Laplacian Eigenmaps. The average accuracies of these datasets generated by Isomap and Laplacian Eigenmaps are 0.927 and 0.928, respectively, which validates the effectiveness of the dimensionality reduction methods and indicates that different types of biobricks can be easily separated after visualizing them in one graph. Clustering results on other types of biobricks can be found in Additional file 1: Table S1. The average accuracy shows how well different types of biobricks can be separated. Isomap and Laplacian Eigenmaps differ slightly in accuracy. This difference is caused by the different ways of calculating the similarity matrices.
Isomap first applies the knn algorithm to construct the neighbor graph and then adopts the shortest path algorithm to obtain the similarity matrix, while Laplacian Eigenmaps only applies the Gaussian kernel to the edit distance matrix. After applying the Gaussian kernel, some of the resulting similarities may become 0. This operation may cause information loss; however, it can make the graph more concentrated, which helps to discriminate biobricks. Besides, Laplacian Eigenmaps is much faster than Isomap. Therefore, Laplacian Eigenmaps is more suitable for handling large datasets. In addition, classification validation is also conducted on these biobricks, and the results can be found in Additional file 1: Table S2. In this paper, we propose to combine the normalized edit distance with Isomap and Laplacian Eigenmaps for biobricks' dimensionality reduction and visualization. The visualization results illustrate that different types of biobricks can be easily distinguished by applying the proposed method, and some inappropriately labeled biobricks can be identified. Besides, the K-means algorithm is adopted to quantify the dimensionality reduction results. The average clustering accuracies over the six combinations of biobricks are 0.857 and 0.844 for the two proposed algorithms, respectively. This validates that different types of biobricks can be separated in the visualized graph by applying the proposed dimensionality reduction methods. It also implies that the visualization could help to assess the quality of biobricks in the crowdsourcing-based synthetic biology database. LLE: Local linear embedding LE: Laplacian Eigenmaps MDS: Multi-dimensional scaling PCA: Principal component analysis RBS: Ribosomal binding site Benner SA, Sismour AM. Synthetic biology. Nat Rev Genet. 2005; 6(7):533–43. De Lorenzo V, Serrano L, Valencia A. Synthetic biology: challenges ahead. Bioinformatics. 2006; 22(2):127–8. Endy D. Foundations for engineering biology. Nature. 2005; 438(7067):449–53. Smolke CD. Building outside of the box: iGEM and the BioBricks Foundation. Nat Biotechnol. 2009; 12:1099–102. Bartenhagen C, Klein HU, Ruckert C, et al. Comparative study of unsupervised dimension reduction techniques for the visualization of microarray gene expression data. BMC Bioinforma. 2010; 11(1):1. Pochet N, De Smet F, Suykens JA, et al. Systematic benchmarking of microarray data classification: assessing the role of non-linearity and dimensionality reduction. Bioinformatics. 2004; 20(17):3185–95. Mount DW. Bioinformatics: Sequence and genome analysis. Cold Spring Harbor: Cold Spring Harbor Laboratory Press; 2004. Lazar C, Taminau J, Meganck S, et al. A survey on filter techniques for feature selection in gene expression microarray analysis. IEEE/ACM Trans Comput Biol Bioinformatics (TCBB). 2012; 9(4):1106–19. Saeys Y, Inza I, Larrañaga P. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007; 23(19):2507–17. Van Der Maaten L, Postma E, Van den Herik J. Dimensionality reduction: a comparative review. J Mach Learn Res. 2009; 10:66–71. Jolliffe I. Principal component analysis. United States: John Wiley & Sons, Ltd: 2002. Yeung KY, Ruzzo WL. Principal component analysis for clustering gene expression data. Bioinformatics. 2001; 17(9):763–74. Mardia KV, Kent JT, Bibby JM. Multivariate analysis. London: Academic Press; 1980. Schölkopf B, Smola A, Müller KR. Kernel principal component analysis. In: International Conference on Artificial Neural Networks. Heidelberg: Springer Berlin: 1997. p. 583–8. Tenenbaum JB, De Silva V, Langford JC.
A global geometric framework for nonlinear dimensionality reduction. Science. 2000; 290(5500):2319–23. Roweis ST, Saul LK. Nonlinear dimensionality reduction by locally linear embedding. Science. 2000; 290(5500):2323–6. Donoho DL, Grimes C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci U S A. 2003; 100(10):5591–6. Zhang Z, Wang J. MLLE: Modified locally linear embedding using multiple weights. In: Advances in neural information processing systems. Canada: 2006. p. 1593–600. Belkin M, Niyogi P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In: NIPS Vol. 14. Canada: 2001. p. 585–91. Belkin M, Niyogi P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003; 15(6):1373–96. MacQueen J. Some methods for classification and analysis of multivariate observations. In: Proceedings of the fifth Berkeley symposium on mathematical statistics and probability. Vol. 1, No. 14. United States: 1967. p. 281–97. This work was supported partially by the National Natural Science Foundation of China (No. 61502135), the Programme of Introducing Talents of Discipline to Universities (No. B14025), and the Fundamental Research Funds for the Central Universities (No. JZ2015HGBZ0111). The funding bodies had no role in study design, data collection and analysis, or preparation of the manuscript. The datasets generated and/or analysed during the current study are available in the Registry of Standard Biological Parts repository (http://parts.igem.org/), which is a publicly available synthetic biology database. The proposed methods are implemented in Python and Matlab and are available from https://github.com/WhpHenry/dim_reduction. JY developed the methods. JY, NA and GA drafted the manuscript. HW and HD implemented the software and conducted the tests. All authors read and approved the final manuscript. School of Computer and Information, Hefei University of Technology, Tunxi Road, Hefei, 230009, China Jiaoyun Yang, Haipeng Wang, Huitong Ding & Ning An Harvard Medical School, Boston Children's Hospital, Boston, 02115, MA, USA Gil Alterovitz Correspondence to Ning An. This file contains more experiments on other types of biobricks. In addition, classification validation for the dimensionality reduction results is also included. These results are illustrated in two figures and two tables in the file. Figure S1: Dimensionality reduction results for various combinations of Plasmid backbones, Promoters, Terminators, Translational units, Protein generators, Primers by applying Isomap algorithm. Figure S2: Dimensionality reduction results for various combinations of Plasmid backbones, Promoters, Terminators, Translational units, Protein generators, Primers by applying Laplacian Eigenmaps algorithm. Table S1: Clustering accuracy comparison of dimensionality reduction results in Figures S1 and S2. Table S2: Classification accuracy comparison of dimensionality reduction results by Isomap and Laplacian Eigenmaps. (PDF 123 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Yang, J., Wang, H., Ding, H. et al. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization. BMC Bioinformatics 18, 47 (2017). https://doi.org/10.1186/s12859-017-1484-4 Edit distance Imaging, image analysis and data visualization
CommonCrawl
Homoclinic loop and multiple limit cycle bifurcation surfaces Author: L. M. Perko Journal: Trans. Amer. Math. Soc. 344 (1994), 101-130 MSC: Primary 58F14; Secondary 34C23, 34C37, 58F21 Abstract: This paper establishes the existence and analyticity of homoclinic loop bifurcation surfaces $\mathcal {H}$ and multiplicity-two, limit cycle bifurcation surfaces $\mathcal {C}$ for planar systems depending on two or more parameters; it determines the side of $\mathcal {H}$ or $\mathcal {C}$ on which limit cycles occur; and it shows that if $\mathcal {H}$ and $\mathcal {C}$ intersect, then typically they do so at a flat contact.
CommonCrawl
Symmetry and shape Santiago de Compostela, Spain Confirmed main speakers The Alekseevskii conjecture Christoph Böhm (Universität Münster, Germany) We will report on recent progress towards the classification of homogeneous Einstein metrics. While homogeneous Ricci-flat spaces are flat, homogeneous Einstein spaces with negative Einstein constant are diffeomorphic to a Euclidean space, as predicted by the Alekseevski conjecture. If time permits we will indicate some ideas of the proof. The classification of homogeneous Einstein spaces with positive Einstein constant is wide open, even though there are general existence and non-existence results. Some canonical metrics on Riemannian manifolds Giovanni Catino (Politecnico di Milano, Italy) In this talk I will review recent results concerning the existence of some canonical Riemannian metrics on closed (compact with no boundary) smooth manifolds. The constructions of these metrics are based on Aubin's local deformations and a variant of the Yamabe problem which was first studied by Gursky. Fano 3-folds and classification of constantly curved holomorphic 2-spheres of degree $6$ in the complex Grassmannian $G(2,5)$ Quo-Shin Chi (Washington University at St. Louis, USA) Harmonic maps from the Riemann sphere to Grassmannians (and, more generally, to symmetric spaces) arise in the sigma-model theory in Physics. Such maps of constant curvature constitute a prototypical class of interest, for which Delisle, Hussin and Zakrzewski proposed the conjecture that the maximal degree of constantly curved holomorphic 2-spheres in the (complex) $G(m, n)$ is $m(n-m)$ and confirmed it when $m=2$ and $n=4$ or $5$. Up to unitary equivalence, there is only one constantly curved holomorphic 2-sphere of maximal degree 4 in $G(2,4)$ by Jin and Yu. On the other hand, up to now, the only known example in the literature of constantly curved holomorphic 2-sphere of maximal degree $6$ in $G(2,5)$ has been the first associated curve of the Veronese curve of degree $4$. By exploring the rich interplay between the Riemann sphere and projectively equivalent Fano 3-folds of index $2$ and degree $5$, we prove, up to the ambient unitary equivalence, that the moduli space of (precisely defined) generic such 2-spheres is semialgebraic of dimension 2. All these 2-spheres are verified to have non-parallel second fundamental form except for the above known example. Special Hermitian metrics and suspensions Anna Fino (University of Torino, Italy) In the talk I will report some general results on existence of special hermitian structures, like balanced, SKT and generalized Kähler, on suspensions. In particular, I will show some recent results on compact solvmanifolds, in collaboration with Fabio Paradiso, and on toric suspensions of hyperKähler manifolds, in collaboration with Gueo Grantcharov and Misha Verbitsky. Some topological and geometrical consequences of positive intermediate Ricci curvature Luis Guijarro (Universidad Autónoma de Madrid, Spain) Intermediate Ricci curvature appears naturally when taking traces of the transversal Jacobi equation. We will show how conditions on its sign restrict the geometry and the topology of a manifold in two different situations: first, we will examine the interplay between the second fundamental form of submanifolds and its focal radius; second, we will use this to provide versions of Wilking's Connectivity Theorem and of Fraenkel's Theorem on the intersection of totally geodesic submanifolds. This is joint work with Fred Wilhelm (UCR). 
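For orientation, one common convention for the term used above (stated here as background; the talk may use an equivalent formulation): for a unit tangent vector $v$ and orthonormal vectors $e_1,\dots,e_k$ perpendicular to $v$, the $k$-th intermediate Ricci curvature is \[ \mathrm{Ric}_k(v;e_1,\dots,e_k)=\sum_{i=1}^{k}\langle R(e_i,v)v,\,e_i\rangle, \] so that $k=1$ recovers sectional curvature and $k=n-1$ the usual Ricci curvature; positive intermediate Ricci curvature means that this sum is positive for every such choice of $v$ and $e_1,\dots,e_k$.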
Polar actions on Damek-Ricci spaces Andreas Kollross (Universität Stuttgart, Germany) A proper isometric Lie group action on a Riemannian manifold is called polar if there exists a closed connected submanifold which meets all orbits orthogonally. Most known examples of polar actions are related to symmetric spaces. In this talk, after a short survey on classification results, I will talk about polar actions on Damek-Ricci spaces. Using an earlier result of S. Kim, Y. Nikolayevsky and J. Park on totally geodesic subspaces of Damek-Ricci spaces, I will give examples and present some partial classifications of polar actions on Damek-Ricci spaces. In particular, I will show that non-trivial polar actions exist on all Damek-Ricci spaces. Behaviour of pseudo-Kähler structures under holomorphic deformations Adela Latorre (Universidad Politécnica de Madrid, Spain) Let $M$ be a $2n$-dimensional differentiable manifold. A pseudo-Kähler structure on $M$ is a pair $(J,\omega)$, where $J$ is a complex structure and $\omega$ is a symplectic form that satisfy the compatibility condition \[ \omega(J\cdot, J\cdot) = \omega(\cdot,\,\cdot). \] When $g(\cdot,\,\cdot)=\omega(\cdot,\,J\cdot)$ is a positive definite metric, the manifold $(M,J,\omega)$ is called Kähler. Then, a well-known result of Kodaira and Spencer states that for every small deformation $J_t$ of the initial complex structure $J_0:=J$ one can find a symplectic form $\omega_t$ that makes $(M,J_t,\omega_t)$ a Kähler manifold for every sufficiently small $t\neq 0$. In this talk, we will see that a similar behaviour cannot be guaranteed for pseudo-Kähler manifolds, namely, when the initial metric $g$ is not positive definite. This will lead us to address the problem of finding conditions on the families of complex manifolds $(M,J_t)$ under which the existence of $\omega_t$ compatible with $J_t$ is preserved. Time permitting, we will also analyze this problem for other related structures, such as neutral Calabi-Yau metrics. Homogeneous Riemannian manifolds with nullity Carlos Olmos (Universidad Nacional de Córdoba, Argentina) We will speak about joint results with Antonio J. Di Scala and Francisco Vittone, about the structure of irreducible homogeneous Riemannian manifolds $M^n= G/H$ whose curvature tensor has non-trivial nullity. In a recent paper we developed a general theory to deal with such spaces. By making use of this theory we were able to construct the first non-trivial examples in any dimension. Such examples have the minimum possible conullity $k=3$. The key fact is the existence of a non-trivial transvection $X$ at $p$ (i.e. Killing fields $(\nabla X)_p = 0$) such that $X_p$ is not in the nullity subspace $\nu_p$ at $p$ but the Jacobi operator $R_{\cdot, X_p}X_p$ is zero. The nullity distribution $\nu$ is highly non-homogeneous in the sense that no non-trivial Killing field lie in $\nu$ and hence $\nu$ is not given by the tangent spaces of orbits of an isometry subgroup of $G$. The Lie algebra $\mathfrak g$ of $G$ is never reductive (in particular, $M$ is not compact). After surveying on these results we will present recent results that give a substantial improvement for the structure of homogeneous spaces in relation with the nullity. By some rather delicate argument we showed that there is always a transvection, possibly enlarging the presentation group $G$, in any direction $\nu _p$ of the nullity, for all $p\in M$. 
Moreover, such transvections form an abelian ideal $\mathfrak{a}$ of $\mathfrak{g}$ which implies, if $k=3$, that $G=\mathbb{R}^{n-1}\rtimes \mathbb{R}$ and $H$ is trivial. On the one hand, the leaves of the nullity are Euclidean spaces and the projection to the quotient space $M/\nu$ is never a Riemannian submersion. But on the other hand, the foliation given by the orbits of the normal subgroup $A\subset G$, associated to $\mathfrak{a}$ is intrinsically flat, contains the nullity foliation and the projection to the quotient is a Riemannian submersion. Finally, we will show simply connected examples with non-trivial topology and compact quotients. This answers a natural question. When is an orbifold, a manifold? Marco Radeschi (University of Notre Dame, USA) Riemannian orbifolds are metric spaces locally isometric to a quotient of a Riemannian manifold, by a finite group of isometries. For such spaces, one can define orbifold homotopy and (co)-homology groups that, unlike their standard counterparts, contain information about the local quotients. In this talk, I survey a few recent results about finding geometric and topological conditions on an orbifold, which ensure it is in fact a manifold. In particular, I will talk about a recent joint work with Christian Lange, proving that if an $n$-orbifold is $n/2$-connected in the orbifold topology, then it is a manifold. Totally geodesic submanifolds in Hopf-Berger spheres Alberto Rodríguez-Vázquez (Universidade de Santiago de Compostela, Spain) The classification of transitive Lie group actions on spheres was obtained by Borel, Montgomery, and Samelson in the forties. As a consequence of this, it turns out that apart from the round metric there are other Riemannian metrics in spheres which are invariant under the action of a transitive Lie group. These other homogeneous metrics in spheres can be constructed by modifying the metric of the total space of the complex, quaternionic or octonionic Hopf fibration in the direction of the fibers. In this talk, I will report on an ongoing joint work with Carlos Olmos (Universidad Nacional de Córdoba), where we classify totally geodesic submanifolds in Hopf-Berger spheres. These are those Riemannian homogeneous spheres obtained by rescaling the round metric of the total space of Hopf fibrations by a positive factor in the direction of the fibers. Quaternion Kähler manifolds of non-negative curvature Uwe Semmelmann (Universität Stuttgart, Germany) In my talk I will discuss a (still unproved) conjecture for quaternion Kähler manifolds. For this I will present a new formulation of the proof for the analogous statement on Kähler manifolds, originally due to A. Gray. I will explain why a proof of the conjecture given by Chow and Yang is not correct and will make a few additional comments on the curvature of Wolf spaces. My talk is based on discussions with Gregor Weingart and Oscar Macia. Lagrangian submanifolds of the complex (hyperbolic) quadric Joeri Van der Veken (University of Leuven, Belgium) In this talk we discuss submanifolds of the complex quadric and the complex hyperbolic quadric. Both are Kähler-Einstein spaces with a particular geometric structure: they carry a family of non-integrable almost product structures which anti-commute with the complex structure. Most of the talk will be about Lagrangian submanifolds of these spaces, which can be seen as images of Gauss maps of hypersurfaces of spheres and of spacelike hypersurfaces of anti-de Sitter spaces respectively. 
Local topological rigidity of 3-manifolds of hyperbolic type Andrea Drago (Sapienza University of Rome, Italy) We study systolic inequalities for closed, orientable, Riemannian $3$-manifolds of bounded positive volume entropy. This allows us to prove that the class of atoroidal manifolds (i.e. that admit an hyperbolic metric) with uniformly bounded diameter and volume entropy is topologically rigid. In particular our main result is the following theorem: Let $X$ be a closed, orientable, atoroidal, Riemannian $3$-manifold with $\textup{Ent}(X)<E$ and $\textup{Diam}(X)<D$. Then there exist a function $s(E,D)$ such that, if $Y$ is closed, orientable, torsionless, Riemannian $3$-manifold with $\textup{Ent}(Y)<E$ and $d_{GH}(X,Y)<s(E,D)$, then $\pi_{1}(X)\cong\pi_{1}(Y)$. In particular, $X$ and $Y$ are diffeomorphic. Homogeneous spaces of $G_2$ Cristina Draper Fontanals (Universidad de Málaga, Spain) Pilar Benito, Cristina Draper and Alberto Elduque study the reductive homogeneous spaces obtained as quotients of the exceptional group $G_2$ in the Draper doctoral dissertation (see [1]), from an algebraic perspective. In this poster we revisit these spaces from a more geometrical approach. P. Benito, C. Draper, A. Elduque: Lie-Yamaguti algebras related to $\mathfrak{g}_2$, J. Pure Appl. Algebra 202 (2005), 22-54. Codimension two polar homogeneous foliations on noncompact symmetric spaces Juan Manuel Lorenzo-Naveiro (Universidade de Santiago de Compostela, Spain) An isometric action of a Lie group on a Riemannian manifold is polar if there exists a complete submanifold (called a section) that intersects every orbit perpendicularly. It is an open problem to determine all such actions up to orbit equivalence on ambient manifolds with a large group of isometries (in particular, symmetric spaces). We will focus on polar actions without singular orbits on symmetric spaces of noncompact type. In this setting, a complete classification of cohomogeneity one actions is known [2], while in codimension two there is a procedure to construct all possible foliations in which the section is the Euclidean plane, derived from a more general method described in [1]. We will show how to finish the case of codimension two by determining all polar foliations whose section is homothetic to the hyperbolic plane. J. Berndt, J. C. Díaz-Ramos, H. Tamaru: Hyperpolar homogeneous foliations on symmetric spaces of noncompact type, J. Differential Geom. 86 (2012), no. 2, 191-235. J. Berndt, H. Tamaru: Homogeneous codimension one foliations on noncompact symmetric spaces, J. Differential Geom. 63 (2003), no. 1, 1-40. Rigidity of weighted Einstein manifolds Diego Mojón Álvarez (Universidade de Santiago de Compostela, Spain) A smooth metric measure space (SMMS) is a 5-tuple $(M^n,g,f,m,\mu)$, where $(M,g)$ is a Riemannian manifold, $f\in C^\infty(M)$ is a density function, $m\in \mathbb{R^+}$ is a dimensional parameter, and $\mu\in \mathbb{R}$ is an auxiliary curvature parameter. The study of the geometry of SMMS relies on weighted objects, which retain certain geometric properties while incorporating information about the density function. We consider the weighted analogues of tensors associated to curvature. Under the weighted Einstein condition, we obtain some rigidity results when the weighted Weyl tensor is weighted harmonic, showing that the underlying structure of the manifold is that of a warped product with 1-dimensional base and Einstein fiber. 
Moreover, if the scalar curvature is constant then the manifold is Einstein in the usual sense. Homogeneous Hypersurfaces on Symmetric Spaces Tomás Otero-Casal (Universidade de Santiago de Compostela, Spain) Riemannian symmetric spaces provide a particularly nice setting in which to study (extrinsically) homogeneous submanifolds, which arise as orbits of isometric actions. This poster reports on some recent developments on the classification of homogeneous hypersurfaces in symmetric spaces of noncompact type where, in a joint work with J. Carlos Díaz-Ramos and Miguel Domínguez-Vázquez [1] we have proposed a new structural result about cohomogeneity one actions. J. C. Díaz-Ramos, M. Domínguez-Vázquez, T. Otero: Cohomogeneity one actions on symmetric spaces of noncompact type and higher rank. arXiv:2202.05138. The $\kappa$-nullity of Riemannian manifolds and their splitting tensors Felippe Soares Guimarães (IME-USP / KU Leuven, Brazil / Belgium) We consider Riemannian $n$-manifolds $M$ with nontrivial $\kappa$-nullity "distribution" of the curvature tensor $R$, namely, the variable rank distribution of tangent subspaces to $M$ where $R$ coincides with the curvature tensor of a space of constant curvature $\kappa$ ($\kappa\in\mathbb{R}$) is nontrivial. We obtain classification theorems under different additional assumptions, in terms of low nullity/conullity, controlled scalar curvature or existence of quotients of finite volume. We prove new results, but also revisit previous ones. On a class of system of partial differential equations describing pseudo-spherical or spherical surfaces Filipe Kelmer (University of Brasília, Brazil) We consider systems of partial differential equations of the form \[ \left\{ \begin{array}{l} u_{xt}=F\left(u,u_x,v,v_x\right),\\ v_{xt}=G\left(u,u_x,v,v_x\right), \end{array} \right. \] describing pseudospherical (pss) or spherical surfaces (ss), meaning that, their generic solutions $(u(x,t), v(x,t))$ provide metrics, with coordinates $(x,t)$, on open subsets of the plane, with constant curvature $K=-1$ or $K=1$. These systems can be described as the integrability conditions of $\mathfrak{g}$-valued linear problems, with $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{R})$ or $\mathfrak{g}=\mathfrak{su}(2)$, when $K=-1$, $K=1$, respectively. We obtain characterization and also classification results. Applications of the theory provide new examples and new families of systems of differential equations, which also contain generalizations of a Pohlmeyer-Lund-Regge type system and of the Konno-Oono coupled dispersionless system. We provide explicitly the first few conservation laws, from an infinite sequence, for some of the systems describing pss. F. Kelmer, K. Tenenblat: On a class of systems of hyperbolic equations describing pseudo-spherical or spherical surfaces, J. Differential Equations 339 (2022), 372-394.
CommonCrawl
Flow visualization using PIV of hydrogen fueled free piston engine with uni-flow scavenging Cho, H.W.;Yoon, J.S.;Lee, J.T.;Lim, H.S. 182 In order to improve the scavenging performance of a free piston hydrogen fueled engine, this study estimates the compatibility of uni-flow scavenging. The scavenging flow characteristics in the cylinder are investigated by flow visualization and the PIV method. Consequently, it has been found that the scavenging performance decreased with abnormal expansion of the piston and delay of the exhaust valve opening timing. In addition, the scavenging performance with the exhaust valve located at the center of the cylinder head is better than that with the exhaust valve located at the side of the cylinder head. Redox reaction of Fe-based oxide mediums for hydrogen storage and release: cooperative effects of Rh, Ce and Zr additives Lee, Dong-Hee;Park, Chu-Sik;Kim, Young-Ho 189 Cooperative effects of Rh, Ce and Zr added to Fe-based oxide mediums were investigated using temperature programmed redox reaction (TPR/TPO) and isothermal redox reaction from the viewpoint of hydrogen storage and release. According to the TPR/TPO results, Rh was an additive that remarkably promoted the redox reaction on the medium, as evidenced by the lower maximum peak temperature, even though its addition accelerated deactivation of the mediums due to sintering. On the other hand, the Ce and Zr additives played an important role in suppressing deactivation of the medium in repeated redox cycles. The medium co-added with Rh, Ce and Zr (FRCZ) exhibited synergistic performance in the repeated isothermal redox reaction, and the amount of hydrogen produced in the water splitting step at 623 K was maintained at a high level of ca. $17\;mmol{\cdot}g^{-1}-Fe$ during three repeated redox cycles. Characteristics of Pt/C-based Catalysts for HI Decomposition in SI process Kim, J.M.;Kim, Y.H.;Kang, K.S.;Kim, C.H.;Park, C.S.;Bae, K.K. 199 HI decomposition was conducted using Pt/C-based catalysts in a fixed-bed reactor in the range of 573 K to 773 K. To examine the changes in the characteristic properties of the catalysts, an $N_2$ adsorption analyser, an X-ray diffractometer (XRD), and scanning electron microscopy (SEM) were used before and after the HI decomposition reaction. The effect of Pt loading on HI decomposition was investigated by $CO_2$-TPD. The HI conversion of all catalysts increased as the decomposition temperature increased. The XRD analysis showed that the platinum particles became larger and agglomerated into lumps during the reaction. From $CO_2$-TPD, it can be concluded that the increase in catalytic activity may be attributed to the basic sites of the catalyst surface. The results of both the desorption and gasification reactions showed the restriction on the use of Pt/C-based catalysts. Modeling and parametric studies of PEM fuel cell performance Noh, Young-Woo;Kim, Sae-Hoon;Jeong, Kwi-Seong;Son, Ik-Jae;Han, Kook-Il;Ahn, Byung-Ki 209 In the present study, a mathematical model has been formulated for the performance of polymer electrolyte fuel cells. The concentration polarization equation is modified using a concentration coefficient that represents the characteristics of the bipolar plate reactant distribution. The model predictions have been compared with experimental results, and good agreement has been demonstrated for the cell polarization curves. The effects of operating parameters on the performance of fuel cells have been studied. Increases in operating pressure reduce the effect of temperature on the performance.
Studies on a Micro Reformer System with a Two-staged Microcombustor Kim, Ki-Baek;Lee, Jung-Hak;Kwon, Oh-Chae 217 A new micro reformer system consisting of a micro reformer, a microcombustor and a micro evaporator was studied experimentally and computationally. In order to satisfy the primary requirements for designing the microcombustor integrated with a micro evaporator, i.e. stable burning in a small confinement and maximum heat transfer through a wall, the present microcombustor is simply cylindrical, to be easily fabricated, but two-staged (expanding downstream), to feasibly control ignition and stable burning. Results show that the aspect ratio and wall thickness of the microcombustor substantially affect ignition and thermal characteristics. For the optimized design conditions, a premixed microflame was easily ignited in the expanded second-stage combustor, moved into the smaller first-stage combustor, and finally stabilized therein. A micro reformer system integrated with a modified microcombustor based on the optimized design condition was fabricated. For a typical operating condition, the designed micro reformer system produced 22.3 sccm of hydrogen (3.61 W in LHV) at an overall efficiency of 12%. $SO_3$ decomposition over Cu/Fe/$Al_2O_3$ granules with controlled size for hydrogen production in SI thermochemical cycle Yoo, Kye-Sang;Jung, Kwang-Deog 226 Cu/Fe/$Al_2O_3$ granules with various sizes have been prepared by a combination of sol-gel and oil-drop methods for use in sulfur trioxide decomposition, a subcycle in the thermochemical sulfur-iodine cycle to split water into hydrogen and oxygen. The size of the composite granules was mainly controlled by the flow rate of the gel mixture before dropping during the synthesis. The structural properties of the samples were compared with respect to granule size. In the reaction, the catalytic activity was enhanced with decreasing granule size over the entire reaction temperature range. The effect of PEMFC stack performance at air supply condition Park, Chang-Kwon;Oh, Byeong-Soo 232 Research has been carried out on fuel cells fueled by hydrogen. The polymer electrolyte membrane fuel cell (PEMFC) is a promising power source due to its high power density, simple construction and low-temperature operation. However, it has problems such as high cost and temperature-dependent performance. These problems can be addressed through experiments, which are useful for the analysis and optimization of fuel cell performance and heat management. In this paper, when hydrogen flows constantly at a stoichiometry of ${\xi}=1.6$, the performance of the fuel cell stack increased and the voltage difference between the individual cells decreased as the air stoichiometry was increased to 2.0, 2.5 and 3.0. Therefore, the control of the air flow rate in the same gas channel is important to obtain higher performance. The purpose of this research is to predict operating temperature, flow rate, performance and mass transport through experiments and to support the actual manufacture of PEM fuel cell stacks. A Study on Methodology of Assessment for Hydrogen Explosion in Hydrogen Production Facility Jae, Moo-Sung;Jun, Gun-Hyo;Lee, Hyun-Woo;Lee, Won-Jae;Han, Seok-Jung 239 A hydrogen production facility using a very high temperature gas-cooled reactor operates under high-temperature and corrosive conditions, which make hydrogen release likely. In the case of hydrogen release, there is a danger of explosion. However, from a thermal-hydraulics point of view, a long distance between the reactor and the hydrogen production facility results in lower efficiency.
In this study, therefore, the outlines of hydrogen production using nuclear energy are surveyed. Several methods for analyzing the effects of a hydrogen explosion upon a high temperature gas-cooled reactor are reviewed. A reliability physics model appropriate for the assessment is used. Using this model, the leakage probability, rupture probability and structural failure probability of the very high temperature gas-cooled reactor are evaluated and classified by detonation volume and distance. Also, based on the standard safety criterion of $1{\times}10^{-6}$, the safety distance between the very high temperature gas-cooled reactor and the hydrogen production facility is calculated. Technology Trend for Non-carbon Nanomaterials Hydrogen Storage by the Patent Analysis Lee, Jin-Bae;Kang, Kyung-Seok;Han, Hye-Jeong;Kim, Jong-Wook;Kim, Hae-Jin 248 There are several well-known materials for hydrogen storage, such as metallic alloys, carbon nanomaterials, non-carbon nanomaterials, and various compounds. Efficient and inexpensive hydrogen storage methods are an essential prerequisite for the utilization of hydrogen, one of the new and clean energy sources. Much research has been performed on hydrogen storage techniques and materials to improve storage capacity and stability. In this paper, patents concerning non-carbon nanomaterial hydrogen storage methods were collected and analyzed. The search range was limited to the open patents of Korea (KR), Japan (JP), the USA (US) and the European Union (EP) from 1996 to 2007. Patents were collected by keyword searching and filtered by defined criteria. The trends of the patents were analyzed by year, country, company, and technology. Renewable Energy Policy in the UK - with Focus on Biomass Ryu, Chang-Kook 260 As one of the renewable energy sources, biomass is playing a major role in reducing greenhouse gas emissions in the UK. The country currently produces about 4.5% (18.1 TWh in 2006) of its total electricity generation from renewables, where biomass-based sources account for 50% of that amount and the remainder comes mostly from hydro and wind power. In 2007, the UK government announced its new energy policy through the Energy White Paper, which includes an ambitious national target of a 60% cut in carbon emissions by 2050. Complementary strategic plans in key renewable energy technologies accompanied the Energy White Paper, including a biomass strategy, a waste strategy and a low-carbon transportation strategy. This paper summarizes the current status and policy of the UK for renewable energy production, with a focus on the use of biomass and bioenergy.
CommonCrawl
Physics Stack Exchange
Deriving shallow water equations from Euler's equations
I would like to derive the one-dimensional shallow water equations from Euler's equations. This works perfectly for the conservation of mass. In particular, the meaning of the longitudinal fluid velocity $\bar u$ in the shallow water equations becomes clear: it can be interpreted as the average of the longitudinal velocity in Euler's equations over the height above ground level. But in Euler's balance of momentum the longitudinal velocity occurs squared, and the average of the squared velocity is not necessarily equal to the square of the averaged velocity. I am stuck at this point. Below is what I have so far.
Assumptions: propagation in the direction of the $x$-axis (unit vector $\vec{e}_x$); the $y$-axis points upwards (unit vector $\vec{e}_y$); everything is constant in the $z$-direction (unit vector $\vec{e}_z$; this direction is mostly left out). Volume integrals become area integrals and surface integrals become line integrals. If the path runs with positive circulation, the outer surface normal can be calculated via $d\vec{r}\times\vec{e}_z$. The medium is incompressible; the density $\rho(x,y,t)$ is constant. The ground at $y=0$ is flat. The water height is $h(x,t)$. Region of water: $0\leq y \leq h(x,t)$; region of air: $h(x,t) < y$. The static relative pressure (w.r.t. atmospheric pressure) is $p(x,y,t)=g\rho(h(x,t)-y)$, where $g$ is the gravitational acceleration. There is no friction. We describe the fluid motion through the height above ground $h(x,t)$ and the velocity field $\vec{v}(x,y,t)$ in the fluid region.
First, the "working" case of Euler's mass balance. A fluid motion satisfies Euler's mass balance if for all parts $A$ of the fluid cross-section area in the $(x,y)$-plane there holds the equation $$ \partial_t\int_{A} \rho\, dA + \int_{\partial A} \rho\, \vec{v}(x,y,t)\cdot d\vec{r}\times\vec{e}_z = 0. $$ Euler's mass balance leads to the mass balance of the shallow water equations if one restricts the choice of areas to stripes $A:=\{(x,y)\in[x_1,x_2]\times\mathbb{R}\mid 0\leq y \leq h(x,t)\}$ for $x_1 < x_2$. On the bottom $y=0$ the velocity $\vec{v}(x,0,t)$ is parallel to the path element $d\vec{r}$, the scalar triple product in the path integral over $\partial A$ is zero, and thus the contribution of this section of $\partial A$ to the path integral in Euler's mass balance is zero. Because the height changes with time there is actually a normal component of the velocity at the top, but this is already accounted for by the time-dependent area in the area integral. It is easier to consider the fluid mass between the half planes $A_x:=\{(x,y,z)\in\mathbb{R}^3\mid y\geq 0\}$ (note the fixed $x$) at $x=x_1$ and $x=x_2$. The growth of this fluid mass together with the out-flow of the fluid mass through the planes $A_{x_1}$, $A_{x_2}$ must balance to zero. This leads directly to the formula $$ \partial_t \int_{x_1}^{x_2}h(x,t)dx + \left[\int_{0}^{h(x,t)}u(x,y,t)dy\right]_{x=x_1}^{x_2} = 0 $$ where $u(x,y,t)=\vec{e}_x\cdot \vec{v}(x,y,t)$ is the $x$-component of the fluid velocity $\vec{v}$. Thereby, we have already divided through by the constant density $\rho$. With the averaged longitudinal velocity $$ \bar u(x,t):=\frac1{h(x,t)}\int_0^{h(x,t)}u(x,y,t)dy $$ the last formula gives, after differentiation w.r.t.
$x_2$ and renaming $x_2\mapsto x$, the mass balance of the shallow water equations: $$ \partial_t h(x,t) + \partial_x \bigl(h(x,t) \bar u(x,t)\bigr) =0 $$
Now, the more difficult case of the momentum balance. The momentum balance from Euler's equations is satisfied if for all parts $A$ of the fluid cross-section area in the $(x,y)$-plane the equation $$ \partial_t \int_{A} \rho \vec{v}\, d A + \int_{\partial A} \rho \vec{v}\, \vec{v}\cdot d \vec{r}\times \vec{e}_z = -\int_{\partial A} p\, d \vec{r}\times \vec{e}_z = -\int_{\partial A} g\rho \left(h(x,t)-y\right) d \vec{r}\times \vec{e}_z $$ is satisfied (the minus sign appears because $d\vec{r}\times\vec{e}_z$ is the outer normal and the pressure acts inward on the control area). We get rid of the constant density and only consider the $x$-component of this formula by multiplying it with $\frac1{\rho}\vec{e}_x$, and we restrict the area to sections $A=\{(x,y)\in[x_1,x_2]\times\mathbb{R}\mid 0\leq y\leq h(x,t)\}$ with $x_1<x_2$. For simplification we again integrate over the half planes $A_{x_1}$ and $A_{x_2}$: $$ \partial_t\int_{x_1}^{x_2} \int_0^{h(x,t)} u(x,y,t)\, d y\, d x + \left[ \int_{y=0}^{h(x,t)} u^2(x,y,t)\,d y \right]_{x=x_1}^{x_2}= -\left[ \frac12 gh(x,t)^2 \right]_{x=x_1}^{x_2} $$ Substituting $\int_0^h u\, dy=h\cdot \bar u$ yields $$ \partial_t \int_{x_1}^{x_2} h(x,t)\bar u(x,t)\, d x + \left[ \int_{y=0}^{h(x,t)} u^2(x,y,t)\,d y \right]_{x=x_1}^{x_2}= -\left[ \frac12 gh(x,t)^2 \right]_{x=x_1}^{x_2}. $$ Here I am stuck. The substitution $\int_0^h u^2\, dy = h {\bar u}^2$ is possible if $u(x,y,t)$ is independent of the height $y$. If this is really the case, what is the reasoning for this assumption? How do we come to know the vertical distribution of $u$? If this assumption is true then we arrive at the momentum balance of the shallow water equations through differentiation w.r.t. $x_2$ and renaming $x_2\mapsto x$: $$ \partial_t\bigl(h\bar u\bigr)(x,t) + \partial_x \left(h(x,t)\,\bar u(x,t)^2 + \frac 12 g h(x,t)^2\right)=0 $$ There is a similar derivation at http://www.whoi.edu/fileserver.do?id=136564&pt=10&p=85713. But there it is just assumed that $u$ does not depend on $y$.
fluid-dynamics
asked Jan 9, 2014 at 12:15 by Tobias
Your analysis is absolutely correct. One of the reasons that the shallow water equations contain the word "shallow" in their title is that, in terms of your coordinate system, they assume that the variation of the x-velocity along the y-dimension of the fluid is negligible. This would not be reasonable in general if the vertical height of the fluid were large compared to lateral length scales (i.e. the wavelengths that contain most of the energy of the fluid motion). It is reasonable if the vertical length scale is small compared to other length scales. To be more explicit, consider some type of shallow water wave, like a gravity wave, with wavenumber $k$ (units length$^{-1}$) and a fluid of height $h$. The shallow water equations are suitable when you have most of your energy in waves that satisfy $kh \ll 1$. Otherwise, you have to use the full fluid equations. This is all discussed in the wikipedia article on the shallow water equations. You should consider looking at this article by David Randall, which is presently linked from the wikipedia article. You might also consider looking at Landau & Lifshitz's book on fluid mechanics, sections 9 through 14, to see a treatment of shallow water gravity waves from the point of view of velocity potential. Finally, be wary of the effects of surface tension on shallow water gravity waves. If it's important, you're dealing with capillary waves. See section 62 of Landau & Lifshitz.
answered by kleingordon
Comment from Tobias: Okay, I have to look deeper into the statement "horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid" from en.wikipedia.org/wiki/Shallow_water_equations. Thanks for the hint. This ensures you the bounty. I will see whether I can obtain Lifshitz in our library.
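A side note on the closure step the question turns on: writing the depth profile as $u = \bar u + u'$ with $u' := u - \bar u$ (so that $\int_0^{h} u'\,dy = 0$ by the definition of $\bar u$) gives $$ \int_0^{h} u^2\, dy = h\,\bar u^2 + \int_0^{h} \left(u-\bar u\right)^2 dy, $$ so the shallow water momentum flux $h\bar u^2$ omits a non-negative correction term (sometimes absorbed into a Boussinesq momentum coefficient multiplying $h\bar u^2$). The standard equations correspond to neglecting the vertical shear $u-\bar u$, i.e. the $kh \ll 1$ regime described in the answer.
For illustration, here is a minimal numerical sketch of the resulting system: it integrates the 1-D shallow water equations $\partial_t h + \partial_x(h\bar u)=0$, $\partial_t(h\bar u)+\partial_x(h\bar u^2+\tfrac12 g h^2)=0$ with a first-order Lax-Friedrichs scheme on a periodic domain. The grid size, CFL factor, and initial hump are arbitrary demonstration choices, not values taken from the question.

import numpy as np

g = 9.81                                     # gravitational acceleration
nx, L = 200, 10.0                            # number of cells, domain length (demo values)
dx = L / nx
x = (np.arange(nx) + 0.5) * dx

h = 1.0 + 0.2 * np.exp(-(x - L / 2) ** 2)    # initial hump of water, initially at rest
hu = np.zeros(nx)                            # momentum h*u

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h ** 2])

t, t_end = 0.0, 1.0
while t < t_end:
    c = np.abs(hu / h) + np.sqrt(g * h)      # characteristic speeds |u| + sqrt(g h)
    dt = min(0.4 * dx / c.max(), t_end - t)  # CFL-limited time step
    U = np.array([h, hu])
    F = flux(h, hu)
    Up, Um = np.roll(U, -1, axis=1), np.roll(U, 1, axis=1)  # periodic neighbors
    Fp, Fm = np.roll(F, -1, axis=1), np.roll(F, 1, axis=1)
    U = 0.5 * (Up + Um) - dt / (2 * dx) * (Fp - Fm)         # Lax-Friedrichs update
    h, hu = U
    t += dt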
Problems in Mathematics
Compute Power of Matrix If Eigenvalues and Eigenvectors Are Given
Let $A$ be a $3\times 3$ matrix. Suppose that $A$ has eigenvalues $2$ and $-1$, and suppose that $\mathbf{u}$ and $\mathbf{v}$ are eigenvectors corresponding to $2$ and $-1$, respectively, where \[\mathbf{u}=\begin{bmatrix} 1 \\ \end{bmatrix} \text{ and } \mathbf{v}=\begin{bmatrix} \end{bmatrix}.\] Then compute $A^5\mathbf{w}$, where \[\mathbf{w}=\begin{bmatrix} \end{bmatrix}.\]
Since $\mathbf{u}$ is an eigenvector corresponding to the eigenvalue $2$, we have \[A\mathbf{u}=2\mathbf{u}.\] Similarly, we have \[A\mathbf{v}=-\mathbf{v}.\] From these, we have \[A^5\mathbf{u}=2^5\mathbf{u} \text{ and } A^5\mathbf{v}=(-1)^5\mathbf{v}.\] To compute $A^5\mathbf{w}$, we first need to express $\mathbf{w}$ as a linear combination of $\mathbf{u}$ and $\mathbf{v}$. Thus, we need to find scalars $c_1, c_2$ such that \[\mathbf{w}=c_1\mathbf{u}+c_2\mathbf{v}.\] By inspection, we have \[\begin{bmatrix} \end{bmatrix}=3\begin{bmatrix} \end{bmatrix}+2\begin{bmatrix} \end{bmatrix},\] and thus we obtain $c_1=3$ and $c_2=2$. We compute $A^5\mathbf{w}$ as follows:
\begin{align*}
A^5\mathbf{w}&=A^5(3\mathbf{u}+2\mathbf{v})\\
&=3A^5\mathbf{u}+2A^5\mathbf{v}\\
&=3\cdot 2^5\mathbf{u}+2\cdot (-1)^5\mathbf{v}\\
&=96\mathbf{u}-2\mathbf{v}\\[6pt]
&=96\begin{bmatrix} \end{bmatrix}-2\begin{bmatrix} \end{bmatrix}\\[6pt]
&=\begin{bmatrix} 92 \\ -2 \\ \end{bmatrix}.
\end{align*}
Therefore, the result is \[A^5\mathbf{w}=\begin{bmatrix} 92 \\ -2 \\ \end{bmatrix}.\]
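The computation pattern above (express the vector in a basis of known eigenvectors, then scale each coefficient by the corresponding eigenvalue power) is easy to check numerically without ever forming $A$. Below is a minimal sketch in Python; the vectors u and v are hypothetical placeholders chosen only so the code runs, since the problem's full column entries are not shown above, and w is built as $3\mathbf{u}+2\mathbf{v}$ to mirror the solution.

import numpy as np

# Hypothetical eigenvectors and target vector (placeholders, not the problem's data)
u = np.array([1.0, 0.0, -1.0])
v = np.array([2.0, 1.0, 0.0])
w = 3 * u + 2 * v                       # so the expected coefficients are c = (3, 2)

eigvals = np.array([2.0, -1.0])
B = np.column_stack([u, v])             # basis of known eigenvectors

# Solve B c = w (least squares; exact here since w lies in span{u, v})
c, *_ = np.linalg.lstsq(B, w, rcond=None)

n = 5
Anw = B @ (eigvals ** n * c)            # A^5 w = c1*2^5*u + c2*(-1)^5*v
print(c)     # -> [3. 2.]
print(Anw)   # -> 96*u - 2*v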
15 February 2018 / Data Science
Learning from imbalanced data.
In this blog post, I'll discuss a number of considerations and techniques for dealing with imbalanced data when training a machine learning model. The blog post will rely heavily on a sklearn contributor package called imbalanced-learn to implement the discussed techniques.
Training a machine learning model on an imbalanced dataset can introduce unique challenges to the learning problem. Imbalanced data typically refers to a classification problem where the number of observations per class is not equally distributed; often you'll have a large amount of data/observations for one class (referred to as the majority class), and much fewer observations for one or more other classes (referred to as the minority classes). For example, suppose you're building a classifier to classify a credit card transaction as fraudulent or authentic; you'll likely have 10,000 authentic transactions for every fraudulent one, and that's quite an imbalance!
To understand the challenges that a class imbalance imposes, let's consider two common ways we'll train a model: tree-based logical rules developed according to some splitting criterion, and parameterized models updated by gradient descent.
When building a tree-based model (such as a decision tree), our objective is to find logical rules which are capable of taking the full dataset and separating out the observations into their different classes. In other words, we'd like each split in the tree to increase the purity of observations such that the data is filtered into homogeneous groups. If we have a majority class present, the top of the decision tree is likely to learn splits which separate out the majority class into pure groups at the expense of learning rules which separate the minority class. For a more concrete example, here's a decision tree trained on the wine quality dataset used as an example later on in this post; the field "value" represents the number of observations for each class in a given node.
Similarly, if we're updating a parameterized model by gradient descent to minimize our loss function, we'll be spending most of our updates changing the parameter values in the direction which allows for correct classification of the majority class. In other words, many machine learning models are subject to a frequency bias in which they place more emphasis on learning from data observations which occur more commonly.
It's worth noting that not all datasets are affected equally by class imbalance. Generally, for easy classification problems in which there's a clear separation in the data, class imbalance doesn't impede the model's ability to learn effectively. However, datasets that are inherently more difficult to learn from see an amplification in the learning challenge when a class imbalance is introduced.
When dealing with imbalanced data, standard classification metrics do not adequately represent your model's performance. For example, suppose you are building a model which will look at a person's medical records and classify whether or not they are likely to have a rare disease. An accuracy of 99.5% might look great until you realize that it is correctly classifying the 99.5% of healthy people as "disease-free" and incorrectly classifying the 0.5% of people who do have the disease as healthy. I discussed this in my post on evaluating a machine learning model, but I'll provide a discussion here as well regarding useful metrics when dealing with imbalanced data; a small illustration of the accuracy trap follows below.
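To make the accuracy trap concrete, here is a minimal sketch with hypothetical labels (a 99.5%/0.5% split, mirroring the rare-disease example above) and a degenerate classifier that always predicts the majority class; the numbers are illustrative, not from a real dataset.

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)

# Hypothetical labels: 0 = healthy (about 99.5%), 1 = has the rare disease (about 0.5%)
y_true = (rng.random(10_000) < 0.005).astype(int)

# A useless "classifier" that always predicts the majority class
y_pred = np.zeros_like(y_true)

print(accuracy_score(y_true, y_pred))    # ~0.995, looks great
print(recall_score(y_true, y_pred))      # 0.0, every sick patient is missed
print(precision_score(y_true, y_pred))   # 0.0 (undefined here; sklearn warns and returns 0)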
Precision is defined as the fraction of relevant examples (true positives) among all of the examples which were predicted to belong in a certain class.
$$ \mathrm{precision} = \frac{\mathrm{true\ positives}}{\mathrm{true\ positives} + \mathrm{false\ positives}} $$
Recall is defined as the fraction of examples which were predicted to belong to a class with respect to all of the examples that truly belong in the class.
$$ \mathrm{recall} = \frac{\mathrm{true\ positives}}{\mathrm{true\ positives} + \mathrm{false\ negatives}} $$
The following graphic does a phenomenal job visualizing the difference between precision and recall.
We can further combine these two metrics into a single value by calculating the f-score as defined below.
$$ F_\beta = \left( 1 + \beta^2 \right)\frac{\mathrm{precision} \cdot \mathrm{recall}}{\left( \beta^2 \cdot \mathrm{precision} \right) + \mathrm{recall}} $$
The $\beta$ parameter allows us to control the tradeoff of importance between precision and recall. $\beta < 1$ focuses more on precision while $\beta > 1$ focuses more on recall.
Another common tool used to understand a model's performance is a Receiver Operating Characteristic (ROC) curve. An ROC curve visualizes an algorithm's ability to discriminate the positive class from the rest of the data. We'll do this by plotting the True Positive Rate against the False Positive Rate for varying prediction thresholds.
$$ \mathrm{TPR} = \frac{\mathrm{true\ positives}}{\mathrm{true\ positives} + \mathrm{false\ negatives}} $$
$$ \mathrm{FPR} = \frac{\mathrm{false\ positives}}{\mathrm{false\ positives} + \mathrm{true\ negatives}} $$
For classifiers which only produce factor outcomes (i.e., directly output a class), there exists a fixed TPR and FPR for a trained model. However, other classifiers, such as logistic regression, are capable of giving a probabilistic output (i.e., the chance that a given observation belongs to the positive class). For these classifiers, we can specify a probability threshold above which we predict that an observation belongs to the positive class.
If we set a very low value for this probability threshold, we can increase our True Positive Rate as we'll be more likely to capture all of the positive observations. However, this can also introduce a number of false positive classifications, increasing our False Positive Rate. Intuitively, there exists a tradeoff between maximizing our True Positive Rate and minimizing our False Positive Rate. The ideal model would correctly identify all positive observations as belonging to the positive class (TPR=1) and would not incorrectly classify negative observations as belonging to the positive class (FPR=0). This tradeoff can be visualized in this demonstration in which you can adjust the class distributions and classification threshold.
The area under the curve (AUC) is a single-value metric which attempts to summarize an ROC curve to evaluate the quality of a classifier. As the name implies, this metric approximates the area under the ROC curve for a given classifier. Recall that the ideal curve hugs the upper left-hand corner as closely as possible, giving us the ability to identify all true positives while avoiding false positives; this ideal model would have an AUC of 1.
On the flipside, if your model was no better than a random guess, your TPR and FPR would increase in parallel to one another, corresponding with an AUC of 0.5.

from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

preds = model.predict(X_test)  # class predictions; a probabilistic score (e.g. from predict_proba) gives a smoother curve

fpr, tpr, thresholds = roc_curve(y_test, preds, pos_label=1)
auc = roc_auc_score(y_test, preds)

fig, ax = plt.subplots()
ax.plot(fpr, tpr)
ax.plot([0, 1], [0, 1], color='navy', linestyle='--', label='random')
plt.title(f'AUC: {auc}')
ax.set_xlabel('False positive rate')
ax.set_ylabel('True positive rate')

Class weight
One of the simplest ways to address the class imbalance is to simply provide a weight for each class which places more emphasis on the minority classes such that the end result is a classifier which can learn equally from all classes. To calculate the proper weights for each class, you can use the sklearn utility function shown in the example below.

from sklearn.utils.class_weight import compute_class_weight
# classes: array of the unique class labels, y: the target vector
weights = compute_class_weight('balanced', classes=classes, y=y)

In a tree-based model where you're determining the optimal split according to some measure such as decreased entropy, you can simply scale the entropy component of each class by the corresponding weight such that you place more emphasis on the minority classes. As a reminder, the entropy of a node can be calculated as $$ -\sum_i p_i \log\left( p_i \right) $$ where $p_i$ is the fraction of data points within class $i$.
In a gradient-based model, you can scale the calculated loss for each observation by the appropriate class weight such that you place more significance on the losses associated with minority classes. As a reminder, a common loss function for classification is the categorical cross entropy (which is very similar to the above equation, albeit with slight differences). This may be calculated as $$ -\sum_i y_i \log \hat{y}_i $$ where $y_i$ represents the true class (typically a one-hot encoded vector) and $\hat{y}_i$ represents the predicted class distribution.
Oversampling
Another approach towards dealing with a class imbalance is to simply alter the dataset to remove such an imbalance. In this section, I'll discuss common techniques for oversampling the minority classes to increase the number of minority observations until we've reached a balanced dataset.
Random oversampling
The most naive method of oversampling is to randomly sample the minority classes and simply duplicate the sampled observations. With this technique, it's important to note that you're artificially reducing the variance of the dataset.
SMOTE
However, we can also use our existing dataset to synthetically generate new data points for the minority classes. Synthetic Minority Over-sampling Technique (SMOTE) is a technique that generates new observations by interpolating between observations in the original dataset. For a given observation $x_i$, a new (synthetic) observation is generated by interpolating between one of the k-nearest neighbors, $x_{zi}$: $$x_{new} = x_i + \lambda (x_{zi} - x_i)$$ where $\lambda$ is a random number in the range $\left[ 0,1 \right]$. This interpolation will create a sample on the line between $x_{i}$ and $x_{zi}$.
This algorithm has three options for selecting which observations, $x_i$, to use in generating new data points.
regular: No selection rules; randomly sample all possible $x_i$.
borderline: Separates all possible $x_i$ into three classes using the k nearest neighbors of each point:
noise: all nearest neighbors are from a different class than $x_i$
in danger: at least half of the nearest neighbors are of the same class as $x_i$
safe: all nearest neighbors are from the same class as $x_i$
svm: Uses an SVM classifier to identify the support vectors (samples close to the decision boundary) and samples $x_i$ from these points.
ADASYN
Adaptive Synthetic (ADASYN) sampling works in a similar manner to SMOTE; however, the number of samples generated for a given $x_i$ is proportional to the number of nearby samples which do not belong to the same class as $x_i$. Thus, ADASYN tends to focus solely on outliers when generating new synthetic training examples.
Undersampling
Rather than oversampling the minority classes, it's also possible to achieve class balance by undersampling the majority class - essentially throwing away data to make it easier to learn characteristics about the minority classes.
Random undersampling
As with oversampling, a naive implementation would be to simply sample the majority class at random until reaching a similar number of observations as the minority classes. For example, if your majority class has 1,000 observations and you have a minority class with 20 observations, you would collect your training data for the majority class by randomly sampling 20 observations from the original 1,000. As you might expect, this could potentially result in removing key characteristics of the majority class.
NearMiss
The general idea behind NearMiss is to sample only those points from the majority class that are necessary to distinguish it from the other classes.
NearMiss-1 selects samples from the majority class for which the average distance to the N closest samples of a minority class is smallest.
NearMiss-2 selects samples from the majority class for which the average distance to the N farthest samples of a minority class is smallest.
Tomek's links
A Tomek's link is defined as two observations of different classes ($x$ and $y$) such that there is no example $z$ for which: $$d(x, z) < d(x, y) \text{ or } d(y, z) < d(x, y)$$ where $d()$ is the distance between the two samples. In other words, a Tomek's link exists if two observations of different classes are the nearest neighbors of each other. In the figure below, a Tomek's link is illustrated by highlighting the samples of interest in green.
For this undersampling strategy, we'll remove any observations from the majority class for which a Tomek's link is identified. Depending on the dataset, this technique won't actually achieve a balance among the classes - it will simply "clean" the dataset by removing some noisy observations, which may result in an easier classification problem. As I discussed earlier, most classifiers will still perform adequately for imbalanced datasets as long as there's a clear separation between the classes. Thus, by focusing on removing noisy examples of the majority class, we can improve the performance of our classifier even if we don't necessarily balance the classes.
Edited nearest neighbors
EditedNearestNeighbours applies a nearest-neighbors algorithm and "edits" the dataset by removing samples which do not agree "enough" with their neighborhood. For each sample in the class to be under-sampled, the nearest neighbors are computed and, if the selection criterion is not fulfilled, the sample is removed; a short resampling sketch using these techniques follows below.
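The resampling strategies discussed above are implemented in the imbalanced-learn package this post relies on. Here is a minimal usage sketch; the synthetic dataset is only a stand-in for your real X and y, and note that very old imbalanced-learn releases name the method fit_sample rather than fit_resample.

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import NearMiss, TomekLinks

# A synthetic imbalanced problem standing in for a real dataset
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95, 0.05], random_state=0)
print('original class counts:', Counter(y))

# Oversample the minority class by interpolating between nearest neighbors
X_sm, y_sm = SMOTE(k_neighbors=5).fit_resample(X, y)
print('after SMOTE:', Counter(y_sm))

# Undersample the majority class, keeping points near the minority class
X_nm, y_nm = NearMiss(version=1).fit_resample(X, y)
print('after NearMiss-1:', Counter(y_nm))

# "Clean" the dataset rather than balance it
X_tl, y_tl = TomekLinks().fit_resample(X, y)
print('after Tomek links:', Counter(y_tl))

# RandomOverSampler, ADASYN, RandomUnderSampler and EditedNearestNeighbours
# (from the same modules) follow the same fit_resample pattern.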
Edited nearest neighbors takes a similar approach to Tomek's links in the sense that we're not necessarily focused on actually achieving a class balance; we're simply looking to remove noisy observations in an attempt to make for an easier classification problem.
To demonstrate these various techniques, I've trained a number of models on the UCI Wine Quality dataset, where I've generated my target by asserting that observations with a quality rating less than or equal to 4 are "low quality" wine and observations with a quality rating greater than or equal to 5 are "high quality" wine. As you can see, this has introduced an imbalance between the two classes. I'll provide the notebook I wrote to explore these techniques in a Github repo if you're interested in exploring this further. I highly encourage you to check out this notebook and perform the same experiment on a different dataset to see how it compares - let me know in the comment section!
Models compared: logistic regression, SVM classifier, AdaBoost classifier, simple neural network.
Further reading:
Learning from Imbalanced Data - Literature Review
Learning from Imbalanced Classes
Learning from imbalanced data: open challenges and future directions
Handling imbalanced datasets in machine learning
MIT-Harvard-MSR Combinatorics Seminar Schedule 2005 Spring
Michael Mitzenmacher, Harvard University The Power of Two Choices: Some Old Results and New Variations (PDF)
Sharad Goel, Cornell University Mixing Times For Top To Bottom Shuffles (PDF)
Yuri Rabinovich, University of Haifa Local Versus Global Properties Of Metric Spaces (PDF)
Greg Blekherman, University of Michigan Ann Arbor Convex Geometry Of Orbits (PDF)
Kyle Petersen, Brandeis University Descents, Peaks and P-Partitions (PDF)
Sergi Elizalde, MSRI Inference Functions And Sequence Alignment (PDF)
Jonathan David Farley, Harvard University Posets With The Same Number Of Order Ideals Of Each Cardinality: A Problem From Stanley's Enumerative Combinatorics
In Richard P. Stanley's 1986 text, {\sl Enumerative Combinatorics\/}, the following problem is posed: Fix a natural number $k$. Consider the posets $P$ of cardinality $n$ such that, for $0<i<n$, $P$ has exactly $k$ order ideals (down-sets) of cardinality $i$. Let $f_k(n)$ be the number of such posets. What is the generating function $\sum f_k(n) x^n$? We solve this problem. (Joint work with Ryan Klippenstine.)
Michael Shapiro, Michigan State University Cluster Algebras of Finite Mutation
Coordinate rings of some natural geometrical objects (for example, Grassmannians) possess distinguished sets of generators (Plucker coordinates). Cluster algebras are generalizations of such coordinate rings. Sets of generators of a cluster algebra are organized in a tree-like structure with simple transformation rules between neighboring sets of generators. Fomin and Zelevinsky obtained a complete classification for cluster algebras of finite type (i.e., with a finite number of generators). This classification coincides with the Cartan-Killing classification of simple Lie algebras. We try to describe cluster algebras with finitely many transformation rules, so-called cluster algebras of finite mutation type. We prove that a special class of cluster algebras originating from triangulations of Riemann surfaces is of finite mutation type. This is joint work with Sergey Fomin.
James Propp The Combinatorics Of Markoff Numbers
Alexander Yong, Berkeley On Smoothness and Gorensteinness of Schubert Varieties
Schubert varieties are classical objects of study in algebraic geometry; their study often reduces to easy-to-state combinatorial questions. Gorensteinness is a well-known measure of the "pathology" of the singularities of an algebraic variety. Gorensteinness is a condition that is logically weaker than smoothness but stronger than Cohen-Macaulayness. We present a non-recursive, combinatorial characterization of which Schubert varieties in the flag variety are Gorenstein. Our answer is in terms of generalized permutation pattern avoidance conditions. I'll explain the algebraic geometric and representation (Borel-Weil) theoretic applications of this work. I will also describe further combinatorial questions. This is a joint project with Alexander Woo, see math.AG/0409490.
Yuval Roichman, BIU and UCSD Statistics on Permutation Groups, Canonical Words and Pattern Avoidance
The number of left to right minima of a permutation is generalized to Coxeter (and closely related) groups, via an interpretation as the number of "long factors" in canonical expressions of elements in the group. This statistic is used to determine a covering map, which 'lifts' identities on the symmetric group $S_n$ to the alternating group $A_{n+1}$.
The covering map is then extended to 'lift' known identities on $S_n$ to new identities on $S_{n+q-1}$ for every positive integer $q$, thus yielding $q$-analogues of the known $S_n$ identities. Equi-distribution identities on certain families of pattern avoiding permutations follow. The cardinalities of subsets of permutations avoiding these patterns are given by extended Stirling and Bell numbers. The dual systems (determined by matrix inversion) have combinatorial realizations via statistics on colored permutations. Joint with Amitai Regev.
Egon Schulte, Northeastern University Reflection Groups and Polytopes Over Finite Fields
Any Coxeter group G with a string diagram is the automorphism group of an abstract regular polytope (typically infinite). When G is crystallographic, its standard real representation is easily reduced modulo an odd prime p, thus giving a finite representation in some finite orthogonal space V over $\Bbb F_p$. The finite group need not be polytopal, and whether or not it is depends in an intricate way on the geometry of V. The talk presents recent work with Barry Monson, in which we describe this construction in considerable generality and study in depth the interplay between the geometric properties of the polytope (if it exists) and the algebraic structure of the overlying finite orthogonal group. As a byproduct, we obtain many new maps on surfaces and even more interesting polytopes of higher rank.
Seunghyun Seo, Brandeis University A Generalized Enumeration of Labeled Trees
In this talk, we'll give a simple combinatorial explanation of a formula of A. Postnikov relating bicolored rooted trees to bicolored binary trees. We'll also present generalized formulas for the number of labeled k-ary trees, rooted labeled trees, and labeled plane trees. Combinatorial explanations of these formulas will be discussed too. This is joint work with Ira Gessel.
Benny Sudakov, Princeton University Dependent Random Choice and Ramsey-Turan Type Problems
The Probabilistic Method is a powerful tool in tackling many problems in Combinatorics, and it belongs to those areas of mathematical research that have experienced a most impressive growth in recent years. One of the parts of discrete mathematics where this approach has proved to be especially useful is Extremal Combinatorics. In fact, many of the strongest results in this area in the last few decades are examples of this method. In this talk we discuss a few recent applications of this methodology. In particular, we present simple yet surprisingly powerful probabilistic arguments which were used recently to make progress on some long-standing Ramsey and Turan type problems.
Lauren Williams, Massachusetts Institute of Technology Bergman Complexes, Coxeter Arrangements, and Graph Associahedra
The Bergman complex B(M) and the positive Bergman complex B+(M) of an oriented matroid M generalize to matroids the notions of tropical varieties and positive tropical varieties. Our main result is that if M is the oriented matroid of a Coxeter arrangement, then B(M) equals the nested set complex of that arrangement, and B+(M) is dual to the corresponding graph associahedron. This recovers Carr and Devadoss' tiling of the minimal blowup of a Coxeter complex by graph associahedra. This is joint work with Federico Ardila and Vic Reiner.
Thorsten Theobald, TU Berlin and Yale University Combinatorial Aspects of Tropical Geometry
Tropical geometry is the geometry of the tropical semiring $(\mathbb{R}, \min, +)$.
Tropical varieties are polyhedral cell complexes which behave like complex algebraic varieties. The link between classical complex geometry and tropical geometry is provided by amoebas, which are logarithmic images of complex varieties. In this talk, we begin with a combinatorially oriented review of some fundamental concepts in tropical geometry (aimed at a general audience), and then we turn towards some algorithmic problems concerning the intersection of tropical hypersurfaces in general dimension: deciding whether this intersection is nonempty, whether it is a tropical variety, and whether it is connected, as well as counting the number of connected components. We characterize the borderline between tractable and hard computations by proving NP-hardness and #P-hardness results even under various strong restrictions of the input data, as well as providing polynomial time algorithms.
Noga Alon, Tel Aviv University & IAS Structures and Algorithms in Extremal Combinatorics (Simons Lectures): Ramsey Theory: Motivation, Results and Challenges; The Structure of Graphs and Grothendieck Type Inequalities; Property Testing and Approximation Algorithms
Peter Cameron, University of London Tutte Polynomial and Orbit Counting
Many counting problems for graphs, codes, etc. are solved by appropriate specialisations of the Tutte polynomial. Suppose that we have a group of automorphisms of the structure in question, and we want to count orbits of this group acting on the appropriate objects. Is there a polynomial which does this? Two such polynomials have been proposed; the first combines the cycle index of the group with the Tutte polynomial, the second directly generalises the Tutte polynomial itself. The second example works only for matroids representable over a principal ideal domain, and is a multivariate generating function for the invariant factors of certain matrices.
Derek Smith, Lafayette College A Family of Configurations From Quantum Logic
This broadly-accessible talk will introduce certain finite collections of vectors in $2n$-dimensional Euclidean space with combinatorial and geometrical properties that are useful in the study of quantum logic. These configurations generalize the $4$-dimensional configurations employed by Peres (1993), Navara and Pt\'{a}k (2004), and others in various treatments of the Kochen-Specker theorem. We will discuss applications, including the characterization of certain group-valued measures on the closed subspaces of Hilbert space, and we will conclude with several open problems. This work is joint with John Harding and Ekaterina Jager.
David Gamarnik, IBM Applications of the Local Weak Convergence Method to Random Graph Problems
The Local Weak Convergence (LWC) method exploits the local structure (typically a tree) of a large random combinatorial object and leads to complete asymptotic solutions to several optimization problems on random graphs. The method reduces the original problem to the problem of finding fixed points of a certain distributional operator. We show that when the fixed point of the second iterate of the distributional operator is unique, it determines the value of the underlying combinatorial optimization problem.
Martin Kassabov, Cornell University Symmetric Groups and Expanders
Finite graphs with a large spectral gap are called expanders. These graphs have many nice properties and have many applications. It is easy to see that a random graph is an expander, but constructing explicit examples is very difficult.
All known explicit constructions are based on group theory: if an infinite group G has property T (or one of its variants), then the Cayley graphs of its finite quotients form an expander family. This leads to the following question: for which infinite families of groups $G_i$ is it possible to find generating sets $S_i$ which make the Cayley graphs expanders? The answer to this question is known only in a few cases. It seems that if the $G_i$ are far enough from being abelian then the answer is YES. However, if one takes 'standard' generating sets, the resulting Cayley graphs are not expanders (in many cases). I will describe a recent construction which answers the above question in the case of the family of all symmetric/alternating groups. It is possible to construct explicit generating sets $S_n$ of $\mathrm{Alt}_n$ such that the Cayley graphs $C(\mathrm{Alt}_n, S_n)$ are expanders, and the expanding constant can be estimated. Unlike the usual constructions of expanders, the proof does not use an infinite group with property T (although such a group exists) but uses the representation theory of the symmetric groups directly.
József Solymosi, UBC On The Number Of Sums And Products (PDF)
Much better than I had expected. One of the best superhero movies so far, better than Thor or Watchmen (and especially better than the Iron Man movies). I especially appreciated how it didn't launch right into the usual hackneyed creation-of-the-hero plot-line but made Captain America cool his heels performing & selling war bonds for 10 or 20 minutes. The ending left me a little nonplussed, although I sort of knew it was envisioned as a franchise, and I would have to admit that showing Captain America wondering at Times Square is a much better ending than something as cliche as a close-up of his suddenly-opened eyes and then a fade out. (The movie continued the lamentable trend in superhero movies of having a strong female love interest… who only gets the hots for the hero after they get muscles or powers. It was particularly bad in CA because she knows him and his heart of gold beforehand! What is the point of a feminist character who is immediately forced to do that?)
However, normally when you hear the term nootropic kicked around, people really mean a "cognitive enhancer": something that does benefit thinking in some way (improved memory, faster speed-of-processing, increased concentration, or a combination of these, etc.), but might not meet the more rigorous definition above. "Smart drugs" is another largely-interchangeable term.
Aniracetam is known as one of the smart pills with the widest array of uses, from benefits for dementia patients and memory improvement in adults with healthy brains to the promotion of recovery from brain damage. It also improves the quality of sleep, which in turn supports focus during the day. Because it supports the production of dopamine and serotonin, it elevates our mood and helps fight depression and anxiety.
Two increasingly popular options are amphetamines and methylphenidate, which are prescription drugs sold under the brand names Adderall and Ritalin. In the United States, both are approved as treatments for people with ADHD, a behavioural disorder which makes it hard to sit still or concentrate. Now they're also widely abused by people in highly competitive environments, looking for a way to remain focused on specific tasks.
Let's start with the basics of what smart drugs are and what they aren't. The field of cosmetic psychopharmacology is still in its infancy, but the use of smart drugs is primed to explode during our lifetimes, as researchers gain increasing understanding of which substances affect the brain and how they do so. For many people, the movie Limitless was a first glimpse into the possibility of "a pill that can make you smarter," and while that fiction is a long way from reality, the possibilities - in fact, present-day certainties visible in the daily news - are nevertheless extremely exciting.
After my rudimentary stacking efforts flamed out in unspectacular fashion, I tried a few ready-made stacks: brand-name nootropic cocktails that offer to eliminate the guesswork for newbies. They were just as useful. And a lot more expensive. Goop's Braindust turned water into tea-flavored chalk. But it did make my face feel hot for 45 minutes.
Then there were the two pills of Brain Force Plus, a supplement hawked relentlessly by Alex Jones of InfoWars infamy. The only result of those was the lingering guilt of knowing that I had willingly put $19.95 in the jorts pocket of a dipshit conspiracy theorist. Smart pills containing Aniracetam may also improve communication between the brain's hemispheres. This benefit makes Aniracetam supplements ideal for enhancing creativity and stabilizing mood. But, the anxiolytic effects of Aniracetam may be too potent for some. There are reports of some users who find that it causes them to feel unmotivated or sedated. Though, it may not be an issue if you only seek the anti-stress and anxiety-reducing effects. One claim was partially verified in passing by Eliezer Yudkowsky (Supplementing potassium (citrate) hasn't helped me much, but works dramatically for Anna, Kevin, and Vassar…About the same as drinking a cup of coffee - i.e., it works as a perker-upper, somehow. I'm not sure, since it doesn't do anything for me except possibly mitigate foot cramps.) The information on this website has not been evaluated by the Food & Drug Administration or any other medical body. We do not aim to diagnose, treat, cure or prevent any illness or disease. Information is shared for educational purposes only. You must consult your doctor before acting on any content on this website, especially if you are pregnant, nursing, taking medication, or have a medical condition. "You know how they say that we can only access 20% of our brain?" says the man who offers stressed-out writer Eddie Morra a fateful pill in the 2011 film Limitless. "Well, what this does, it lets you access all of it." Morra is instantly transformed into a superhuman by the fictitious drug NZT-48. Granted access to all cognitive areas, he learns to play the piano in three days, finishes writing his book in four, and swiftly makes himself a millionaire. Ngo has experimented with piracetam himself ("The first time I tried it, I thought, 'Wow, this is pretty strong for a supplement.' I had a little bit of reflux, heartburn, but in general it was a cognitive enhancer. . . . I found it helpful") and the neurotransmitter DMEA ("You have an idea, it helps you finish the thought. It's for when people have difficulty finishing that last connection in the brain"). Many people quickly become overwhelmed by the volume of information and number of products on the market. Because each website claims its product is the best and most effective, it is easy to feel confused and unable to decide. Smart Pill Guide is a resource for reliable information and independent reviews of various supplements for brain enhancement. Elaborating on why the psychological side effects of testosterone injection are individual dependent: Not everyone get the same amount of motivation and increased goal seeking from the steroid and most people do not experience periods of chronic avolition. Another psychological effect is a potentially drastic increase in aggression which in turn can have negative social consequences. In the case of counterfactual Wedrifid he gets a net improvement in social consequences. He has observed that aggression and anger are a prompt for increased ruthless self-interested goal seeking. Ruthless self-interested goal seeking involves actually bothering to pay attention to social politics. People like people who do social politics well. 
Most particularly it prevents acting on contempt which is what Wedrifid finds prompts the most hostility and resentment in others. Point is, what is a sanity promoting change in one person may not be in another. The reviews on this site are a demonstration of what someone who uses the advertised products may experience. Results and experience may vary from user to user. All recommendations on this site are based solely on opinion. These products are not for use by children under the age of 18 and women who are pregnant or nursing. If you are under the care of a physician, have a known medical condition or are taking prescription medication, seek medical advice from your health care provider before taking any new supplements. All product reviews and user testimonials on this page are for reference and educational purposes only. You must draw your own conclusions as to the efficacy of any nutrient. Consumer Advisor Online makes no guarantee or representations as to the quality of any of the products represented on this website. The information on this page, while accurate at the time of publishing, may be subject to change or alterations. All logos and trademarks used in this site are owned by the trademark holders and respective companies. (As I was doing this, I reflected how modafinil is such a pure example of the money-time tradeoff. It's not that you pay someone else to do something for you, which necessarily they will do in a way different from you; nor is it that you have exchanged money to free yourself of a burden of some future time-investment; nor have you paid money for a speculative return of time later in life like with many medical expenses or supplements. Rather, you have paid for 8 hours today of your own time.) "I cannot overstate how grateful I am to Cavin for having published this book (and launched his podcast) before I needed it. I am 3.5 months out from a concussion and struggling to recover that final 25% or so of my brain and function. I fully believe that diet and lifestyle can help heal many of our ills, and this book gives me a path forward right now. Gavin's story is inspiring, and his book is well-researched and clearly written. I am a food geek and so innately understand a lot of his advice — I'm not intimidated by the thought of drastically changing my diet because I know well how to shop and cook for myself — but I so appreciate how his gentle approach and stories about his own struggles with a new diet might help people who would find it all daunting. I am in week 2 of following his advice (and also Dr. Titus Chiu's BrainSave plan). It's not an instantaneous miracle cure, but I do feel better in several ways that just might be related to this diet." Given the size of the literature just reviewed, it is surprising that so many basic questions remain open. Although d-AMP and MPH appear to enhance retention of recently learned information and, in at least some individuals, also enhance working memory and cognitive control, there remains great uncertainty regarding the size and robustness of these effects and their dependence on dosage, individual differences, and specifics of the task. Rogers RD, Blackshaw AJ, Middleton HC, Matthews K, Hawtin K, Crowley C, Robbins TW. Tryptophan depletion impairs stimulus-reward learning while methylphenidate disrupts attentional control in healthy young adults: Implications for the monoaminergic basis of impulsive behaviour. Psychopharmacology. 1999;146:482–491. doi: 10.1007/PL00005494. 
[PubMed] [CrossRef] Regardless, while in the absence of piracetam, I did notice some stimulant effects (somewhat negative - more aggressive than usual while driving) and similar effects to piracetam, I did not notice any mental performance beyond piracetam when using them both. The most I can say is that on some nights, I seemed to be less easily tired when writing or editing or n-backing (and I felt less tired than ICON 2011 than ICON 2010), but those were also often nights I was also trying out all the other things I had gotten in that order from Smart Powders, and I am still dis-entangling what was responsible. (Probably the l-theanine or sulbutiamine.) Competitors of importance in the smart pills market have been recorded and analyzed in MRFR's report. These market players include RF Co., Ltd., CapsoVision, Inc., JINSHAN Science & Technology, BDD Limited, MEDTRONIC, Check-Cap, PENTAX Medical, INTROMEDIC, Olympus Corporation, FUJIFILM Holdings Corporation, MEDISAFE, and Proteus Digital Health, Inc. It may also be necessary to ask not just whether a drug enhances cognition, but in whom. Researchers at the University of Sussex have found that nicotine improved performance on memory tests in young adults who carried one variant of a particular gene but not in those with a different version. In addition, there are already hints that the smarter you are, the less smart drugs will do for you. One study found that modafinil improved performance in a group of students whose mean IQ was 106, but not in a group with an average of 115. If this is the case, this suggests some thoughtfulness about my use of nicotine: there are times when use of nicotine will not be helpful, but times where it will be helpful. I don't know what makes the difference, but I can guess it relates to over-stimulation: on some nights during the experiment, I had difficult concentrating on n-backing because it was boring and I was thinking about the other things I was interested in or working on - in retrospect, I wonder if those instances were nicotine nights. "As a neuro-optometrist who cares for many brain-injured patients experiencing visual challenges that negatively impact the progress of many of their other therapies, Cavin's book is a god-send! The very basic concept of good nutrition among all the conflicting advertisements and various "new" food plans and diets can be enough to put anyone into a brain fog much less a brain injured survivor! Cavin's book is straightforward and written from not only personal experience but the validation of so many well-respected contemporary health care researchers and practitioners! I will certainly be recommending this book as a "Survival/Recovery 101" resource for all my patients including those without brain injuries because we all need optimum health and well-being and it starts with proper nourishment! Kudos to Cavin Balaster!" The research literature, while copious, is messy and varied: methodologies and devices vary substantially, sample sizes are tiny, the study designs vary from paper to paper, metrics are sometimes comically limited (one study measured speed of finishing a RAPM IQ test but not scores), blinding is rare and unclear how successful, etc. Relevant papers include Chung et al 2012, Rojas & Gonzalez-Lima 2013, & Gonzalez-Lima & Barrett 2014. 
Another Longecity user ran a self-experiment, with some design advice from me, where he performed a few cognitive tests over several periods of LLLT usage (the blocks turned out to be ABBA), using his father and towels to try to blind himself as to condition. I analyzed his data, and his scores did seem to improve, but his scores improved so much in the last part of the self-experiment I found myself dubious as to what was going on - possibly a failure of randomness given too few blocks and a temporal exogenous factor in the last quarter which was responsible for the improvement. My answer is that this is not a lot of research or very good research (not nearly as good as the research on nicotine, e.g.), and assuming it's true, I don't value long-term memory that much because LTM is something that is easily assisted or replaced (personal archives, and spaced repetition). For me, my problems tend to be more about akrasia and energy and not getting things done, so even if a stimulant comes with a little cost to long-term memory, it's still useful for me. I'm going to continue to use the caffeine. It's not so bad in conjunction with tea, is very cheap, and I'm already addicted, so why not? Caffeine is extremely cheap, addictive, has minimal effects on health (and may be beneficial, from the various epidemiological associations with tea/coffee/chocolate & longevity), and costs extra to remove from drinks popular regardless of their caffeine content (coffee and tea again). What would be the point of carefully investigating it? Suppose there were conclusive evidence on the topic; the value of this evidence to me would be roughly $0 or, since ignorance is bliss, negative money - because unless the negative effects were drastic (which current studies rule out, although tea has other issues like fluoride or metal contents), I would not change anything about my life. Why? I enjoy my tea too much. My usual tea seller doesn't even have decaffeinated oolong in general, much less various varieties I might want to drink, apparently because de-caffeinating is so expensive it's not worthwhile. What am I supposed to do, give up my tea and caffeine just to save on the cost of caffeine? Buy de-caffeinating machines (which I couldn't even find any prices for, googling)? This also holds true for people who drink coffee or caffeinated soda. (As opposed to a drug like modafinil which is expensive, and so the value of a definitive answer is substantial and would justify some more extensive calculating of cost-benefit.) In 2011, as part of the Silk Road research, I ordered 10x100mg Modalert (5btc) from a seller. I also asked him about his sourcing, since if it was bad, it'd be valuable to me to know whether it was sourced from one of the vendors listed in my table. He replied, more or less, I get them from a large Far Eastern pharmaceuticals wholesaler. I think they're probably the supplier for a number of the online pharmacies. 100mg seems likely to be too low, so I treated this shipment as 5 doses: Integrity & Reputation: Go with a company that sells more than just a brain formula. If a company is just selling this one item, buyer-beware!!! It is an indication that it is just trying to capitalize on a trend and make a quick buck. Also, if a website selling a brain health formula does not have a highly visible 800# for customer service, you should walk away. Use of prescription stimulants by normal healthy individuals to enhance cognition is said to be on the rise.
Who is using these medications for cognitive enhancement, and how prevalent is this practice? Do prescription stimulants in fact enhance cognition for normal healthy people? We review the epidemiological and cognitive neuroscience literatures in search of answers to these questions. Epidemiological issues addressed include the prevalence of nonmedical stimulant use, user demographics, methods by which users obtain prescription stimulants, and motivations for use. Cognitive neuroscience issues addressed include the effects of prescription stimulants on learning and executive function, as well as the task and individual variables associated with these effects. Little is known about the prevalence of prescription stimulant use for cognitive enhancement outside of student populations. Among college students, estimates of use vary widely but, taken together, suggest that the practice is commonplace. The cognitive effects of stimulants on normal healthy people cannot yet be characterized definitively, despite the volume of research that has been carried out on these issues. Published evidence suggests that declarative memory can be improved by stimulants, with some evidence consistent with enhanced consolidation of memories. Effects on the executive functions of working memory and cognitive control are less reliable but have been found for at least some individuals on some tasks. In closing, we enumerate the many outstanding questions that remain to be addressed by future research and also identify obstacles facing this research. The benefits that they offer are gradually becoming more clearly understood, and those who use them now have the potential to get ahead of the curve when it comes to learning, information recall, mental clarity, and focus. Everyone is different, however, so take some time to learn what works for you and what doesn't and build a stack that helps you perform at your best. There are also premade 'stacks' (or formulas) of cognitive enhancing superfoods, herbals or proteins, which pre-package several beneficial extracts for a greater impact. These types of cognitive enhancers are more 'subtle' than the pharmaceutical alternative with regard to effects, but they work all the same. In fact, for many people, they work better than smart drugs as they are gentler on the brain and produce fewer side-effects. Brain focus pills mostly contain components like L-theanine, which is naturally found in green and black tea. It's associated with enhancing alertness, cognition, relaxation, and arousal, and with reducing anxiety to a large extent. Theanine is an amino acid, an analog of glutamine and glutamic acid, that has been shown to be a safe psychoactive substance. Some studies suggest that this compound influences the expression of genes in the brain responsible for aggression, fear, and memory. This, in turn, helps in balancing behavioral responses to stress and also helps in improving specific conditions, like Post-Traumatic Stress Disorder (PTSD). A record of nootropics I have tried, with thoughts about which ones worked and did not work for me. These anecdotes should be considered only as anecdotes, and one's efforts with nootropics a hobby to put only limited amounts of time into due to the inherent limits of drugs as a force-multiplier compared to other things like programming; for an ironic counterpoint, I suggest the reader listen to a video of Jonathan Coulton's I Feel Fantastic while reading.
Some smart drugs can be found in health food stores; others are imported or are drugs that are intended for other disorders such as Alzheimer's disease and Parkinson's disease. There are many Internet web sites, books, magazines and newspaper articles detailing the supposed effects of smart drugs. There are also plenty of advertisements and mail-order businesses that try to sell "smart drugs" to the public. However, rarely do these businesses or the popular press report results that show the failure of smart drugs to improve memory or learning. Rather, they try to show that their products have miraculous effects on the brain and can improve mental functioning. Wouldn't it be easy to learn something by "popping a pill" or drinking a soda laced with a smart drug? This would be much easier than taking the time to study. Feeling dull? Take your brain in for a mental tune up by popping a pill! Burke says he definitely got the glow. "The first time I took it, I was working on a business plan. I had to juggle multiple contingencies in my head, and for some reason a tree with branches jumped into my head. I was able to place each contingency on a branch, retract and go back to the trunk, and in this visual way I was able to juggle more information." "Cavin's enthusiasm and drive to help those who need it is unparalleled! He delivers the information in an easy to read manner, no PhD required from the reader. 🙂 Having lived through such trauma himself he has real empathy for other survivors and it shows in the writing. This is a great read for anyone who wants to increase the health of their brain, injury or otherwise! Read it!!!" In addition, large national surveys, including the NSDUH, have generally classified prescription stimulants with other stimulants including street drugs such as methamphetamine. For example, since 1975, the National Institute on Drug Abuse–sponsored Monitoring the Future (MTF) survey has gathered data on drug use by young people in the United States (Johnston, O'Malley, Bachman, & Schulenberg, 2009a, 2009b). Originally, MTF grouped prescription stimulants under a broader class of stimulants so that respondents were asked specifically about MPH only after they had indicated use of some drug in the category of AMPs. As rates of MPH prescriptions increased and anecdotal reports of nonmedical use grew, the 2001 version of the survey was changed to include a separate standalone question about MPH use. This resulted in more than a doubling of estimated annual use among 12th graders, from 2.4% to 5.1%. More recent data from the MTF suggests Ritalin use has declined (3.4% in 2008). However, this may still underestimate use of MPH, as the question refers specifically to Ritalin and does not include other brand names such as Concerta (an extended release formulation of MPH). So I eventually got around to ordering another thing of nicotine gum, Habitrol Nicotine Gum, 4mg MINT flavor COATED gum. 96 pieces per box. Gum should be easier to double-blind myself with than nicotine patches - just buy some mint gum. If 4mg is too much, cut the gum in half or whatever. When it arrived, my hopes were borne out: the gum was rectangular and soft, which made it easy to cut into fourths. Our 2nd choice for a Brain and Memory supplement is Clari-T by Life Seasons. We were pleased to see that their formula included 3 of the 5 necessary ingredients Huperzine A, Phosphatidylserine and Bacopin. In addition, we liked that their product came in a vegetable capsule. 
The product contains silica and rice bran, though, which we are not sure is necessary. Phenserine, as well as the drugs Aricept and Exelon, which are already on the market, work by increasing the level of acetylcholine, a neurotransmitter that is deficient in people with the disease. A neurotransmitter is a chemical that allows communication between nerve cells in the brain. In people with Alzheimer's disease, many brain cells have died, so the hope is to get the most out of those that remain by flooding the brain with acetylcholine. Never heard of OptiMind before? This supplement promotes itself as an all-natural nootropic supplement that increases focus, improves memory, and enhances overall mental drive. The product first captured our attention when we noticed that their supplement blend contains a few of the same ingredients currently present in our editor's #1 choice. So, of course, we grew curious to see whether their formula was as (un)successful as their initial branding techniques. Keep reading to find out what we discovered… Learn More... Chocolate or cocoa powder (Examine.com), contains the stimulants caffeine and the caffeine metabolite theobromine, so it's not necessarily surprising if cocoa powder was a weak stimulant. It's also a witch's brew of chemicals such as polyphenols and flavonoids some of which have been fingered as helpful10, which all adds up to an unclear impact on health (once you control for eating a lot of sugar). All of the coefficients are positive, as one would hope, and one specific factor (MR7) squeaks in at d=0.34 (p=0.05). The graph is much less impressive than the graph for just MP, suggesting that the correlation may be spread out over a lot of factors, the current dataset isn't doing a good job of capturing the effect compared to the MP self-rating, or it really was a placebo effect: Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage. One of the most popular legal stimulants in the world, nicotine is often conflated with the harmful effects of tobacco; considered on its own, it has performance & possibly health benefits. Nicotine is widely available at moderate prices as long-acting nicotine patches, gums, lozenges, and suspended in water for vaping. While intended for smoking cessation, there is no reason one cannot use a nicotine patch or nicotine gum for its stimulant effects. It looks like the overall picture is that nicotine is absorbed well in the intestines and the colon, but not so well in the stomach; this might be the explanation for the lack of effect, except on the other hand, the specific estimates I see are that 10-20% of the nicotine will be bioavailable in the stomach (as compared to 50%+ for mouth or lungs)… so any of my doses of >5ml should have overcome the poorer bioavailability! 
But on the gripping hand, these papers are mentioning something about the liver metabolizing nicotine when absorbed through the stomach, so… The smart pill that the FDA approved is called Abilify MyCite. This tiny pill has a drug and an ingestible sensor. The sensor gets activated when it comes into contact with stomach fluid to detect when the pill has been taken. The data is then transmitted to a wearable patch that eventually conveys the information to a paired smartphone app. Doctors and caregivers, with the patient's consent, can then access the data via a web portal. "How to Feed a Brain is an important book. It's the book I've been looking for since sustaining multiple concussions in the fall of 2013. I've dabbled in and out of gluten, dairy, and (processed) sugar free diets the past few years, but I have never eaten enough nutritious foods. This book has a simple-to-follow guide on daily consumption of produce, meat, and water. Nicotine's stimulant effects are general and do not come with the same tweakiness and aggression associated with the amphetamines, and subjectively are much cleaner with less of a crash. I would say that its stimulant effects are fairly strong, around that of modafinil. Another advantage is that nicotine operates through nicotinic receptors and so doesn't cross-tolerate with dopaminergic stimulants (hence one could hypothetically cycle through nicotine, modafinil, amphetamines, and caffeine, hitting different receptors each time). Clearly, the hype surrounding drugs like modafinil and methylphenidate is unfounded. These drugs are beneficial in treating cognitive dysfunction in patients with Alzheimer's, ADHD or schizophrenia, but it's unlikely that today's enhancers offer significant cognitive benefits to healthy users. In fact, taking a smart pill is probably no more effective than exercising or getting a good night's sleep. That it is somewhat valuable is clear if we consider it under another guise. Imagine you received the same salary you do, but paid every day. Accounting systems would incur considerable costs handling daily payments, since they would be making so many more and so much smaller payments, and they would have to know instantly whether you showed up to work that day and all sorts of other details, and the recipients themselves would waste time dealing with all these checks or looking through all the deposits to their account, and any errors would be that much harder to track down. (And conversely, expensive payday loans are strong evidence that for poor people, a bi-weekly payment is much too infrequent.) One might draw a comparison to batching or buffers in computers: by letting data pile up in buffers, the computer can then deal with them in one batch, amortizing overhead over many items rather than incurring the overhead again and again.
The downside, of course, is that latency will suffer and performance may drop based on that or the items becoming outdated & useless. The right trade-off will depend on the specifics; one would not expect random buffer-sizes to be optimal, but one would have to test and see what works best. As mentioned earlier, cognitive control is needed not only for inhibiting actions, but also for shifting from one kind of action or mental set to another. The WCST taxes cognitive control by requiring the subject to shift from sorting cards by one dimension (e.g., shape) to another (e.g., color); failures of cognitive control in this task are manifest as perseverative errors in which subjects continue sorting by the previously successful dimension. Three studies included the WCST in their investigations of the effects of d-AMP on cognition (Fleming et al., 1995; Mattay et al., 1996, 2003), and none revealed overall effects of facilitation. However, Mattay et al. (2003) subdivided their subjects according to COMT genotype and found differences in both placebo performance and effects of the drug. Subjects who were homozygous for the val allele (associated with lower prefrontal dopamine activity) made more perseverative errors on placebo than other subjects and improved significantly with d-AMP. Subjects who were homozygous for the met allele performed best on placebo and made more errors on d-AMP. Nor am I sure how important the results are - partway through, I haven't noticed anything bad, at least, from taking Noopept. And any effect is going to be subtle: people seem to think that 10mg is too small for an ingested rather than sublingual dose and I should be taking twice as much, and Noopept's claimed to be a chronic gradual sort of thing, with less of an acute effect. If the effect size is positive, regardless of statistical-significance, I'll probably think about doing a bigger real self-experiment (more days blocked into weeks or months & 20mg dose) In addition, while the laboratory research reviewed here is of interest concerning the effects of stimulant drugs on specific cognitive processes, it does not tell us about the effects on cognition in the real world. How do these drugs affect academic performance when used by students? How do they affect the total knowledge and understanding that students take with them from a course? How do they affect various aspects of occupational performance? Similar questions have been addressed in relation to students and workers with ADHD (Barbaresi, Katusic, Colligan, Weaver, & Jacobsen, 2007; Halmøy, Fasmer, Gillberg, & Haavik, 2009; see also Advokat, 2010) but have yet to be addressed in the context of cognitive enhancement of normal individuals. Perceptual–motor congruency was the basis of a study by Fitzpatrick et al. (1988) in which subjects had to press buttons to indicate the location of a target stimulus in a display. In the simple condition, the left-to-right positions of the buttons are used to indicate the left-to-right positions of the stimuli, a natural mapping that requires little cognitive control. In the rotation condition, the mapping between buttons and stimulus positions is shifted to the right by one and wrapped around, such that the left-most button is used to indicate the right-most position. Cognitive control is needed to resist responding with the other, more natural mapping. MPH was found to speed responses in this task, and the speeding was disproportionate for the rotation condition, consistent with enhancement of cognitive control. 
Productivity is the most cited reason for using nootropics. With all else being equal, smart drugs are expected to give you that mental edge over others and advance your career. Nootropics can also be used for a host of other reasons, from studying to socialising, and from exercise and health to general well-being. Different nootropics cater to different audiences. The evidence? A 2012 study in Greece found it can boost cognitive function in adults with mild cognitive impairment (MCI), a type of disorder marked by forgetfulness and problems with language, judgement, or planning that are more severe than average "senior moments," but are not serious enough to be diagnosed as dementia. In some people, MCI will progress into dementia. ADHD medication sales are growing rapidly, with annual revenues of $12.9 billion in 2015. These drugs can be obtained legally by those who have a prescription, which also includes those who have deliberately faked the symptoms in order to acquire the desired medication. (According to an experiment published in 2010, it is difficult for medical practitioners to separate those who feign the symptoms from those who actually have them.) That said, faking might not be necessary if a doctor deems your desired productivity level or your stress around a big project as reason enough to prescribe medication. An unusual intervention is infrared/near-infrared light of particular wavelengths (LLLT), theorized to assist mitochondrial respiration and yielding a variety of therapeutic benefits. Some have suggested it may have cognitive benefits. LLLT sounds strange but it's simple, easy, cheap, and just plausible enough it might work. I tried out LLLT treatment on a sporadic basis 2013-2014, and statistically, usage correlated strongly & statistically-significantly with increases in my daily self-ratings, and not with any sleep disturbances. Excited by that result, I did a randomized self-experiment 2014-2015 with the same procedure, only to find that the causal effect was weak or non-existent. I have stopped using LLLT as likely not worth the inconvenience. Similarly, we could try applying Nick Bostrom's reversal test and ask ourselves, how would we react to a virus which had no effect but to eliminate sleep from alternating nights and double sleep in the intervening nights? We would probably grouch about it for a while and then adapt to our new hedonistic lifestyle of partying or working hard. On the other hand, imagine the virus had the effect of eliminating normal sleep but instead, every 2 minutes, a person would fall asleep for a minute. This would be disastrous! Besides the most immediate problems like safely driving vehicles, how would anything get done? You would hold a meeting and at any point, a third of the participants would be asleep. If the virus made it instead 2 hours on, one hour off, that would be better but still problematic: there would be constant interruptions. And so on, until we reach our present state of 16 hours on, 8 hours off.
Given that we rejected all the earlier buffer sizes, one wonders if 16:8 can be defended as uniquely suited to circumstances. Is that optimal? It may be, given the synchronization with the night-day cycle, but I wonder; rush hour alone stands as an argument against synchronized sleep - wouldn't our infrastructure be much cheaper if it only had to handle the average daily load rather than cope with the projected peak loads? Might not a longer cycle be better? The longer the day, the less we are interrupted by sleep; it's a hoary cliche about programmers that they prefer to work in long sustained marathons during long nights rather than sprint occasionally during a distraction-filled day, to the point where some famously adopt a 28 hour day (which evenly divides a week into 6 days). Are there other occupations which would benefit from a 20 hour waking period? Or 24 hour waking period? We might not know because without chemical assistance, circadian rhythms would overpower anyone attempting such schedules. It certainly would be nice if one had long time chunks in which one could read a challenging book in one sitting, without heroic arrangements. Barbara Sahakian, a neuroscientist at Cambridge University, doesn't dismiss the possibility of nootropics to enhance cognitive function in healthy people. She would like to see society think about what might be considered acceptable use and where it draws the line – for example, young people whose brains are still developing. But she also points out a big problem: long-term safety studies in healthy people have never been done. Most efficacy studies have only been short-term. "Proving safety and efficacy is needed," she says. So with these 8 results in hand, what do I think? Roughly, I was right 5 of the days and wrong 3 of them. If not for the sleep effect on #4, which is - in a way - cheating (one hopes to detect modafinil due to good effects), the ratio would be 5:4 which is awfully close to a coin-flip. Indeed, a scoring rule ranks my performance at almost identical to a coin flip: -5.49 vs -5.5419. (The bright side is that I didn't do worse than a coin flip: I was at least calibrated.) A new all-in-one nootropic mix/company run by some people active on /r/nootropics; they offered me a month's supply for free to try & review for them. At ~$100 a month (it depends on how many months one buys), it is not cheap (John Backus estimates one could buy the raw ingredients for $25/month) but it provides convenience & is aimed at people uninterested in spending a great deal of time reviewing research papers & anecdotes or capping their own pills (i.e. people with lives) and it's unlikely I could spare the money to subscribe if TruBrain worked well for me - but certainly there was no harm in trying it out. If you want to focus on boosting your brain power, Lebowitz says you should primarily focus on improving your cardiovascular health, which is "the key to good thinking."
For example, high blood pressure and cholesterol, which raise the risk of heart disease, can cause arteries to harden, which can decrease blood flow to the brain. The brain relies on blood to function normally. Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try. A similar pill from HQ Inc. (Palmetto, Fla.) called the CorTemp Ingestible Core Body Temperature Sensor transmits real-time body temperature. Firefighters, football players, soldiers and astronauts use it to ensure that they do not overheat in high temperatures. HQ Inc. is working on a consumer version, to be available in 2018, that would wirelessly communicate to a smartphone app. The abuse of drugs is something that can lead to large negative outcomes. If you take Ritalin (Methylphenidate) or Adderall (mixed amphetamine salts) but don't have ADHD, you may experience more focus. But what many people don't know is that the drug is very similar to amphetamines. And the use of Ritalin is associated with serious adverse events of drug dependence, overdose and suicide attempts [80]. Taking a drug for another reason than originally intended is stupid, irresponsible and very dangerous. Since dietary supplements do not require double-blind, placebo-controlled, pharmaceutical-style human studies before going to market, there is little incentive for companies to really prove that something does what they say it does. This means that, in practice, nootropics may not live up to all the grandiose, exuberant promises advertised on the bottle in which they come. The flip side, though? There's no need to procure a prescription in order to try them out. Good news for aspiring biohackers—and for people who have no aspirations to become biohackers, but still want to be Bradley Cooper in Limitless (me). Modafinil is a prescription smart drug most commonly given to narcolepsy patients, as it promotes wakefulness. In addition, users indicate that this smart pill helps them concentrate and boosts their motivation. Owing to Modafinil, the feeling of fatigue is reduced, and people report that their everyday functions improve because they can manage their time and resources better, as a result reaching their goals easier. Nicotine absorption through the stomach is variable and relatively reduced in comparison with absorption via the buccal cavity and the small intestine. Drinking, eating, and swallowing of tobacco smoke by South American Indians have frequently been reported. Tenetehara shamans reach a state of tobacco narcosis through large swallows of smoke, and Tapirape shams are said to eat smoke by forcing down large gulps of smoke only to expel it again in a rapid sequence of belches. In general, swallowing of tobacco smoke is quite frequently likened to drinking. 
However, although the amounts of nicotine swallowed in this way - or in the form of saturated saliva or pipe juice - may be large enough to be behaviorally significant at normal levels of gastric pH, nicotine, like other weak bases, is not significantly absorbed. The infinite promise of stacking is why, whatever weight you attribute to the evidence of their efficacy, nootropics will never go away: With millions of potential iterations of brain-enhancing regimens out there, there is always the tantalizing possibility that seekers haven't found the elusive optimal combination of pills and powders for them—yet. Each "failure" is but another step in the process-of-elimination journey to biological self-actualization, which may be just a few hundred dollars and a few more weeks of amateur alchemy away. And the drugs are not terribly difficult to get, depending on where you're located. Modafinil has an annual global share of $700 million, with high estimated off-label use. Although these drugs can be purchased over the internet, their legal status varies between countries. For example, it is legal to possess and use Modafinil in the United Kingdom without a prescription, but not in United States. Nootropics are a specific group of smart drugs. But nootropics aren't the only drugs out there that promise you some extra productivity. More students and office workers are using drugs to increase their productivity than ever before [79]. But unlike with nootropics, many have side-effects. And that is precisely what is different between nootropics and other enhancing drugs, nootropics have little to no negative side-effects. Another important epidemiological question about the use of prescription stimulants for cognitive enhancement concerns the risk of dependence. MPH and d-AMP both have high potential for abuse and addiction related to their effects on brain systems involved in motivation. On the basis of their reanalysis of NSDUH data sets from 2000 to 2002, Kroutil and colleagues (2006) estimated that almost one in 20 nonmedical users of prescription ADHD medications meets criteria for dependence or abuse. This sobering estimate is based on a survey of all nonmedical users. The immediate and long-term risks to individuals seeking cognitive enhancement remain unknown. According to clinical psychiatrist and Harvard Medical School Professor, Emily Deans, "there's probably nothing dangerous about the occasional course of nootropics...beyond that, it's possible to build up a tolerance if you use them often enough." Her recommendation is to seek pharmaceutical-grade products which she says are more accurate regarding dosage and less likely to be contaminated. That is, perhaps light of the right wavelength can indeed save the brain some energy by making it easier to generate ATP. Would 15 minutes of LLLT create enough ATP to make any meaningful difference, which could possibly cause the claimed benefits? The problem here is like that of the famous blood-glucose theory of willpower - while the brain does indeed use up more glucose while active, high activity uses up very small quantities of glucose/energy which doesn't seem like enough to justify a mental mechanism like weak willpower.↩ Either prescription or illegal, daily use of testosterone would not be cheap. On the other hand, if I am one of the people for whom testosterone works very well, it would be even more valuable than modafinil, in which case it is well worth even arduous experimenting. 
Since I am on the fence about whether it would help, this suggests the value of information is high. Theanine can also be combined with caffeine as both of them work in synergy to increase memory, reaction time, and mental endurance. The best part about Theanine is that it is one of the safest nootropics and is readily available in the form of capsules. A natural option would be to use an excellent green tea brand whose tea is grown in the shade, because then theanine would be abundantly present in it. Smart pills are defined as drugs or prescription medication used to treat certain mental disorders, from milder ones such as brain fog, to some more severe like ADHD. They are often referred to as 'nootropics' but even though the two terms are often used interchangeably, smart pills and nootropics represent two different types of cognitive enhancers. One fairly powerful nootropic substance that, appropriately, has fallen out of favor is nicotine. It's the chemical that gives tobacco products their stimulating kick. It isn't what makes them so deadly, but it does make smoking very addictive. When Europeans learned about tobacco's use from indigenous tribes they encountered in the Americas in the 15th and 16th centuries, they got hooked on its mood-altering effects right away and even believed it could cure joint pain, epilepsy, and the plague. Recently, researchers have been testing the effects of nicotine that's been removed from tobacco, and they believe that it might help treat neurological disorders including Parkinson's disease and schizophrenia; it may also improve attention and focus. But, please, don't start smoking or vaping. This calculation - reaping only 7/9 of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mouse study suggesting that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit. 1 PM; overall this was a pretty productive day, but I can't say it was very productive. I would almost say even odds, but for some reason I feel a little more inclined towards modafinil. Say 55%. That night's sleep was vile: the Zeo says it took me 40 minutes to fall asleep, I only slept 7:37 total, and I woke up 7 times. I'm comfortable taking this as evidence of modafinil (half-life 10 hours, 1 PM to midnight is only 1 full halving), bumping my prediction to 75%. I check, and sure enough - modafinil. A big part is that we are finally starting to apply complex systems science to psycho-neuro-pharmacology and a nootropic approach. The neural system is awesomely complex and old-fashioned reductionist science has a really hard time with complexity. Big companies spend hundreds of millions of dollars trying to separate the effects of just a single molecule from placebo – and nootropics invariably show up as "stacks" of many different ingredients (ours, Qualia, currently has 42 separate synergistic nootropic ingredients from alpha GPC to bacopa monnieri and L-theanine). That kind of complex, multi-pathway input requires a different methodology to understand well that goes beyond simply what's put in capsules.
Maximal regularity with weights for parabolic problems with inhomogeneous boundary conditions. Nick Lindemulder. Journal of Evolution Equations (2019). In this paper, we establish weighted \(L^{q}\)–\(L^{p}\)-maximal regularity for linear vector-valued parabolic initial-boundary value problems with inhomogeneous boundary conditions of static type. The weights we consider are power weights in time and in space, and yield flexibility in the optimal regularity of the initial-boundary data and allow one to avoid compatibility conditions at the boundary. The novelty of the approach followed is the use of weighted anisotropic mixed-norm Banach space-valued function spaces of Sobolev, Bessel potential, Triebel–Lizorkin and Besov type, whose trace theory is also a subject of study. This paper is concerned with weighted maximal \(L^{q}\)–\(L^{p}\)-regularity for vector-valued parabolic initial-boundary value problems of the form $$\begin{aligned} \begin{array}{rllll} \partial _{t}u(x,t) + {\mathcal {A}}(x,D,t)u(x,t) &{}= f(x,t), &{} x \in {\mathscr {O}}, &{} t \in J, \\ {\mathcal {B}}_{j}(x',D,t)u(x',t) &{}= g_{j}(x',t), &{} x' \in \partial {\mathscr {O}}, &{} t \in J, &{} j=1,\ldots ,n, \\ u(x,0) &{}= u_{0}(x), &{} x \in {\mathscr {O}}. \end{array} \end{aligned}$$ Here, J is a finite time interval, \({\mathscr {O}} \subset \mathbb {R}^{d}\) is a smooth domain with a compact boundary \(\partial {\mathscr {O}}\) and the coefficients of the differential operator \({\mathcal {A}}\) and the boundary operators \({\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{n}\) are \({\mathcal {B}}(X)\)-valued, where X is a UMD Banach space. One could for instance take \(X=\mathbb {C}^{N}\), describing a system of N initial-boundary value problems. Our structural assumptions on \({\mathcal {A}},{\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{n}\) are an ellipticity condition and a condition of Lopatinskii–Shapiro type. For homogeneous boundary data (i.e., \(g_{j}=0\), \(j=1,\ldots ,n\)), these problems include linearizations of reaction–diffusion systems and of phase field models with Dirichlet, Neumann and Robin conditions. However, if one wants to use linearization techniques to treat such problems with nonlinear boundary conditions, it is crucial to have a sharp theory for the fully inhomogeneous problem. During the last 25 years, the theory of maximal regularity turned out to be an important tool in the theory of nonlinear PDEs. Maximal regularity means that there is an isomorphism between the data and the solution of the problem in suitable function spaces. Having established maximal regularity for the linearized problem, the nonlinear problem can be treated with tools such as the contraction principle and the implicit function theorem. Let us mention [7, 15] for approaches in spaces of continuous functions, [1, 45] for approaches in Hölder spaces and [3, 5, 13, 14, 24, 53, 55] for approaches in \(L^{p}\)-spaces (with \(p \in (1,\infty )\)). As an application of his operator-valued Fourier multiplier theorem, Weis [65] characterized maximal \(L^{p}\)-regularity for abstract Cauchy problems in UMD Banach spaces in terms of an \({\mathcal {R}}\)-boundedness condition on the operator under consideration. A second approach to the maximal \(L^{p}\)-regularity problem is via the operator sum method, as initiated by Da Prato and Grisvard [16] and extended by Dore and Venni [23] and Kalton & Weis [37].
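For orientation, the abstract notion referred to here can be stated roughly as follows (in standard notation, purely for illustration): a closed operator A on a Banach space \(X_{0}\) has maximal \(L^{p}\)-regularity on (0, T) if, for every \(f \in L^{p}(0,T;X_{0})\), the solution of the abstract Cauchy problem $$\begin{aligned} u' + Au = f, \quad u(0)=0, \quad \quad \text{ satisfies } \quad \Vert u'\Vert _{L^{p}(0,T;X_{0})} + \Vert Au\Vert _{L^{p}(0,T;X_{0})} \le C\,\Vert f\Vert _{L^{p}(0,T;X_{0})}, \end{aligned}$$ and, for a suitable sectorial operator A on a UMD space, Weis' theorem characterizes this property by $$\begin{aligned} \text{ maximal } L^{p}\text{-regularity } \quad \Longleftrightarrow \quad \{\lambda (\lambda +A)^{-1} : \lambda \in \mathrm {i}\mathbb {R}{\setminus }\{0\}\} \text{ is } {\mathcal {R}}\text{-bounded. } \end{aligned}$$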
For more details on these approaches and for more information on (the history of) the maximal \(L^{p}\)-regularity problem in general, we refer to [17, 39]. In the maximal \(L^{q}\)–\(L^{p}\)-regularity approach to (1), one is looking for solutions u in the "maximal regularity space" $$\begin{aligned} W^{1}_{q}(J;L^{p}({\mathscr {O}};X)) \cap L^{q}(J;W^{2n}_{p}({\mathscr {O}};X)). \end{aligned}$$ To be more precise, problem (1) is said to enjoy the property of maximal\(L^{q}\)–\(L^{p}\)-regularity if there exists a (necessarily unique) space of initial-boundary data \({\mathscr {D}}_{i.b.} \subset L^{q}(J;L^{p}(\partial {\mathscr {O}};X))^{n} \times L^{p}({\mathscr {O}};X)\) such that for every \(f \in L^{q}(J;L^{p}({\mathscr {O}};X))\) it holds that (1) has a unique solution u in (2) if and only if \((g=(g_{1},\ldots ,g_{n}),u_{0}) \in {\mathscr {D}}_{i.b.}\). In this situation, there exists a Banach norm on \({\mathscr {D}}_{i.b.}\), unique up to equivalence, with $$\begin{aligned} {\mathscr {D}}_{i.b.} \hookrightarrow L^{q}(J;L^{p}(\partial {\mathscr {O}};X))^{n} \oplus L^{p}({\mathscr {O}};X), \end{aligned}$$ which makes the associated solution operator a topological linear isomorphism between the data space \(L^{q}(J;L^{p}({\mathscr {O}};X)) \oplus {\mathscr {D}}_{i.b.}\) and the solution space \(W^{1}_{q}(J;L^{p}({\mathscr {O}};X)) \cap L^{q}(J;W^{2n}_{p}({\mathscr {O}};X))\). The maximal\(L^{q}\)–\(L^{p}\)-regularity problem for (1) consists of establishing maximal \(L^{q}\)–\(L^{p}\)-regularity for (1) and explicitly determining the space \({\mathscr {D}}_{i.b.}\). The maximal \(L^{q}\)–\(L^{p}\)-regularity problem for (1) was solved by Denk, Hieber & Prüss [18], who used operator sum methods in combination with tools from vector-valued harmonic analysis. Earlier works on this problem are [40] (\(q=p\)) and [64] (\(p \le q\)) for scalar-valued second-order problems with Dirichlet and Neumann boundary conditions. Later, the results of [18] for the case that \(q=p\) have been extended by Meyries & Schnaubelt [48] to the setting of temporal power weights \(v_{\mu }(t)=t^{\mu }\), \(\mu \in [0,q-1)\); also see [47]. Works in which maximal \(L^{q}\)–\(L^{p}\)-regularity of other problems with inhomogeneous boundary conditions are studied, include [20,21,22, 24, 48] (the case \(q=p\)) and [50, 61] (the case \(q \ne p\)). It is desirable to have maximal \(L^{q}\)–\(L^{p}\)-regularity for the full range \(q,p \in (1,\infty )\), as this enables one to treat more nonlinearities. For instance, one often requires large q and p due to better Sobolev embeddings, and \(q \ne p\) due to scaling invariance of PDEs (see, e.g., [30]). However, for (1) the case \(q \ne p\) is more involved than the case \(q=p\) due to the inhomogeneous boundary conditions. This is not only reflected in the proof, but also in the space of initial-boundary data ( [18, Theorem 2.3] versus [18, Theorem 2.2]). Already for the heat equation with Dirichlet boundary conditions, the boundary data g have to be in the intersection space $$\begin{aligned} F^{1-\frac{1}{2p}}_{q,p}(J;L^{p}(\partial {\mathscr {O}})) \cap L^{q}(J;B^{2-\frac{1}{p}}_{p,p}(\partial {\mathscr {O}})), \end{aligned}$$ which in the case \(q=p\) coincides with \(W^{1-\frac{1}{2p}}_{p}(J;L^{p}(\partial {\mathscr {O}})) \cap L^{p}(J;W^{2-\frac{1}{p}}_{p}(\partial {\mathscr {O}}))\); here \(F^{s}_{q,p}\) denotes a Triebel–Lizorkin space and \(W^{s}_{p}=B^{s}_{p,p}\) a non-integer order Sobolev–Slobodeckii space. 
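The coincidence in the case \(q=p\) is an instance of the elementary identities (valid for non-integer \(s > 0\), \(p \in (1,\infty )\) and any Banach space E, and recalled here only to fix the relation between the scales) $$\begin{aligned} F^{s}_{p,p}(J;E) = B^{s}_{p,p}(J;E) = W^{s}_{p}(J;E), \end{aligned}$$ applied with \(E=L^{p}(\partial {\mathscr {O}})\) and \(s = 1-\tfrac{1}{2p}\); for \(q \ne p\) the Triebel–Lizorkin scale genuinely enters and the temporal component no longer reduces to a Sobolev–Slobodeckii space.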
In this paper, we will extend the results of [18, 48], concerning the maximal \(L^{q}\)–\(L^{p}\)-regularity problem for (1), to the setting of power weights in time and in space for the full range \(q,p \in (1,\infty )\). In contrast to [18, 48], we will not only view spaces (2) and (3) as intersection spaces, but also as anisotropic mixed-norm function spaces on \(J \times {\mathscr {O}}\) and \(J \times \partial {\mathscr {O}}\), respectively. Identifications of intersection spaces of type (3) with anisotropic mixed-norm Triebel–Lizorkin spaces have been considered in a previous paper [43], all in a generality including the weighted vector-valued setting. The advantage of these identifications is that they allow us to use weighted vector-valued versions of trace results of Johnsen & Sickel [36]. These trace results will be studied in their own right in the present paper. The weights we consider are the power weights $$\begin{aligned} v_{\mu }(t) = t^{\mu } \,\quad (t \in J) \quad \quad \text{ and } \quad \quad w^{\partial {\mathscr {O}}}_{\gamma }(x) = \mathrm {dist}(\,\cdot \,,\partial {\mathscr {O}})^{\gamma } \,\quad (x \in {\mathscr {O}}), \end{aligned}$$ where \(\mu \in (-1,q-1)\) and \(\gamma \in (-1,p-1)\). These weights yield flexibility in the optimal regularity of the initial-boundary data and allow one to avoid compatibility conditions at the boundary, which is nicely illustrated by the result (see Example 3.7) that the corresponding version of (3) becomes $$\begin{aligned} F^{1-\frac{1}{2p}(1+\gamma )}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}})) \cap L^{q}(J,v_{\mu };B^{2-\frac{1}{p}(1+\gamma )}_{p,p}(\partial {\mathscr {O}})). \end{aligned}$$ Note that one requires less regularity of g by increasing \(\gamma \). The idea to work in weighted spaces equipped with weights like (4) has already proven to be very useful in several situations. In an abstract semigroup setting, temporal weights were introduced by Clément & Simonett [15] and Prüss & Simonett [54], in the context of maximal continuous regularity and maximal \(L^{p}\)-regularity, respectively. Other works on maximal temporally weighted \(L^{p}\)-regularity are [38, 41] for quasilinear parabolic evolution equations and [48] for parabolic problems with inhomogeneous boundary conditions. Concerning the use of spatial weights, we would like to mention [9, 46, 52] for boundary value problems and [2, 10, 25, 56, 62] for problems with boundary noise. The paper is organized as follows. In Sect. 2 we discuss the necessary preliminaries, in Sect. 3 we state the main result of this paper, Theorem 3.4, in Sect. 4 we establish the necessary trace theory, in Sect. 5 we consider a Sobolev embedding theorem, and in Sect. 6 we finally prove Theorem 3.4. Weighted mixed-norm Lebesgue spaces A weight on \(\mathbb {R}^{d}\) is a measurable function \(w:\mathbb {R}^{d} \longrightarrow [0,\infty ]\) that takes its values almost everywhere in \((0,\infty )\). We denote by \({\mathcal {W}}(\mathbb {R}^{d})\) the set of all weights on \(\mathbb {R}^{d}\). For \(p \in (1,\infty )\) we denote by \(A_{p} = A_{p}(\mathbb {R}^{d})\) the class of all Muckenhoupt \(A_{p}\)-weights, which are all the locally integrable weights for which the \(A_{p}\)-characteristic \([w]_{A_{p}}\) is finite. Here, $$\begin{aligned}{}[w]_{A_{p}} = \sup _{Q} \left( \fint _{Q}w \right) \left( \fint _{Q}w^{-p'/p} \right) ^{p/p'} \end{aligned}$$ with the supremum taken over all cubes \(Q \subset \mathbb {R}^{d}\) with sides parallel to the coordinate axes.
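As a quick illustration of this definition (a standard computation, included only for orientation), consider the one-dimensional power weight \(w(x) = |x|^{\gamma }\) and restrict attention to the intervals \(Q = (0,r)\), \(r > 0\), which touch the singularity and turn out to be the critical ones: $$\begin{aligned} \left( \fint _{0}^{r} x^{\gamma }\,\mathrm {d}x \right) \left( \fint _{0}^{r} x^{-\gamma p'/p}\,\mathrm {d}x \right) ^{p/p'} = \frac{r^{\gamma }}{1+\gamma }\left( \frac{r^{-\gamma p'/p}}{1-\gamma p'/p}\right) ^{p/p'} = \frac{1}{(1+\gamma )\left( 1-\gamma p'/p\right) ^{p/p'}}, \end{aligned}$$ which is finite, and independent of r, precisely when \(\gamma > -1\) and \(\gamma p'/p < 1\), that is, when \(\gamma \in (-1,p-1)\). This is consistent with the equivalence for the distance weights \(w^{\partial {\mathscr {O}}}_{\gamma }\) recorded below.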
We furthermore set \(A_{\infty } := \bigcup _{p \in (1,\infty )}A_{p}\). For more information on Muckenhoupt weights we refer to [31]. Important for this paper are the power weights of the form \(w=\mathrm {dist}(\,\cdot \,,\partial {\mathscr {O}})^{\gamma }\), where \({\mathscr {O}}\) is a \(C^{\infty }\)-domain in \(\mathbb {R}^{d}\) and where \(\gamma \in (-1,\infty )\). If \(\gamma \in (-1,\infty )\) and \(p \in (1,\infty )\), then (see [27, Lemma 2.3] or [52, Lemma 2.3]) $$\begin{aligned} w_{\gamma }^{\partial {\mathscr {O}}} := \mathrm {dist}(\,\cdot \,,\partial {\mathscr {O}})^{\gamma } \in A_{p} \quad \Longleftrightarrow \quad \gamma \in (-1,p-1). \end{aligned}$$ For the important model problem case \({\mathscr {O}} = \mathbb {R}^{d}_{+}\), we simply write \(w_{\gamma }:= w_{\gamma }^{\partial \mathbb {R}^{d}_{+}} = \mathrm {dist}(\,\cdot \,,\partial \mathbb {R}^{d}_{+})^{\gamma }\). Replacing cubes by rectangles in the definition of the \(A_{p}\)-characteristic \([w]_{A_{p}} \in [1,\infty ]\) of a weight w gives rise to the \(A_{p}^{rec}\)-characteristic \([w]_{A_{p}^{rec}} \in [1,\infty ]\) of w. We denote by \(A_{p}^{rec} = A_{p}^{rec}(\mathbb {R}^{d})\) the class of all weights with \([w]_{A_{p}^{rec}} < \infty \). For \(\gamma \in (-1,\infty )\) it holds that \(w_{\gamma } \in A_{p}^{rec}\) if and only if \(\gamma \in (-1,p-1)\). Let with . The decomposition is called the -decomposition of \(\mathbb {R}^{d}\). For \(x \in \mathbb {R}^{d}\) we accordingly write \(x = (x_{1},\ldots ,x_{l})\) and , where and \(x_{j,i} \in \mathbb {R}\) . We also say that we view \(\mathbb {R}^{d}\) as being -decomposed. Furthermore, for each \(k \in \{1,\ldots ,l\}\) we define the inclusion map and the projection map Suppose that \(\mathbb {R}^{d}\) is -decomposed as above. Let \({\varvec{p}} = (p_{1},\ldots ,p_{l}) \in [1,\infty )^{l}\) and . We define the weighted mixed-norm space as the space of all \(f\in L^{0}(\mathbb {R}^{d})\) satisfying We equip with the norm , which turns it into a Banach space. Given a Banach space X, we denote by the associated Bochner space Suppose that \(\mathbb {R}^{d}\) is -decomposed as in Sect. 2.1. Given \({\varvec{a}} \in (0,\infty )^{l}\), we define the -anisotropic dilation on \(\mathbb {R}^{d}\) by \(\lambda > 0\) to be the mapping on \(\mathbb {R}^{d}\) given by the formula A -anisotropic distance function on \(\mathbb {R}^{d}\) is a function \(u:\mathbb {R}^{d} \longrightarrow [0,\infty )\) satisfying \(u(x)=0\) if and only if \(x=0\). for all \(x \in \mathbb {R}^{d}\) and \(\lambda > 0\). There exists a \(c>0\) such that \(u(x+y) \le c(u(x)+u(y))\) for all \(x,y \in \mathbb {R}^{d}\). All -anisotropic distance functions on \(\mathbb {R}^{d}\) are equivalent: Given two -anisotropic distance functions u and v on \(\mathbb {R}^{d}\), there exist constants \(m,M>0\) such that \(m u(x) \le v(x) \le M u(x)\) for all \(x \in \mathbb {R}^{d}\) In this paper, we will use the -anisotropic distance function given by the formula Fourier multipliers Let X be a Banach space. The space of X-valued tempered distributions on \(\mathbb {R}^{d}\) is defined as \({\mathcal {S}}'(\mathbb {R}^{d};X) := {\mathcal {L}}({\mathcal {S}}(\mathbb {R}^{d});X)\); for the theory of vector-valued distributions we refer to [4] (and [3, Section III.4]). We write \(\widehat{L^{1}}(\mathbb {R}^{d};X) := {\mathscr {F}}^{-1}L^{1}(\mathbb {R}^{d};X) \subset {\mathcal {S}}'(\mathbb {R}^{d};X)\). 
To a symbol \(m \in L^{\infty }(\mathbb {R}^{d};{\mathcal {B}}(X))\), we associate the Fourier multiplier operator $$\begin{aligned} T_{m}: \widehat{L^{1}}(\mathbb {R}^{d};X) \longrightarrow \widehat{L^{1}}(\mathbb {R}^{d};X),\, f \mapsto {\mathscr {F}}^{-1}[m{\hat{f}}]. \end{aligned}$$ Given \({\varvec{p}} \in [1,\infty )^{l}\) and , we call m a Fourier multiplier on if \(T_{m}\) restricts to an operator on which is bounded with respect to -norm. In this case, \(T_{m}\) has a unique extension to a bounded linear operator on due to denseness of \({\mathcal {S}}(\mathbb {R}^{d};X)\) in , which we still denote by \(T_{m}\). We denote by the set of all Fourier multipliers \(m \in L^{\infty }(\mathbb {R}^{d};{\mathcal {B}}(X))\) on . Equipped with the norm , becomes a Banach algebra (under the natural pointwise operations) for which the natural inclusion is an isometric Banach algebra homomorphism; see [39] for the unweighted non-mixed-norm setting. For each \({\varvec{a}} \in (0,\infty )^{l}\) and \(N \in \mathbb {N}\), we define as the space of all \(m \in C^{N}(\mathbb {R}^{d})\) for which We furthermore define \(\mathscr {RM}(X)\) as the space of all operator-valued symbols \(m \in C^{1}(\mathbb {R}{\setminus }\{0\};{\mathcal {B}}(X))\) for which we have the \({\mathcal {R}}\)-bound $$\begin{aligned} ||m||_{\mathscr {RM}_(X)} := {\mathcal {R}}\big \{ tm^{[k]}(t) : t \ne 0, k=0,1 \,\big \} < \infty ; \end{aligned}$$ see, e.g., [17, 33] for the notion of \({\mathcal {R}}\)-boundedness. If X is a UMD space, \({\varvec{p}} \in (1,\infty )^{l}\), and \({\varvec{a}} \in (0,\infty )^{l}\), then there exists an \(N \in \mathbb {N}\) for which If X is a UMD space, \(p \in (1,\infty )\) and \(w \in A_{p}(\mathbb {R})\), then $$\begin{aligned} \mathscr {RM}(X) \hookrightarrow {\mathcal {M}}_{p,w}(X). \end{aligned}$$ For these results, we refer to [26] and the references given there. Function spaces For the theory of vector-valued distributions, we refer to [4] (and [3, Section III.4]). For vector-valued function spaces, we refer to [51] (weighted setting) and the references given therein. Anisotropic spaces can be found in [6, 36, 42]; for the statements below on weighted anisotropic vector-valued function space, we refer to [42]. Suppose that \(\mathbb {R}^{d}\) is -decomposed as in Sect. 2.1. Let X be a Banach space, and let \({\varvec{a}} \in (0,\infty )^{l}\). For \(0< A< B < \infty \), we define as the set of all sequences \(\varphi = (\varphi _{n})_{n \in \mathbb {N}} \subset {\mathcal {S}}(\mathbb {R}^{d})\) which are constructed in the following way: given a \(\varphi _{0} \in {\mathcal {S}}(\mathbb {R}^{d})\) satisfying \((\varphi _{n})_{n \ge 1} \subset {\mathcal {S}}(\mathbb {R}^{d})\) is defined via the relations Observe that We put . In case \(l=1\) we write , and \(\Phi _{A,B}(\mathbb {R}^{d}) \!=\! \Phi ^{1}_{A,B}(\mathbb {R}^{d})\). To , we associate the family of convolution operators \((S_{n})_{n \in \mathbb {N}} = (S_{n}^{\varphi })_{n \in \mathbb {N}} \subset {\mathcal {L}}({\mathcal {S}}'(\mathbb {R}^{d};X),{\mathscr {O}}_{M}(\mathbb {R}^{d};X)) \subset {\mathcal {L}}({\mathcal {S}}'(\mathbb {R}^{d};X))\) given by $$\begin{aligned} S_{n}f = S_{n}^{\varphi }f := \varphi _{n} * f = {\mathscr {F}}^{-1}[{\hat{\varphi }}_{n}{\hat{f}}] \quad \quad (f \in {\mathcal {S}}'(\mathbb {R}^{d};X)). \end{aligned}$$ Here, \({\mathscr {O}}_{M}(\mathbb {R}^{d};X)\) denotes the space of slowly increasing X-valued smooth functions on \(\mathbb {R}^{d}\). 
It holds that \(f = \sum _{n=0}^{\infty }S_{n}f\) in \({\mathcal {S}}'(\mathbb {R}^{d};X)\), respectively, in \({\mathcal {S}}(\mathbb {R}^{d};X)\) whenever \(f \in {\mathcal {S}}'(\mathbb {R}^{d};X)\), respectively, \(f \in {\mathcal {S}}(\mathbb {R}^{d};X)\). Given \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\), \(s \in \mathbb {R}\), and , the Besov space is defined as the space of all \(f \in {\mathcal {S}}'(\mathbb {R}^{d};X)\) for which and the Triebel–Lizorkin space is defined as the space of all \(f \in {\mathcal {S}}'(\mathbb {R}^{d};X)\) for which Up to an equivalence of extended norms on \({\mathcal {S}}'(\mathbb {R}^{d};X)\), and do not depend on the particular choice of . Let us note some basic relations between these spaces. Monotonicity of \(\ell ^{q}\)-spaces yields that, for \(1 \le q_{0} \le q_{1} \le \infty \), For \(\epsilon > 0\) it holds that Furthermore, Minkowski's inequality gives Let \({\varvec{a}} \in (0,\infty )^{l}\). A normed space \(\mathbb {E}\subset {\mathcal {S}}'(\mathbb {R}^{d};X)\) is called -admissible if there exists an \(N \in \mathbb {N}\) such that where \(m(D)f={\mathscr {F}}^{-1}[m{\hat{f}}]\). The Besov space and the Triebel–Lizorkin space are examples of -admissible Banach spaces. To each \(\sigma \in \mathbb {R}\), we associate the operators and given by We call the -anisotropic Bessel potential operator of order \(\sigma \). Let \(\mathbb {E}\hookrightarrow {\mathcal {S}}'(\mathbb {R}^{d};X)\) be a Banach space. Write Given \({\varvec{n}} \in \left( \mathbb {Z}_{\ge 1} \right) ^{l}\), \({\varvec{s}}, {\varvec{a}} \in (0,\infty )^{l}\), and \(s \in \mathbb {R}\), we define the Banach spaces as follows: with the norms Note that contractively in case that \({\varvec{s}} = (s/a_{1},\ldots ,s/a_{l})\). Furthermore, note that if \(\mathbb {F}\hookrightarrow {\mathcal {S}}'(\mathbb {R}^{d};X)\) is another Banach space, then If \(\mathbb {E}\hookrightarrow {\mathcal {S}}'(\mathbb {R}^{d};X)\) is a -admissible Banach space for a given \({\varvec{a}} \in (0,\infty )^{l}\), then Let \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\), and . For \(s,s_{0} \in \mathbb {R}\) it holds that Let X be a Banach space, \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in (1,\infty )^{l}\), , \(s \in \mathbb {R}\), \({\varvec{s}} \in (0,\infty )^{l}\) and \({\varvec{n}} \in (\mathbb {N}_{>0})^{l}\). We define , \({\varvec{n}} \in (\mathbb {Z}_{\ge 1})^{l}\), \({\varvec{n}}=s{\varvec{a}}^{-1}\); or ; or , \({\varvec{a}} \in (0,1)^{l}\), \({\varvec{a}}=s{\varvec{a}}^{-1}\), then we have the inclusions Theorem 2.1 [43] Let X be a Banach space, \(l=2\), \({\varvec{a}} \in (0,\infty )^{2}\), \(p,q \in (1,\infty )\), \(s > 0\), and . Then, with equivalence of norms. This intersection representation is actually a corollary of a more general intersection representation in [43]. In the above form, it can also be found in [42, Theorem 5.2.35]. For the case \(X=\mathbb {C}\), , \({\varvec{w}}={\varvec{1}}\), we refer to [19, Proposition 3.23]. The main result Maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity In order to give a precise description of the maximal weighted \(L^{q}\)–\(L^{p}\)-regularity approach for (1), let \({\mathscr {O}}\) be either \(\mathbb {R}^{d}_{+}\) or a smooth domain in \(\mathbb {R}^{d}\) with a compact boundary \(\partial {\mathscr {O}}\). 
Furthermore, let X be a Banach space, let $$\begin{aligned} q \in (1,\infty ),\, \mu \in (-1,q-1) \quad \text{ and } \quad p \in (1,\infty ),\, \gamma \in (-1,p-1), \end{aligned}$$ let \(v_{\mu }\) and \(w^{\partial {\mathscr {O}}}_{\gamma }\) be as in (4), put $$\begin{aligned} \mathbb {U}^{p,q}_{\gamma ,\mu }&:= W^{1}_{q}(J,v_{\mu };L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)) \cap L^{q}(J,v_{\mu };W^{2n}_{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)),\nonumber \\&\quad \hbox {(space of solutions }u) \nonumber \\ \mathbb {F}^{p,q}_{\gamma ,\mu }&:= L^{q}(J,v_{\mu };L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)), \quad \text{(space } \text{ of } \text{ domain } \text{ inhomogeneities } f)\nonumber \\ \mathbb {B}^{p,q}_{\mu }&:= L^{q}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};X)), \quad \text{(boundary } \text{ space) } \end{aligned}$$ and let \(n,n_{1},\ldots ,n_{n} \in \mathbb {N}\) be natural numbers with \(n_{j} \le 2n - 1\) for each \(j \in \{1,\ldots ,n\}\). Suppose that for each \(\alpha \in \mathbb {N}^{d}, |\alpha | \le 2n\), $$\begin{aligned} a_{\alpha } \in {\mathcal {D}}'({\mathscr {O}} \times J;{\mathcal {B}}(X)) \quad \text{ with } \quad a_{\alpha }D^{\alpha } \in {\mathcal {B}}(\mathbb {U}^{p,q}_{\gamma ,\mu },\mathbb {F}^{p,q}_{\gamma ,\mu }) \end{aligned}$$ and that for each \(j \in \{1,\ldots ,n\}\) and \(\beta \in \mathbb {N}^{d}, |\beta | \le n_{j}\), $$\begin{aligned} b_{j,\beta } \in {\mathcal {D}}'(\partial {\mathscr {O}} \times J;{\mathcal {B}}(X)) \quad \text{ with } \quad b_{j,\beta }\mathrm {tr}_{\partial {\mathscr {O}}}D^{\beta } \in {\mathcal {B}}(\mathbb {U}^{p,q}_{\gamma ,\mu },\mathbb {B}^{p,q}_{\mu }), \end{aligned}$$ where the conditions \(a_{\alpha }D^{\alpha } \in {\mathcal {B}}(\mathbb {U}^{p,q}_{\gamma ,\mu },\mathbb {F}^{p,q}_{\gamma ,\mu })\) and \(b_{j,\beta }\mathrm {tr}_{\partial {\mathscr {O}}}D^{\beta } \in {\mathcal {B}}(\mathbb {U}^{p,q}_{\gamma ,\mu },\mathbb {B}^{p,q}_{\mu })\) have to be interpreted in the sense of bounded extension from the space of X-valued compactly supported smooth functions. Define \({\mathcal {A}}(D) \in {\mathcal {B}}(\mathbb {U}^{p,q}_{\gamma ,\mu },\mathbb {F}^{p,q}_{\gamma ,\mu })\) and \({\mathcal {B}}_{1}(D),\ldots ,{\mathcal {B}}_{n}(D) \in {\mathcal {B}}(\mathbb {U}^{p,q}_{\gamma ,\mu },\mathbb {B}^{p,q}_{\mu })\) by $$\begin{aligned} \begin{aligned} {\mathcal {A}}(D)&:= \sum _{|\alpha | \le 2n}a_{\alpha }D^{\alpha }, \\ {\mathcal {B}}_{j}(D)&:= \sum _{|\beta | \le n_{j}}b_{j,\beta }\mathrm {tr}_{\partial {\mathscr {O}}}D^{\beta }, \quad \quad j=1,\ldots ,n. \end{aligned} \end{aligned}$$ In the above notation, given \(f \in \mathbb {F}^{p,q}_{\gamma ,\mu }\) and \(g=(g_{1},\ldots ,g_{n}) \in [\mathbb {B}^{p,q}_{\mu }]^{n}\), one can ask the question whether the initial-boundary value problem $$\begin{aligned} \begin{array}{rll} \partial _{t}u + {\mathcal {A}}(D)u &{}= f, \\ {\mathcal {B}}_{j}(D)u &{}= g_{j}, &{} j=1,\ldots ,n, \\ \mathrm {tr}_{t=0}u &{}= u_{0}. \end{array} \end{aligned}$$ has a unique solution \(u \in \mathbb {U}^{p,q}_{\gamma ,\mu }\). 
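A guiding example for this setup is the heat equation, treated in detail in the example below (with Dirichlet or Neumann boundary conditions): there \(2n=2\), \({\mathcal {A}}(D)=-\Delta \), and there is a single boundary operator (\(n=1\)), either the Dirichlet trace \({\mathcal {B}}_{1}(D)u = \mathrm {tr}_{\partial {\mathscr {O}}}u\) (of order \(n_{1}=0\)) or the Neumann trace \({\mathcal {B}}_{1}(D)u = \partial _{\nu }u = \mathrm {tr}_{\partial {\mathscr {O}}}(\nu \cdot \nabla u)\) (of order \(n_{1}=1\)), both with constant coefficients.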
Definition 3.1 We say that problem (20) enjoys the property of maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity if there exists a (necessarily unique) linear space \({\mathscr {D}}_{i.b.} \subset [\mathbb {B}^{p,q}_{\mu }]^{n} \times L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)\) such that (20) admits a unique solution \(u \in \mathbb {U}^{p,q}_{\gamma ,\mu }\) if and only if \((f,g,u_{0}) \in {\mathscr {D}} = \mathbb {F}^{p,q}_{\gamma ,\mu } \times {\mathscr {D}}_{i.b.}\). In this situation, we call \({\mathscr {D}}_{i.b.}\) the optimal space of initial-boundary data and \({\mathscr {D}}\) the optimal space of data. Remark 3.2 Let the notations be as above. If problem (20) enjoys the property of maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity, then there exists a unique Banach topology on the space of initial-boundary data \({\mathscr {D}}_{i.b.}\) such that \({\mathscr {D}}_{i.b.} \hookrightarrow [\mathbb {B}^{p,q}_{\mu }]^{n} \times L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)\). Moreover, if \({\mathscr {D}}_{i.b.}\) has been equipped with a Banach norm generating such a topology, then the solution operator $$\begin{aligned} {\mathscr {S}}: {\mathscr {D}} = \mathbb {F}^{p,q}_{\gamma ,\mu } \oplus {\mathscr {D}}_{i.b.} \longrightarrow \mathbb {U}^{p,q}_{\gamma ,\mu },\, (f,g,u_{0}) \mapsto {\mathscr {S}}(f,g,u_{0}) = u \end{aligned}$$ is an isomorphism of Banach spaces, or equivalently, $$\begin{aligned} ||u||_{\mathbb {U}^{p,q}_{\gamma ,\mu }} \eqsim ||f||_{\mathbb {F}^{p,q}_{\gamma ,\mu }} + ||(g,u_{0})||_{{\mathscr {D}}_{i.b.}}, \quad \quad u={\mathscr {S}}(f,g,u_{0}), (f,g,u_{0}) \in {\mathscr {D}}. \end{aligned}$$ The maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity problem for (20) consists of establishing maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity for (20) and explicitly determining the space \({\mathscr {D}}_{i.b.}\) together with a norm as in Remark 3.2. As the main result of this paper, Theorem 3.4, we will solve the maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity problem for (20) under the assumption that X is a UMD space and under suitable assumptions on the operators \({\mathcal {A}}(D),{\mathcal {B}}_{1}(D),\ldots ,{\mathcal {B}}_{n}(D)\). Assumptions on \(({\mathcal {A}},{\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{n})\) As in [18, 48], we will pose two types of conditions on the operators \({\mathcal {A}},{\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{n}\) for which we can solve the maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity problem for (20): smoothness assumptions on the coefficients and structural assumptions. In order to describe the smoothness assumptions on the coefficients, let \(q,p \in (1,\infty )\), \(\mu \in (-1,q-1)\), \(\gamma \in (-1,p-1)\) and put $$\begin{aligned} \kappa _{j,\gamma } := 1-\frac{n_{j}}{2n}-\frac{1}{2np}(1+\gamma ) \in (0,1), \quad \quad j=1,\ldots ,n. \end{aligned}$$ \((\mathrm {SD})\) : For \(|\alpha | = 2n\) we have \(a_{\alpha } \in BUC({\mathscr {O}} \times J;{\mathcal {B}}(X))\), and for \(|\alpha | < 2n\) we have \(a_{\alpha } \in L^{\infty }({\mathscr {O}} \times J ;{\mathcal {B}}(X))\). If \({\mathscr {O}}\) is unbounded, the limits \(a_{\alpha }(\infty ,t) := \lim _{|x| \rightarrow \infty }a_{\alpha }(x,t)\) exist uniformly with respect to \(t \in J\), \(|\alpha |=2n\).
\((\mathrm {SB})\) : For each \(j \in \{1,\ldots ,n\}\) and \(|\beta | \le n_{j}\), there exist \(s_{j,\beta } \in [q,\infty )\) and \(r_{j,\beta } \in [p,\infty )\) with $$\begin{aligned} \kappa _{j,\gamma }> \frac{1}{s_{j,\beta }} + \frac{d-1}{2nr_{j,\beta }} + \frac{|\beta |-n_{j}}{2n} \quad \text{ and } \quad \mu > \frac{q}{s_{j,\beta }}-1 \end{aligned}$$ such that $$\begin{aligned} b_{j,\beta } \in F^{\kappa _{j,\gamma }}_{s_{j,\beta },p}(J;L^{r_{j,\beta }}(\partial {\mathscr {O}};{\mathcal {B}}(X))) \cap L^{s_{j,\beta }}(J;B^{2n\kappa _{j,\gamma }}_{r_{j,\beta },p}(\partial {\mathscr {O}};{\mathcal {B}}(X))). \end{aligned}$$ If \({\mathscr {O}}=\mathbb {R}^{d}_{+}\), the limits \(b_{j,\beta }(\infty ,t) := \lim _{|x'| \rightarrow \infty }b_{j,\beta }(x',t)\) exist uniformly with respect to \(t \in J\), \(j \in \{1,\ldots ,n\}\), \(|\beta |=n_{j}\). For the lower order parts of \(({\mathcal {A}},{\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{n})\), we only need \(a_{\alpha }D^{\alpha }\), \(|\alpha | < 2n\), and \(b_{j,\beta }\mathrm {tr}_{\partial {\mathscr {O}}}D^{\beta }\), \(|\beta |<n_{j}\), \(j=1,\ldots ,n\), to act as lower order perturbations in the sense that there exists \(\sigma \in [2n-1,2n)\) such that \(a_{\alpha }D^{\alpha }\), respectively, \(b_{j,\beta }\mathrm {tr}_{\partial {\mathscr {O}}}D^{\beta }\) is bounded from $$\begin{aligned} H^{\frac{\sigma }{2n}}_{q}(J,v_{\mu };L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)) \cap L^{q}(J,v_{\mu };H^{\sigma }_{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)) \end{aligned}$$ to \(L^{q}(J,v_{\mu };L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X))\), respectively, \(F^{\kappa _{j,\gamma }}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};X)) \cap L^{q}(J,v_{\mu };F^{2n\kappa _{j,\gamma }}_{p,p}(\partial {\mathscr {O}};X))\). Here, the latter space is the optimal space of boundary data; see the statement of the main result. Let us now turn to the two structural assumptions on \({\mathcal {A}},{\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{n}\). For each \(\phi \in [0,\pi )\), we introduce the conditions \((\mathrm {E})_{\phi }\) and \((\mathrm {LS})_{\phi }\). The condition \((\mathrm {E})_{\phi }\) is parameter ellipticity. In order to state it, we denote by the subscript \(\#\) the principal part of a differential operator: given a differential operator \(P(D)=\sum _{|\gamma | \le k}p_{\gamma }D^{\gamma }\) of order \(k \in \mathbb {N}\), \(P_{\#}(D) = \sum _{|\gamma | = k}p_{\gamma }D^{\gamma }\). \((\mathrm {E})_{\phi }\) : For all \(t \in {\overline{J}}\), \(x \in \overline{{\mathscr {O}}}\) and \(|\xi |=1\) it holds that \(\sigma ({\mathcal {A}}_{\#}(x,\xi ,t)) \subset \Sigma _{\phi }\). If \({\mathscr {O}}\) is unbounded, then it in addition holds that \(\sigma ({\mathcal {A}}_{\#}(\infty ,\xi ,t)) \subset \mathbb {C}_{+}\) for all \(t \in {\overline{J}}\) and \(|\xi |=1\). The condition \((\mathrm {LS})_{\phi }\) is a condition of Lopatinskii–Shapiro type. Before we can state it, we need to introduce some notation. For each \(x \in \partial {\mathscr {O}}\), we fix an orthogonal matrix \(O_{\nu (x)}\) that rotates the outer unit normal \(\nu (x)\) of \(\partial {\mathscr {O}}\) at x to \((0,\ldots ,0,-1) \in \mathbb {R}^{d}\) and define the rotated operators \(({\mathcal {A}}^{\nu },{\mathcal {B}}^{\nu })\) by $$\begin{aligned} {\mathcal {A}}^{\nu }(x,D,t) := {\mathcal {A}}(x,O_{\nu (x)}^{T}D,t), \quad {\mathcal {B}}^{\nu }(x,D,t) := {\mathcal {B}}(x,O_{\nu (x)}^{T}D,t).
\end{aligned}$$ \((\mathrm {LS})_{\phi }\) : For each \(t \in {\overline{J}}\), \(x \in \partial {\mathscr {O}}\), \(\lambda \in {\overline{\Sigma }}_{\pi -\phi }\) and \(\xi ' \in \mathbb {R}^{d-1}\) with \((\lambda ,\xi ') \ne 0\) and all \(h \in X^{n}\), the ordinary initial value problem $$\begin{aligned} \begin{array}{rlll} \lambda w(y) + {\mathcal {A}}^{\nu }_{\#}(\xi ',D_{y},t)w(y) &{}= 0, &{} y > 0 &{} \\ {\mathcal {B}}^{\nu }_{j,\#}(\xi ',D_{y},t)w(y)|_{y=0} &{}= h_{j}, &{} j=1,\ldots ,n. \end{array} \end{aligned}$$ has a unique solution \(w \in C^{\infty }([0,\infty );X)\) with \(\lim _{y \rightarrow \infty }w(y)=0\). Statement of the main result Let \({\mathscr {O}}\) be either \(\mathbb {R}^{d}_{+}\) or a \(C^{\infty }\)-domain in \(\mathbb {R}^{d}\) with a compact boundary \(\partial {\mathscr {O}}\). Let X be a Banach space, \(q,p \in (1,\infty )\), \(\mu \in (-1,q-1)\), \(\gamma \in (-1,p-1)\) and \(n,n_{1},\ldots ,n_{n} \in \mathbb {N}\) natural numbers with \(n_{j} \le 2n - 1\) for each \(j \in \{1,\ldots ,n\}\), and \(\kappa _{1,\gamma },\ldots ,\kappa _{n,\gamma } \in (0,1)\) as defined in (21). Put $$\begin{aligned} \mathbb {I}^{p,q}_{\gamma ,\mu }&:= B^{2n(1-\frac{1+\mu }{q})}_{p,q}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X), \quad \text{(initial } \text{ data } \text{ space) } \nonumber \\ \mathbb {G}^{p,q}_{\gamma ,\mu ,j}&:= F^{\kappa _{j,\gamma }}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};X)) \cap L^{q}(J,v_{\mu };F^{2n\kappa _{j,\gamma }}_{p,p}(\partial {\mathscr {O}};X)), \quad j=1,\ldots ,n, \nonumber \\ \mathbb {G}^{p,q}_{\gamma ,\mu }&:= \mathbb {G}^{p,q}_{1,\mu ,\gamma } \oplus \ldots \oplus \mathbb {G}^{p,q}_{n,\mu ,\gamma }. \quad \text{(space } \text{ of } \text{ boundary } \text{ data } \text{ g) } \end{aligned}$$ Furthermore, let \(\mathbb {U}^{p,q}_{\gamma ,\mu }\) and \(\mathbb {F}^{p,q}_{\gamma ,\mu }\) be as in (18). Let the notations be as above. Suppose that X is a UMD space, that \({\mathcal {A}}(D),{\mathcal {B}}_{1}(D),\ldots ,{\mathcal {B}}_{n}(D)\) satisfy the conditions \((\mathrm {SD})\), \((\mathrm {SB})\), \((\mathrm {E})_{\phi }\) and \((\mathrm {LS})_{\phi }\) for some \(\phi \in (0,\frac{\pi }{2})\), and that \(\kappa _{j,\gamma } \ne \frac{1+\mu }{q}\) for all \(j \in \{1,\ldots ,n\}\). Put $$\begin{aligned} \mathbb {D}^{p,q}_{\gamma ,\mu } := \left\{ (g,u_{0}) \in \mathbb {G}^{p,q}_{\gamma ,\mu } \oplus \mathbb {I}^{p,q}_{\gamma ,\mu } : \mathrm {tr}_{t=0}g_{j} - {\mathcal {B}}^{t=0}_{j}(D)u_{0} = 0 \,\,\text{ when }\,\, \kappa _{j,\gamma } > \frac{1+\mu }{q} \right\} , \end{aligned}$$ where \({\mathcal {B}}^{t=0}_{j}(D) := \sum _{|\beta | \le n_{j}}b_{j,\beta }(0,\,\cdot \,)\mathrm {tr}_{\partial {\mathscr {O}}}D^{\beta }\). Then, problem (20) enjoys the property of maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity with \(\mathbb {D}^{p,q}_{\gamma ,\mu }\) as the optimal space of initial-boundary data, i.e., problem (20) admits a unique solution \(u \in \mathbb {U}^{p,q}_{\gamma ,\mu }\) if and only if \((f,g,u_{0}) \in \mathbb {F}^{p,q}_{\gamma ,\mu } \oplus \mathbb {D}^{p,q}_{\gamma ,\mu }\). Moreover, the corresponding solution operator \({\mathscr {S}}: \mathbb {F}^{p,q}_{\gamma ,\mu } \oplus \mathbb {D}^{p,q}_{\gamma ,\mu } \longrightarrow \mathbb {U}^{p,q}_{\gamma ,\mu }\) is an isomorphism of Banach spaces. 
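To make the exponents concrete: in the second-order case \(2n=2\), formula (21) gives \(\kappa _{1,\gamma } = 1-\frac{1}{2p}(1+\gamma )\) for a boundary operator of order \(n_{1}=0\) and \(\kappa _{1,\gamma } = \frac{1}{2}-\frac{1}{2p}(1+\gamma )\) for one of order \(n_{1}=1\), so that for the Dirichlet trace, for instance, the space of boundary data becomes $$\begin{aligned} \mathbb {G}^{p,q}_{\gamma ,\mu ,1} = F^{1-\frac{1}{2p}(1+\gamma )}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};X)) \cap L^{q}(J,v_{\mu };F^{2-\frac{1}{p}(1+\gamma )}_{p,p}(\partial {\mathscr {O}};X)), \end{aligned}$$ in accordance with the heat equation example below.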
The compatibility condition \(\mathrm {tr}_{t=0}g_{j} - {\mathcal {B}}^{t=0}_{j}(D)u_{0} = 0\) in the definition of \(\mathbb {D}^{p,q}_{\gamma ,\mu }\) is basically imposed when \((g_{j},u_{0}) \mapsto \mathrm {tr}_{t=0}g_{j} - {\mathcal {B}}^{t=0}_{j}(D)u_{0}\) makes sense as a continuous linear operator from \(\mathbb {G}^{p,q}_{\gamma ,\mu ,j} \oplus \mathbb {I}^{p,q}_{\gamma ,\mu }\) to some topological vector space V. That it is indeed a well-defined continuous linear operator from \(\mathbb {G}^{p,q}_{\gamma ,\mu ,j} \oplus \mathbb {I}^{p,q}_{\gamma ,\mu }\) to \(L^{0}(\partial {\mathscr {O}};X)\) when \(\kappa _{j,\gamma } > \frac{1+\mu }{q}\) can be seen by combining the following two points: First, suppose \(\kappa _{j,\gamma } > \frac{1+\mu }{q}\). Then, the condition \((\mathrm {SB})\) yields \(b_{j,\beta } \in F^{\kappa _{j,\gamma }}_{s_{j,\beta },p}(J;L^{r_{j,\beta }}(\partial {\mathscr {O}};{\mathcal {B}}(X)))\) with \(\kappa _{j,\gamma }> \frac{1+\mu }{q} > \frac{1}{s_{j,\beta }}\). By [49, Proposition 7.4], $$\begin{aligned} F^{\kappa _{j,\gamma }}_{s_{j,\beta },p}(J;L^{r_{j,\beta }}(\partial {\mathscr {O}};{\mathcal {B}}(X))) \hookrightarrow BUC(J;L^{r_{j,\beta }}(\partial {\mathscr {O}};{\mathcal {B}}(X))). \end{aligned}$$ Furthermore, it holds that \(2n(1-\frac{1+\mu }{q}) > n_{j} + \frac{1+\gamma }{p}\), so each \(\mathrm {tr}_{\partial {\mathscr {O}}}D^{\beta }\), \(|\beta | \le n_{j}\), is a continuous linear operator from \(\mathbb {I}^{p,q}_{\gamma ,\mu }\) to \(B^{2n(1-\frac{1+\mu }{q})-n_{j}-\frac{1+\gamma }{p}}_{p,q}(\partial {\mathscr {O}};X) \hookrightarrow L^{p}(\partial {\mathscr {O}};X)\) by the trace theory from Sect. 4.1. Therefore, \({\mathcal {B}}^{t=0}_{j}(D) = \sum _{|\beta | \le n_{j}}b_{j,\beta }(0,\,\cdot \,)\mathrm {tr}_{\partial {\mathscr {O}}}D^{\beta }\) makes sense as a continuous linear operator from \(\mathbb {I}^{p,q}_{\gamma ,\mu }\) to \(L^{0}(\partial {\mathscr {O}};X)\). Second, suppose \(\kappa _{j,\gamma } > \frac{1+\mu }{q}\). The observation that $$\begin{aligned} \mathbb {G}^{p,q}_{\gamma ,\mu ,j} \hookrightarrow F^{\kappa _{j,\gamma }}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};X)) \end{aligned}$$ in combination with the trace theory from Sect. 4.1 yields that \(\mathrm {tr}_{t=0}\) is a well-defined continuous linear operator from \(\mathbb {G}^{p,q}_{\gamma ,\mu ,j}\) to \(L^{p}(\partial {\mathscr {O}};X) \hookrightarrow L^{0}(\partial {\mathscr {O}};X)\). The \(C^{\infty }\)-smoothness on \(\partial {\mathscr {O}}\) in Theorem 3.4 can actually be reduced to \(C^{2n}\)-smoothness, which could be derived from the theorem itself by a suitable coordinate transformation. Notice the dependence of the space of initial-boundary data on the weight parameters \(\mu \) and \(\gamma \). For fixed \(q,p \in (1,\infty )\), we can, roughly speaking, decrease the required smoothness (or regularity) of g and \(u_{0}\) by increasing \(\gamma \) and \(\mu \), respectively. Furthermore, compatibility conditions can be avoided by choosing \(\mu \) and \(\gamma \) big enough. So the weights make it possible to solve (20) for more initial-boundary data (compared to the unweighted setting). On the other hand, by choosing \(\mu \) and \(\gamma \) closer to \(-1\) (depending on the initial-boundary data), we can find more information about the behavior of u near the initial time and near the boundary, respectively.
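As a quick numerical preview (in the setting of the Dirichlet heat equation treated in the example below, with \(p=q=2\) and \(N \in \mathbb {N}\)): for \(\mu =\gamma =0\) one needs \(u_{0} \in B^{1}_{2,2}({\mathscr {O}};\mathbb {C}^{N})\) together with the compatibility condition \(\mathrm {tr}_{t=0}g=\mathrm {tr}_{\partial {\mathscr {O}}}u_{0}\), since \(2-\frac{2}{q}(1+\mu )=1 > \frac{1}{2}=\frac{1}{p}(1+\gamma )\); choosing instead \(\mu =\frac{3}{4}\) (still admissible, as \(\mu \in (-1,q-1)=(-1,1)\)) only requires \(u_{0} \in B^{1/4}_{2,2}({\mathscr {O}};\mathbb {C}^{N})\), and since \(2-\frac{2}{q}(1+\mu )=\frac{1}{4} < \frac{1}{2}\) no compatibility condition is imposed at all.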
The dependence on the weight parameters \(\mu \) and \(\gamma \) is illustrated in the following example of the heat equation with Dirichlet and Neumann boundary conditions: Let \(N \in \mathbb {N}\) and let \(p,q,\gamma ,\mu \) be as above. The heat equation with Dirichlet boundary condition: If \(2-\frac{2}{q}(1+\mu ) \ne \frac{1}{p}(1+\gamma )\), then the problem $$\begin{aligned} \begin{array}{rll} \partial _{t}u -\Delta u &{}= f, \\ \mathrm {tr}_{\partial {\mathscr {O}}}u &{}= g, \\ u(0) &{}= u_{0}, \end{array} \end{aligned}$$ has a unique solution \(u \in W^{1}_{q}(J,v_{\mu };L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };\mathbb {C}^{N})) \cap L^{q}(J,v_{\mu };W^{2}_{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };\mathbb {C}^{N}))\) if and only if the data \((f,g,u_{0})\) satisfy: \(f \in L^{q}(J,v_{\mu };L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };\mathbb {C}^{N}))\); \(g \in F^{1-\frac{1}{2p}(1+\gamma )}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};\mathbb {C}^{N})) \cap L^{q}(J,v_{\mu };F^{2-\frac{1}{p}(1+\gamma )}_{p,p}(\partial {\mathscr {O}};\mathbb {C}^{N}))\); \(u_{0} \in B^{2-\frac{2}{q}(1+\mu )}_{p,q}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };\mathbb {C}^{N})\); \(\mathrm {tr}_{t=0}g = \mathrm {tr}_{\partial {\mathscr {O}}}u_{0}\) when \(2-\frac{2}{q}(1+\mu ) > \frac{1}{p}(1+\gamma )\). The heat equation with Neumann boundary condition: If \(1-\frac{2}{q}(1+\mu ) \ne \frac{1}{p}(1+\gamma )\), then the problem $$\begin{aligned} \begin{array}{rll} \partial _{t}u -\Delta u &{}= f, \\ \partial _{\nu }u &{}= g, \\ u(0) &{}= u_{0}, \end{array} \end{aligned}$$ has a unique solution \(u\) in the same space if and only if \(f\) and \(u_{0}\) are as in the Dirichlet case, \(g \in F^{\frac{1}{2}-\frac{1}{2p}(1+\gamma )}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};\mathbb {C}^{N})) \cap L^{q}(J,v_{\mu };F^{1-\frac{1}{p}(1+\gamma )}_{p,p}(\partial {\mathscr {O}};\mathbb {C}^{N}))\), and \(\mathrm {tr}_{t=0}g = \partial _{\nu }u_{0}\) when \(1-\frac{2}{q}(1+\mu ) > \frac{1}{p}(1+\gamma )\). Trace theory In this section, we establish the necessary trace theory for the maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity problem for (20). Traces of isotropic spaces In this subsection, we state trace results for the isotropic spaces, for which we refer to [44] (also see the references there). Note that these are of course special cases of the more general anisotropic mixed-norm spaces, for which trace theory (for the model problem case of a half-space) can be found in the next subsections and in [42]. The following notation will be convenient: $$\begin{aligned} \partial B^{s}_{p,q,\gamma }(\partial {\mathscr {O}};X) := B^{s-\frac{1+\gamma }{p}}_{p,q}(\partial {\mathscr {O}};X) \quad \quad \text{ and } \quad \quad \partial F^{s}_{p,q,\gamma }(\partial {\mathscr {O}};X) := F^{s-\frac{1+\gamma }{p}}_{p,p}(\partial {\mathscr {O}};X). \end{aligned}$$ Proposition 4.1 Let X be a Banach space, \({\mathscr {O}} \subset \mathbb {R}^{d}\) either \(\mathbb {R}^{d}_{+}\) or a \(C^{\infty }\)-domain in \(\mathbb {R}^{d}\) with a compact boundary \(\partial {\mathscr {O}}\), \({\mathscr {A}} \in \{B,F\}\), \(p \in [1,\infty )\), \(q \in [1,\infty ]\), \(\gamma \in (-1,\infty )\) and \(s>\frac{1+\gamma }{p}\). Then $$\begin{aligned} {\mathcal {S}}(\mathbb {R}^{d};X) \longrightarrow {\mathcal {S}}(\partial {\mathscr {O}};X),\, f \mapsto f_{|\partial {\mathscr {O}}}, \end{aligned}$$ uniquely extends to a retraction \(\mathrm {tr}_{\partial {\mathscr {O}}}\) from \({\mathscr {A}}^{s}_{p,q}(\mathbb {R}^{d},w^{\partial {\mathscr {O}}}_{\gamma };X)\) onto \(\partial {\mathscr {A}}^{s}_{p,q,\gamma }(\partial {\mathscr {O}};X)\).
There is a universal coretraction in the sense that there exists an operator \(\mathrm {ext}_{\partial {\mathscr {O}}} \in {\mathcal {L}}({\mathcal {S}}'(\partial {\mathscr {O}};X),{\mathcal {S}}'(\mathbb {R}^{d};X))\) (independent of \({\mathscr {A}},p,q,\gamma ,s\)) which restricts to a coretraction for the operator \(\mathrm {tr}_{\partial {\mathscr {O}}} \in {\mathcal {B}}({\mathscr {A}}^{s}_{p,q}(\mathbb {R}^{d},w^{\partial {\mathscr {O}}}_{\gamma };X),\partial {\mathscr {A}}^{s}_{p,q,\gamma }(\partial {\mathscr {O}};X))\). The same statements hold true with \(\mathbb {R}^{d}\) replaced by \({\mathscr {O}}\). Recall that \({\mathcal {S}}(\mathbb {R}^{d};X)\) is dense in \({\mathscr {A}}^{s}_{p,q}(\mathbb {R}^{d},w^{\partial {\mathscr {O}}}_{\gamma };X)\) for \(q<\infty \) but not for \(q=\infty \). For \(q=\infty \) uniqueness of the extension follows from the trivial embedding \({\mathscr {A}}^{s}_{p,\infty }(\mathbb {R}^{d},w^{\partial {\mathscr {O}}}_{\gamma };X) \hookrightarrow B^{s-\epsilon }_{p,1}(\mathbb {R}^{d},w^{\partial {\mathscr {O}}}_{\gamma };X)\), \(\epsilon > 0\). Corollary 4.3 Let X be a Banach space, \({\mathscr {O}} \subset \mathbb {R}^{d}\) either \(\mathbb {R}^{d}_{+}\) or a \(C^{\infty }\)-domain in \(\mathbb {R}^{d}\) with a compact boundary \(\partial {\mathscr {O}}\), \(p \in (1,\infty )\), \(\gamma \in (-1,p-1)\), \(n \in \mathbb {N}_{>0}\) and \(s > \frac{1+\gamma }{p}\). Then uniquely extends to retractions \(\mathrm {tr}_{\partial {\mathscr {O}}}\) from \(W^{n}_{p}(\mathbb {R}^{d},w^{\partial {\mathscr {O}}}_{\gamma };X)\) onto \(F^{n-\frac{1+\gamma }{p}}_{p,p}(\partial {\mathscr {O}};X)\) and from \(W^{s}_{p}(\mathbb {R}^{d},w^{\partial {\mathscr {O}}}_{\gamma };X)\) onto \(F^{s-\frac{1+\gamma }{p}}_{p,p}(\partial {\mathscr {O}};X)\). The same statement holds true with \(\mathbb {R}^{d}\) replaced by \({\mathscr {O}}\). Traces of intersection spaces For the maximal \(L^{q}_{\mu }\)–\(L^{p}_{\gamma }\)-regularity problem for (20), we need to determine the temporal and spatial trace spaces of Sobolev and Bessel potential spaces of intersection type. As the temporal trace spaces can be obtained from the trace results in [50], we will focus on the spatial traces. By the trace theory of the previous subsection, the trace operator \(\mathrm {tr}_{\partial {\mathscr {O}}}\) can be defined pointwise in time on the intersection spaces in the following theorem. It will be convenient to use the notation \(\mathrm {tr}_{\partial {\mathscr {O}}}[\mathbb {E}]=\mathbb {F}\) to say that \(\mathrm {tr}_{\partial {\mathscr {O}}}\) is a retraction from \(\mathbb {E}\) onto \(\mathbb {F}\). Let \({\mathscr {O}}\) be either \(\mathbb {R}^{d}_{+}\) or a \(C^{\infty }\)-domain in \(\mathbb {R}^{d}\) with a compact boundary \(\partial {\mathscr {O}}\). Let X be a Banach space, Y a UMD Banach space, \(p,q \in (1,\infty )\), \(\mu \in (-1,q-1)\) and \(\gamma \in (-1,p-1)\). 
If \(n,m \in \mathbb {Z}_{>0}\) and \(r,s \in (0,\infty )\) with \(s > \frac{1+\gamma }{p}\), then $$\begin{aligned} \begin{aligned}&\mathrm {tr}_{\partial {\mathscr {O}}}\left[ W^{n}_{q}(J,v_{\mu };L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)) \cap L^{q}(J,v_{\mu };W^{m}_{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)) \right] \\&\qquad = F^{n-\frac{n}{m}\frac{1+\gamma }{p}}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};X)) \cap L^{q}(J,v_{\mu };F^{m-\frac{1+\gamma }{p}}_{p,p}(\partial {\mathscr {O}};X)) \end{aligned} \end{aligned}$$ $$\begin{aligned} \begin{aligned}&\mathrm {tr}_{\partial {\mathscr {O}}}\left[ H^{r}_{q}(J,v_{\mu };L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };Y)) \cap L^{q}(J,v_{\mu };H^{s}_{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };Y)) \right] \\&\qquad = F^{r-\frac{r}{s}\frac{1+\gamma }{p}}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};Y)) \cap L^{q}(J,v_{\mu };F^{s-\frac{1+\gamma }{p}}_{p,p}(\partial {\mathscr {O}};Y)). \end{aligned} \end{aligned}$$ The main idea behind the proof of Theorem 4.4 is, as in [60], to exploit the independence of the trace space of a Triebel–Lizorkin space on its microscopic parameter. As in [60], our approach does not require any restrictions on the Banach space X. The UMD restriction on Y comes from the localization procedure for Bessel potential spaces used in the proof, which can be omitted in the case \({\mathscr {O}} = \mathbb {R}^{d}_{+}\). This localization procedure for Bessel potential spaces could be replaced by a localization procedure for weighted anisotropic mixed-norm Triebel–Lizorkin spaces, which would not require any restrictions on the Banach space Y. However, we have chosen to avoid this as localization of such Triebel–Lizorkin spaces has not been considered in the literature before, while we do not need that generality anyway. For localization in the scalar-valued isotropic non-mixed-norm case, we refer to [44]. Proof of Theorem 4.4 By standard techniques of localization, it suffices to consider the case \({\mathscr {O}} = \mathbb {R}^{d}_{+}\) with boundary \(\partial {\mathscr {O}} = \mathbb {R}^{d-1}\). Moreover, using a standard restriction argument, we may turn to the corresponding trace problem on the full space \({\mathscr {O}} \times J = \mathbb {R}^{d} \times \mathbb {R}\). From the natural identifications $$\begin{aligned} W^{n}_{q,\mu }(L^{p}_{\gamma }) \cap L^{q}_{\mu }(W^{m}_{p,\gamma }) = W^{(m,n)}_{(p,q),(d,1)}(\mathbb {R}^{d+1},(w_{\gamma },v_{\mu });X) \end{aligned}$$ $$\begin{aligned} H^{r}_{q,\mu }(L^{p}_{\gamma }) \cap L^{q}_{\mu }(H^{s}_{p,\gamma }) = H^{(s,r)}_{(p,q),(d,1)}(\mathbb {R}^{d+1},(w_{\gamma },v_{\mu });Y), \end{aligned}$$ (16) and Corollary 4.9, it follows that $$\begin{aligned} \mathrm {tr}\,[W^{n}_{q,\mu }(L^{p}_{\gamma }) \cap L^{q}_{\mu }(W^{m}_{p,\gamma })] = F^{1-\frac{1}{m}\frac{1+\gamma }{p},\left( \frac{1}{m},\frac{1}{n}\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X) \end{aligned}$$ $$\begin{aligned} \mathrm {tr}\,[H^{r}_{q,\mu }(L^{p}_{\gamma }) \cap L^{q}_{\mu }(H^{s}_{p,\gamma }) ] = F^{1-\frac{1}{s}\frac{1+\gamma }{p},\left( \frac{1}{s},\frac{1}{r}\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });Y). \end{aligned}$$ An application of Theorem 2.1 finishes the proof. \(\square \) Traces of anisotropic mixed-norm spaces The goal of this subsection is to prove the trace result Theorem 4.6, which is a weighted vector-valued version of [36, Theorem 2.2]. 
In contrast to Theorem 4.6, the trace result [36, Theorem 2.2] is formulated for the distributional trace operator; see Remark 4.8 for more information. However, all estimates in the proof of that result are carried out for the "working definition of the trace." The proof of Theorem 4.6 presented below basically consists of modifications of these estimates to our setting. As this can get quite technical at some points, we have decided to give the proof in full detail. The working definition of the trace Let \(\varphi \in \Phi ^{{\varvec{a}}}(\mathbb {R}^{d})\) with associated family of convolution operators \((S_{n})_{n \in \mathbb {N}} \subset {\mathcal {L}}({\mathcal {S}}'(\mathbb {R}^{d};X))\) be fixed. In order to motivate the definition to be given in a moment, let us first recall that \(f = \sum _{n=0}^{\infty }S_{n}f\) in \({\mathcal {S}}(\mathbb {R}^{d};X)\) (respectively, in \({\mathcal {S}}'(\mathbb {R}^{d};X)\)) whenever \(f \in {\mathcal {S}}(\mathbb {R}^{d};X)\) (respectively, \(f \in {\mathcal {S}}'(\mathbb {R}^{d};X)\)), from which it is easy to see that $$\begin{aligned} f_{|\{0\} \times \mathbb {R}^{d-1}} = \sum _{n=0}^{\infty }(S_{n}f)_{|\{0\} \times \mathbb {R}^{d-1}} \,\,\text{ in }\,\,{\mathcal {S}}(\mathbb {R}^{d-1};X), \quad \quad f \in {\mathcal {S}}(\mathbb {R}^{d};X). \end{aligned}$$ Furthermore, given a general tempered distribution \(f \in {\mathcal {S}}'(\mathbb {R}^{d};X)\), recall that \(S_{n}f \in {\mathscr {O}}_{M}(\mathbb {R}^{d};X)\); in particular, each \(S_{n}f\) has a well-defined classical trace with respect to \(\{0\} \times \mathbb {R}^{d-1}\). This suggests defining the trace operator \(\tau = \tau ^{\varphi }: {\mathcal {D}}(\tau ^{\varphi }) \subset {\mathcal {S}}'(\mathbb {R}^{d};X) \longrightarrow {\mathcal {S}}'(\mathbb {R}^{d-1};X)\) by $$\begin{aligned} \tau ^{\varphi }f := \sum _{n=0}^{\infty }(S_{n}f)_{|\{0\} \times \mathbb {R}^{d-1}} \end{aligned}$$ on the domain \({\mathcal {D}}(\tau ^{\varphi })\) consisting of all \(f \in {\mathcal {S}}'(\mathbb {R}^{d};X)\) for which this defining series converges in \({\mathcal {S}}'(\mathbb {R}^{d-1};X)\). Note that \({\mathscr {F}}^{-1}{\mathcal {E}}'(\mathbb {R}^{d};X)\) is a subspace of \({\mathcal {D}}(\tau ^{\varphi })\) on which \(\tau ^{\varphi }\) coincides with the classical trace of continuous functions with respect to \(\{0\} \times \mathbb {R}^{d-1}\); of course, for an f belonging to \({\mathscr {F}}^{-1}{\mathcal {E}}'(\mathbb {R}^{d};X)\) there are only finitely many \(S_{n}f\) nonzero. The distributional trace operator Let us now introduce the concept of distributional trace operator. The reason for us to introduce it is the right inverse from Lemma 4.5. The distributional trace operator r (with respect to the hyperplane \(\{0\} \times \mathbb {R}^{d-1}\)) is defined as follows.
Viewing \(C(\mathbb {R};{\mathcal {D}}'(\mathbb {R}^{d-1};X))\) as subspace of \({\mathcal {D}}'(\mathbb {R}^{d};X) = {\mathcal {D}}'(\mathbb {R}\times \mathbb {R}^{d-1};X)\) via the canonical identification \({\mathcal {D}}'(\mathbb {R};{\mathcal {D}}'(\mathbb {R}^{d-1};X)) = {\mathcal {D}}'(\mathbb {R}\times \mathbb {R}^{d-1};X)\) (arising from the Schwartz kernel theorem), $$\begin{aligned} C(\mathbb {R};{\mathcal {D}}'(\mathbb {R}^{d-1};X)) \hookrightarrow {\mathcal {D}}'(\mathbb {R};{\mathcal {D}}'(\mathbb {R}^{d-1};X)) = {\mathcal {D}}'(\mathbb {R}\times \mathbb {R}^{d-1};X), \end{aligned}$$ we define \(r \in {\mathcal {L}}(C(\mathbb {R};{\mathcal {D}}'(\mathbb {R}^{d-1};X)),{\mathcal {D}}'(\mathbb {R}^{d-1};X))\) as the 'evaluation in 0 map' $$\begin{aligned} r: C(\mathbb {R};{\mathcal {D}}'(\mathbb {R}^{d-1};X)) \longrightarrow {\mathcal {D}}'(\mathbb {R}^{d-1};X),\,f \mapsto \mathrm {ev}_{0}f. \end{aligned}$$ Then, in view of $$\begin{aligned} C(\mathbb {R}^{d};X) = C(\mathbb {R}\times \mathbb {R}^{d-1};X) = C(\mathbb {R};C(\mathbb {R}^{d-1};X)) \hookrightarrow C(\mathbb {R};{\mathcal {D}}'(\mathbb {R}^{d-1};X)), \end{aligned}$$ we have that the distributional trace operator r coincides on \(C(\mathbb {R}^{d};X)\) with the classical trace operator with respect to the hyperplane \(\{0\} \times \mathbb {R}^{d-1}\), i.e., $$\begin{aligned} r: C(\mathbb {R}^{d};X) \longrightarrow C(\mathbb {R}^{d-1};X),\,f \mapsto f_{| \{0\} \times \mathbb {R}^{d-1}}. \end{aligned}$$ The following lemma can be established as in [36, Section 4.2.1]. Lemma 4.5 Let \(\rho \in {\mathcal {S}}(\mathbb {R})\) such that \(\rho (0) = 1\) and \({{\,\mathrm{supp}\,}}{\hat{\rho }} \subset [1,2]\), \(a_{1} \in \mathbb {R}\), with , \(\tilde{{\varvec{a}}} \in (0,\infty )^{l-1}\), and . Then, for each \(g \in {\mathcal {S}}'(\mathbb {R}^{d-1};X)\), $$\begin{aligned} \mathrm {ext}\,g := \sum _{n=0}^{\infty } \rho (2^{na_{1}}\,\cdot \,) \otimes [\phi _{n}*g] \end{aligned}$$ defines a convergent series in \({\mathcal {S}}'(\mathbb {R}^{d};X)\) with for some constant \(c>0\) independent of g. Moreover, the operator \(\mathrm {ext}\) defined via this formula is a linear operator $$\begin{aligned} \mathrm {ext}:{\mathcal {S}}'(\mathbb {R}^{d-1};X) \longrightarrow C_{b}(\mathbb {R};{\mathcal {S}}'(\mathbb {R}^{d-1};X)) \end{aligned}$$ which acts as a right inverse of \(r:C(\mathbb {R};{\mathcal {S}}'(\mathbb {R}^{d-1};X)) \longrightarrow {\mathcal {S}}'(\mathbb {R}^{d-1};X)\). Trace spaces of Triebel–Lizorkin, Sobolev and Bessel potential spaces Let X be a Banach space, , \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\), \(\gamma \in (-1,\infty )\) and \(s > \frac{a_{1}}{p_{1}}(1+\gamma )\). Let be such that \(w_{1}(x_{1}) = w_{\gamma }(x_{1}) = |x_{1}|^{\gamma }\) and for some \({\varvec{r}}''=(r_{2},\ldots ,r_{l}) \in (0,1)^{l-1}\) satisfying .Footnote 1 Then, the trace operator \(\tau = \tau ^{\varphi }\) (25) is well defined on , where it is independent of \(\varphi \), and restricts to a retraction for which the extension operator \(\mathrm {ext}\) from Lemma 4.5 (with and \(\tilde{{\varvec{a}}}= {\varvec{a}}''\)) restricts to a corresponding coretraction. In the situation of Theorem 4.6, suppose that \(q < \infty \). 
Then, \({\mathcal {S}}(\mathbb {R}^{d};X)\) is a dense linear subspace of and \(\tau \) is just the unique extension of the classical trace operator $$\begin{aligned} {\mathcal {S}}(\mathbb {R}^{d};X) \longrightarrow {\mathcal {S}}(\mathbb {R}^{d-1};X),\, f \mapsto f_{|\{0\} \times \mathbb {R}^{d-1}}, \end{aligned}$$ to a bounded linear operator (28). In contrary to the unweighted case considered in [36], one cannot use translation arguments to show that for \(s\!>\! \frac{a_{1}}{p_{1}}(1+\gamma )\). However, for \(s\!>\! \frac{a_{1}}{p_{1}}(1+\gamma _{+})\), \({\varvec{p}} \in (1,\infty )^{l}\) and , the inclusion can be obtained as follows: picking \({\tilde{s}}\) with \(s> {\tilde{s}} > \frac{a_{1}}{p_{1}}(1+\gamma _{+})\), there holds the chain of inclusions Here, the restriction \(s > \frac{a_{1}}{p_{1}}(1+\gamma _{+})\) when \(\gamma < 0\) is natural in view of the necessity of \(s>\frac{a_{1}}{p_{1}}\) in the unweighted case with \(p_{1}>1\) (cf. [36, Theorem 2.1]). Note that the trace space of the weighted anisotropic Triebel–Lizorkin space is independent of the microscopic parameter \(q \in [1,\infty ]\). As a consequence, if \(\mathbb {E}\) is a normed space with then the trace result of Theorem 4.6 also holds for \(\mathbb {E}\) in place of . In particular, we have: Let X be a Banach space, , \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in (1,\infty )^{l}\), \(\gamma \in (-1,p_{1}-1)\) and \(s > \frac{a_{1}}{p_{1}}(1+\gamma )\). Let be such that \(w_{1}(x_{1}) = w_{\gamma }(x_{1}) = |x_{1}|^{\gamma }\). Suppose that either , \({\varvec{s}} \in (0,\infty )^{l}\), \({\varvec{s}}=s{\varvec{a}}^{-1}\). Then, the trace operator \(\tau = \tau ^{\varphi }\) (25) is well defined on \(\mathbb {E}\), where it is independent of \(\varphi \), and restricts to a retraction Traces by duality for Besov spaces Let \(i \in \{1,\ldots ,l\}\). For , we define the hyperplane and we simply put . Furthermore, given sets \(S_{1},\ldots ,S_{l}\) and \(x = (x_{1},\ldots ,x_{l}) \in \prod _{j=1}^{l}S_{j}\), we write \({\varvec{x}}^{[i]} = (x_{1},\ldots ,x_{i-1},x_{i+1},\ldots ,x_{l})\). Proposition 4.10 Let X be a Banach space, \(i \in \{1,\ldots ,l\}\), \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in (1,\infty )^{l}\), \(q \in [1,\infty )\), and . Let be such that \(w_{i}(x_{i}) = w_{\gamma }(x_{i}) = |x_{i}|^{\gamma }\) and \(w_{j} \in A_{p_{j}}\) for each \(j \ne i\). Then, the trace operator extends to a retraction for which the extension operator \(\mathrm {ext}\) from Lemma 4.5 (with and \(\tilde{{\varvec{a}}}= {\varvec{a}}^{[i]}\), modified in the obvious way to the ith multidimensional coordinate) restricts to a corresponding coretraction. Furthermore, if , then where \(\rho _{p_{i},\gamma } := \max \{|\,\cdot \,|,1\}^{-\frac{\gamma _{-}}{p_{i}}}\). Let X be a Banach space, \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in (1,\infty )^{l}\), \(q \in [1,\infty )\), and . Let be such that \(w_{j}(x_{j}) = w_{\gamma }(x_{j}) = |x_{j}|^{\gamma }\) for each \(j \in \{1,\ldots ,l\}\). Then, Thanks to the Sobolev embedding of Proposition 5.1, it is enough to treat the case , which can be obtained by l iterations of Proposition 4.10. \(\square \) Remark 4.12 The above proposition and its corollary remain valid for \(q=\infty \). In this case the norm estimate corresponding to (29) can be obtained in a similar way, from which the unique extendability to a bounded linear operator (29) can be derived via the Fatou property, (10) and the case \(q=1\). 
The remaining statements can be established in the same way as for the case \(q<\infty \). Note that if \({\varvec{\gamma }} \in [0,\infty )^{l}\) in the situation of the above corollary, then by density of the Schwartz space \({\mathcal {S}}(\mathbb {R}^{d};X) \subset BUC(\mathbb {R}^{d};X)\) in . This could also be established in the standard way by the Sobolev embedding Proposition 5.1, see for instance [49, Proposition 7.4]. Let X be a Banach space. Then, $$\begin{aligned}{}[{\mathcal {S}}'(\mathbb {R}^{d};X)]' = {\mathcal {S}}(\mathbb {R}^{d};X^{*}) \quad \quad \text{ and } \quad \quad [{\mathcal {S}}(\mathbb {R}^{d};X)]' = {\mathcal {S}}'(\mathbb {R}^{d};X^{*}) \end{aligned}$$ via the pairings induced by $$\begin{aligned} \langle f \otimes x^{*}, g \otimes x \rangle = \langle \langle f, x^{*} \rangle , \langle g, x \rangle \rangle ; \end{aligned}$$ see [4, Corollary 1.4.10]. Let \(i \in \{1,\ldots ,l\}\) and . Let be given by . Then, the adjoint operator is given by , which can be seen by testing on the dense subspace of \({\mathcal {S}}(\mathbb {R}^{d})\). Now suppose that \(\mathbb {E}\) is a locally convex space with \({\mathcal {S}}(\mathbb {R}^{d};X) {\mathop {\hookrightarrow }\limits ^{d}} \mathbb {E}\) and that \(\mathbb {F}\) is a complete locally convex space with . Then, \(\mathbb {E}' \hookrightarrow {\mathcal {S}}'(\mathbb {R}^{d};X^{*})\) and under the natural identifications, and extends to a continuous linear operator \(\mathrm {tr}_{\mathbb {E}\rightarrow \mathbb {F}}\) from \(\mathbb {E}\) to \(\mathbb {F}\) if and only if restricts to a continuous linear operator \(T_{\mathbb {F}' \rightarrow \mathbb {E}'}\) from \(\mathbb {F}'\) to \(\mathbb {E}'\), in which case \([\mathrm {tr}_{\mathbb {E}\rightarrow \mathbb {F}}]' = T_{\mathbb {F}' \rightarrow \mathbb {E}'}\). Estimates in the classical Besov and Triebel–Lizorkin spaces for the tensor product with the one-dimensional delta-distribution \(\delta _{0}\) can be found in [34, Proposition 2.6], where a different proof is given than the one below. Lemma 4.14 Let X be a Banach space, \(i \in \{1,\ldots ,l\}\), \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\), . Let be such that \(w_{i}(x_{i}) = w_{\gamma }(x_{i}) = |x_{i}|^{\gamma }\). For each consider the linear operator If , then is bounded from to . If , then is bounded from to with norm estimate In order to perform all the estimates in Lemma 4.14, we need the following two lemmas. Let \(\psi :\mathbb {R}^{d} \longrightarrow \mathbb {C}\) be a rapidly decreasing measurable function and put \(\psi _{R}:=R^{d}\psi (R\,\cdot \,)\) for each \(R>0\). Let \(p \in [1,\infty )\) and \(\gamma \in (-1,\infty )\). For every \(R>0\) and \(a \in \mathbb {R}^{d}\), the following estimate holds true: $$\begin{aligned} ||\psi _{R}(\,\cdot \,-a)||_{L^{p}(\mathbb {R}^{d},|\,\cdot \,|^{\gamma })} \lesssim R^{d-\frac{d+\gamma }{p}}(|a|R+1)^{\gamma _{+}/p} \end{aligned}$$ By [11, Condition \(B_{p}\)] (see [49, Lemma 4.5] for a proof), if w is an \(A_{q}\)-weight on \(\mathbb {R}^{d}\) with \(q \in (1,\infty )\), then $$\begin{aligned} \int _{\mathbb {R}^{d}}(1+|x-y|)^{-dq}\,dy \lesssim _{[w]_{A_{q}},q} \int _{B(x,1)}w(y)\,dy. \end{aligned}$$ So let us pick \(q \in (1,\infty )\) so that \(|\,\cdot \,|^{\gamma } \in A_{q}\). Then, as \(\psi \) is rapidly decreasing, there exists \(C>0\) such that \(|\psi (x)| \le C (1+|x|)^{-q/p}\) for every \(x \in \mathbb {R}^{d}\). 
We can thus estimate $$\begin{aligned} ||\psi _{R}(\,\cdot \,-a)||_{L^{p}(\mathbb {R}^{d},|\,\cdot \,|^{\gamma })}&= R^{d-\frac{d+\gamma }{p}}||\psi (\,\cdot \,-Ra)||_{L^{p}(\mathbb {R}^{d},|\,\cdot \,|^{\gamma } )} \\&\le C R^{d-\frac{d+\gamma }{p}} ||t \mapsto (1+|t-Ra|)^{-q/p}||_{L^{p}(\mathbb {R}^{d},|\,\cdot \,|^{\gamma } )} \\&{\mathop {\lesssim }\limits ^{(31)}} R^{d-\frac{d+\gamma }{p}}\left( \int _{B(|a|R,1)}|y|^{\gamma }\,dy \right) ^{1/p} \\&\lesssim R^{d-\frac{d+\gamma }{p}}(|a|R+1)^{\gamma _{+}/p}. \end{aligned}$$ For every \(r \in [1,\infty ]\) and \(t>0\), there exists a constant \(C > 0\) such that, for all sequences \((b_{k})_{k \in \mathbb {N}} \in \mathbb {C}^{\mathbb {N}}\), the following two inequalities hold true: $$\begin{aligned} \begin{array}{rl} \left\| \left( 2^{tk}\sum _{n=k+1}^{\infty }|b_{n}| \right) _{k \in \mathbb {N}}\right\| _{\ell ^{r}(\mathbb {N})} &{}\le C||(2^{tk}b_{k})_{k \in \mathbb {N}}||_{\ell ^{r}(\mathbb {N})}, \\ \left\| \left( 2^{-tk}\sum _{n=0}^{k}|b_{n}| \right) _{k \in \mathbb {N}}\right\| _{\ell ^{r}(\mathbb {N})} &{}\le C||(2^{-tk}b_{k})_{k \in \mathbb {N}}||_{\ell ^{r}(\mathbb {N})}. \\ \end{array} \end{aligned}$$ See [36, Lemma 4.2] (and the references given there). \(\square \) Proof of Lemma 4.14 Take with , where and . For , we then have and, for \(n \ge 1\), Applying Lemma 4.15, we obtain the estimate (i) Using (32), we can estimate As , we obtain the desired estimate by an application of the triangle inequality in followed by Lemma 4.15. (ii) Observing that the desired estimate can be derived in the same way as in (i). \(\square \) Proof of Proposition 4.10 Let us first establish (29) and (30). Thanks to the Sobolev embedding Proposition 5.1, we may restrict ourselves to the case \(\gamma \in (-1,p-1)\), so that . As and (\(s,t \in \mathbb {R}\)), we have under the natural identifications; also see the discussion preceding Lemma 4.14. In this way, we explicitly have by [43] as , where \({\varvec{p}}'=(p_{1}',\ldots ,p_{l}')\) and \({\varvec{w}} = (w_{1}^{-\frac{1}{p_{1}-1}},\ldots ,w_{l}^{-\frac{1}{p_{l}-1}})\). Note here that \({\varvec{w}}'_{i}(x_{i}) = |x_{i}|^{\gamma '}\) with \(\gamma '= -\frac{\gamma }{p_{i}-1}\). Since and , it follows from Lemma 4.14 and the discussion preceding that and, if , These two inequalities imply (29) and (30), respectively. Let us finally show that the extension operator \(\mathrm {ext}\) from Lemma 4.5 (with and \(\tilde{{\varvec{a}}}= {\varvec{a}}^{[i]}\), modified in the obvious way to the ith multidimensional coordinate) restricts to a coretraction for . To this end, we fix \(X)\). In view of (the modified version of) (27) and Lemma A.3, it suffices to estimate A simple computation even shows that The proof of Theorem 4.6 For the proof of Theorem 4.6, we need three lemmas. Two lemmas concern estimates in Triebel–Lizorkin spaces for series satisfying certain Fourier support conditions, which can be found in "Appendix A." The other lemma is Lemma 4.16. Let the notations be as in Proposition 4.5. We will show that, for an arbitrary , \(\tau ^{\varphi }\) exists on and defines a continuous operator The extension operator \(\mathrm {ext}\) from Proposition 4.5 (with and \(\tilde{{\varvec{a}}}={\varvec{a}}''\)) restricts to a continuous operator Since is a dense subspace of , the right inverse part in the first assertion follows from (I) and (II). 
The independence of \(\varphi \) in the first assertion follows from denseness of \({\mathcal {S}}(\mathbb {R}^{d};X)\) in in case \(q<\infty \), from which the case \(q=\infty \) can be deduced via a combination of (10) and (11). (I): We may with out loss of generality assume that \(q=\infty \). Let \({(w_{\gamma },{\varvec{w}}'');X)}\) and write \(f_{n} := S_{n}f\) for each n. Then each \(f_{n} \in {\mathcal {S}}'(\mathbb {R}^{d};X)\) has Fourier support for some constant \(c>0\) only depending on \(\varphi \). Therefore, as a consequence of the Paley–Wiener–Schwartz theorem, we have \(f_{n}(0,\cdot ) \in {\mathcal {S}}'(\mathbb {R}^{d-1};X)\) with Fourier support contained in . In view of Lemma-A.1, it suffices to show that In order to establish estimate (33), we pick an \(r_{1} \in (0,1)\) such that \(w_{\gamma } \in A_{p_{1}/r_{1}}(\mathbb {R})\), and write \({\varvec{r}}:=(r_{1},{\varvec{r}}'') \in (0,1)^{l}\). For all \(x=(x_{1},x'') \in [2^{-na_{1}},2^{(1-n)a_{1}}] \times \mathbb {R}^{d-1}\) and every \(n \in \mathbb {N}\), we have where \({\varvec{b}}^{[n]}:= (2^{na_{1}},\ldots ,2^{na_{l}}) \in (0,\infty )^{l}\) and where is the maximal function of Peetre–Fefferman–Stein type given in (59). Raising this to the \(p_{1}\)th power, multiplying by \(2^{nsp_{1}}|x_{1}|^{\gamma }\), and integrating over \(x_{1} \in [2^{-na_{1}},2^{(1-n)a_{1}}]\), we obtain It now follows that from which we in turn obtain Since \((f_{k})_{k \in \mathbb {N}} \subset {\mathcal {S}}'(\mathbb {R}^{d};X)\) satisfies for each \(k \in \mathbb {N}\) and some \(c>0\), the desired estimate (33) is now a consequence of Proposition A.6. (II): We may with out loss of generality assume that \(q=1\). Let \({(\mathbb {R}^{d-1},{\varvec{w}}'';X)}\) and write \(g_{n}=T_{n}g\) for each n. By construction of \(\mathrm {ext}\) we have \(\mathrm {ext}\,g = \sum _{n=0}^{\infty }\rho (2^{na_{1}}\,\cdot \,) \otimes g_{n}\) in \({\mathcal {S}}'(\mathbb {R}^{d};X)\) with each \(\rho (2^{na_{1}}\,\cdot \,) \otimes g_{n}\) satisfying (27) for a \(c > 1\) independent of g. In view of Lemma A.2, it is thus enough to show that In order to establish estimate (34), we define, for each \(x'' \in \mathbb {R}^{d-1}\), $$\begin{aligned} I(x'') := \int _{\mathbb {R}}\left( \sum _{n=0}^{\infty }2^{sn}||\rho (2^{na_{1}}x_{1})g_{n}(x'')|| \right) ^{p_{1}}|x_{1}|^{\gamma }dx_{1}. \end{aligned}$$ We furthermore first choose a natural number \(N > \frac{1}{p_{1}}(1+\gamma )\) and subsequently pick a constant \(C_{1} > 0\) for which the Schwartz function \(\rho \in {\mathcal {S}}(\mathbb {R})\) satisfies the inequality \(|\rho (2^{na_{1}}x_{1})| \le C_{1}|2^{na_{1}}x_{1}|^{-N}\) for every \(n \in \mathbb {N}\) and all \(x_{1} \ne 0\). 
Denoting by \(I_{1}(x'')\) the integral over \(\mathbb {R}{\setminus } [-1,1]\) in (35), we have $$\begin{aligned} I_{1}(x'')&\le C_{1}\int _{\mathbb {R}{\setminus } [-1,1]}\left( \sum _{n=0}^{\infty }2^{-Na_{1}n} \,2^{sn}||g_{n}(x'')||\right) ^{p_{1}}|x_{1}|^{-Np_{1}+\gamma }dx_{1} \nonumber \\&= C_{1}\int _{\mathbb {R}{\setminus } [-1,1]}|x_{1}|^{-Np_{1}+\gamma }dx_{1}\left( \sum _{n=0}^{\infty } 2^{\left( \frac{1}{p_{1}}(1+\gamma )-N\right) a_{1}n}\, 2^{\left( s-\frac{a_{1}}{p_{1}}(1+\gamma )\right) n}||g_{n}(x'')|| \right) ^{p_{1}} \nonumber \\&\le \underbrace{\int _{\mathbb {R}{\setminus } [-1,1]}|x_{1}|^{-Np_{1}+\gamma }dx_{1}||\left( \,2^{\left( \frac{1}{p_{1}}(1+\gamma )-N\right) a_{1}n}\,\right) _{n \ge 0}||_{\ell ^{p_{1}'}(\mathbb {N})}^{p_{1}}}_{=:C_{2} \in [0,\infty )}\nonumber \\&\quad ||\left( \,2^{\left( s-\frac{a_{1}}{p_{1}}(1+\gamma )\right) n}||g_{n}(x'')||\,\right) _{n \ge 0}||_{\ell ^{p_{1}}(\mathbb {N})}^{p_{1}}. \end{aligned}$$ Next we denote, for each \(k \in \mathbb {N}\), by \(I_{0,k}(x'')\) the integral over \(D_{k}:=\{ x_{1} \in \mathbb {R}\mid 2^{-(k+1)a_{1}} \le |x_{1}| \le 2^{-ka_{1}}\}\) in (35). Since the \(D_{k}\) are of measure \(w_{\gamma }(D_{k}) \le C_{3} 2^{-ka_{1}(\gamma +1)}\) for some constant \(C_{3}>0\) independent of k, we can estimate $$\begin{aligned} I_{0,k}(x'')\le & {} \int _{D_{k}}\left( \sum _{n=0}^{k}2^{sn}||\rho ||_{\infty }||g_{n}(x'')|| + \sum _{n=k+1}^{\infty }C_{1}2^{(s-a_{1}N)n}|x_{1}|^{-N}||g_{n}(x'')|| \right) ^{p_{1}}|x_{1}|^{\gamma }dx_{1} \\\le & {} C_{3}2^{-ka_{1}(\gamma +1)}\left( \sum _{n=0}^{k}2^{sn}||\rho ||_{\infty }||g_{n}(x'')|| + \sum _{n=k+1}^{\infty }C_{1}2^{(s-a_{1}N)n}2^{Na_{1}(k+1)}||g_{n}(x'')|| \right) ^{p_{1}} \\\le & {} C_{3}2^{p_{1}}||\rho ||_{\infty }^{p_{1}}2^{-ka_{1}(\gamma +1)}\left( \sum _{n=0}^{k}2^{sn}||g_{n}(x'')|| \right) ^{p_{1}} \\&\,\,+\,\, C_{3}2^{p_{1}}(C_{1}2^{Na_{1}})^{p_{1}} 2^{k\left( N-\frac{1}{p_{1}}(\gamma +1)\right) a_{1}p_{1}}\left( \sum _{n=k+1}^{\infty }2^{(s-a_{1}N)n}||g_{n}(x'')|| \right) ^{p_{1}}. \end{aligned}$$ Writing \(I_{0}(x''):= \sum _{k=0}^{\infty }I_{0,k}(x'')\), which is precisely the integral over \([-1,1]\) in (35), we obtain $$\begin{aligned} I_{0}(x'')\le & {} C_{4} \sum _{k=0}^{\infty }2^{-ka_{1}(\gamma +1)}\left( \sum _{n=0}^{k}2^{sn}||g_{n}(x'')|| \right) ^{p_{1}} + C_{4}\sum _{k=0}^{\infty } 2^{k\left( N-\frac{1}{p_{1}}(\gamma +1)\right) a_{1}p_{1}}\\&\left( \sum _{n=k+1}^{\infty }2^{(s-a_{1}N)n}||g_{n}(x'')|| \right) ^{p_{1}} \\= & {} C_{4}|| \left( 2^{-\frac{a_{1}}{p_{1}}(1+\gamma )k}\sum _{n=0}^{k}2^{sn}||g_{n}(x'')||\right) _{k \in \mathbb {N}} ||_{\ell ^{p_{1}}(\mathbb {N})}^{p_{1}} \\&+ \quad C_{4}|| \left( 2^{\left( N-\frac{1}{p_{1}}(1+\gamma )\right) a_{1}k}\sum _{n=k+1}^{\infty }2^{(s-a_{1}N)n}||g_{n}(x'')||\right) _{k \in \mathbb {N}} ||_{\ell ^{p_{1}}(\mathbb {N})}^{p_{1}}, \end{aligned}$$ which via an application of Lemma 4.16 can be further estimated as $$\begin{aligned} I_{0}(x'')&\le C_{5}||\left( \,2^{-\frac{a_{1}}{p_{1}}(1+\gamma )k}2^{sk}||g_{k}(x'')||\,\right) _{k \ge 0}||_{\ell _{p_{1}}(\mathbb {N})}^{p_{1}}\nonumber \\&\quad + C_{5}||\left( \,2^{\left( N-\frac{1}{p_{1}}(\gamma +1)\right) a_{1}k}2^{(s-a_{1}N)k}||g_{k}(x'')||\,\right) _{k \ge 0}||_{\ell ^{p_{1}}(\mathbb {N})}^{p_{1}} \nonumber \\&= 2C_{5}||\left( \,2^{\left( s-\frac{a_{1}}{p_{1}}(1+\gamma )\right) k}||g_{k}(x'')||\,\right) _{k \ge 0}||_{\ell ^{p_{1}}(\mathbb {N})}^{p_{1}}. 
\end{aligned}$$ Combining estimates (36) and (37), we get $$\begin{aligned} I(x'')^{1/p_{1}} \le C_{6}||\left( \,2^{\left( s-\frac{a_{1}}{p_{1}}(1+\gamma )\right) n}||g_{n}(x'')||\,\right) _{n \ge 0}||_{\ell ^{p_{1}}(\mathbb {N})}, \end{aligned}$$ from which (34) follows by taking -norms. \(\square \) Sobolev embedding for Besov spaces The result below is a direct extension of part of [49, Proposition 1.1]. We refer to [35] for embedding results for unweighted anisotropic mixed-norm Besov space, and we refer to [32] for embedding results of weighted Besov spaces. Let X be a Banach space, \({\varvec{p}},\tilde{{\varvec{p}}} \in (1,\infty )^{l}\), \(q,{\tilde{q}} \in [1,\infty ]\), \(s,{\tilde{s}} \in \mathbb {R}\), \({\varvec{a}} \in (0,\infty )^{l}\), and . Suppose that \(J \subset \{1,\ldots ,l\}\) is such that \(p_{j} = {\tilde{p}}_{j}\) and \(w_{j} = {\tilde{w}}_{j}\) for \(j \notin J\); \(w_{j}(x_{j}) = |x_{j}|^{\gamma _{j}}\) and \({\tilde{w}}_{j}(x_{j}) = |x_{j}|^{{\tilde{\gamma }}_{j}}\) for \(j \in J\) for some satisfying Furthermore, assume that \(q \le {\tilde{q}}\) and that . Then This is an immediate consequence of inequality of Plancherel–Pólya–Nikol'skii type given in Lemma 5.2. \(\square \) Let X be a Banach space, \({\varvec{p}},\tilde{{\varvec{p}}} \in (1,\infty )^{l}\), and . Suppose that \(J \subset \{1,\ldots ,l\}\) is such that Then, there exists a constant \(C>0\) such that, for all \(f \in {\mathcal {S}}'(\mathbb {R}^{d};X)\) with for some \(R_{1},\ldots ,R_{l} > 0\), we have the inequality where for each \(j \in J\). Step I.The case\(l=1\): We refer to [49, Proposition 4.1]. Step II.The case\(J=\{l\}\): Under the canonical isomorphism (Schwartz kernel theorem), f corresponds to an element of having compact Fourier support contained in . Given a compact subset we have the continuous linear operator where , \({\varvec{p}}':=(p_{1},\ldots ,p_{l-1})\), and \({\varvec{w}}'=(w_{1},\ldots ,w_{l-1})\). Accordingly, for each compact \(K \subset \mathbb {R}^{d'}\) we have with compact Fourier support contained in , so that we may apply Step I to obtain that for some constant \(C>0\) independent of f and K. Since and , the desired result follows by taking and letting \(n \rightarrow \infty \). Step III.The case\(\#J=1\): Let us say that \(J=\{j_{0}\}\). Then, as a consequence of the Banach space-valued Paley–Wiener–Schwartz theorem, for each fixed we have that \(f(\cdot ,x'')\) defines an X-valued tempered distribution having compact Fourier support contained in . The desired inequality follows by applying Step II to \(f(\cdot ,x'')\) for each \(x''\) and subsequently taking -norms with respect to \(x''\). Step IV. The general case: Just apply Step III repeatedly (\(\#J\) times). \(\square \) Proof of the main result In this section, we prove the main result of this paper, Theorem 3.4. Necessary conditions on the initial-boundary data Let the notations and assumptions be as in Theorem 3.4. Suppose that \(g=({\mathcal {B}}_{1}(D)u,\ldots ,{\mathcal {B}}_{n}(D)u)\) and \(u_{0} = \mathrm {tr}_{t=0}u\) for some \(u \in \mathbb {U}^{p,q}_{\gamma ,\mu }\). We show that \((g,u_{0}) \in \mathbb {D}^{p,q}_{\gamma ,\mu }\). It follows from [50, Theorem 1.1] (also see [55, Theorem 3.4.8]) that $$\begin{aligned}&\mathrm {tr}_{t=0}\left[ W^{1}_{q}(\mathbb {R},v_{\mu };L^{p}(\mathbb {R}^{d},w_{\gamma };X)) \cap L^{q}(\mathbb {R},v_{\mu };W^{2n}_{p}(\mathbb {R}^{d},w_{\gamma };X)) \right] \\&\quad = B^{2n(1-\frac{1+\mu }{q})}_{p,q}(\mathbb {R}^{d},w_{\gamma };X). 
\end{aligned}$$ Using standard techniques, one can derive the same result with \(\mathbb {R}\) replaced by J and \(\mathbb {R}^{d}\) replaced by \({\mathscr {O}}\): $$\begin{aligned} \mathrm {tr}_{t=0}[\mathbb {U}^{p,q}_{\gamma ,\mu }] = \mathbb {I}^{p,q}_{\gamma ,\mu }. \end{aligned}$$ In particular, we must have \(u_{0} \in \mathbb {I}^{p,q}_{\gamma ,\mu } \). In order to show that \(g = (g_{1},\ldots ,g_{n}) \in \mathbb {G}^{p,q}_{\gamma ,\mu }\), we claim that $$\begin{aligned} {\mathcal {B}}_{j}(D) \in {\mathcal {B}}(\mathbb {U}^{p,q}_{\gamma ,\mu },\mathbb {G}^{p,q}_{\gamma ,\mu ,j}), \quad \quad j=1,\ldots ,n. \end{aligned}$$ Combining the fact that $$\begin{aligned} L^{q}(\mathbb {R},v_{\mu };L^{p}(\mathbb {R}^{d},w_{\gamma };X)) = L^{(p,q),(d,1)}(\mathbb {R}^{d+1},(w_{\gamma },v_{\mu });X) \hookrightarrow {\mathcal {S}}'(\mathbb {R}^{d+1};X) \end{aligned}$$ is a \(\left( (d,1),(\frac{1}{2n},1)\right) \)-admissible Banach space (cf. (6)) with (13), (15) and standard techniques of localization, we find $$\begin{aligned}&D^{\beta }_{x} \in {\mathcal {B}}\left( \mathbb {U}^{p,q}_{\gamma ,\mu },H^{1-\frac{|\beta |}{2n}}_{q,\mu }(J; L^{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)) \cap L^{q}_{\mu }(J;W^{2n-|\beta |}_{p}({\mathscr {O}},w^{\partial {\mathscr {O}}}_{\gamma };X)) \right) ,\\&\quad \quad \beta \in \mathbb {N}^{d}, |\beta | < 2n. \end{aligned}$$ From Theorem 4.4, it thus follows that, for each \(\beta \in \mathbb {N}^{d}\), \(j \in \{1,\ldots ,n\}\) with \(|\beta | \le n_{j}\), \(\mathrm {tr}_{\partial {\mathscr {O}}} \circ D^{\beta }_{x}\) is a continuous linear operator $$\begin{aligned}&\mathrm {tr}_{\partial {\mathscr {O}}} \circ D^{\beta }_{x}: \mathbb {U}^{p,q}_{\gamma ,\mu } \longrightarrow F^{\kappa _{j,\gamma }+\frac{n_{j}-|\beta |}{2n}}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};X))\\&\quad \cap L^{q}(J,v_{\mu };F^{2n\kappa _{j,\gamma }+n_{j}-|\beta |}_{p,p}(\partial {\mathscr {O}};X)). \end{aligned}$$ The regularity assumption \((\mathrm {SB})\) on the coefficients \(b_{j,\beta }\) thus gives (39), where we use Lemmas B.1, B.3 and B.4 for \(|\beta |=n_{j}\) and Lemma B.5 for \(|\beta |<n_{j}\). Finally, suppose that \(\kappa _{j,\gamma } > \frac{1+\mu }{q}\). Then, by combination of (38), (39) and Remark 3.5, $$\begin{aligned} \mathrm {tr}_{t=0} \circ {\mathcal {B}}_{j}(D), {\mathcal {B}}^{t=0}_{j}(D) \circ \mathrm {tr}_{t=0} \in {\mathcal {B}}(\mathbb {U}^{p,q}_{\gamma ,\mu },L^{0}(\partial {\mathscr {O}};X)), \quad \quad j=1,\ldots ,n. \end{aligned}$$ By a density argument these operators coincide. Hence, $$\begin{aligned} \mathrm {tr}_{t=0}g_{j} - {\mathcal {B}}^{t=0}_{j}(D)u_{0} = [\mathrm {tr}_{t=0} \circ {\mathcal {B}}_{j}(D) -{\mathcal {B}}^{t=0}_{j}(D) \circ \mathrm {tr}_{t=0}]u = 0. \end{aligned}$$ Elliptic boundary value model problems Let X be a UMD Banach space. Let \({\mathcal {A}}(D) = \sum _{|\alpha | = 2n}a_{\alpha }D^{\alpha }\) and \({\mathcal {B}}_{j}(D) = \sum _{|\beta | = n_{j}}b_{j,\beta }\mathrm {tr}_{\partial \mathbb {R}^{d}_{+}}D^{\beta }\), \(j=1,\ldots ,n\), with constant coefficients \(a_{\alpha },b_{j,\beta } \in {\mathcal {B}}(X)\). In this subsection, we study the elliptic boundary value problem $$\begin{aligned} \begin{array}{rll} \lambda v + {\mathcal {A}}(D)v &{}= 0, &{} \\ {\mathcal {B}}_{j}(D)v &{}= g_{j}, &{} j=1,\ldots ,n, \end{array} \end{aligned}$$ on \(\mathbb {R}^{d}_{+}\).
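In the simplest instance of this model problem, namely \(2n=2\), \({\mathcal {A}}(D)=-\Delta \) and a single Dirichlet boundary operator \({\mathcal {B}}_{1}(D)=\mathrm {tr}_{\partial \mathbb {R}^{d}_{+}}\), the solution can be written down explicitly by the classical computation: writing \(x=(y,x') \in \mathbb {R}_{+} \times \mathbb {R}^{d-1}\) and taking the partial Fourier transform in the tangential variable \(x'\) reduces the problem to the ordinary differential equation \(\lambda {\hat{v}}(y,\xi ') + |\xi '|^{2}{\hat{v}}(y,\xi ') - \partial _{y}^{2}{\hat{v}}(y,\xi ') = 0\), \(y>0\), with \({\hat{v}}(0,\xi ')={\hat{g}}_{1}(\xi ')\), whose unique decaying solution is $$\begin{aligned} {\hat{v}}(y,\xi ') = e^{-y(\lambda +|\xi '|^{2})^{1/2}}\,{\hat{g}}_{1}(\xi '), \quad \quad y>0,\, \xi ' \in \mathbb {R}^{d-1}; \end{aligned}$$ this is the prototype of the solution operators \({\mathcal {S}}(\lambda )\) constructed in Proposition 6.2 below.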
By the trace result of Corollary 4.3, in order to get a solution \(v \in W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)\) we need \(g=(g_{1},\ldots ,g_{n}) \in \prod _{j=1}^{n}F_{p,p}^{2n\kappa _{j,\gamma }}(\mathbb {R}^{d-1};X)\). In Proposition 6.2, we will see that there is existence and uniqueness plus a certain representation for the solution (which we will use to solve (49)). In this representation, we have the operator from the following lemma. Let E be a UMD Banach space, let \(p \in (1,\infty )\), \(w \in A_{p}(\mathbb {R}^{d})\), and \(n \in \mathbb {Z}_{>0}\). For each \(\lambda \in \mathbb {C}{\setminus } (-\infty ,0]\) and \(\sigma \in \mathbb {R}\), we define \(L^{\sigma }_{\lambda } \in {\mathcal {L}}({\mathcal {S}}'(\mathbb {R}^{d};E))\) by $$\begin{aligned} L^{\sigma }_{\lambda }f := {\mathscr {F}}^{-1}[(\lambda + |\,\cdot \,|^{2n})^{\sigma }{\hat{f}}] \quad \quad (f \in {\mathcal {S}}'(\mathbb {R}^{d};E)). \end{aligned}$$ Then, \(L^{\sigma }_{\lambda }\) restricts to a topological linear isomorphism from \(H^{s+2n\sigma }_{p}(\mathbb {R}^{d},w;E)\) to \(H^{s}_{p}(\mathbb {R}^{d},w;E)\) (with inverse \(L^{-\sigma }_{\lambda }\)) for each \(s \in \mathbb {R}\). Moreover, $$\begin{aligned} \mathbb {C}{\setminus } (-\infty ,0] \ni \lambda \mapsto L^{\sigma }_{\lambda } \in {\mathcal {B}}(H^{s+2n\sigma }_{p}(\mathbb {R}^{d},w;E),H^{s}_{p}(\mathbb {R}^{d},w;E)) \end{aligned}$$ defines an analytic mapping for every \(\sigma \in \mathbb {R}\) and \(s \in \mathbb {R}\). For the first part, one only needs to check the Mikhlin condition corresponding to (6) (with \(l=1\) and \({\varvec{a}}=1\)) for the symbol \(\xi \mapsto (1+|\xi |^{2})^{-(n\sigma )/2}(\lambda +|\xi |^{2n})^{\sigma }\). So let us go to the analyticity statement. We only treat the case \(\sigma \in \mathbb {R}{\setminus } \mathbb {N}\), the case \(\sigma \in \mathbb {N}\) being easy. So suppose that \(\sigma \in \mathbb {R}{\setminus } \mathbb {N}\) and fix a \(\lambda _{0} \in \mathbb {C}{\setminus } (-\infty ,0]\). We shall show that \(\lambda \mapsto L^{\sigma }_{\lambda }\) is analytic at \(\lambda _{0}\). Since \(L^{\tau }_{\lambda _{0}}\) is a topological linear isomorphism from \(H^{s+2n\tau }_{p}(\mathbb {R}^{d},w;E)\) to \(H^{s}_{p}(\mathbb {R}^{d},w;E)\), \(\tau \in \mathbb {R}\), for this it suffices to show that $$\begin{aligned} \mathbb {C}{\setminus } (-\infty ,0] \ni \lambda \mapsto L^{\sigma }_{\lambda }L^{-\sigma }_{\lambda _{0}} = L^{\frac{s}{n}}_{\lambda _{0}}L^{\sigma }_{\lambda }L^{-\frac{1}{n}(s+2n\sigma )}_{\lambda _{0}} \in {\mathcal {B}}(L^{p}(\mathbb {R}^{d},w;E)) \end{aligned}$$ is analytic at \(\lambda _{0}\). To this end, we first observe that, for each \(\xi \in \mathbb {R}^{d}\), $$\begin{aligned} \mathbb {C}{\setminus } (-\infty ,0] \ni \lambda \mapsto (\lambda + |\xi |^{2n})^{\sigma }(\lambda _{0} + |\xi |^{2n})^{-\sigma } \in \mathbb {C}\end{aligned}$$ is an analytic mapping with power series expansion at \(\lambda _{0}\) given by $$\begin{aligned} (\lambda + |\xi |^{2n})^{\sigma }(\lambda _{0} + |\xi |^{2n})^{-\sigma }= & {} 1 + \sigma (\lambda _{0}+|\xi |^{2n})^{-1}(\lambda -\lambda _{0}) \nonumber \\&+ \sigma (\sigma -1)(\lambda _{0}+|\xi |^{2n})^{-2}(\lambda -\lambda _{0})^{2} + \cdots \end{aligned}$$ for \(\lambda \in B(\lambda _{0},\delta )\), where \(\delta := d\left( 0,\{\lambda _{0}+t \mid t \ge 0\}\right) > 0\). 
We next recall that \(L_{\lambda _{0}}^{-1}\) restricts to a topological linear isomorphism from \(L^{p}(\mathbb {R}^{d},w;E)\) to \(H^{2n}_{p}(\mathbb {R}^{d},w;E)\); in particular, \(L_{\lambda _{0}}^{-1}\) restricts to a bounded linear operator on \(L^{p}(\mathbb {R}^{d},w;E)\). Since \(L_{\lambda _{0}}^{-k} = (L_{\lambda _{0}}^{-1})^{k}\) for every \(k \in \mathbb {N}\), there thus exists a constant \(C > 0\) such that $$\begin{aligned} ||L_{\lambda _{0}}^{-k}||_{{\mathcal {B}}(L^{p}(\mathbb {R}^{d},w;E))} \le C^{k}, \quad \quad \forall k \in \mathbb {N}. \end{aligned}$$ Now we let \(\rho > 0\) be the radius of convergence of the power series \(z \mapsto \sum _{k \in \mathbb {N}}\frac{1}{k!}\left[ \prod _{j=0}^{k-1}(\sigma -j)\right] C^{k}z^{k}\), set \(r:= \min (\delta ,\rho ) > 0\), and define, for each \(\lambda \in B(\lambda _{0},r)\), the multiplier symbols \(m^{\lambda },m^{\lambda }_{0},m^{\lambda }_{1},\ldots :\mathbb {R}^{d} \longrightarrow \mathbb {C}\) by $$\begin{aligned}&m^{\lambda }(\xi ) := (\lambda + |\xi |^{2n})^{\sigma }(\lambda _{0} + |\xi |^{2n})^{-\sigma } \quad \text{ and } \quad m^{\lambda }_{N}(\xi ) \\&\quad := \sum _{k = 0}^{N}\frac{1}{k!}\left[ \prod _{j=0}^{k-1}(\sigma -j)\right] (\lambda _{0}+|\xi |^{2n})^{-k}(\lambda -\lambda _{0})^{k}. \end{aligned}$$ Then, by (42) and (43), we get $$\begin{aligned} m^{\lambda }(\xi ) = \lim _{N \rightarrow \infty }m^{\lambda }_{N}(\xi ), \quad \quad \xi \in \mathbb {R}^{d} \end{aligned}$$ $$\begin{aligned} \lim _{N,M \rightarrow \infty }[T_{m^{\lambda }_{N}}-T_{m^{\lambda }_{M}}] = 0 \,\,\,\text{ in }\,\,\,{\mathcal {B}}(L^{p}(\mathbb {R}^{d},w;E)), \end{aligned}$$ respectively. Via the \(A_{p}\)-weighted version of [39, Facts 3.3.b], we thus obtain that $$\begin{aligned} L^{\sigma }_{\lambda }L^{-\sigma }_{\lambda _{0}}= & {} T_{m^{\lambda }} = \lim _{N \rightarrow \infty } T_{m^{\lambda }_{N}} = \lim _{N \rightarrow \infty }\sum _{k = 0}^{N}\frac{1}{k!}\left[ \prod _{j=0}^{k-1}(\sigma -j)\right] L_{\lambda _{0}}^{-k}(\lambda -\lambda _{0})^{k}\\&\quad \text{ in }\,\,\,{\mathcal {B}}(L^{p}(\mathbb {R}^{d},w;E)) \end{aligned}$$ for \(\lambda \in B(\lambda _{0},r)\). This shows that the map \(\mathbb {C}{\setminus } (-\infty ,0] \ni \lambda \mapsto L^{\sigma }_{\lambda }L^{-\sigma }_{\lambda _{0}} \in {\mathcal {B}}(L^{p}(\mathbb {R}^{d},w;E))\) is analytic at \(\lambda _{0}\), as desired. \(\square \) Before we can state Proposition 6.2, we first need to introduce some notation. Given a UMD Banach space X and a natural number \(k \in \mathbb {N}\), we have, for the UMD space \(E=L^{p}(\mathbb {R}_{+},|\,\cdot \,|^{\gamma };X)\), the natural inclusion $$\begin{aligned} W^{k}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X) \hookrightarrow W^{k}_{p}(\mathbb {R}^{d-1};L^{p}(\mathbb {R}_{+},|\,\cdot \,|^{\gamma };X)) = H^{k}_{p}(\mathbb {R}^{d-1};E) \end{aligned}$$ and the natural identification $$\begin{aligned} L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X) = H^{0}_{p}(\mathbb {R}^{d-1};E).
\end{aligned}$$ By Lemma 6.1, we accordingly have that, for \(\lambda \in \mathbb {C}{\setminus } (-\infty ,0]\), that the partial Fourier multiplier operator $$\begin{aligned} L^{k/2n}_{\lambda } \in {\mathcal {L}}({\mathcal {S}}'(\mathbb {R}^{d-1};{\mathcal {D}}'(\mathbb {R}_{+};X))),\, f \mapsto {\mathscr {F}}^{-1}_{x'}\left[ \left( \xi ' \mapsto (\lambda +|\xi '|^{2n})^{k/2n} \right) {\mathscr {F}}_{x'}f \right] , \end{aligned}$$ restricts to a bounded linear operator $$\begin{aligned} L_{\lambda }^{k/2n} \in {\mathcal {B}}(W^{k}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X),L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)). \end{aligned}$$ Moreover, we even get an analytic operator-valued mapping $$\begin{aligned} \mathbb {C}{\setminus } (-\infty ,0] \longrightarrow {\mathcal {B}}(W^{k}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X),L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)),\, \lambda \mapsto L_{\lambda }^{k/2n}. \end{aligned}$$ In particular, we have $$\begin{aligned} L_{\lambda }^{1-\frac{n_{j}}{2n}}, L_{\lambda }^{1-\frac{n_{j}+1}{2n}}D_{y} \in {\mathcal {B}}(W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X),L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)), \quad \quad j=1,\ldots ,n,\nonumber \\ \end{aligned}$$ with analytic dependence on the parameter \(\lambda \in \mathbb {C}{\setminus } (-\infty ,0]\). Let X be a UMD Banach space, \(p \in (1,\infty )\), \(\gamma \in (-1,p-1)\), and assume that \(({\mathcal {A}},{\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{n})\) satisfies \((\mathrm {E})\) and \((\mathrm {LS})\) for some \(\phi \in (0,\pi )\). Then, for each \(\lambda \in \Sigma _{\pi -\phi }\), there exists an operator $$\begin{aligned} {\mathcal {S}}(\lambda ) = \left( \begin{array}{ccc} {\mathcal {S}}_{1}(\lambda )&\ldots&{\mathcal {S}}_{n}(\lambda ) \end{array}\right) \in {\mathcal {B}}\left( \bigoplus _{j=1}^{n}F_{p,p}^{2n\kappa _{j,\gamma }}(\mathbb {R}^{d-1};X),W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)\right) \end{aligned}$$ which assigns to a \(g \in \bigoplus _{j=1}^{n}F_{p,p}^{2n\kappa _{j,\gamma }}(\mathbb {R}^{d-1};X)\) the unique solution \(v={\mathcal {S}}(\lambda )g \in W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)\) of the elliptic boundary value problem $$\begin{aligned} \begin{array}{rll} \lambda v + {\mathcal {A}}(D)v &{}= 0, &{} \\ {\mathcal {B}}_{j}(D)v &{}= g_{j}, &{} j=1,\ldots ,n; \end{array} \end{aligned}$$ recall here that \(\kappa _{j,\gamma } = 1-\frac{n_{j}}{2n}-\frac{1}{2np}(1+\gamma )\). 
Moreover, for each \(j \in \{1,\ldots ,n\}\), we have that $$\begin{aligned} \tilde{{\mathcal {S}}}_{j}: \Sigma _{\pi -\phi } \longrightarrow {\mathcal {B}}(W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X),W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)),\, \lambda \mapsto \tilde{{\mathcal {S}}}_{j}(\lambda ) := {\mathcal {S}}_{j}(\lambda ) \circ \mathrm {tr}_{y=0} \end{aligned}$$ defines an analytic mapping, for which the operators \(D^{\alpha }\tilde{{\mathcal {S}}}_{j}(\lambda ) \in {\mathcal {B}}(W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X),L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X))\), \(|\alpha | \le 2n\), can be represented as $$\begin{aligned} D^{\alpha }\tilde{{\mathcal {S}}}_{j}(\lambda ) = {\mathcal {T}}^{1}_{j,\alpha }(\lambda )L_{\lambda }^{1-\frac{n_{j}}{2n}} + {\mathcal {T}}^{2}_{j,\alpha }(\lambda )L_{\lambda }^{1-\frac{n_{j}+1}{2n}}D_{y} \end{aligned}$$ for analytic operator-valued mappings $$\begin{aligned} {\mathcal {T}}^{i}_{j,\alpha }: \Sigma _{\pi -\phi } \longrightarrow {\mathcal {B}}(L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)),\, \lambda \mapsto {\mathcal {T}}^{i}_{j,\alpha }(\lambda ), \quad \quad i \in \{1,2\}, \end{aligned}$$ satisfying the \({\mathcal {R}}\)-bounds $$\begin{aligned} {\mathcal {R}}\{ \lambda ^{k+1-\frac{|\alpha |}{2n}}\partial _{\lambda }^{k}{\mathcal {T}}^{i}_{j,\alpha }(\lambda ) \mid \lambda \in \Sigma _{\pi -\phi } \} < \infty , \quad \quad k \in \mathbb {N}. \end{aligned}$$ Comments on the proof of Proposition 6.2 This proposition can be proved in the same way as [18, Lemma 4.3 & Lemma 4.4]. In fact, in the unweighted case this is just a modification of [18, Lemma 4.3 & Lemma 4.4] (also see the formulation of [47, Lemma 2.2.6]). Here, [18, Lemma 4.3] corresponds to the existence of the solution operator, whose construction was essentially already contained in [17], plus its representation, and [18, Lemma 4.4] basically corresponds to the analytic dependence of (47) plus the \({\mathcal {R}}\)-bounds (48). The analytic dependence of the operators \(\tilde{{\mathcal {S}}}_{j}(\lambda )\) on \(\lambda \) subsequently follows from Lemma 6.1 and (46). For more details, we refer to [42, Chapter 6] and Remark 6.4. \(\square \) We could have formulated Proposition 6.2 only in terms of the mappings \(\tilde{{\mathcal {S}}}_{j}\). Namely, for each \(j \in \{1,\ldots ,n\}\) there exists an analytic mapping $$\begin{aligned} \tilde{{\mathcal {S}}}_{j}: \Sigma _{\pi -\phi } \longrightarrow {\mathcal {B}}(W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X),W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)),\, \lambda \mapsto \tilde{{\mathcal {S}}}_{j}(\lambda ) \end{aligned}$$ with the property that, for every \(u \in W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)\), \(v = \tilde{{\mathcal {S}}}_{j}u\) is the unique solution in \(W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)\) of (45) with \(g_{i}=\delta _{i,j}{\mathcal {B}}_{i}(D)u\), for which the operators $$\begin{aligned} D^{\alpha }\tilde{{\mathcal {S}}}_{j}(\lambda ) \in {\mathcal {B}}(W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X),L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)), |\alpha | \le 2n, \end{aligned}$$ can be represented as (46) for analytic operator-valued mappings (47) satisfying the \({\mathcal {R}}\)-bounds (48). 
Then, given extension operators \({\mathcal {E}}_{j} \in {\mathcal {B}}(F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X),W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X))\) (right inverse of the trace \(\mathrm {tr}_{y=0} \in {\mathcal {B}}(W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X),F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X))\)), \(j=1,\ldots ,n\), the composition \({\mathcal {S}}(\lambda ) = ( {\mathcal {S}}_{1}(\lambda ) \ldots {\mathcal {S}}_{n}(\lambda ) ) := ( {\mathcal {S}}_{1}(\lambda ) \ldots {\mathcal {S}}_{n}(\lambda ) ) \circ ({\mathcal {E}}_{1} \ldots {\mathcal {E}}_{n})\) defines the desired solution operator. In this formulation, the proposition the weight \(w_{\gamma }\) can actually be replaced by any weight w on \(\mathbb {R}^{d}\) which is uniformly \(A_{p}\) in the y-variable. Indeed, in the proof the weight only comes into play in [17, Lemma 7.1]. For weights w of the form \(w(x',y) = v(x')|y|^{\gamma }\) with \(v \in A_{p}(\mathbb {R}^{d-1})\), we can then still define \({\mathcal {S}}(\lambda )\) as above thanks to the available trace theory from Sect. 4.1. In [18] the specific extension operator \({\mathcal {E}}_{\lambda } = e^{-\,\cdot \,L^{1/2n}_{\lambda }}\) was used in the construction of the solution operator \({\mathcal {S}}(\lambda ) = ({\mathcal {S}}_{1}(\lambda ),\ldots ,{\mathcal {S}}_{n}(\lambda ))\), which has the advantageous property that \(D_{y}{\mathcal {E}}_{\lambda } = \imath L_{\lambda }^{1/2n}{\mathcal {E}}_{\lambda }\). Whereas in this way the obtained representation formulae \({\mathcal {S}}_{j}(\lambda ) = {\mathcal {T}}_{j}(\lambda )L_{\lambda }^{1-\frac{n_{j}}{2n}}{\mathcal {E}}_{\lambda }\) can only be used in the case \(q=p\) to solve (49) via a Fourier transformation in time (cf. [18, Proposition 4.5] and [47, Lemma 2.2.7]), our representation formulae (46) can (in combination with the theory of anisotropic function spaces) be used to solve (49) in the full parameter range \(q,p \in (1,\infty )\) (cf. Corollary 6.8). However, the alternative more involved proof of Denk, Hieber & Prüss [18, Theorem 2.3] also contains several ingredients which are of independent interest. Solving inhomogeneous boundary data for a model problem Let the notations and assumptions be as in Theorem 3.4, but for the model problem case of top-order constant coefficients on the half-space considered in Sect. 6.2. The goal of this subsection is to solve the model problem $$\begin{aligned} \begin{array}{rll} \partial _{t}u + (1+{\mathcal {A}}(D))u &{}= 0, \\ {\mathcal {B}}_{j}(D)u &{}= g_{j}, &{} j=1,\ldots ,n, \\ \mathrm {tr}_{t=0}u &{}= 0, \end{array} \end{aligned}$$ for \(g=(g_{1},\ldots ,g_{n})\) with \((0,g,0) \in \mathbb {D}^{p,q}_{\gamma ,\mu }\). 
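Before turning to the function-space bookkeeping, it may again help to recall what the solution looks like in the simplest case (recorded only as an illustration, not as part of the argument): for the heat operator with Dirichlet boundary data, i.e., \(d=1\), \(2n=2\), \({\mathcal {A}}(D)=-\partial _{y}^{2}\), \({\mathcal {B}}_{1}(D)=\mathrm {tr}_{y=0}\), and with the harmless zeroth-order term dropped, the problem \(\partial _{t}u - \partial _{y}^{2}u = 0\), \(u(0,t)=g(t)\), \(u(y,0)=0\) on the half-line has the classical solution $$\begin{aligned} u(y,t) = \int _{0}^{t} \frac{y}{\sqrt{4\pi (t-s)^{3}}}\, e^{-\frac{y^{2}}{4(t-s)}}\, g(s)\, \mathrm {d}s, \quad \quad y,t > 0. \end{aligned}$$ The solution operator constructed below via the Fourier transform in time plays the role of this boundary integral in the general operator-valued, weighted, anisotropic setting.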
Let us first observe that, in view of the compatibility condition in the definition of \(\mathbb {D}^{p,q}_{\gamma ,\mu }\), \((0,g,0) \in \mathbb {D}^{p,q}_{\gamma ,\mu }\) if and only if $$\begin{aligned} g_{j} \in {_{0}}\mathbb {G}_{j}:= & {} {_{0,(0,d)}}F^{\kappa _{j,\gamma },\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1} \times \mathbb {R}_{+},(1,v_{\mu });X) \\:= & {} \left\{ \begin{array}{ll} F^{\kappa _{j,\gamma },\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1} \times \mathbb {R}_{+},(1,v_{\mu });X), &{} \kappa _{j,\gamma } < \frac{1+\mu }{q},\\ \left\{ w \in F^{\kappa _{j,\gamma },\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1} \times \mathbb {R}_{+},(1,v_{\mu });X) : \mathrm {tr}_{t=0}w=0 \right\} , &{} \kappa _{j,\gamma } > \frac{1+\mu }{q}, \end{array}\right. \end{aligned}$$ for all \(j \in \{1,\ldots ,n\}\). Defining $$\begin{aligned} {_{0}}\mathbb {G}:= {_{0}}\mathbb {G}_{1} \oplus \cdots \oplus {_{0}}\mathbb {G}_{n}, \end{aligned}$$ we thus have \((0,g,0) \in \mathbb {D}^{p,q}_{\gamma ,\mu }\) if and only if \(g \in {_{0}}\mathbb {G}\). So we need to solve (49) for \(g \in {_{0}}\mathbb {G}\). We will solve (49) by passing to the corresponding problem on \(\mathbb {R}\) (instead of \(\mathbb {R}_{+}\)). The advantage of this is that it allows us to use the Fourier transform in time. This will give $$\begin{aligned} {\mathscr {F}}_{t}u(\theta ) = {\mathcal {S}}(1+\imath \theta )({\mathscr {F}}_{t}g_{1}(\theta ),\ldots ,{\mathscr {F}}_{t}g_{n}(\theta )), \end{aligned}$$ where \({\mathcal {S}}(1+\imath \theta )\) is the solution operator from Proposition 6.2. Recall that for the operator \(\tilde{{\mathcal {S}}}_{j}(\lambda ) = {\mathcal {S}}_{j}(\lambda ) \circ \mathrm {tr}_{y=0}\) we have the representation formula (46) in which the operators \(L^{\sigma }_{\lambda }\) occur. It will be useful to note that, for \(h \in {\mathcal {S}}(\mathbb {R}^{d}_{+} \times \mathbb {R};X)\), $$\begin{aligned} L^{\sigma }_{1+\imath \theta _{0}}[({\mathscr {F}}_{t}h)(\,\cdot \,,\theta )]= & {} {\mathscr {F}}^{-1}_{x'}[\left( (y,\xi ') \mapsto (1+\imath \theta _{0} + |\xi '|^{2n})\right) {\mathscr {F}}_{(x',t)}h(\,\cdot \,,\theta _{0})] \nonumber \\= & {} \left[ {\mathscr {F}}_{t}{\mathscr {F}}^{-1}_{(x',t)}[\left( (y,\xi ',\theta ) \mapsto (1+\imath \theta + |\xi '|^{2n})\right) {\mathscr {F}}_{(x',t)}h] \right] (\,\cdot \,,\theta _{0}) \nonumber \\= & {} ({\mathscr {F}}_{t}L^{\sigma }h)(\,\cdot \,,\theta _{0}), \end{aligned}$$ $$\begin{aligned}&L^{\sigma } \in {\mathcal {L}}({\mathcal {S}}'(\mathbb {R}^{d-1} \times \mathbb {R};{\mathcal {D}}'(\mathbb {R}_{+};X))),\, f \mapsto {\mathscr {F}}^{-1}_{(x',t)}\\&\quad \left[ \left( (\xi ',\theta ) \mapsto (1 + \imath \theta + |\xi '|^{2n})^{\sigma } \right) {\mathscr {F}}_{(x',t)}f\right] . \end{aligned}$$ Let E be a UMD space, \(p,q \in (1,\infty )\), \(v \in A_{q}(\mathbb {R})\), and \(n \in \mathbb {Z}_{>0}\). For each \(\sigma \in \mathbb {R}\), $$\begin{aligned}&{\mathcal {S}}'(\mathbb {R}^{d-1}\times \mathbb {R};E) \longrightarrow {\mathcal {S}}'(\mathbb {R}^{d-1}\times \mathbb {R};E),\, f \mapsto {\mathscr {F}}^{-1}\\&\quad \left[ \left( (\xi _{1},\xi _{2}) \mapsto (1+\imath \xi _{2}+ |\xi _{1}|^{2n})^{\sigma }\right) {\hat{f}}\right] \end{aligned}$$ $$\begin{aligned} H^{\sigma ,\left( \frac{1}{2n},1\right) }_{(p,q),(d-1,1)}(\mathbb {R}^{d-1}\times \mathbb {R},(1,v);E) \longrightarrow H^{0,\left( \frac{1}{2n},1\right) }_{(p,q),(d-1,1)}(\mathbb {R}^{d-1}\times \mathbb {R},(1,v);E). 
\end{aligned}$$ This can be shown by checking that the symbol $$\begin{aligned} \mathbb {R}^{d-1} \times \mathbb {R}\ni (\xi _{1},\xi _{2}) \mapsto \frac{(1+\imath \xi _{2}+|\xi _{1}|^{2n})^{\sigma }}{(1+|\xi _{1}|^{4n}+|\xi _{2}|^{2})^{\sigma /2}} \in \mathbb {C}\end{aligned}$$ satisfies the anisotropic Mikhlin condition from (6). \(\square \) Let X be a UMD space, \(q,p \in (1,\infty )\), \(\gamma \in (-1,p-1)\), \(v \in A_{q}(\mathbb {R})\). Put $$\begin{aligned} \begin{aligned} {\overline{\mathbb {G}}}_{j}&:= F^{\kappa _{j,\gamma }}_{q,p}(\mathbb {R},v;L^{p}(\mathbb {R}^{d-1};X)) \cap L^{q}(\mathbb {R},v;F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X)), \quad \quad j=1,\ldots ,n, \\ {\overline{\mathbb {G}}}&:= {\overline{\mathbb {G}}}_{1} \oplus \ldots \oplus {\overline{\mathbb {G}}}_{n}, \\ {\overline{\mathbb {U}}}&:= W^{1}_{q}(\mathbb {R},v;L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)) \cap L^{q}(\mathbb {R},v;W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)), \end{aligned} \end{aligned}$$ where we recall that \(\kappa _{j,\gamma } = 1-\frac{n_{j}}{2n}-\frac{1}{2np}(1+\gamma ) \in (0,1)\). Furthermore, define \({_{0}}{\overline{\mathbb {G}}}_{j}\) similarly to \({_{0}}\mathbb {G}_{j}\) and put \({_{0}}{\overline{\mathbb {G}}}_{j} := {_{0}}{\overline{\mathbb {G}}}_{1} \oplus \ldots \oplus {_{0}}{\overline{\mathbb {G}}}_{n}\). Then the problem $$\begin{aligned} \begin{array}{rll} \partial _{t}u + (1+{\mathcal {A}}(D))u &{}= 0, \\ {\mathcal {B}}_{j}(D)u &{}= g_{j}, &{} j=1,\ldots ,n, \\ \end{array} \end{aligned}$$ admits a bounded linear solution operator \(\overline{{\mathscr {S}}} : {\overline{\mathbb {G}}} \longrightarrow {\overline{\mathbb {U}}}\) which maps \({_{0}}{\overline{\mathbb {G}}}\) to \({_{0}}{\overline{\mathbb {U}}} = \{ u \in {\overline{\mathbb {U}}} : u(0)=0 \}\). For the statement that \(\overline{{\mathscr {S}}}\) maps \({_{0}}{\overline{\mathbb {G}}}\) to \({_{0}}{\overline{\mathbb {U}}}\), we will use the following lemma. \(\{g_{j} \in {\mathcal {S}}(\mathbb {R}^{d};X) : \mathrm {tr}_{t=0}g_{j}=0\}\) is dense in \({_{0}}{\overline{\mathbb {G}}}_{j}\) As a consequence of Theorem 2.1, $$\begin{aligned} {_{0}}{\overline{\mathbb {G}}}_{j} = {_{0}}F^{\kappa _{j,\gamma }}_{q,p}(\mathbb {R},v_{\mu };L^{p}(\mathbb {R}^{d-1};X)) \cap L^{q}(\mathbb {R},v_{\mu };F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X)), \end{aligned}$$ $$\begin{aligned} {_{0}}F^{s}_{q,p}(\mathbb {R},v_{\mu };Y) = \left\{ \begin{array}{ll} F^{s}_{q,p}(\mathbb {R},v_{\mu };Y), &{} \quad s < \frac{1+\mu }{q}, \\ \{ f \in F^{s}_{q,p}(\mathbb {R},v_{\mu };Y) : \mathrm {tr}_{t=0}f=0 \}, &{} \quad s > \frac{1+\mu }{q}. \end{array}\right. \end{aligned}$$ Let \((S_{n})_{n \in \mathbb {N}}\) be the family of convolution operator corresponding to some \(\varphi = (\varphi _{n})_{n \in \mathbb {N}} \in \Phi (\mathbb {R}^{d-1})\). Then, \(S_{n} {\mathop {\longrightarrow }\limits ^{\mathrm {SOT}}} I\) as \(n \rightarrow \infty \) in both \(L^{p}(\mathbb {R}^{d-1};X)\) as \(F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X)\). For the pointwise induced operator family, we thus have \(S_{n} {\mathop {\longrightarrow }\limits ^{\mathrm {SOT}}} I\) in \({_{0}}{\overline{\mathbb {G}}}_{j}\). 
Since $$\begin{aligned}&L^{p}(\mathbb {R}^{d-1};X) \cap {\mathscr {F}}^{-1}{\mathcal {E}}'(\mathbb {R}^{d-1};X) \subset F^{0}_{p,\infty }(\mathbb {R}^{d-1};X) \cap {\mathscr {F}}^{-1}{\mathcal {E}}'(\mathbb {R}^{d-1};X)\\&\quad \subset F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X), \end{aligned}$$ it follows that $$\begin{aligned}&{_{0}}F^{\kappa _{j,\gamma }}_{q,p}(\mathbb {R},v_{\mu };F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X)) = {_{0}}F^{\kappa _{j,\gamma }}_{q,p}(\mathbb {R},v_{\mu };F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X))\\&\quad \cap L^{q}(\mathbb {R},v_{\mu };F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X)) \end{aligned}$$ is dense in \({_{0}}{\overline{\mathbb {G}}}_{j}\); in fact, $$\begin{aligned} {_{0}}F^{\kappa _{j,\gamma }}_{q,p}(\mathbb {R},v_{\mu };F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X)) {\mathop {\hookrightarrow }\limits ^{d}} {_{0}}{\overline{\mathbb {G}}}_{j}. \end{aligned}$$ $$\begin{aligned} \{ f \in {\mathcal {S}}(\mathbb {R}) : f(0)=0 \} \otimes {\mathcal {S}}(\mathbb {R}^{d-1};X)&{\mathop {\subset }\limits ^{d}}&\{ f \in {\mathcal {S}}(\mathbb {R}) : f(0)=0 \} \otimes F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X) \\&{\mathop {\subset }\limits ^{d}}&{_{0}}F^{\kappa _{j,\gamma }}_{q,p}(\mathbb {R},v_{\mu };F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X)) \end{aligned}$$ by [44], the desired density follows. \(\square \) Proof of Lemma 6.6 (I) Put \({\overline{\mathbb {F}}}:= L^{q}(\mathbb {R},v;L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X))\) and \(V:= {\mathscr {F}}^{-1}C^{\infty }_{c}(\mathbb {R}^{d-1};X) \otimes {\mathscr {F}}^{-1}C^{\infty }_{c}(\mathbb {R})\). Then \(V^{n}\) is dense in \({\overline{\mathbb {G}}}\). So, in view of $$\begin{aligned} \partial _{t} + (1+{\mathcal {A}}(D)) \in {\mathcal {B}}({\overline{\mathbb {U}}},{\overline{\mathbb {F}}}) \quad \quad \text{ and } \quad \quad {\mathcal {B}}_{j}(D) \in {\mathcal {B}}({\overline{\mathbb {U}}},{\overline{\mathbb {G}}}_{j}), \quad j=1,\ldots ,n, \end{aligned}$$ it suffices to construct a solution operator \(\overline{{\mathscr {S}}}:V^{n} \longrightarrow {\overline{\mathbb {U}}}\) which is bounded when \(V^{n}\) carries the induced norm from \({\overline{\mathbb {G}}}\). In order to define such an operator, fix \(g=(g_{1},\ldots ,g_{n}) \in V^{n}\). Let $$\begin{aligned} {\mathcal {E}}_{j} \in {\mathcal {B}}({\overline{\mathbb {G}}},H^{1-\frac{n_{j}}{2n},\left( \frac{1}{2n},1\right) }_{(p,q),(d,1)}(\mathbb {R}^{d}_{+} \times \mathbb {R},(w_{\gamma },v);X)), \quad j=1,\ldots ,n, \end{aligned}$$ be extension operators (right inverses of the trace operator \(\mathrm {tr}_{y=0}\)) as in Corollary 4.9. Then, \({\mathcal {E}}_{j}\) maps \(V^{n}\) into \({\mathcal {S}}(\mathbb {R}^{d}_{+};X)) \otimes {\mathscr {F}}^{-1}(C^{\infty }_{c}(\mathbb {R}))\); in particular, $$\begin{aligned} {\mathcal {E}}_{j}g_{j} \in {\mathcal {S}}(\mathbb {R}^{d}_{+};X)) \otimes {\mathscr {F}}^{-1}(C^{\infty }_{c}(\mathbb {R})), \quad j=1,\ldots ,n. \end{aligned}$$ So, for each \(j \in \{1,\ldots ,n\}\), we have $$\begin{aligned} {\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j} \in {\mathcal {S}}(\mathbb {R}^{d}_{+};X)) \otimes C^{\infty }_{c}(\mathbb {R}), \end{aligned}$$ and we may also view \({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j}\) as a function $$\begin{aligned}{}[\theta \mapsto ({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta )] \in C^{\infty }_{c}(\mathbb {R};W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)). 
\end{aligned}$$ $$\begin{aligned}{}[\theta \mapsto \tilde{{\mathcal {S}}}_{j}(1+\imath \theta )] \in C^{\infty }(\mathbb {R};{\mathcal {B}}(W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X),W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X))), \quad j=1,\ldots ,n, \end{aligned}$$ with \(\tilde{{\mathcal {S}}}_{j}(1+\imath \theta )\) as in Proposition 6.2, we may thus define $$\begin{aligned} \overline{{\mathscr {S}}}g := {\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto \sum _{j=1}^{n}\tilde{{\mathcal {S}}}_{j}(1+\imath \theta ) ({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta ) \right] \in {\mathcal {S}}(\mathbb {R};W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)) \end{aligned}$$ (II) We now show that \(u=\overline{{\mathscr {S}}}g \in {\mathcal {S}}(\mathbb {R};W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X))\) is a solution of (52) for \(g \in V^{n}\). To this end, let \(\theta \in \mathbb {R}\) be arbitrary. Then, we have that \(({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta ) \in {\mathcal {S}}(\mathbb {R}^{d}_{+};X) \subset W^{2n-n_{j}}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)\) and \(({\mathscr {F}}_{t}g_{j})(\theta ) \in {\mathcal {S}}(\mathbb {R}^{d-1};X) \subset F^{2n\kappa _{j,\gamma }}_{p,p}(\mathbb {R}^{d-1};X)\) are related by \(\mathrm {tr}_{y=0}({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta ) = ({\mathscr {F}}_{t}g_{j})(\theta )\); just note that \(({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(0,x',\theta ) = ({\mathscr {F}}_{t}g_{j})(x',\theta )\) for every \(x' \in \mathbb {R}^{d-1}\). Therefore, by Proposition 6.2, \(v(\theta ) = ({\mathscr {F}}_{t}u)(\theta ) =({\mathscr {F}}_{t}\overline{{\mathscr {S}}}g)(\theta ) = \sum _{j=1}^{n}\tilde{{\mathcal {S}}}_{j}(1+\imath \theta ) ({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta ) \in W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)\) is the unique solution of the problem $$\begin{aligned} \begin{array}{rll} (1+\imath \theta ) v + {\mathcal {A}}(D)v &{}= 0, &{} \\ {\mathcal {B}}_{j}(D)v &{}= ({\mathscr {F}}_{t}g_{j})(\theta ), &{} j=1,\ldots ,n. \end{array} \end{aligned}$$ Applying the inverse Fourier transform \({\mathscr {F}}_{t}^{-1}\) with respect to \(\theta \), we find $$\begin{aligned} \begin{array}{rll} \partial _{t}u + (1+{\mathcal {A}}(D))u &{}= 0, &{} \\ {\mathcal {B}}_{j}(D)u &{}= g_{j}, &{} j=1,\ldots ,n. \\ \end{array} \end{aligned}$$ (III) We next derive a representation formula for \(\overline{{\mathscr {S}}}\) that is well suited for proving the boundedness of \(\overline{{\mathscr {S}}}\). To this end, fix a \(g=(g_{1},\ldots ,g_{n}) \in V^{n}\). Then we have, for each multi-index \(\alpha \in \mathbb {N}^{d}, |\alpha | \le 2n\), $$\begin{aligned} D^{\alpha }\overline{{\mathscr {S}}}g= & {} D^{\alpha } {\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto \sum _{j=1}^{n}\tilde{{\mathcal {S}}}_{j}(1+\imath \theta ) ({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta ) \right] \nonumber \\= & {} \sum _{j=1}^{n}{\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto D^{\alpha }\tilde{{\mathcal {S}}}_{j}(1+\imath \theta ) ({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta ) \right] \nonumber \\&{\mathop {=}\limits ^{(46)}}&\sum _{j=1}^{n}{\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto {\mathcal {T}}^{1}_{j,\alpha }(1+\imath \theta )L_{1+\imath \theta }^{1-\frac{n_{j}}{2n}}({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta )\right. \nonumber \\&\left. 
+{\mathcal {T}}^{2}_{j,\alpha }(1+\imath \theta )L_{1+\imath \theta }^{1-\frac{n_{j}+1}{2n}}D_{y}({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta ) \right] \nonumber \\= & {} \sum _{j=1}^{n}{\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto {\mathcal {T}}^{1}_{j,\alpha }(1+\imath \theta )L_{1+\imath \theta }^{1-\frac{n_{j}}{2n}}({\mathscr {F}}_{t}{\mathcal {E}}_{j}g_{j})(\theta )\right] \nonumber \\&\quad + \,\,\, \sum _{j=1}^{n}{\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto {\mathcal {T}}^{2}_{j,\alpha }(1+\imath \theta )L_{1+\imath \theta }^{1-\frac{n_{j}+1}{2n}}({\mathscr {F}}_{t}D_{y}{\mathcal {E}}_{j}g_{j})(\theta ) \right] \nonumber \\&{\mathop {=}\limits ^{(50)}}&\sum _{j=1}^{n}{\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto {\mathcal {T}}^{1}_{j,\alpha }(1+\imath \theta )({\mathscr {F}}_{t}L^{1-\frac{n_{j}}{2n}}{\mathcal {E}}_{j}g_{j})(\theta )\right] \nonumber \\&\quad + \,\,\, \sum _{j=1}^{n}{\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto {\mathcal {T}}^{2}_{j,\alpha }(1+\imath \theta )({\mathscr {F}}_{t}L^{1-\frac{n_{j}+1}{2n}}D_{y}{\mathcal {E}}_{j}g_{j})(\theta ) \right] . \end{aligned}$$ (IV) We next show that \(||\overline{{\mathscr {S}}}g||_{{\overline{\mathbb {U}}}} \lesssim ||g||_{{\overline{\mathbb {G}}}}\) for \(g \in V^{n}\). Being a solution of (52), \(\overline{{\mathscr {S}}}g\) satisfies $$\begin{aligned} \partial _{t}\overline{{\mathscr {S}}}g = -(1+{\mathcal {A}}(D))\overline{{\mathscr {S}}}g. \end{aligned}$$ Hence, it suffices to establish the estimate \(||D^{\alpha }\overline{{\mathscr {S}}}g||_{{\overline{\mathbb {F}}}} \lesssim ||g||_{{\overline{\mathbb {G}}}}\) for all multi-indices \(\alpha \in \mathbb {N}^{d}, |\alpha | \le 2n\). So fix such an \(|\alpha | \le 2n\). Then, in view of the representation formula (54), it is enough to show that $$\begin{aligned} ||{\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto {\mathcal {T}}^{1}_{j,\alpha }(1+\imath \theta )({\mathscr {F}}_{t}L^{1-\frac{n_{j}}{2n}}{\mathcal {E}}_{j}g_{j})(\theta ) \right] \,||_{{\overline{\mathbb {F}}}} \lesssim ||g||_{{\overline{\mathbb {G}}}}, \quad j=1,\ldots ,n,\nonumber \\ \end{aligned}$$ $$\begin{aligned} ||{\mathscr {F}}_{t}^{-1}\left[ \theta \mapsto {\mathcal {T}}^{2}_{j,\alpha }(1+\imath \theta )({\mathscr {F}}_{t}L^{1-\frac{n_{j}+1}{2n}}D_{y}{\mathcal {E}}_{j}g_{j})(\theta ) \right] \,||_{{\overline{\mathbb {F}}}} \lesssim ||g||_{{\overline{\mathbb {G}}}}, \quad j=1,\ldots ,n.\nonumber \\ \end{aligned}$$ We only treat estimate (56), estimate (55) being similar (but easier): Fix a \(j \in \{1,\ldots ,n\}\). For the full \((d+1)\)-dimensional Euclidean space \(\mathbb {R}^{d} \times \mathbb {R}\) instead of \(\mathbb {R}^{d}_{+} \times \mathbb {R}\), $$\begin{aligned} D_{y} \in {\mathcal {B}}\left( H^{1-\frac{n_{j}}{2n},\left( \frac{1}{2n},1\right) }_{(p,q),(d,1)}(\mathbb {R}^{d}_{+} \times \mathbb {R},(w_{\gamma },v);X),H^{1-\frac{n_{j}+1}{2n},\left( \frac{1}{2n},1\right) }_{(p,q),(d,1)}(\mathbb {R}^{d}_{+} \times \mathbb {R},(w_{\gamma },v);X)\right) . \end{aligned}$$ follows from (15) (and the fact that \(L_{(p,q),(d,1)}(\mathbb {R}^{d+1},(w_{\gamma },v_{\mu });X)\) is an admissible Banach space of X-valued tempered distributions on \(\mathbb {R}^{d+1}\) in view of (6)), from which the \(\mathbb {R}^{d}_{+} \times \mathbb {R}\)-case follows by restriction. 
In combination with (53) and Lemma 6.5, this yields $$\begin{aligned} L^{1-\frac{n_{j}+1}{2n}}D_{y}{\mathcal {E}}_{j} \in {\mathcal {B}}\left( {\overline{\mathbb {G}}}_{j}, \underbrace{H^{0,\left( \frac{1}{2n},1\right) }_{(p,q),(d-1,1)}(\mathbb {R}^{d-1} \times \mathbb {R},(1,v);L^{p}(\mathbb {R}_{+},|\,\cdot \,|^{\gamma };X))}_{=\,L^{q}(\mathbb {R},v;L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)) \,=\, {\overline{\mathbb {F}}}} \right) .\nonumber \\ \end{aligned}$$ Furthermore, we have that \({\mathcal {T}}^{2}_{j,\alpha }(1+\imath \cdot ) \in C^{\infty }(\mathbb {R};{\mathcal {B}}(L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)))\) satisfies $$\begin{aligned}&{\mathcal {R}}\left\{ \theta ^{k}\partial _{\theta }^{k}{\mathcal {T}}^{2}_{j,\alpha }(1+\imath \theta ) :\theta \in \mathbb {R}\right\} \le {\mathcal {R}}\left\{ (1+\imath \theta )^{k+1-\frac{|\alpha |}{2n}}\partial _{\theta }^{k}{\mathcal {T}}^{2}_{j,\alpha }(1+\imath \theta ) : \theta \in \mathbb {R}\right\} \\&\quad < \infty , \quad \quad k \in \mathbb {N}, \end{aligned}$$ by the Kahane contraction principle and (48); in particular, \({\mathcal {T}}^{2}_{j,\alpha }(1+\imath \cdot )\) satisfies the Mikhlin condition corresponding to (7). As a consequence, \({\mathcal {T}}^{2}_{j,\alpha }(1+\imath \cdot )\) defines a bounded Fourier multiplier operator on \(L^{q}(\mathbb {R},v;L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X))\). In combination with (57), this gives estimate (56). (V) We finally show that \(\overline{{\mathscr {S}}} \in {\mathcal {B}}({\overline{\mathbb {G}}},{\overline{\mathbb {U}}})\) maps \({_{0}}{\overline{\mathbb {G}}}\) to \({_{0}}{\overline{\mathbb {U}}}\). As in the proof of [47, Lemma 2.2.7], it can be shown that, if $$\begin{aligned} g = (g_{1},\ldots ,g_{n})\in \prod _{j=1}^{n}C_{L^{1}}(\mathbb {R};F_{p,p}^{2n\kappa _{j,\gamma }}(\mathbb {R}^{d-1};X)) \quad \text{ with }\quad g_{1}(0)=\ldots =g_{n}(0)=0 \end{aligned}$$ $$\begin{aligned} u \in C^{1}_{L^{1}}(\mathbb {R};L^{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)) \cap C_{L^{1}}(\mathbb {R};W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma };X)) \end{aligned}$$ satisfy (52), then \(u(0)=0\). The desired statement thus follows from Lemma 6.7. \(\square \) Let the notations and assumptions be as in Theorem 3.4, but for the model problem case of top-order constant coefficients on the half-space considered in Sect. 6.2. Then, problem (49) admits a bounded linear solution operator $$\begin{aligned} {\mathscr {S}}: \{g : (0,g,0) \in \mathbb {D}^{p,q}_{\gamma ,\mu }\} \longrightarrow \mathbb {U}^{p,q}_{\gamma ,\mu }. \end{aligned}$$ We can now finally prove the main result of this paper. In view of Sect. 6.1, it remains to establish existence and uniqueness of a solution \(u \in \mathbb {U}^{p,q}_{\gamma ,\mu }\) of (20) for given \((f,g,u_{0}) \in \mathbb {G}^{p,q}_{\gamma ,\mu } \oplus \mathbb {D}^{p,q}_{\gamma ,\mu }\). By a standard (but quite technical) perturbation and localization procedure, it is enough to consider the model problem $$\begin{aligned} \begin{array}{rll} \partial _{t}u + (1+{\mathcal {A}}(D))u &{}= f, \\ {\mathcal {B}}_{j}(D)u &{}= g_{j}, &{} j=1,\ldots ,n, \\ u(0) &{}= u_{0}, \end{array} \end{aligned}$$ on the half-space, where \({\mathcal {A}}\) and \({\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{n}\) are top-order constant coefficient operators as considered in Sect. 6.2. This procedure is worked out in full detail in [47]; for further comments we refer to "Appendix 7." Let \((f,g,u_{0}) \in \mathbb {F}^{p,q}_{\gamma ,\mu } \oplus \mathbb {D}^{p,q}_{\gamma ,\mu }\). 
In view of Theorem 4.4 and the fact that \(\mathrm {tr}_{t=0} \circ {\mathcal {B}}_{j}(D) = {\mathcal {B}}_{j}(D)\) on \(\mathbb {U}^{p,q}_{\gamma ,\mu } \circ \mathrm {tr}_{t=0}\) when \(\kappa _{j,\gamma } < \frac{1+\mu }{q}\), we may without loss of generality assume that \(u_{0}=0\). By Corollary 6.8 we may furthermore assume that \(g=0\). Defining \(A_{B}\) as the operator on \(Y = L^{p}(\mathbb {R}^{d}_{+},w_{\gamma })\) with domain $$\begin{aligned} D(A_{B}) := \{ u \in W^{2n}_{p}(\mathbb {R}^{d}_{+},w_{\gamma }) : {\mathcal {B}}_{j}(D)v=0, j=1,\ldots ,n \} \end{aligned}$$ and given by the rule \(A_{B}v :={\mathcal {A}}(D)v\), we need to show that \(1+A_{B}\) enjoys the property of maximal \(L^{q}_{\mu }\)-regularity: for every \(f \in L^{q}(\mathbb {R}_{+},v_{\mu };Y)\) there exists a unique \(u \in {_{0}}W^{1}_{q}(\mathbb {R}_{+},v_{\mu };Y) \cap L^{q}(\mathbb {R}_{+},v_{\mu };D(A_{B}))\) with \(u'+(1+A_{B})u=f\). In the same way as in [17, Theorem 7.4] it can be shown that \(A_{B} \in {\mathcal {H}}^{\infty }(Y)\) with angle \(\phi ^{\infty }_{A_{B}} < \frac{\pi }{2}\). As Y is a UMD space, \(1+A_{B}\) enjoys maximal \(L^{q}_{\mu }\)-regularity for \(\mu =0\); see, e.g., [66, Section 4.4] and the references therein. By [12, 54] this extrapolates to all \(\mu \in (-1,q-1)\) (i.e., all \(\mu \) for which \(v_{\mu } \in A_{q}\)). \(\square \) This technical condition on \({\varvec{w}}''\) is in particular satisfied when \({\varvec{p}}'' \in (1,\infty )^{l-1}\) and . P. Acquistapace and B. Terreni. A unified approach to abstract linear nonautonomous parabolic equations. Rend. Sem. Mat. Univ. Padova, 78:47–107, 1987. E. Alòs and S. Bonaccorsi. Stability for stochastic partial differential equations with Dirichlet white-noise boundary conditions. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 5(4):465–481, 2002. H. Amann. Linear and quasilinear parabolic problems. Vol. I, volume 89 of Monographs in Mathematics. Birkhäuser Boston, Inc., Boston, MA, 1995. Abstract linear theory. H. Amann. Vector-Valued Distributions and Fourier Multipliers. Unpublished notes, 2003. H. Amann. Maximal regularity and quasilinear parabolic boundary value problems. In Recent advances in elliptic and parabolic problems, pages 1–17. World Sci. Publ., Hackensack, NJ, 2005. H. Amann. Anisotropic function spaces and maximal regularity for parabolic problems. Part 1. Jindr̆ich Nec̆as Center for Mathematical Modeling Lecture Notes, 6. Matfyzpress, Prague, 2009. S.B. Angenent. Nonlinear analytic semiflows. Proc. Roy. Soc. Edinburgh Sect. A, 115(1-2):91–107, 1990. J. Bourgain. Vector-valued singular integrals and the \(H^1\)-BMO duality. In Probability theory and harmonic analysis (Cleveland, Ohio, 1983), volume 98 of Monogr. Textbooks Pure Appl. Math., pages 1–19. Dekker, New York, 1986. K. Brewster and M. Mitrea. Boundary value problems in weighted Sobolev spaces on Lipschitz manifolds. Mem. Differ. Equ. Math. Phys., 60:15–55, 2013. Z. Brzeźniak, B. Goldys, S. Peszat, and F. Russo. Second order PDEs with Dirichlet white noise boundary conditions. J. Evol. Equ., 15(1):1–26, 2015. H-Q Bui. Weighted Besov and Triebel spaces: interpolation by the real method. Hiroshima Math. J., 12(3):581–605, 1982. R. Chill and A. Fiorenza. Singular integral operators with operator-valued kernels, and extrapolation of maximal regularity into rearrangement invariant Banach function spaces. J. Evol. Equ., 14(4-5):795–828, 2014. P. Clément and S. Li. 
Abstract parabolic quasilinear equations and application to a groundwater flow problem. Adv. Math. Sci. Appl., 3(Special Issue):17–32, 1993/94. P. Clément and J. Prüss. Global existence for a semilinear parabolic Volterra equation. Math. Z., 209(1):17–26, 1992. P. Clément and G. Simonett. Maximal regularity in continuous interpolation spaces and quasilinear parabolic equations. J. Evol. Equ., 1(1):39–67, 2001. G. Da Prato and P. Grisvard. Sommes d'opérateurs linéaires et équations différentielles opérationnelles. J. Math. Pures Appl. (9), 54(3):305–387, 1975. R. Denk, M. Hieber, and J. Prüss. \({\cal{R}}\)-boundedness, Fourier multipliers and problems of elliptic and parabolic type. Mem. Amer. Math. Soc., 166(788):viii+114, 2003. R. Denk, M. Hieber, and J. Prüss. Optimal \(L^p\)-\(L^q\)-estimates for parabolic boundary value problems with inhomogeneous data. Math. Z., 257(1):193–224, 2007. R. Denk and M. Kaip. General parabolic mixed order systems in \({L_p}\) and applications, volume 239 of Operator Theory: Advances and Applications. Birkhäuser/Springer, Cham, 2013. R. Denk, J. Prüss, and R. Zacher. Maximal \(L_p\)-regularity of parabolic problems with boundary dynamics of relaxation type. J. Funct. Anal., 255(11):3149–3187, 2008. R. Denk, J. Saal, and J. Seiler. Inhomogeneous symbols, the Newton polygon, and maximal \(L^p\)-regularity. Russ. J. Math. Phys., 15(2):171–191, 2008. R. Denk and R. Schnaubelt. A structurally damped plate equation with Dirichlet–Neumann boundary conditions. J. Differential Equations, 259(4):1323–1353, 2015. G. Dore and A. Venni. On the closedness of the sum of two closed operators. Math. Z., 196(2):189–201, 1987. J. Escher, J. Prüss, and G. Simonett. Analytic solutions for a Stefan problem with Gibbs–Thomson correction. J. Reine Angew. Math., 563:1–52, 2003. G. Fabbri and B. Goldys. An LQ problem for the heat equation on the halfline with Dirichlet boundary control and noise. SIAM J. Control Optim., 48(3):1473–1488, 2009. S. Fackler, T.P. Hytönen, and N. Lindemulder. Weighted Estimates for Operator-Valued Fourier Multipliers. arXiv e-prints, page arXiv:1810.00172, Sep 2018. R. Farwig and H. Sohr. Weighted \(L^q\)-theory for the Stokes resolvent in exterior domains. J. Math. Soc. Japan, 49(2):251–288, 1997. C. Gallarati, E. Lorist, and M.C. Veraar. On the \(\ell ^s\)-boundedness of a family of integral operators. Rev. Mat. Iberoam., 32(4):1277–1294, 2016. J. García-Cuerva, R. Macías, and J. L. Torrea. The Hardy–Littlewood property of Banach lattices. Israel J. Math., 83(1-2):177–201, 1993. Y. Giga. Solutions for semilinear parabolic equations in \(L^p\) and regularity of weak solutions of the Navier–Stokes system. J. Differential Equations, 62(2):186–212, 1986. L. Grafakos. Modern Fourier analysis, volume 250 of Graduate Texts in Mathematics. Springer, New York, second edition, 2009. D.D. Haroske and L. Skrzypczak. Entropy and approximation numbers of embeddings of function spaces with Muckenhoupt weights, II. General weights. Ann. Acad. Sci. Fenn. Math., 36(1):111–138, 2011. T.P. Hytönen, J.M.A.M. van Neerven, M.C. Veraar, and L. Weis. Analysis in Banach spaces. Volume I. Martingales and Littlewood–Paley theory. Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer-Verlag. 2016. J. Johnsen. Elliptic boundary problems and the Boutet de Monvel calculus in Besov and Triebel–Lizorkin spaces. Math. Scand., 79(1):25–85, 1996. J. Johnsen and W. Sickel. A direct proof of Sobolev embeddings for quasi-homogeneous Lizorkin–Triebel spaces with mixed norms. J. Funct. 
Spaces Appl., 5(2):183–198, 2007. J. Johnsen and W. Sickel. On the trace problem for Lizorkin–Triebel spaces with mixed norms. Math. Nachr., 281(5):669–696, 2008. N. J. Kalton and L. Weis. The \(H^\infty \)-calculus and sums of closed operators. Math. Ann., 321(2):319–345, 2001. M. Köhne, J. Prüss, and M. Wilke. On quasilinear parabolic evolution equations in weighted \(L_p\)-spaces. J. Evol. Equ., 10(2):443–463, 2010. P.C. Kunstmann and L. Weis. Maximal \(L_p\)-regularity for parabolic equations, Fourier multiplier theorems and \(H^\infty \)-functional calculus. In Functional analytic methods for evolution equations, volume 1855 of Lecture Notes in Math., pages 65–311. Springer, Berlin, 2004. O. A. Ladyženskaja, V. A. Solonnikov, and N. N. Ural'ceva. Linear and quasilinear equations of parabolic type. Translated from the Russian by S. Smith. Translations of Mathematical Monographs, Vol. 23. American Mathematical Society, Providence, R.I., 1968. J. LeCrone, J. Pruess, and M. Wilke. On quasilinear parabolic evolution equations in weighted \(L_p\)-spaces II. J. Evol. Equ., 14(3):509–533, 2014. N. Lindemulder. Parabolic Initial-Boundary Value Problems with Inhomogeneous Data: A weighted maximal regularity approach. Master's thesis, Utrecht University, 2014. N. Lindemulder. An Intersection Representation for a Class of Vector-valued Anisotropic Function Spaces. ArXiv e-prints (arXiv:1903.02980), March 2019. N. Lindemulder, M. Meyries, and M.C. Veraar. Interpolation with normal boundary conditions for function spaces with power weights. 2019. In preparation. A. Lunardi. Analytic semigroups and optimal regularity in parabolic problems. Progress in Nonlinear Differential Equations and their Applications, 16. Birkhäuser Verlag, Basel, 1995. V. Maz'ya and T. Shaposhnikova. Higher regularity in the layer potential theory for Lipschitz domains. Indiana Univ. Math. J., 54(1):99–142, 2005. M. Meyries. Maximal regularity in weighted spaces, nonlinear boundary conditions, and global attractors. PhD thesis, Karlsruhe Institute of Technology, 2010. M. Meyries and R. Schnaubelt. Maximal regularity with temporal weights for parabolic problems with inhomogeneous boundary conditions. Math. Nachr., 285(8-9):1032–1051, 2012. M. Meyries and M.C Veraar. Sharp embedding results for spaces of smooth functions with power weights. Studia Math., 208(3):257–293, 2012. M. Meyries and M.C. Veraar. Traces and embeddings of anisotropic function spaces. Math. Ann., 360(3-4):571–606, 2014. M. Meyries and M.C. Veraar. Pointwise multiplication on vector-valued function spaces with power weights. J. Fourier Anal. Appl., 21(1):95–136, 2015. M. Mitrea and M. Taylor. The Poisson problem in weighted Sobolev spaces on Lipschitz domains. Indiana Univ. Math. J., 55(3):1063–1089, 2006. J. Prüss. Maximal regularity for evolution equations in \(L_p\)-spaces. Conf. Semin. Mat. Univ. Bari, (285):1–39 (2003), 2002. J. Prüss and G. Simonett. Maximal regularity for evolution equations in weighted \(L_p\)-spaces. Arch. Math. (Basel), 82(5):415–431, 2004. Jan Prüss and Gieri Simonett. Moving interfaces and quasilinear parabolic evolution equations. Basel: Birkhäuser/Springer, 2016. D. Roberts. Equations with Dirichlet Boundary Noise. PhD thesis, The University of New South Wales, 2011. J.L. Rubio de Francia. Martingale and integral transforms of Banach space valued functions. In Probability and Banach spaces (Zaragoza, 1985), volume 1221 of Lecture Notes in Math., pages 195–222. Springer, Berlin, 1986. T. Runst and W. Sickel. 
Sobolev spaces of fractional order, Nemytskij operators, and nonlinear partial differential equations, volume 3 of de Gruyter Series in Nonlinear Analysis and Applications. Walter de Gruyter & Co., Berlin, 1996. V.S. Rychkov. On restrictions and extensions of the Besov and Triebel–Lizorkin spaces with respect to Lipschitz domains. J. London Math. Soc. (2), 60(1):237–257, 1999. B. Scharf, H-J. Schmeißer, and W. Sickel. Traces of vector-valued Sobolev spaces. Math. Nachr., 285(8-9):1082–1106, 2012. Y. Shibata and S. Shimizu. On the \(L_p\)-\(L_q\) maximal regularity of the Neumann problem for the Stokes equations in a bounded domain. J. Reine Angew. Math., 615:157–209, 2008. R. B. Sowers. Multidimensional reaction-diffusion equations with white noise boundary perturbations. Ann. Probab., 22(4):2071–2121, 1994. S.A. Tozoni. Vector-valued extensions of operators on martingales. J. Math. Anal. Appl., 201(1):128–151, 1996. P. Weidemaier. Maximal regularity for parabolic equations with inhomogeneous boundary conditions in Sobolev spaces with mixed \(L_p\)-norm. Electron. Res. Announc. Amer. Math. Soc., 8:47–51, 2002. L. Weis. Operator-valued Fourier multiplier theorems and maximal \(L_p\)-regularity. Math. Ann., 319(4):735–758, 2001. L. Weis. The \(H^\infty \) holomorphic functional calculus for sectorial operators—a survey. In Partial differential equations and functional analysis, volume 168 of Oper. Theory Adv. Appl., pages 263–294. Birkhäuser, Basel, 2006. The author would like to thank Mark Veraar for the supervision of his master thesis [42], which led to the present paper. Delft Institute of Applied Mathematics, Delft University of Technology, P.O. Box 5031, 2600 GA, Delft, The Netherlands Nick Lindemulder Search for Nick Lindemulder in: Correspondence to Nick Lindemulder. The author is supported by the Vidi subsidy 639.032.427 of the Netherlands Organisation for Scientific Research (NWO). Appendix A: Series Estimates in Triebel–Lizorkin and Besov Spaces Lemma A.1 Let X be a Banach space, \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\), \(s > 0\), and . Suppose that there exists an \({\varvec{r}} \in (0,1)^{l}\) such that and . Then, for every \(c>0\), there exists a constant \(C>0\) such that, for all \((f_{k})_{k \in \mathbb {N}} \subset {\mathcal {S}}'(\mathbb {R}^{d};X)\) satisfying and it holds that \(\sum _{k \in \mathbb {N}}f_{k}\) defines a convergent series in \({\mathcal {S}}'(\mathbb {R}^{d};X)\) with limit of norm . This can be proved in the same way as [36, Lemma 3.19], using Lemma A.5 below instead of [36, Proposition 3.14]. For more details, we refer to [42, Lemma 5.2.22]. Let X be a Banach space, \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\), \(s \in \mathbb {R}\), and . For every \(c>1\), there exists a constant \(C>0\) such that, for all \((f_{k})_{k \in \mathbb {N}} \subset {\mathcal {S}}'(\mathbb {R}^{d};X)\) satisfying This can be proved in the same way as [36, Lemma 3.20]. In fact, one only needs a minor modification of the proof of Lemma A.1. \(\square \) Let X be a Banach space, \({\varvec{a}} \in (0,\infty )^{l}\), \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\), \(s \in \mathbb {R}\), and . 
For every \(c>1\), there exists a constant \(C>0\) such that, for all \((f_{k})_{k \in \mathbb {N}} \subset {\mathcal {S}}'(\mathbb {R}^{d};X)\) satisfying (58) and The above two lemmas are through Lemma A.5 based on the following maximal inequality: Let \({\varvec{a}} \in (0,\infty )^{l}\) and . Let \(j_{0} \in \{1,\ldots ,l\}\) and \(r_{j_{0}} \in (0,\min \{p_{j_{0}},\ldots ,p_{l}\})\) be such that . Then gives rise to a well-defined bounded sublinear operator on . Moreover, there holds a Fefferman–Stein inequality for : for every \(q \in (\max \{1,r\},\infty ]\) there exists a constant \(C \in (0,\infty )\) such that, for all sequences , This can be easily derived from [28, Theorem 2.6], which is a weighted version of the special case of the \(L^{p}\)-boundedness of the Banach lattice version of the Hardy–Littlewood maximal function [8, 29, 57, 63] for mixed-norm spaces (also see [28, Remark 2.7]). \(\square \) Let X be a Banach space, \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\), and . Suppose \({\varvec{r}} \in (0,1)^{l}\) is such that for \(j=1,\ldots ,l\). Let \(\psi \in {\mathcal {S}}(\mathbb {R}^{d})\) be such that , and set for each \(n \in \mathbb {N}\). Then, there exists a constant \(C>0\) such that, for all \((f_{n})_{n \in \mathbb {N}} \subset {\mathcal {S}}'(\mathbb {R}^{d};X)\) with for some \(R \ge 1\), the following inequality holds true: As in the proof of [36, Proposition 3.14], it can be shown that for some constant \(c>0\) independent of \((f_{n})_{n}\). The desired result now follows from Lemma A.4. \(\square \) Given a function \(f:\mathbb {R}^{d} \longrightarrow X\), \({\varvec{r}} \in (0,\infty )^{l}\) and \({\varvec{b}} \in (0,\infty )^{l}\), we define the maximal function of Peetre–Fefferman–Stein type by Let X be a Banach space, \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\), and . Let \({\varvec{r}} \in (0,1)^{l}\) be such that for \(j=1,\ldots ,l\). Then, there exists a constant \(C>0\) such that, for all \((f_{n})_{n \in \mathbb {N}} \subset {\mathcal {S}}'(\mathbb {R}^{d};X)\) and \(({\varvec{b}}^{[n]})_{n \in \mathbb {N}} \subset (0,\infty )^{l}\) with for all \(n \in \mathbb {N}\), we have the inequality for some constant \(c>0\) only depending on \({\varvec{r}}\). The desired result now follows from Lemma A.4. \(\square \) Comments on the localization and perturbation procedure As already mentioned in the proof of Theorem 3.4, the localization and perturbation procedure for reducing to the model problem case on \(\mathbb {R}^{d}_{+}\) is worked out in full detail in [47]. However, there only the case \(q=p\) with temporal weights having a positive power is considered. For some of the estimates used there (parts) of the proofs do not longer work in our setting, where the main difficulty comes from \(q \ne p\). It is the goal of this appendix to consider these estimates. Top-order coefficients having small oscillations The most crucial part in the localization and perturbation procedure where we need to take care of the estimates is [47, Proposition 2.3.1] on top-order coefficients having small oscillations. To be more specific, we only consider the estimates in Step (IV) of its proof. Before we go to these estimates, let us start with the lemma that makes it possible to reduce to the situation of top-order coefficients having small oscillations. 
Lemma B.1 Let X be a Banach space, \(J \subset \mathbb {R}\) and interval, \({\mathscr {O}} \subset \mathbb {R}^{d}\) a domain with compact boundary \(\partial {\mathscr {O}}\), \(\kappa \in \mathbb {R}\), \(n \in \mathbb {N}_{>0}\), \(s,r \in (1,\infty )\) and \(p \in [1,\infty ]\). If \(\kappa > \frac{1}{s}+\frac{d-1}{2nr}\), then $$\begin{aligned} F^{\kappa }_{s,p}(J;L^{r}(\partial {\mathscr {O}};X)) \cap L^{s}(J;B^{2n\kappa }_{r,p}(\partial {\mathscr {O}};X)) \hookrightarrow BUC(\partial {\mathscr {O}} \times J;X). \end{aligned}$$ By a standard localization procedure, we may restrict ourselves to the case that \(J=\mathbb {R}\) and \({\mathscr {O}}=\mathbb {R}^{d}_{+}\) (so that \(\partial {\mathscr {O}} = \mathbb {R}^{d}_{+}\)). By [43], $$\begin{aligned}&F^{\kappa }_{s,p}(\mathbb {R};L^{r}(\mathbb {R}^{d-1};X)) \cap L^{s}(\mathbb {R};B^{2n\kappa }_{r,p}(\mathbb {R}^{d-1};X)) = \{ f \in {\mathcal {S}}'(\mathbb {R}^{d-1}\times \mathbb {R};X) \nonumber \\&\quad : (S_{n}f)_{n} \in L^{s}(\mathbb {R})[[\ell ^{p}_{\kappa }(\mathbb {N})]L^{r}(\mathbb {R}^{d-1})](X) \} \end{aligned}$$ with equivalence of norms, where \((S_{n})_{n \in \mathbb {N}}\) correspond to some fixed choice of \(\varphi \in \Phi ^{(d-1,1),(\frac{1}{2n},1)}(\mathbb {R}^{d})\). For \(\epsilon > 0\), we thus obtain $$\begin{aligned} F^{\kappa }_{s,p}(\mathbb {R};L^{r}(\mathbb {R}^{d-1};X)) \cap L^{s}(\mathbb {R};B^{2n\kappa }_{r,p}(\mathbb {R}^{d-1};X)) \hookrightarrow B^{\kappa -\epsilon ,\left( \frac{1}{2n},1\right) }_{(r,s),s,(d-1,1)}(\mathbb {R}^{d};X). \end{aligned}$$ Choosing \({\tilde{\kappa }}\) with \(\kappa> {\tilde{\kappa }} > \frac{1}{s}+\frac{d-1}{2nr}\), the desired inclusion follows from Corollary 4.11. Let X be a Banach space, \(i \in \{1,\ldots ,l\}\), \(T \in \mathbb {R}\) and Then, there exists an extension operator which, for every \({\varvec{a}} \in (0,\infty )^{l}\), \(s \in \mathbb {R}\), \({\varvec{p}} \in [1,\infty )^{l}\), \(q \in [1,\infty ]\) and , restricts to a bounded linear operator from to whose operator norm can be estimated by a constant independent of X and T. This can be shown in the same way as in [59]. \(\square \) Let X be a Banach space, \(I=(-\infty ,T)\) with \(T \in (-\infty ,\infty ]\), \(\kappa > 0\), \(n \in \mathbb {N}_{>0}\), \(p,q \in [1,\infty )\), \(r,u \in (p,\infty )\), \(s,v \in (q,\infty )\) with \(\frac{1}{p}=\frac{1}{r}+\frac{1}{u}\) and \(\frac{1}{q}=\frac{1}{s}+\frac{1}{v}\). Let \(\mu \in (-1,\infty )\) be such that \(\frac{v}{q}\mu \in (-1,v-1)\). Then, $$\begin{aligned}&||fg||_{F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\mu });X)} \lesssim ||f||_{L^{\infty }(\mathbb {R}^{d-1}\times I;{\mathcal {B}}(X))} ||g||_{F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\mu });X)} \\&\quad + ||f||_{F^{\kappa }_{s,p}(I;L^{r}(\mathbb {R}^{d-1};{\mathcal {B}}(X))) \cap L^{s}(I;B^{2n\kappa }_{r,p}(\mathbb {R}^{d-1};{\mathcal {B}}(X)))} ||g||_{F^{0,\left( \frac{1}{2n},1\right) }_{(u,v),1,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\frac{v}{q}\mu });X)} \end{aligned}$$ with implicit constant independent of X and T. Note here that \(\frac{v}{q}\mu < v-1\) when \(\mu < q-1\). 
Extending f from \(\mathbb {R}^{d-1} \times I\) to \(\mathbb {R}^{d-1} \times \mathbb {R}\) by using an extension operator of Fichtenholz type and extending g from \(\mathbb {R}^{d-1} \times I\) to \(\mathbb {R}^{d-1} \times \mathbb {R}\) by using an extension operator as in Lemma B.2, we may restrict ourselves to the case \(I=\mathbb {R}\). Let \((S_{n})_{n \in \mathbb {N}}\) correspond to some fixed choice of \(\varphi \in \Phi ^{(d-1,1),(\frac{1}{2n},1)}(\mathbb {R}^{d})\), say with \(A=1\) and \(B=\frac{3}{2}\). As in [58, Chapter 4] (the isotropic case), we can use paraproducts associated with \((S_{n})_{n \in \mathbb {N}}\) in order to treat the pointwise product fg. For this, it is convenient to define \(S^{k} \in {\mathcal {L}}({\mathcal {S}}'(\mathbb {R}^{d};X))\) by \(S^{k}:=\sum _{n=0}^{k}S_{n}\). Given \(f \in L^{\infty }(\mathbb {R}^{d};{\mathcal {B}}(X))\) and \(g \in F^{\kappa ,(\frac{1}{2n},1)}_{(p,q),p,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X) \hookrightarrow L^{(p,q),(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X)\), if the paraproducts $$\begin{aligned}&\Pi _{1}(f,g) := \sum _{k=2}^{\infty }(S^{k-2}f)(S_{k}g), \Pi _{2}(f,g) := \sum _{k=0}^{\infty }\sum _{j=-1}^{1}(S_{k+j}f)(S_{k}g), \Pi _{3}(f,g) \\&\quad := \sum _{k=2}^{\infty }(S_{k}f)(S^{k-2}g), \end{aligned}$$ exist (as convergent series) in \({\mathcal {S}}'(\mathbb {R}^{d};X)\), then $$\begin{aligned} fg = \Pi _{1}(f,g) + \Pi _{2}(f,g) + \Pi _{3}(f,g). \end{aligned}$$ Here, the Fourier supports of the summands in the paraproducts satisfy Using Lemma A.1, it can be shown as in [47, Lemma 1.3.19] that $$\begin{aligned} ||\Pi _{i}(f,g)||_{F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X)} \lesssim ||f||_{L^{\infty }(\mathbb {R}^{d};{\mathcal {B}}(X))} ||g||_{F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X)}, \quad \quad i=1,2, \end{aligned}$$ $$\begin{aligned}&||\Pi _{3}(f,g)||_{F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X)} \lesssim ||(2^{n\kappa }S_{n}f)_{n}||_{L^{s}(\mathbb {R})[[\ell ^{p}(\mathbb {N})]L^{r}(\mathbb {R}^{d-1})](X)}\\&\quad ||g||_{L^{(u,v),(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\frac{v}{q}\mu });X)}. \end{aligned}$$ The desired estimate now follows from (16) and (60). \(\square \) Let the notations and assumptions be as in Lemma B.3. For each \(\delta > \frac{1}{s}+\frac{d-1}{2nr}\), the inclusion $$\begin{aligned} F^{\delta ,\left( \frac{1}{2n},1\right) }_{(p,q),\infty ,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\mu });X) \hookrightarrow F^{0,\left( \frac{1}{2n},1\right) }_{(u,v),1,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\frac{v}{q}\mu });X) \end{aligned}$$ holds true with a norm that can be estimated by a constant independent of T and X. Thanks to Lemma B.2, we only need to establish the inclusion for \(I=\mathbb {R}\). Writing \(\epsilon := \delta - \left[ \frac{1}{s}+\frac{d-1}{2nr}\right] > 0\), we have $$\begin{aligned} F^{\delta ,\left( \frac{1}{2n},1\right) }_{(p,q),\infty ,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X)&\hookrightarrow&B^{\delta ,\left( \frac{1}{2n},1\right) }_{(p,q),\infty ,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X) \\&\hookrightarrow&B^{\epsilon ,\left( \frac{1}{2n},1\right) }_{(u,v),\infty ,(d-1,1)}(\mathbb {R}^{d},(1,v_{\frac{v}{q}\mu });X) \\&\hookrightarrow&F^{0,\left( \frac{1}{2n},1\right) }_{(u,v),1,(d-1,1)}(\mathbb {R}^{d},(1,v_{\frac{v}{q}\mu });X), \end{aligned}$$ where the second inclusion is obtained from Proposition 5.1. 
\(\square \) Let us write $$\begin{aligned}&{_{0}}F^{s,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\mu });X)\\&\quad := \left\{ \begin{array}{ll} F^{s,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\mu });X), &{} \quad s < \frac{1+\mu }{q}, \\ \{ f \in F^{s,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\mu });X) : \mathrm {tr}_{t=0}f=0 \}, &{} \quad s > \frac{1+\mu }{q}. \end{array}\right. \end{aligned}$$ A combination of Lemmas B.3 and B.4 followed by extension by zero for g and extension of Fichtenholz type for f yields $$\begin{aligned}&||fg||_{{_{0}}F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1}\times J,(1,v_{\mu });X)} \lesssim ||f||_{L^{\infty }(\mathbb {R}^{d-1}\times I;{\mathcal {B}}(X))} ||g||_{{_{0}}F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1}\times J,(1,v_{\mu });X)} \\&\quad + ||f||_{F^{\kappa }_{s,p}(J;L^{r}(\mathbb {R}^{d-1};{\mathcal {B}}(X))) \cap L^{s}(J;B^{2n\kappa }_{r,p}(\mathbb {R}^{d-1};{\mathcal {B}}(X)))} ||g||_{{_{0}}F^{\delta ,\left( \frac{1}{2n},1\right) }_{(p,q),\infty ,(d-1,1)}(\mathbb {R}^{d-1}\times J,(1,v_{\mu });X)} \end{aligned}$$ with implicit constant independent of X and T, which is a suitable substitute for the key estimate in the proof of [47, Proposition 2.3.1]. Lower order terms By the trace result Theorem 4.4, in order that the condition for the boundary operators in Remark 3.3 is satisfied, it is enough that there exist \(\sigma _{j,\beta } \in [0,\frac{n_{j}-|\beta |}{2n})\) such that \(b_{j,\beta }\) is a pointwise multiplier from $$\begin{aligned} F^{\kappa _{j,\gamma }+\sigma _{j,\beta }}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};X)) \cap L^{q}(J,v_{\mu };F^{2n(\kappa _{j,\gamma }+\sigma _{j,\beta })}_{p,p}(\partial {\mathscr {O}};X)) \end{aligned}$$ $$\begin{aligned} F^{\kappa _{j,\gamma }}_{q,p}(J,v_{\mu };L^{p}(\partial {\mathscr {O}};X)) \cap L^{q}(J,v_{\mu };F^{2n\kappa _{j,\gamma }}_{p,p}(\partial {\mathscr {O}};X)). \end{aligned}$$ This is achieved by the next lemma. Let X be a Banach space, \(I=(-\infty ,T)\) with \(T \in (-\infty ,\infty ]\), \(\kappa ,\sigma > 0\), \(n \in \mathbb {N}_{>0}\), \(p,q \in [1,\infty )\), \(r,u \in (p,\infty )\), \(s,v \in (q,\infty )\) with \(\frac{1}{p}=\frac{1}{r}+\frac{1}{u}\) and \(\frac{1}{q}=\frac{1}{s}+\frac{1}{v}\). Let \(\mu \in (-1,\infty )\) be such that \(\frac{v}{q}\mu \in (-1,v-1)\). If \(\kappa + \sigma > \frac{1}{s}+\frac{d-1}{2nr}\), then $$\begin{aligned}&||fg||_{F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\mu });X)} \lesssim ||f||_{F^{\kappa }_{s,p}(J;L^{r}(\mathbb {R}^{d-1};{\mathcal {B}}(X))) \cap L^{s}(J;B^{2n\kappa }_{r,p}(\mathbb {R}^{d-1};{\mathcal {B}}(X)))}\\&\quad ||g||_{F^{\kappa +\sigma ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\mu });X)} \end{aligned}$$ Note that for \(\mu \in (-1,\infty )\) to be such that \(\frac{v}{q}\mu \in (-1,v-1)\) it is sufficient that \(\mu \in (-1,q-1)\) with \(\mu > \frac{q}{s}-1\). As in the proof of Lemma B.3, we may restrict ourselves to the case \(I=\mathbb {R}\) and use paraproducts. 
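(As a concrete, finite-dimensional aside, not needed for the argument: the decomposition \(fg = \Pi _{1}(f,g)+\Pi _{2}(f,g)+\Pi _{3}(f,g)\) from the proof of Lemma B.3 becomes exact on a periodic grid if sharp dyadic frequency cut-offs are used in place of the smooth Littlewood–Paley functions. The following sketch, with an assumed one-dimensional grid and assumed test functions, reproduces the three paraproducts and checks that they sum to the pointwise product.)

```python
# Illustrative sketch: Bony-type paraproducts with sharp dyadic cut-offs on a
# periodic grid; with sharp cut-offs the decomposition f*g = P1 + P2 + P3 is
# exact up to floating-point rounding.
import numpy as np

N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
xi = np.fft.fftfreq(N, d=1.0 / N)                     # integer frequencies

# Dyadic blocks: block 0 captures |xi| < 1, block k captures 2^{k-1} <= |xi| < 2^k.
nblocks = int(np.ceil(np.log2(N))) + 1
masks = [np.abs(xi) < 1] + [(2 ** (k - 1) <= np.abs(xi)) & (np.abs(xi) < 2 ** k)
                            for k in range(1, nblocks)]

def S(k, f):
    """k-th (sharp) Littlewood-Paley block of f; zero outside the valid range."""
    if k < 0 or k >= nblocks:
        return np.zeros_like(f)
    return np.fft.ifft(masks[k] * np.fft.fft(f)).real

def S_up_to(k, f):
    """Partial sum S^k f = S_0 f + ... + S_k f."""
    return sum(S(j, f) for j in range(0, k + 1))

f = np.cos(3 * x) + 0.5 * np.sin(17 * x)
g = np.exp(np.cos(x))

P1 = sum(S_up_to(k - 2, f) * S(k, g) for k in range(2, nblocks))
P2 = sum(S(k + j, f) * S(k, g) for k in range(nblocks) for j in (-1, 0, 1))
P3 = sum(S(k, f) * S_up_to(k - 2, g) for k in range(2, nblocks))

print(np.max(np.abs(P1 + P2 + P3 - f * g)))           # ~1e-14
```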
Using Lemma A.1 and Lemma 4.16, we find $$\begin{aligned} ||\Pi _{1}(f,g)||_{F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X)} \lesssim ||f||_{B^{-\sigma ,\left( \frac{1}{2n},1\right) }_{(\infty ,\infty ),\infty ,(d-1,1)}(\mathbb {R}^{d};X)} ||g||_{F^{\kappa +\sigma ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X)}. \end{aligned}$$ Using Lemma A.1, for \(i=2,3\) we find $$\begin{aligned}&||\Pi _{i}(f,g)||_{F^{\kappa ,\left( \frac{1}{2n},1\right) }_{(p,q),p,(d-1,1)}(\mathbb {R}^{d},(1,v_{\mu });X)} \lesssim ||(2^{n\kappa }S_{n}f)_{n}||_{L^{s}(\mathbb {R})[[\ell ^{p}(\mathbb {N})]L^{r}(\mathbb {R}^{d-1})](X)}\\&\quad ||g||_{L^{(u,v),(d-1,1)}(\mathbb {R}^{d-1}\times I,(1,v_{\frac{v}{q}\mu });X)}. \end{aligned}$$ Similarly to Lemma B.1, choosing \({\tilde{\kappa }}\) with \(\kappa + \sigma> {\tilde{\kappa }} + \sigma > \frac{1}{s}+\frac{d-1}{2nr}\), we have $$\begin{aligned} F^{\kappa }_{s,p}(\mathbb {R};L^{r}(\mathbb {R}^{d-1};X)) \cap L^{s}(\mathbb {R};B^{2n\kappa }_{r,p}(\mathbb {R}^{d-1};X)) \hookrightarrow B^{-\sigma ,\left( \frac{1}{2n},1\right) }_{(\infty ,\infty ),\infty ,(d-1,1)}(\mathbb {R}^{d};X), \end{aligned}$$ where we now use (the vector-valued version of) [35, Theorem 7] instead of Corollary 4.11. The desired estimate follows from Lemma B.4 and (16). \(\square \) Lindemulder, N. Maximal regularity with weights for parabolic problems with inhomogeneous boundary conditions. J. Evol. Equ. (2019). https://doi.org/10.1007/s00028-019-00515-7 DOI: https://doi.org/10.1007/s00028-019-00515-7 Mathematics Subject Classification Primary 35K50 Secondary 42B15 Anisotropic spaces Besov Bessel potential Inhomogeneous boundary conditions Maximal regularity Mixed-norms Parabolic initial-boundary value problems Triebel–Lizorkin Vector-valued
Proceedings of the 2018 International Conference on Intelligent Computing (ICIC 2018) and Intelligent Computing and Biomedical Informatics (ICBI) 2018 conference: medical informatics and decision making Implementation of machine learning algorithms to create diabetic patient re-admission profiles Mohamed Alloghani1,2, Ahmed Aljaaf1,3, Abir Hussain1, Thar Baker1, Jamila Mustafina4, Dhiya Al-Jumeily1 & Mohammed Khalaf5 Machine learning is a branch of Artificial Intelligence that is concerned with the design and development of algorithms, and it enables today's computers to have the property of learning. Machine learning is gradually growing and becoming a critical approach in many domains such as health, education, and business. In this paper, we applied machine learning to the diabetes dataset with the aim of recognizing patterns and combinations of factors that characterizes or explain re-admission among diabetes patients. The classifiers used include Linear Discriminant Analysis, Random Forest, k–Nearest Neighbor, Naïve Bayes, J48 and Support vector machine. Of the 100,000 cases, 78,363 were diabetic and over 47% were readmitted.Based on the classes that models produced, diabetic patients who are more likely to be readmitted are either women, or Caucasians, or outpatients, or those who undergo less rigorous lab procedures, treatment procedures, or those who receive less medication, and are thus discharged without proper improvements or administration of insulin despite having been tested positive for HbA1c. Diabetic patients who do not undergo vigorous lab assessments, diagnosis, medications are more likely to be readmitted when discharged without improvements and without receiving insulin administration, especially if they are women, Caucasians, or both. The approaches used in managing maladies have a major influence on the medical outcome of the patient including the probability of re-admission. A growing number of publications suggest the urgent needs to explore and identify the contributing factors that imply critical roles in human diseases. This can help to uncover the mechanisms underlying diseases progression. Ideally, this can be achieved through experimental results that depict valuable methods with better performance when compared with other studies. In the same context, many strategies were developed to achieve such objectives by employing novel statistical models on large-scale datasets [1–6]. Such an observation has prompted the requirement of effective patient management protocols, especially for those admitted into intensive care unit. However, the same protocols are not fully applicable to Non–Intensive Care Unit (Non-ICU) inpatients, and this has inculcated poor inpatient management practices regarding the number of treatments, the number of lab test conducted, discharge, insignificant changes or improvements at the time of discharge, and high rates of re-admissions. Nonetheless, such a claim has not been proven and the influence on these factors on re-admission among diabetes. As such, this study hypothesized that time spent in hospital, number of lab procedures, number of medications, and number of diagnoses have an association with re-admission rates and are proxies of in-hospital management practices that affect patient health outcomes. 
However, detection of Hemoglobin A1c (HbA1c) marker, administration of insulin treatment, diabetes treatment instances, and noted changes are factors that can moderate the admission and are treated as partial management factors in the study. Some of the re-admission is avoidable although this requires evidence-based treatments. According to [7] in a retrospective cohort study evaluated the basic diagnoses and 30-day re-admission patterns among Academic Tertiary Medical Center patients' and established within 30-days re-admissions are avoidable. In specific, the study established that 8.0% of the 22.3% of the within 30 days re-admissions are potentially avoidable. As a subtext to the conclusion, the authors asserted that these re-admission cases were related in direct or indirect consequences due to the pre-conditions related to the primary diagnosis. For instance, research demonstrated that patients admitted for heart failure and other related diseases are more likely to be readmitted for acute heart failure.However, the re-occurrence of the heart condition is dependent on the treatment administered, observed health outcome at discharge, and other pre-existing health conditions. Research contribution Under the circumstances, it is essential for healthcare stakeholders to pursue re-admission reduction strategies, especially with a specific focus on the potentially avoidable re-admissions. The authors in [8] highlighted the role that financial penalties imposed on health institutions with higher re-admission rates in reducing the re-admission incidences. Furthermore, the article assessed and concluded that extensive assessment of patient needs, reconciling medication, educating the patients, planning timely outpatient appointments, and ensuring follow-up through calls and messages are among the best emerging practices for reducing re-admission rates. However, implementing these strategies requires significant funding although the long-term impacts outweigh any financial demands. Hence, it suffices to deduce that re-admissions in a health facility are a priority area for improved health facilities and reducing healthcare cost. Regardless of the far-reaching interest in hospital re-admissions, little research has explored re-admission among diabetes patients. A reduction of diabetic patient re-admission can reduce health cost while improving health outcomes at the same time. More importantly, some studies have identified socioeconomic status, ethnicity, disease burden, public coverage, and history of hospitalization as key re-admission risk factors. Besides these factors and principal admission conditions, re-admission can be a factor of health management practices. This study provides information on the managerial causes of re-admission using six machine learning models. Additionally, most studies employ regression data mining technique and as such this study provides a framework for implementing other machine learning techniques in exploring the causative agents of re-admission rates among diabetes patients. The primary importance of the algorithm is to help hospitals identify multiple strategies that work effectively for re-admission of a given health condition. In specific, implementation of multiple strategies will focus on improved communication, the safety of the medication, advancements in care planning, and enhanced training on the management of medical conditions that often lead to re-admissions. 
Each of these sub-domains involves decision making and given the size and nature of healthcare information, data mining and deep learning techniques may prove critical in reducing the re-admission rates. Figure 1 illustrates the high-level machine learning process diagram used in the paper. The study explored the probable predictors of diabetes hospital re-admission among the hospitals using machine learning techniques along with other exploratory methods. The dataset consists of 55 attributes and only 18 were used as per the scope of the study. The performance of the models is evaluated using the conventional confusion matrix and ROC efficiency analysis. The final re-admission model is based on the best performing model as per the true positive rates, sensitivity and specificity. The Machine Learning Process Diagram Linear discriminant analysis LDA algorithm is a variant of Fisher's linear discriminant and it classifies data to vector format based linear combination of attributes based on a target factor or class variable. The algorithm has a close technical resemblance to Analysis of Variance (ANOVA) and regression as it explains the influences of predictors using linear combinations [5]. There are two approaches to LDA. The techniques assume that the data conforms to Gaussian distribution and as such, assumes that each attribute has a bell-shape curve when visualized and it also assumes that each variable has the same variance, and that data points of each attribute vary around the average by the same amount. That is, the algorithm requires the data and its attributes to be normally distributed and of constant variance or standard variation. As a result, the algorithm estimates the mean and the variance of the data for each of the class that it creates using the conventional statistical techniques. $$ \mu=\frac{1}{nk}\sum(x) $$ Where μ is the mean of each input attribute (x) for each class (k) and n is the total number of observations in the dataset. The variance associated with the classes is also computed using the following conventional method. $$ \sigma^{2}=\ \frac{1}{n-k}\ \sum{(x-\mu})^{2} $$ In Eq. 2, sigma squared is the variance across all instance serving as input in the model, k is the number of classes, and n is the number of observations or instance in the dataset. μ is the mean and is computed using Eq. 1. Besides the assumptions, the algorithm makes prediction using a probabilistic approach that can be summarized in two steps. Firstly, LDA classifies predictors and assigns them to a class based on the value of the posterior probability denoted as $$ \pi\ \left(y=\complement_{i}\middle| x\ \right) $$ The objective is to minimize the total probability of mis-classifying the features, and this approach relies on Bayes' rule and the Gaussian distribution assumption for class means where: $$ \pi\ \left(x\middle| y\ =\ \complement_{i}\right) $$ Secondly, LDA finds a linear combination of the predictors that return the optimum predictor value, and this study uses the latter. LDA algorithm can be implemented in five basic steps. First, in performing LDA classification, the d-dimensional mean vectors are computed for the classes identified in the dataset using the mean approach (Eq. 1). The variance and the normality assumption must be checked before proceeding. Second, both within and between-class scatters are computed and returned as a matrix. The within-class scatter or distances are computed based on Eq. 5. 
$$ S_{within\ =\ }\sum_{i=1}^{c}S_{i} $$ $$ S_{i}\ =\ \sum_{x\in D i}^{n}{\left(x\ -\ \mu_{i}\ \right)(x\ -\ \mu_{i})^{T}} $$ where i is the scatter for every class identified in the dataset and μ is the mean of the classes computed using Eq. 1. The Between-class scatter is calculated using Eq. 7. $$ S_{between}\ =\ \sum_{i-1}^{c}{N_{i}\left(\mu_{i}\ -\ \mu\ \right)(\mu_{i}\ -\ \mu)^{T}} $$ In Eq. 7, S is general mean value while μ and N refers to the sample mean and sizes of identified classes respectively. The third step involves solving Eigenvectors associated with the product of the within-class and out-class matrices. The fourth step involves sorting the linear discriminant to identify the new feature subspace. The selection and sorting using decreasing magnitudes of Eigenvalues. The last step involves the transformation of the samples or observations onto the new linear discriminant sub-spaces. The pseudo-code for LDA is presented in Algortihm 1. For the classes i, the algorithm divides the data into D1 and D2 then calculates the within and between the class distances, and the best linear discriminant is a vector obtained from the product of transpose of within-class and between-class scatter matrices. Random forest is a variant of decision degree growing technique and it is different from the other classifiers, because it supports random growth branches within the selected subspace. The random forest model predicts the outcome based on a set of random base regression trees. The algorithm selects a node at each random base regression and split it to grow the other branches. It is important to note that Random Forest is an ensemble algorithm because it combines different trees. Ideally, ensemble algorithms combine one or more classifiers with the different types. Random forest can be thought of a bootstrapping approach for improving the results obtained from the decision tree. The algorithm works in the following order. First, it selects a bootstrap sample S(i)from the sample space and the argument denoting the bootstrap sample refers to the ith bootstrap. The algorithm learns a conventional decision tree although through implementation of a modified decision tree algorithm. The modification is specific and is systematically implemented as the tree grows. That is, at each node of decision tree, instead of implementing an iteration for all possible feature split, RF randomly selects a subset of features such that f⊆F and then splits the features in the subset (f). The splitting is based on the best feature in the subset and during implementation, the algorithm chooses the subset that it is much smaller than the set of all features. Small size of subset reduces the burden to decide on the number of features to split since datasets with large size subsets tend to increase the computational complexity. Hence, the narrowing of the attributes to be learned improves the learning speed of the algorithm. The algorithm uses bagging to implement the ensemble decision tree, and it is prudent to note that bagging reduces the variance of the decision tree algorithm. Support Vector Machine is a group of supervised learning techniques that classify data based on regression analysis. One of the variables in the training sample should be categorical so that the learning process assigns new categorical value as part of the predictive outcome. As such, SVM is a non-likelihood binary classifier leveraging the linear properties. 
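The SVM formulation is continued below; first, to make the five LDA steps just outlined concrete, the following is a minimal sketch on synthetic data. It is not the study's actual pipeline: the arrays X and y are placeholders for the prepared feature matrix and the re-admission class variable.

```python
# Minimal sketch of the LDA steps described above (Eqs. 1, 5-7) on
# synthetic placeholder data, not the actual readmission dataset.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

classes = np.unique(y)
overall_mean = X.mean(axis=0)
d = X.shape[1]

S_w = np.zeros((d, d))            # within-class scatter (Eqs. 5-6)
S_b = np.zeros((d, d))            # between-class scatter (Eq. 7)
for c in classes:
    Xc = X[y == c]
    mu_c = Xc.mean(axis=0)
    S_w += (Xc - mu_c).T @ (Xc - mu_c)
    diff = (mu_c - overall_mean).reshape(-1, 1)
    S_b += Xc.shape[0] * (diff @ diff.T)

# Eigen-decomposition of S_w^{-1} S_b; eigenvectors sorted by
# decreasing eigenvalue give the linear discriminant directions.
eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_w) @ S_b)
order = np.argsort(eigvals.real)[::-1]
W = eigvecs[:, order].real[:, : len(classes) - 1]

X_lda = X @ W                      # samples projected onto the discriminant
print("discriminant weights:", W.ravel())
```

A library implementation such as scikit-learn's LinearDiscriminantAnalysis performs the equivalent computation and is what one would typically use in practice.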
Besides classification and regression, SVM detects outliers and is versatile when applied to dimensionality high [1]. Ideally, a training vector variable, that has at least two categories, is defined as follows: $$ x_{i}\in\mathbb{R}^{p},i=1,...,n $$ where xi represents the training observation and Rp indicates the real-valued p-dimensional feature space and predictor vector space. A pseudo-code for a simple SVM algorithm is illustrated: The algorithm searches for candidate support vectors denoted as S and it assumes that SV occupies as a space where the parameters of the linear features of the hyper-plane are stored. k-nearest neighbor kNN classifies data using the same distance measurement techniques as LDA and other regression-based algorithms. In classification application, the algorithm produces class members while in regression application it returns the value of a feature or a predictor [9]. The technique can identify the most significant predictor and as such was given preference in the analysis. Nonetheless, the algorithm requires high memory and is sensitive to non-contributed features despite being considered insensitive to outliers and versatile among many other qualifying features. The algorithm creates classes or clusters based on the mean distance between data-points. The mean distance is calculated using the following equation. $$ \mathrm{\Psi}(x)=\frac{1}{k}. {\sum}{\left(x_{i},y_{i}\right)\in kN N(x,L,K)} y_{i} $$ In Eq. 9, kNN(x,L,K), k denotes the K nearest neighbors of the input attribute (x) in the learning set space (i). The classification and prediction application of the algorithm depends on the dominant k class and the predictive equation is as the following: $$ \mathrm{\Psi}(x)={argmax}{c\in y}.{\sum}{\left(x_{i},y_{i}\right)\in N\left(x,L,K\right)} y_{i} $$ It is imperative to note that output class consists of members from the target attribute and the distance used in assigning the attributes to classes is based on Euclidean distance. The implementation of the algorithm consists of six steps. The first step involves the computation of Euclidean distance. In the second step, the computed n distances are arranged in a non-decreasing order, and in the third step, a positive integer k is drawn from the sorted Euclidean distances. In the fourth step, k-points corresponding to the k-distances are established and assigned based on proximity to the center of the class. Finally, for k >0 and for (number of points in the i, an attribute x is assigned to that class if ki>kjfor all i≠j is true. Algorithm 4 shows the kNN steps process: Even though Naïve Bayes is one of the supervised learning techniques, it is probabilistic in nature so that the classification is based on Naïve Bayes' rules of probability, especially those of association. Conditional probability is the construct of Naïve Bayes classifier [9–16]. The algorithm assigns instance probabilities to the predictors parsed in a vector format representing each probable outcome. Naïve Bayes classifier is the posterior probability that the dividend of the product of prior with likelihood and evidence returns. The construction of the model from the output of the analysis is quite complex although the probabilistic computation from the generated classes is straightforward [17–22]. 
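Before writing the Bayes theorem formally, the six kNN steps above can be illustrated with a short sketch. This is a minimal illustration on synthetic data; the training arrays and the value k=7 are placeholders, not the study's settings.

```python
# Minimal sketch of the kNN steps described above: compute Euclidean
# distances, sort them, take the k nearest labels and vote (Eq. 10).
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=5):
    # Steps 1-2: Euclidean distances, arranged in non-decreasing order
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]          # steps 3-4: the k closest points
    votes = Counter(y_train[nearest])        # steps 5-6: majority class wins
    return votes.most_common(1)[0][0]

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
print(knn_predict(X_train, y_train, np.array([0.5, 0.2, -0.1]), k=7))
```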
The Bayes Theorem upon which the Naïve Bayes classifier is based can be written as follows: $$ P\left(\mu|\nu\right)=\frac{P(\nu|\mu)P(\mu)}{P(\nu)} $$ Where μ and v are events or instances in an experiment and P(μ) and P(ν) are the probability of their occurrence. The conditional probability of an event μ occurring after v is the basis of Naïve Bayes classifier. The classifier uses maximum likelihood hypothesis to assign data points to classes. The algorithm assumes that each feature is independent and makes equal contribution to the outcome or all features belonging to the same class have the same influence on that class. In Eq. 11, the algorithm computes the probability of event μ provided that v already occurred, and as such v is the evidence and the probability P(μ) is regarded as the priori probability. That is, it refers to probability obtained before seeing the evidence while the conditional probability P(μ|ν) is priori probability of v since it is a probability computed with evidence. J48 is one of the decision tree growing algorithm. However, J48 is the reincarnation of the C4.5 algorithm, which is an extension of the ID3 algorithm [23]. As such, J48 is a hierarchical tree learning technique and it has several mandatory parameters including the confidence value and the minimum learning instance, which are translated to branches and nodes in the final decision tree [23–29]. Data assembly and pre-processing The study used diabetes data that was collected across 130 hospitals in the US in the years between 1999–2008 [30]. The dataset includes data systematically composed from contributing electronic health records' providers that contained encounter data such as inpatient, outpatient and emergency, demographics, provider specialty, diagnosis, in-hospital procedures, in-hospital mortality, laboratory and pharmacy data. The complete list of the features and description is provided in Table S1 (Additional file 1). The data has 55 attributes, about 100,000 observations, and has missing values. However, the study used a sample based on the treatment of diabetes. In specific, of the 100,000 cases, 78,363 meet the inclusion criteria since they received medication for diabetes. Consequently, the study explored re-admission incidences among patients who had received treatment. The amount of missing information, the type of the data (categorical or numeric) that guided the data cleaning process, re-admission, Insulin prescription, HbA1c test results, and observed changes were retained as the major out-come associated with time spent in the hospital, the number of diagnoses, lab procedures, procedures, and medications [31, 32]. Of the 55, only 18 variables were selected as per the scope for analysis and even about 8 of the selected served as proxy controls. The study was split into 70% training and 30% validation subsets. K-fold validation To improve the overall accuracy and validate a model, we relied on the 10-fold cross validation method applied for estimating accuracy. The training dataset is split into k-subsets and the subset held out while the model is fully trained on remaining subsets. Figure 2 illustrates the validation method. The K-fold Cross-validation method utilizes the defined training feature set and randomly splits it into k equal subsets. The model is trained k times. During each iteration, 1 subset is excluded for use as validation. 
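A minimal sketch of this 10-fold scheme, assuming a scikit-learn workflow; the feature matrix, labels, and the Gaussian Naïve Bayes estimator below are placeholders rather than the study's exact preprocessing and models.

```python
# Minimal sketch of 10-fold cross-validation as described above,
# using scikit-learn on placeholder data (500 cases, 18 attributes).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 18))                 # stand-in for the 18 selected attributes
y = rng.integers(0, 2, size=500)               # stand-in for readmitted / not readmitted

# Each of the 10 folds is held out once while the model trains on the rest.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(GaussianNB(), X, y, cv=cv, scoring="accuracy")
print("per-fold accuracy:", np.round(scores, 3))
print("mean +/- std: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```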
This technique reduces over-fitting issues, which occurs when a model trains the data too closely to a set of data, which can result in failure to predict future information reliably [2, 12, 33]. Cross-Validation Scheme for both training validation subsets Exploratory analysis Of the 47.7% diabetic patients who were readmitted, 11.6% stayed in the hospital for less than 30 days while 36.1% stayed for more than 30 days. A majority (52.3%) of those who stayed for more than 30 days did not receive any medical procedures during the first visit. In general, diabetic patients who received a fewer number of lab procedures, treatment procedures, medications, and diagnoses are more likely to be readmitted than their counterparts. Furthermore, the more frequent a patient is admitted as an in-patient the less likely the probability of re-admission. Our study indicated that, women (53.3%) and Caucasian (74.6%) diabetic patients are more vulnerable to re-admission than male and the other races. Besides several lab procedures, medications, and diagnoses, insulin administration and HbA1c results exacerbate the re-admission rates among diabetic patients. The Scatterplots of re-admission incidences with an overlay of HbA1c measurements and change recorded at the time of discharge are shown in Figs. 3 and 4. Scatterplot of Medications and Diagnoses Figure 3 illustrates the Scatterplot of the number of diagnoses and lab procedures that patient received for re-admission rates. The figures have 8 panels displaying scatters of diagnoses and lab procedures for different instances of HbA1c results and change. The plot shows that patients who had negative HbA1c tests results received several diagnosis and very few were readmitted. Those who received less than 10 diagnoses and less than 70 procedures were more likely to be readmitted. None of the patients received more than diagnosis and a majority were admitted for more than 30 days. Figure 4 depicts a scatter plot of a number of diagnoses and lab procedures. The re-admission rates are quite different between a group of patients who noted change at discharge than those who did not. Those who failed to note significant improvement at discharge received more than 50 medications and less than 10 diagnoses. However, re-admission is higher among those who noted improvement at discharge. Density distributions The distribution of re-admission and subsequent patterns associated with reported change and results of HbA1c are shown in Figs. 5 and 6. Density Plots of Predictors by re-admission and HbA1c Density Plots of Predictors by Insulin and change Figures 5 and 6 illustrate the density distribution of number of medications, lab procedures, and diagnoses grouped by re-admission, HbA1c results, insulin administration change at discharge. Notably, the distribution density of the number of lab procedures, medications, and diagnoses are the same for grouping categories. Figure 6 shows significant differences in the number of medications and lab procedures. For instance, the average number of medications differs between 'No', 'Up', 'Steady', and 'Down' insulin categories. A similar difference in mean of the number of medications is observed in the change distribution curve with those recording change at discharge receiving more medications than their counterparts. Smooth linear fits Figures 7 and 8 illustrate the smooth line fits associated with Scatterplots. 
The smoothen fits include a 95% confidence interval and demonstrates the likely performance of linear regression models in forecasting re-admission. Smooth Linear Fits with Insulin and Change as Facets Smooth Linear Fits with re-admission and HbA1c as Facets Figures 7 and 8 depict smooth linear fits of the Scatterplots and density plots in Figs. 3, 4, 5, and 6. The figures illustrate that the number of lab procedures has linear relationships with the number of diagnoses although the data is likely to be heteroskedastic. The number of diagnoses and medications also have the same relationship and plot patterns. For medication versus procedures, the relationship is linear and change in diabetes status increases with medications and lab procedures. As for re-admission, incidents of more than 30 days re-admission reduced with increasing number of diagnoses, lab procedures, and medications. Similarly, the probability of detecting HbA1c increases with increasing number of diagnoses and lab procedures. The performance of the models in predicting re-admission incidence was based on the confusion matrix and in specific the percentage of the correctly predicted read-mission categories. Table 1 depicts that Naïve Bayes correctly classified the re-admission rates less than 30 days and none re-admission incidences. SVM accurately classified 48.3% of the re-admission incidence exceeding 30 days. The objective is to obtain the performing model. Table 1 True Positive Rate Comparison Table Individual model performance The LDA model yields two linear discriminants LD1 and LD2 with proportion trace of 0.9646 and 0.0354 respectively. Hence, the first LD explains more than 96.46% of the between-group variance while the second account for 3.54% of the between-group variance. $$ {}\begin{aligned} LD_{1}=003*\text{ Lab Procedures }-0.102*{ Procedures }\\ + 0.08*\text{ Medications} +0.18 *\text{ Emergency }+0.67\ \text{ Inpatient }\\ + 0.17 \text{Diagnoses} \end{aligned} $$ Figure 9 illustrates the plot of LD1 versus LD2. Equation 12 depicts the profile of diabetic patients. Plot of two linear discriminants obtained from LDA learner The predictors were significantly correlated at 5% level and they influenced re-admission based on the frequency of each. The kNN model used all the 16 predictors to learn the data and selected three as significant predictors. In specific, the kNN model proposes that high re-admission for diabetes treatment is caused by a fewer number of lab procedures, diagnoses, and medications. However, the rates are higher among patients who tested positive for HbA1c and did not fail to receive insulin treatment (Fig. 3). SVM classified the readmitted diabetic patients into three classes using a polynomial of degree 3 suggesting that diabetes re-admission cases do not have a linear relationship with the predictors. As an inference, the polynomial relationship illustrated by the kernel and degree of the SVM indicates higher re-admission rates among patients discharged without any significant changes (Fig. 4). Naïve Bayes classifier yields two classes using the Laplace approach. The classification from the model depicts a reduced likelihood of re-admission in cases where the patients undergo a series of laboratory tests, rigorous diagnosis, proper medication, and discharge after confirmation of improvement. The density distributions in Figs. 5 and 6 compliments the findings of the model. 
In specific, the distributions of the number of medications and lab procedures show a noticeable difference in the distribution when considering insulin administration as part of treatment. Regarding aggregation of the distribution of the number of medications and lab procedures by status at discharge (change), the distribution curves suggest that patients are more likely to feel better at time of discharge provided that the lab services and medications are of superior quality. It is important to reiterate that Naves Bayes' model has true positive and false negative rates showing that it had 13.78% accuracy and 13.78% sensitivity. Finally, random forest classified diabetic patients using linear approaches with re-admission as the control. Figures 7 and 8 demonstrate that the smoothen linear of the paired predictors shows that re-admissions taking more than 30 days is reduced by increasing the number of medical diagnoses. Further, the HbA1c results increase with increasing number of diagnoses. However, it is important to note that the association between the number of lab procedures and medications tends to be non-linear while that between the number of diagnoses and medication is linear regardless of the grouping variable. The J48 based tree shown in Fig. 9 does not consider the linear relationships and omits diabetic patients who were never re-admitted. The resultant tree included a number of inpatient treatment days, number of emergencies, number of medications, lab procedures, and diagnoses in the model. The model suggests that diabetic patients admitted as in-patients tend not to be re-admitted. Similarly, the tree demonstrates that several diagnoses improve health outcomes and reduce re-admission. Best fit model The best fitting model is based on the performance measures summarized in Table 2. The key decision relies on the efficiency of the model in predicting the re-admission rates and the area under the curve (AUC) and precision/recall curve are the best measures for such a task. Table 2 Comparison of model efficiency and sensitivity Table 2 illustrates that Naïve Bayes is the most sensitive and efficient model for learning, classifying and predicting re-admission rates using mHealth data. It has an efficiency of 64% and a sensitivity of 52.4%. The ROC curves associated with the predictions of re-admission that exceeded 30 days are displayed in the figures below. The larger the area covered the more efficient the model is, and this principle Fig. 10 depicts that Naïve Bayes is the most efficient. ROC curves illustrating the Areas Under Curve for the models Naïve bayes analysis The model focused on the top 5 best factors (exposures) that contributed to re-admission for less and more than 30 days. The association between the exposures and outcome (re-admission instances) are given as log odds ratio in the nomograms illustrated in Fig. 11. The three classes model are Class 0 (No re-admission), Class 1 (re-admission for less than 30 days), and Class 3 (re-admission for more than 30 days). Nomogram visualization of Naïve Bayes classifier on target class 0 Figure 11 depicts the exposure factors with absolute importance on Class 0 including number of emergencies, number of patients, discharge disposition ID, admission source ID, and number of diagnoses. The log odds ratios illustrate the association between these exposure factors. The conditional probability for re-admission after discharge based on these exposure factors is 0.5. 
Figure 12 depicts the exposure factors with absolute importance on Class 1 including the number of emergencies, the number of patients, discharge disposition ID, time in hospital ID, and number of diagnoses. The log odds ratios display the association between these exposure factors to lack of re-admission after discharge. The conditional probability for re-admission after discharge based on these exposure factors is 14%. In specific, there is a 48% chance of re-admission for patients with a number of diagnoses between 8.5 and 9.5, and a 52% chance for those with diagnoses between 5.5 and 8.5. Similarly, those spending between 2.5 to 3.5 days in the hospital is more likely to be readmitted (59%) for less than 30 days than their counterparts with 41% chance of re-admission. Finally, those with fewer emergency admission history stand higher chances of re-admission (80%) than those with sufficient emergency admission history. Figure 13 depicts the exposure factors with absolute importance on Class 2 including the number of emergencies, the number of patients, discharge disposition ID, admission source ID, and number of diagnoses. The log odds ratios illustrate the association between these exposure factors to lack of re-admission after discharge. The conditional probability for re-admission after discharge based on these exposure factors is 0.42. The number of emergency admission increases re-admission chances by 80% for those with least history. Further, those with higher inpatient admission history have 65% chance of re-admission for more than 30 days. Most importantly, patients who undergo more than 9.5 diagnoses tests have 70% chance of re-admission for more than 30 days after discharge. The size of the health data and the amount of information contained exemplifies the importance of machine learning in the health sector. Developing the profiles for the patients can help in understanding the factors that help reduce the burden of the disease while at the same time improve outcomes. Diabetes is a major problem given that over 78% of the patients admitted across the 130 hospitals were treated for the condition. Of the total number of diabetic patients who participated in the study, over 47% were readmitted with over 36% percent staying in the hospital for over 30 days. This study has also established that women and Caucasians are more vulnerable to hospital re-admissions [5, 33–39]. Each of the machine learning models has established different combinations of features influencing the admission rates. For instance, LDA proposes a linear combination while the SVM suggests a third-degree polynomial degree of association between re-admission and its predictors. Further, J48 models the relationship as non-linear with emphasis on the importance of emergency admission and in-patient treatment on re-admission rates. kNN models lead to the conclusion that fewer number of lab procedures, diagnoses, and medications lead to increased higher re-admission rates. Diabetic patients who do not undergo vigorous lab assessments, diagnosis, medications are more likely to be readmitted when discharged without improvements and without receiving insulin administration, especially if they are women, Caucasians, or both. AI: ANOVA: C4.5: Data mining algorithm Discriminant analysis HbA1c: Glycated hemoglobin test ID3: Iterative dichotomiser 3 J48: Decision tree J48 KNN: K-nearest neighbors Linear discriminant Naive Bayes Non-ICU: Non intensive care unit RF: SVM: Guoa W-L, DS H. 
An efficient method to transcription factor binding sites imputation via simultaneous completion of multiple matrices with positional consistency. Mol BioSyst. 2017; 13(9):1827–37. https://doi.org/10.1039/C7MB00155J. Strack B, DeShazo JP, Clore JN. Impact of hba1c measurement on hospital readmission rates: Analysis of 70,000 clinical database patient records. BioMed Res Int. 2014; 11. https://doi.org/10.1155/2014/781670. Bengio Y, Grandvalet Y. No unbiased estimator of the variance of k-fold cross-validation. J Mach Learn Res. 2004; 5:1089–105. Bo LJ. Song: Naive bayesian classifier based on genetic simulated annealing algorithm. Procedia Eng. 2011; 23:504–9. https://doi.org/10.1016/j.proeng.2011.11.2538. Chan M. Global report on diabetes. Report. 2016; 978:9241565257. https://apps.who.int/iris/bitstream/handle/10665/204871/9789241565257_eng%.pdf;jsessionid=BE557465C4C16EF288D80B9E41AE01C8?sequence=1. Chen Peng LZ, Huang D-s. Discovery of relationships between long non-coding rnas and genes in human diseases based on tensor completion. IEEE Access. 2018; 6:59152–62. https://doi.org/10.1109/ACCESS.2018.2873013. Bansal D, Khanna K, Chhikara R, Gupta P. Comparative analysis of various machine learning algorithms for detecting dementia. Procedia Comput Sci. 2018; 132:1497–502. https://doi.org/10.1016/j.procs.2018.05.102. Deepti Sisodia DSS. Prediction of diabetes using classification algorithms. Procedia Comput Sci. 2018; 132:1578–85. https://doi.org/10.1016/j.procs.2018.05.122. Kavakiotis I, Tsave O, Salifoglou A, Maglaveras N, Vlahavas I, Chouvarda I. Machine learning and data mining methods in diabetes research. Comput Struct Biotechnol J. 2017; 15:104–16. https://doi.org/10.1016/j.csbj.2016.12.005. Chuai G, Jifang Y, Chen M, et al.Deepcrispr: optimized crispr guide rna design by deep learning. Genome Biol. 2018; 19(1):18. Yi H-C, Huang D-S, Li X, Jiang T-H, Li L-P. A deep learning framework for robust and accurate prediction of ncrna-protein interactions using evolutionary information. Mol Ther-Nucleic Acids. 2018; 1(11):337–44. https://doi.org/10.1016/j.omtn.2018.03.001. Ling H, Kang W, Liang C, Chen H. Combination of support vector machine and k-fold cross validation to predict compressive strength of concrete in marine environment. Constr Build Mater. 2019; 206:355–63. https://doi.org/10.1016/j.conbuildmat.2019.02.071. Harleen Kaur VK. Predictive modelling and analytics for diabetes using a machine learning approach. Appl Comput Inform. 2018. https://doi.org/10.1016/j.aci.2018.12.004. Zhang H, Yu P, et al.Development of novel prediction model for drug-induced mitochondrial toxicity by using naïve bayes classifier method. Food Chem Toxicol. 2017; 10:122–9. https://doi.org/10.1016/j.fct.2017.10.021. Donzé J, Bates DW, Schnipper JL. Causes and patterns of readmissions in patients with common comorbidities: retrospective cohort study. BMJ. 2013; 347(7171). https://doi.org/10.1136/bmj.f7171. Smith DM, Giobbie-Hurder A, Weinberger M, Oddone EZ, Henderson WG, Asch DA, et al.Predicting non-elective hospital readmissions: a multi-site study. Department of veterans affairs cooperative study group on primary care and readmissions. J Clin Epidemiol. 2000; 53(11):1113–8. Han J, Choi Y, Lee C, et al.Expression and regulation of inhibitor of dna binding proteins id1, id2, id3, and id4 at the maternal-conceptus interface in pigs. Theriogenology. 2018; 108:46–55. https://doi.org/10.1016/j.theriogenology.2017.11.029. Jiang L, Wang D, Cai Z, Yan X. 
Survey of Improving Naive Bayes for Classification In: Alhajj R, Gao H, et al., editors. Lecture Notes in Computer Science. Springer: 2007. https://doi.org/10.1007/978-3-540-73871-8_14. Jianga L, Zhang L, Yu L, Wang D. Class-specific attribute weighted naive bayes. Pattern Recogn. 2019; 88:321–30. https://doi.org/10.1016/j.patcog.2018.11.032. Han Lu LW, Zhi S. An assertive reasoning method for emergency response management based on knowledge elements c4.5 decision tree. Expert Syst Appl. 2019; 122:65–74. https://doi.org/10.1016/j.eswa.2018.12.042. Skriver MVJKK, Sandbæk A, Støvring H. Relationship of hba1c variability, absolute changes in hba1c, and all-cause mortality in type 2 diabetes: a danish population-based prospective observational study. Epidemiology. 2015; 3(1):8. https://doi.org/10.1136/bmjdrc-2014-000060. ADA: Economic Costs of Diabetes in the U.S. in 2012. Diabetes Care; 2013. Sun NJDL, Sun B, Wu MY-C. Lossless pruned naive bayes for big data classifications. Big Data Res. 2018; 14:27–36. https://doi.org/10.1016/j.bdr.2018.05.007. Nima Shiri Harzevili SHA. Mixture of latent multinomial naive bayes classifier. Appl Soft Comput. 2018; 69:516–27. https://doi.org/10.1016/j.asoc.2018.04.020. Nongyao Nai-arun RM. Comparison of classifiers for the risk of diabetes prediction. Procedia Comput Sci. 2015; 69:132–42. https://doi.org/10.1016/j.procs.2015.10.014. Arar OFKA. A feature dependent naive bayes approach and its application to the software defect prediction problem. Appl Soft Comput. 2017; 59:197–209. https://doi.org/10.1016/j.asoc.2017.05.043. Wyckoff OPCCB, Ciarkowski SL. Gianchandani: The relationship between diabetes mellitus and 30-day readmission rates. Clin Diabetes Endocrinol. 2017; 3(3):8. https://doi.org/10.1186/s40842-016-0040-x. Ranjit Panigrahi SB. Rank allocation to j48 group of decision tree classifiers using binary and multiclass intrusion detection datasets. Procedia Comput Sci. 2018; 132:323–32. https://doi.org/10.1016/j.procs.2018.05.186. Dungan KM. The effect of diabetes on hospital readmissions.J Diabetes Sci Technol. 1045; 6(5). Sajida Perveen MSea. Performance analysis of data mining classification techniques to predict diabetes. Procedia Comput Sci. 2016; 82:115–21. https://doi.org/10.1016/j.procs.2016.04.016. Ye SYJSHLZ, Ruan P. Dong: The impact of the hba1c level of type 2 diabetics on the structure of haemoglobin. Report. 2016; 33352. https://doi.org/10.1038/srep33352. Kripalani SAB, Theobald CN, EE V. Reducing hospital readmission rates: current strategies and future directions. Annu Rev Med. 2014; 65:471–85. https://doi.org/10.1146/annurev-med-022613-090415. Wong T-T. Parametric methods for comparing the performance of two classification algorithms evaluated by k-fold cross validation on multiple data sets. Pattern Recogn. 2017; 65:97–107. https://doi.org/10.1016/j.patcog.2016.12.018. Wang Xiaohu WL, Nianfeng L. An application of decision tree based on id3. Phys Procedia. 2012; 25:1017–21. https://doi.org/10.1016/j.phpro.2012.03.193. Trishan Panch PS, Atun R. Artificial intelligence, machine learning and health systems. J Global Health. 2018; 8(2). https://doi.org/10.7189/jogh.08.020303. Wenzheng Bao ZJ, Huang D-S. Novel human microbe-disease association prediction using network consistency projection. BMC Bioinformatics. 2017; 18(S116):173–259. https://doi.org/10.1186/s12859-017-1968-2. Wu J. A generalized tree augmented naive bayes link prediction model. J Comput Sci. 2018; 27:206–17. https://doi.org/10.1016/j.jocs.2018.04.006. Mu YFBHUZXea. 
Pan C: Efficacy and safety of linagliptin/metformin single-pill combination as initial therapy in drug-naïve asian patients with type 2 diabetes. Diabetes Res Clin Pract. 2017; 124:48–56. https://doi.org/10.1016/j.diabres.2016.11.026. Zhen Shen WB, Huang D-S. Recurrent neural network for predicting transcription factor binding sites. 2018; 8(15270):10. https://doi.org/10.1038/s41598-018-33321-1. The data sources used in the paper was retrieved from UCI Machine Learning Repository as submitted by the Center for Clinical and Translational Research. The organization and the characteristics of the data made it easy to complete classification and clustering tasks. We are also grateful to the artificial intelligence department for providing the necessary support to carry out such a research. This article has been published as part of BMC Medical Informatics and Decision Making Volume 19 Supplement 9, 2019: Proceedings of the 2018 International Conference on Intelligent Computing (ICIC 2018) and Intelligent Computing and Biomedical Informatics (ICBI) 2018 conference: medical informatics and decision making. The full contents of the supplement are available online at https://bmcmedinformdecismak.biomedcentral.com/articles/supplements/volume-19-supplement-9. The Artificial Intelligence Department-, Dubai, UAE Mohamed Alloghani , Ahmed Aljaaf , Abir Hussain , Thar Baker & Dhiya Al-Jumeily Liverpool John Moores University, Liverpool, UAE The University of Anbar, Al-Tameem Street, Al-Anbar, Al-Ramadi, 55431, Iraq Ahmed Aljaaf Kazan Federal University, Kremlyovskaya St, Kazan, Republic of Tatarstan, 420008, Russia Jamila Mustafina Department of Computer Science, Al-Maarif University College, Anbar, The city of Ramadi, 31001, Iraq Mohammed Khalaf Search for Mohamed Alloghani in: Search for Ahmed Aljaaf in: Search for Abir Hussain in: Search for Thar Baker in: Search for Jamila Mustafina in: Search for Dhiya Al-Jumeily in: Search for Mohammed Khalaf in: All authors read and approved the final manuscript. Correspondence to Mohamed Alloghani. Additional file 1 List of features and descriptions in the experiment datasets. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Alloghani, M., Aljaaf, A., Hussain, A. et al. Implementation of machine learning algorithms to create diabetic patient re-admission profiles. BMC Med Inform Decis Mak 19, 253 (2019) doi:10.1186/s12911-019-0990-x Diabetes re-admission
The First Simultaneous X-Ray/Radio Detection of the First Be/BH System MWC 656 (1701.07265) M. Ribó, B. Marcote, S. Migliari INAF/IAPS-Roma, ICREA, Universitat de Barcelona, Jodrell Bank Center for Astrophysics, University of Manchester, Universidad de La Laguna, European Space Astronomy Centre) Jan. 25, 2017 astro-ph.HE MWC 656 is the first known Be/black hole (BH) binary system. Be/BH binaries are important in the context of binary system evolution and sources of detectable gravitational waves because they are possible precursors of coalescing neutron star/BH binaries. X-ray observations conducted in 2013 revealed that MWC 656 is a quiescent high-mass X-ray binary (HMXB), opening the possibility to explore X-ray/radio correlations and the accretion/ejection coupling down to low luminosities for BH HMXBs. Here we report on a deep joint Chandra/VLA observation of MWC 656 (and contemporaneous optical data) conducted in 2015 July that has allowed us to unambiguously identify the X-ray counterpart of the source. The X-ray spectrum can be fitted with a power law with $\Gamma\sim2$, providing a flux of $\simeq4\times10^{-15}$ erg cm$^{-2}$ s$^{-1}$ in the 0.5-8 keV energy range and a luminosity of $L_{\rm X}\simeq3\times10^{30}$ erg s$^{-1}$ at a 2.6 kpc distance. For a 5 M$_\odot$ BH this translates into $\simeq5\times10^{-9}$ $L_{\rm Edd}$. These results imply that MWC 656 is about 7 times fainter in X-rays than it was two years before and reaches the faintest X-ray luminosities ever detected in stellar-mass BHs. The radio data provide a detection with a peak flux density of $3.5\pm1.1$ $\mu$Jy beam$^{-1}$. The obtained X-ray/radio luminosities for this quiescent BH HMXB are fully compatible with those of the X-ray/radio correlations derived from quiescent BH low-mass X-ray binaries. These results show that the accretion/ejection coupling in stellar-mass BHs is independent of the nature of the donor star.
A large light-mass component of cosmic rays at 10^{17} - 10^{17.5} eV from radio observations (1603.01594) S. Buitink, J. R. Hörandel, L. Rossetto, S. Thoudam, I. M. Avruch, P. Best, W. N. Brouw, B. Ciardi, A. Deller, J. Eislöffel, R. Fender, J. M. Griessmeier, T. E. Hassall, A. Horneffer, A. Karastergiou, G. Kuper, S. Markoff, J. P. McKean, E. Orru, M. Pietka, H. J. A. Röttgering, J. Sluman, A. Stewart, C. Tasse, R. J. van Weeren, M. W. Wise, J. A. Zensus Department of Astrophysics/IMAPP, Radboud University Nijmegen, ASTRON, Netherlands Institute for Radio Astronomy, Max-Planck-Institut für Radioastronomie, IKP, Karlsruhe Institute of Technology Department of Physics, Astronomy, University of California Irvine, Vrije Universiteit Brussel, D Helmholtz-Zentrum Potsdam, DeutschesGeoForschungsZentrum GFZ, SRON Netherlands Institute for Space Research, Kapteyn Astronomical Institute, University of Twente, Institute for Astronomy, University of Edinburgh, University of Hamburg, School of Physics, Astronomy, University of Southampton, Research School of Astronomy, Astrophysics, Australian National University, Anton Pannekoek Institute for Astronomy, University of Amsterdam, Onsala Space Observatory, Dept.
of Earth, Space Sciences, Chalmers University of Technology, Astronomisches Institut der Ruhr-Universität Bochum, Hamburger Sternwarte, Leiden Observatory, Leiden University, LPC2E - Universite d'Orleans/CNRS, Station de Radioastronomie de Nancay, Observatoire de Paris - CNRS/INSU, Astrophysics, University of Oxford, Astro Space Center of the Lebedev Physical Institute, National Astronomical Observatory of Japan, Japan STFC Rutherford Appleton Laboratory, Harwell Science, Innovation Campus, Center for Information Technology Centre de Recherche Astrophysique de Lyon, Observatoire de Lyon, Fakultät für Physik, Universität Bielefeld, Department of Physics, Electronics, Rhodes University, Jodrell Bank Center for Astrophysics, School of Physics, Astronomy, The University of Manchester, Department of Astrophysical Sciences, Princeton University, GEPI, Observatoire de Paris, CNRS, Université Paris Diderot, May 1, 2016 hep-ex, astro-ph.HE Cosmic rays are the highest energy particles found in nature. Measurements of the mass composition of cosmic rays between 10^{17} eV and 10^{18} eV are essential to understand whether this energy range is dominated by Galactic or extragalactic sources. It has also been proposed that the astrophysical neutrino signal comes from accelerators capable of producing cosmic rays of these energies. Cosmic rays initiate cascades of secondary particles (air showers) in the atmosphere and their masses are inferred from measurements of the atmospheric depth of the shower maximum, Xmax, or the composition of shower particles reaching the ground. Current measurements suffer from either low precision, or a low duty cycle and a high energy threshold. Radio detection of cosmic rays is a rapidly developing technique, suitable for determination of Xmax with a duty cycle of in principle nearly 100%. The radiation is generated by the separation of relativistic charged particles in the geomagnetic field and a negative charge excess in the shower front. Here we report radio measurements of Xmax with a mean precision of 16 g/cm^2 between 10^{17}-10^{17.5} eV. Because of the high resolution in $Xmax we can determine the mass spectrum and find a mixed composition, containing a light mass fraction of ~80%. Unless the extragalactic component becomes significant already below 10^{17.5} eV, our measurements indicate an additional Galactic component dominating at this energy range. Synchronous X-ray and Radio Mode Switches: a Rapid Global Transformation of the Pulsar Magnetosphere (1302.0203) W. Hermsen, J. van Leeuwen, G.A.E. Wright SRON, Netherlands Institute for Space Research, Astronomical Institute Anton Pannekoek, University of Amsterdam, ASTRON, the Netherlands Institute for Radio Astronomy, Physics Department, University of Vermont, Jodrell Bank Center for Astrophysics, School of Physics, Astronomy Astronomy Centre, University of Sussex) Feb. 1, 2013 astro-ph.HE Pulsars emit low-frequency radio waves through to high-energy gamma-rays that are generated anywhere from the surface out to the edges of the magnetosphere. Detecting correlated mode changes in the multi-wavelength emission is therefore key to understanding the physical relationship between these emission sites. Through simultaneous observations, we have detected synchronous switching in the radio and X-ray emission properties of PSR B0943+10. When the pulsar is in a sustained radio 'bright' mode, the X-rays show only an un-pulsed, non-thermal component. 
Conversely, when the pulsar is in a radio 'quiet' mode, the X-ray luminosity more than doubles and a 100%-pulsed thermal component is observed along with the non-thermal component. This indicates rapid, global changes to the conditions in the magnetosphere, which challenge all proposed pulsar emission theories.
The Numerical Solution of the space-time fractional diffusion equation involving the Caputo-Katugampola fractional derivative NACO Home $ V $-$ E $-invexity in $ E $-differentiable multiobjective programming doi: 10.3934/naco.2021028 Online First articles are published articles within a journal that have not yet been assigned to a formal issue. This means they do not yet have a volume number, issue number, or page numbers assigned to them, however, they can still be found and cited using their DOI (Digital Object Identifier). Online First publication benefits the research community by making new scientific discoveries known as quickly as possible. Readers can access Online First articles via the "Online First" tab for the selected journal. A dual Bregman proximal gradient method for relatively-strongly convex optimization Jin-Zan Liu 1, and Xin-Wei Liu 2,, School of Science, Hebei University of Technology, Tianjin, China Institute of Mathematics, Hebei University of Technology, Tianjin, China * Corresponding author: Xin-Wei Liu Received January 2021 Revised June 2021 Early access July 2021 Fund Project: The research is partially supported by NSFC grant 11671116 Full Text(HTML) We consider a convex composite minimization problem, whose objective is the sum of a relatively-strongly convex function and a closed proper convex function. A dual Bregman proximal gradient method is proposed for solving this problem and is shown that the convergence rate of the primal sequence is $ O(\frac{1}{k}) $. Moreover, based on the acceleration scheme, we prove that the convergence rate of the primal sequence is $ O(\frac{1}{k^{\gamma}}) $, where $ \gamma\in[1,2] $ is determined by the triangle scaling property of the Bregman distance. Keywords: Relatively strong convexity, Dual method, Bregman distance, Relative smoothness. Mathematics Subject Classification: Primary: 90C25, 90C46; Secondary: 65K05. Citation: Jin-Zan Liu, Xin-Wei Liu. A dual Bregman proximal gradient method for relatively-strongly convex optimization. Numerical Algebra, Control & Optimization, doi: 10.3934/naco.2021028 H. H. Bauschke, J. Bolte and M. Teboulle, A descent lemma beyond Lipschitz gradient continuity: first-order method revisited and applications, Mathematics of Operations Research, 42 (2017), 330-348. doi: 10.1287/moor.2016.0817. Google Scholar H. H. Bauschke and J. M. Borwein, Joint and separate convexity of the Bregman distance, in Inherently Parallel Algorithms in Feasibility and Optimization and their Applications (eds. D. Butnariu, Y. Censor, and S. Reich), Elsevier, (2001), 23–26. doi: 10.1016/S1570-579X(01)80004-5. Google Scholar A. Beck, First-Order Methods in Optimization, SIAM, Philadelphia, 2017. doi: 10.1137/1.9781611974997. Google Scholar A. Beck and M. Teboulle, Gradient-based algorithms with applications to signal recovery, Convex Optimization in Signal Processing and Communications, (2009), 42–88. doi: 10.1017/CBO9780511804458.003. Google Scholar A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM Journal on Imaging Science, 2 (2009), 183-202. doi: 10.1137/080716542. Google Scholar A. Beck and M. Teboulle, A fast dual proximal gradient algorithm for convex minimization and applications, Operations Research Letters, 42 (2014), 1-6. doi: 10.1016/j.orl.2013.10.007. Google Scholar M. Bertero, P. Boccacci, G. Desider and G. Vicidomini, Image deblurring with Poisson data: from cells to galaxies, Inverse Problems, 25 (2009), 1-26. 
CommonCrawl
On chaotic $C_0$-semigroups and infinitely regular hypercyclic vectors
T. Kalmes, Proc. Amer. Math. Soc. 134 (2006), 2997-3002
A $C_0$-semigroup $\mathcal{T}=(T(t))_{t\geq 0}$ on a Banach space $X$ is called hypercyclic if there exists an element $x\in X$ such that $\{T(t)x; t\geq 0\}$ is dense in $X$. $\mathcal{T}$ is called chaotic if $\mathcal{T}$ is hypercyclic and the set of its periodic vectors is dense in $X$ as well. We show that a spectral condition introduced by Desch, Schappacher and Webb, requiring many eigenvectors of the generator which depend analytically on the eigenvalues, not only implies the chaoticity of the semigroup but also the chaoticity of every $T(t)$, $t>0$. Furthermore, we show that semigroups whose generators have compact resolvent are never chaotic. In a second part we prove the existence of hypercyclic vectors in $D(A^\infty)$ for a hypercyclic semigroup $\mathcal{T}$, where $A$ is its generator.
References:
Robert A. Adams, Sobolev spaces, Pure and Applied Mathematics, Vol. 65, Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1975. MR 0450957
J. Bonet, L. Frerick, A. Peris, and J. Wengenroth, Transitive and hypercyclic operators on locally convex spaces, Bull. London Math. Soc. 37 (2005), no. 2, 254–264. MR 2119025, DOI 10.1112/S0024609304003698
R. deLaubenfels and H. Emamirad, Chaos for functions of discrete and continuous weighted shift operators, Ergodic Theory Dynam. Systems 21 (2001), no. 5, 1411–1427. MR 1855839, DOI 10.1017/S0143385701001675
Wolfgang Desch, Wilhelm Schappacher, and Glenn F. Webb, Hypercyclic and chaotic semigroups of linear operators, Ergodic Theory Dynam. Systems 17 (1997), no. 4, 793–819. MR 1468101, DOI 10.1017/S0143385797084976
J. Dieudonné, Treatise on analysis. Vol. II, Pure and Applied Mathematics, Vol. 10-II, Academic Press, New York-London, 1970. Translated from the French by I. G. Macdonald. MR 0258551
Klaus-Jochen Engel and Rainer Nagel, One-parameter semigroups for linear evolution equations, Graduate Texts in Mathematics, vol. 194, Springer-Verlag, New York, 2000. With contributions by S. Brendle, M. Campiti, T. Hahn, G. Metafune, G. Nickel, D. Pallara, C. Perazzoli, A. Rhandi, S. Romanelli and R. Schnaubelt. MR 1721989
Gilles Godefroy and Joel H. Shapiro, Operators with dense, invariant, cyclic vector manifolds, J. Funct. Anal. 98 (1991), no. 2, 229–269. MR 1111569, DOI 10.1016/0022-1236(91)90078-J
Karl-Goswin Grosse-Erdmann, Universal families and hypercyclic operators, Bull. Amer. Math. Soc. (N.S.) 36 (1999), no. 3, 345–381. MR 1685272, DOI 10.1090/S0273-0979-99-00788-0
K.-G. Grosse-Erdmann, Recent developments in hypercyclicity, RACSAM. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 97 (2003), no. 2, 273–286 (English, with English and Spanish summaries). MR 2068180
Mai Matsui, Mino Yamada, and Fukiko Takeo, Supercyclic and chaotic translation semigroups, Proc. Amer. Math. Soc. 131 (2003), no. 11, 3535–3546. MR 1991766, DOI 10.1090/S0002-9939-03-06960-0
J. C. Oxtoby and S. M. Ulam, Measure-preserving homeomorphisms and metrical transitivity, Ann. of Math. (2) 42 (1941), 874–920. MR 5803, DOI 10.2307/1968772
T. Kalmes. Affiliation: FB IV - Mathematik, Universität Trier, D-54286 Trier, Germany. MR Author ID: 717771. Email: [email protected]
Received by editor(s): May 4, 2005. Published electronically: May 5, 2006. Communicated by: Jonathan M. Borwein
© Copyright 2006 American Mathematical Society. The copyright for this article reverts to public domain 28 years after publication.
MSC (2000): Primary 47A16, 47D03
DOI: https://doi.org/10.1090/S0002-9939-06-08391-2
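As a quick reference, the two properties used in the abstract can be written in display form; this is only a restatement of the definitions given above, not an additional result of the paper.

```latex
% Hypercyclicity and chaos for a C_0-semigroup \mathcal{T} = (T(t))_{t \ge 0}
% on a Banach space X, restating the definitions from the abstract above.
\[
  \mathcal{T} \text{ is hypercyclic} \iff \exists\, x \in X : \overline{\{\, T(t)x : t \ge 0 \,\}} = X ,
\]
\[
  \mathcal{T} \text{ is chaotic} \iff \mathcal{T} \text{ is hypercyclic and } \overline{\{\, x \in X : T(t)x = x \text{ for some } t > 0 \,\}} = X .
\]
```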
CommonCrawl
Days Sales of Inventory (DSI): Definition, Formula, Importance

What Is Days Sales of Inventory (DSI)?
The days sales of inventory (DSI) is a financial ratio that indicates the average time in days that a company takes to turn its inventory, including goods that are a work in progress, into sales. DSI is also known as the average age of inventory, days inventory outstanding (DIO), days in inventory (DII), days sales in inventory, or days inventory and is interpreted in multiple ways. Indicating the liquidity of the inventory, the figure represents how many days a company's current stock of inventory will last. Generally, a lower DSI is preferred as it indicates a shorter duration to clear off the inventory, though the average DSI varies from one industry to another.
Days sales of inventory (DSI) is the average number of days it takes for a firm to sell off inventory. DSI is a metric that analysts use to determine the efficiency of sales. A high DSI can indicate that a firm is not properly managing its inventory or that it has inventory that is difficult to sell.

Days Sales of Inventory (DSI) Formula and Calculation
$$DSI = \frac{\text{Average inventory}}{COGS} \times 365 \text{ days}$$
where DSI = days sales of inventory and COGS = cost of goods sold.
To manufacture a salable product, a company needs raw material and other resources which form the inventory and come at a cost. Additionally, there is a cost linked to the manufacturing of the salable product using the inventory. Such costs include labor costs and payments towards utilities like electricity, which is represented by the cost of goods sold (COGS) and is defined as the cost of acquiring or manufacturing the products that a company sells during a period. DSI is calculated based on the average value of the inventory and cost of goods sold during a given period or as of a particular date. Mathematically, the number of days in the corresponding period is calculated using 365 for a year and 90 for a quarter. In some cases, 360 days is used instead. The numerator figure represents the valuation of the inventory. The denominator (Cost of Sales / Number of Days) represents the average per day cost being spent by the company for manufacturing a salable product. The net factor gives the average number of days taken by the company to clear the inventory it possesses. Two different versions of the DSI formula can be used depending upon the accounting practices.
In the first version, the average inventory amount is taken as the figure reported at the end of the accounting period, such as at the end of the fiscal year ending June 30. This version represents DSI value "as of" the mentioned date. In another version, the average value of Start Date Inventory and End Date Inventory is taken, and the resulting figure represents DSI value "during" that particular period. Therefore, either
$$\text{Average Inventory} = \text{Ending Inventory}$$
or
$$\text{Average Inventory} = \frac{\text{Beginning Inventory} + \text{Ending Inventory}}{2}$$
The COGS value remains the same in both versions.

Since DSI indicates the duration of time a company's cash is tied up in its inventory, a smaller value of DSI is preferred. A smaller number indicates that a company is more efficiently and frequently selling off its inventory, which means rapid turnover leading to the potential for higher profits (assuming that sales are being made in profit). On the other hand, a large DSI value indicates that the company may be struggling with obsolete, high-volume inventory and may have invested too much into the same. It is also possible that the company may be retaining high inventory levels in order to achieve high order fulfillment rates, such as in anticipation of bumper sales during an upcoming holiday season.

DSI is a measure of the effectiveness of inventory management by a company. Inventory forms a significant chunk of the operational capital requirements for a business. By calculating the number of days that a company holds onto the inventory before it is able to sell it, this efficiency ratio measures the average length of time that a company's cash is locked up in the inventory. However, this number should be looked upon cautiously as it often lacks context. DSI tends to vary greatly among industries depending on various factors like product type and business model. Therefore, it is important to compare the value among the same sector peer companies. Companies in the technology, automobile, and furniture sectors can afford to hold on to their inventories for long, but those in the business of perishable or fast-moving consumer goods (FMCG) cannot. Therefore, sector-specific comparisons should be made for DSI values.

One must also note that a high DSI value may be preferred at times depending on the market dynamics. If a short supply is expected for a particular product in the next quarter, a business may be better off holding on to its inventory and then selling it later for a much higher price, thus leading to improved profits in the long run. For example, a drought situation in a particular soft water region may mean that authorities will be forced to supply water from another area where water quality is hard. It may lead to a surge in demand for water purifiers after a certain period, which may benefit the companies if they hold onto inventories. Irrespective of the single-value figure indicated by DSI, the company management should find a mutually beneficial balance between optimal inventory levels and market demand.

A similar ratio related to DSI is inventory turnover, which refers to the number of times a company is able to sell or use its inventory over the course of a particular time period, such as quarterly or annually.
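The two averaging conventions above can be made concrete with a short helper. The function name and the illustrative figures below are not from the article; they are a minimal sketch of the formula as stated.

```python
def days_sales_of_inventory(cogs, ending_inventory, beginning_inventory=None, days=365):
    """Days Sales of Inventory = average inventory / COGS * days.

    If beginning_inventory is None, the "as of" convention is used
    (average inventory = ending inventory); otherwise the "during the
    period" convention averages the beginning and ending figures.
    """
    if beginning_inventory is None:
        average_inventory = ending_inventory
    else:
        average_inventory = (beginning_inventory + ending_inventory) / 2
    return average_inventory / cogs * days

# Hypothetical figures, in millions:
print(days_sales_of_inventory(cogs=400.0, ending_inventory=50.0))                            # "as of" version
print(days_sales_of_inventory(cogs=400.0, ending_inventory=50.0, beginning_inventory=40.0))  # "during" version
```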
Inventory turnover is calculated as the cost of goods sold divided by average inventory. It is linked to DSI via the following relationship:
$$DSI = \frac{1}{\text{inventory turnover}} \times 365 \text{ days}$$
Basically, DSI is an inverse of inventory turnover over a given period. Higher DSI means lower turnover and vice versa. In general, the higher the inventory turnover ratio, the better it is for the company, as it indicates a greater generation of sales. A smaller inventory and the same amount of sales will also result in high inventory turnover. In some cases, if the demand for a product outweighs the inventory on hand, a company will see a loss in sales despite the high turnover ratio, thus confirming the importance of contextualizing these figures by comparing them against those of industry competitors.

DSI is the first part of the three-part cash conversion cycle (CCC), which represents the overall process of turning raw materials into realizable cash from sales. The other two stages are days sales outstanding (DSO) and days payable outstanding (DPO). While the DSO ratio measures how long it takes a company to receive payment on accounts receivable, the DPO value measures how long it takes a company to pay off its accounts payable. Overall, the CCC value attempts to measure the average duration of time for which each net input dollar (cash) is tied up in the production and sales process before it gets converted into cash received through sales made to customers.

Managing inventory levels is vital for most businesses, and it is especially important for retail companies or those selling physical goods. While the inventory turnover ratio is one of the best indicators of a company's level of efficiency at turning over its inventory and generating sales from that inventory, the days sales of inventory ratio goes a step further by putting that figure into a daily context and providing a more accurate picture of the company's inventory management and overall efficiency. DSI and inventory turnover ratio can help investors to know whether a company can effectively manage its inventory when compared to competitors.

A 2014 paper in Management Science, "Does Inventory Productivity Predict Future Stock Returns? A Retailing Industry Perspective," suggests that stocks in companies with high inventory ratios tend to outperform industry averages. A stock that brings in a higher gross margin than predicted can give investors an edge over competitors due to the potential surprise factor. Conversely, a low inventory ratio may suggest overstocking, market or product deficiencies, or otherwise poorly managed inventory, signs that generally do not bode well for a company's overall productivity and performance.

Example of DSI
The leading retail corporation Walmart (WMT) had inventory worth $56.5 billion and cost of goods sold worth $429 billion for the fiscal year 2022. DSI is therefore:
DSI = (56.5 / 429) × 365 ≈ 48.1 days
While inventory value is available on the balance sheet of the company, the COGS value can be sourced from the annual financial statement. Care should be taken to include the sum total of all the categories of inventory which includes finished goods, work in progress, raw materials, and progress payments. Since Walmart is a retailer, it does not have any raw material, works in progress, and progress payments. Its entire inventory is comprised of finished goods.
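Using the Walmart figures quoted above (inventory of $56.5 billion and COGS of $429 billion for fiscal 2022), the calculation and its inverse relationship to inventory turnover can be checked in a few lines; the snippet itself is only an illustration.

```python
inventory = 56.5   # $ billions, fiscal year 2022 (figures quoted above)
cogs = 429.0       # $ billions

dsi = inventory / cogs * 365
turnover = cogs / inventory          # inventory turnover ratio
print(round(dsi, 1))                 # ~48.1 days
print(round(365 / turnover, 1))      # same value: DSI = 365 / turnover
```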
What Does a Low Days Sales of Inventory Indicate?
A low DSI suggests that a firm is able to efficiently convert its inventories into sales. This is considered to be beneficial to a company's margins and bottom line, and so a lower DSI is preferred to a higher one. A very low DSI, however, can indicate that a company does not have enough inventory stock to meet demand, which could be viewed as suboptimal.
How Do You Interpret Days Sales of Inventory?
DSI estimates how many days it takes on average to completely sell a company's current inventories.
What Is a Good Days Sales of Inventory Number?
In order to efficiently manage inventories and balance idle stock with being understocked, many experts agree that a good DSI is somewhere between 30 and 60 days. This, of course, will vary by industry, company size, and other factors.
Sources: Yasin Alan and George P. Gao. "Does Inventory Productivity Predict Future Stock Returns? A Retailing Industry Perspective." Management Science, Vol. 60, Issue 10, 2014, Pages 2416-2434. Wall Street Journal. "WMT Financials."
CommonCrawl
Davis, Richard A. Time series: theory and methods (1987) Brockwell, Peter J., Davis, Richard A. Introduction to time series and forecasting (1996) Wan, Phyllis, Mikosch, Thomas, Matsui, Muneya, Davis, Richard A.. Applications of distance correlation to time series. Bernoulli 2018. 2018:- Resnick, Sidney I., Davis, Richard A., Wang, Tiandong, Wan, Phyllis. Fitting the linear preferential attachment model. Electronic Journal of Statistics 2017. 11:3738-3780 Xie, Xiaolei, Mikosch, Thomas, Heiny, Johannes, Davis, Richard A.. Extreme value analysis for the sample autocovariance matrices of heavy-tailed multivariate time series. Extremes 2016. 19:517-547 Ghosh, Souvik, Davis, Richard A., Cho, Yong Bum. Asymptotic Properties of the Empirical Spatial Extremogram. Scandinavian Journal of Statistics 2016. 43:757-773 Liu, Heng, Davis, Richard A.. Theory and inference for a class of nonlinear models with application to time series of counts. Statistica Sinica 2016. 26:1673-1707 Steinkohl, Christina, Klüppelberg, Claudia, Davis, Richard A.. Statistical inference for max-stable processes in space and time. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2013. 75:791-819 Davis, Richard A., French, Joshua P.. The asymptotic distribution of the maxima of a Gaussian random field on a lattice. Extremes 2013. 16:1-26 Hueter, Irene, Davis, Richard A.. The convex hull of consecutive pairs of observations from some time series models. Extremes 2013. 16:487-505 Davis, Richard A., Yau, Chun Yip. Consistency of minimum description length model selection for piecewise stationary time series models. Electronic Journal of Statistics 2013. 7:381-411 Davis, Richard A., Yau, Chun Yip. Likelihood inference for discriminating between long-memory and change-point models. Journal of Time Series Analysis 2012. 33:649-664 Naveau, Philippe, Davis, Richard A., Cooley, Daniel. Approximating the conditional density given large observed values via a multivariate extremes framework, with application to environmental data. The Annals of Applied Statistics 2012. 6:1406-1429 Brockwell, Peter J., Davis, Richard A., Yang, Yu. Estimation for non-negative Lévy-driven CARMA processes. Journal of Business & Economic Statistics 2011. 29:250-259 Davis, Richard A., Liu, Jingchen. Comment on ``A statistical, analysis of multiple temperature proxies: Are reconstructions of surface temperatures over the last 1000 years reliable?'' (Pkg: p5-123). The Annals of Applied Statistics 2011. 5:52-55 Chen, Mei-Ching, Davis, Richard A., Song, Li. Inference for regression models with errors from a non-invertible MA(1) process. Journal of Forecasting 2011. 30:6-30 Huang, Wenying, Wang, Ke, Breidt, F. Jay, Davis, Richard A.. A class of stochastic volatility models for environmental applications. Journal of Time Series Analysis 2011. 32:364-377 Davis, Richard A., Yau, Chun Yip. Comments on pairwise likelihood in time series models. Statistica Sinica 2011. 21:255-277 Davis, Richard A., Song, Li. Unit roots in moving averages beyond first order. The Annals of Statistics 2011. 39:3062-3091 Wu, Rongning, Davis, Richard A.. Least absolute deviation estimation for general autoregressive moving average time-series models. Journal of Time Series Analysis 2010. 31:98-112 Cooley, Daniel, Davis, Richard A., Naveau, Philippe. The pairwise beta distribution: A flexible parametric multivariate model for extremes. Journal of Multivariate Analysis 2010. 101:2103-2117 Brillinger, David R., Davis, Richard A.. A conversation with Murray Rosenblatt. 
Statistical Science 2009. 24:116-140 Davis, Richard A., Calder, Matthew, Andrews, Beth. Maximum likelihood estimation for α-stable autoregressive processes. The Annals of Statistics 2009. 37:1946-1982 Davis, Richard A., Mikosch, Thomas. The extremogram: A correlogram for extreme events. Bernoulli 2009. 15:977-1009 Davis, Richard A., Wu, Rongning. A negative binomial model for time series of counts. Biometrika 2009. 96:735-749 Tadjuidje Kamgaing, Joseph, Ombao, Hernando, Davis, Richard A.. Autoregressive processes with data-driven regime switching. Journal of Time Series Analysis 2009. 30:505-533 Davis, Richard A., Mikosch, Thomas. Extreme value theory for space-time processes with heavy-tailed distributions. Stochastic Processes and their Applications 2008. 118:560-584 Brockwell, Peter J., Davis, Richard A., Yang , Yu. Continuous-time Gaussian autoregression. Statistica Sinica 2007. 17:63-80 Brockwell, Peter J., Davis, Richard A., Yang, Yu. Estimation for nonnegative Lévy-driven Ornstein-Uhlenbeck processes. Journal of Applied Probability 2007. 44:977-989 Andrews, Beth, Davis, Richard A., Breidt, F. Jay. Rank-based estimation for all-pass time series models. The Annals of Statistics 2007. 35:844-869 Breidt, F. Jay, Davis, Richard A., Hsu, Nan-Jung, Rosenblatt, Murray. Pile-up probabilities for the Laplace likelihood estimator of a non-invertible first order moving average. Institute of Mathematical Statistics Lecture Notes - Monograph Series 2006. 2006:1-19 Davis, Richard A., Lee, Thomas C. M., Rodriguez-Yam, Gabriel A.. Structural break estimation for nonstationary time series models. Journal of the American Statistical Association 2006. 101:223-239 Andrews, Beth, Davis, Richard A., Breidt, F. Jay. Maximum likelihood estimation for all-pass time series models. Journal of Multivariate Analysis 2006. 97:1638-1659 Davis, Richard A., Dunsmuir, William T. M., Streett, Sarah B.. Maximum likelihood estimation for an observation driven model for Poisson counts. Methodology and Computing in Applied Probability 2005. 7:149-159 Davis, Richard A., Rodriguez-Yam, Gabriel. Estimation for state-space models based on a likelihood approximation. Statistica Sinica 2005. 15:381-406 Davis, Richard A., Klüppelberg, Claudia. Statistics in finance. Oberwolfach Reports 2004. 1:111-190 Brockwell, Peter J., Davis, Richard A., Trindade, A. Alexandre. Asymptotic properties of some subset vector autoregressive process estimators. Journal of Multivariate Analysis 2004. 90:327-347 Davis, Richard A., Dunsmuir, William T. M., Streett, Sarah B.. Observationdriven models for Poisson counts. Biometrika 2003. 90:777-790 Davis, Richard A., Dunsmuir, William T. M., Streett, Sarah B.. Observation-driven models for Poisson counts. Biometrika 2003. 90:777-790 Basrak, Bojan, Davis, Richard A., Mikosch, Thomas. Regular variation of GARCH processes. Stochastic Processes and their Applications 2002. 99:95-115 Basrak, Bojan, Davis, Richard A., Mikosch, Thomas. A characterization of multivariate regular variation. The Annals of Applied Probability 2002. 12:908-920 Davis, Richard A.. Gaussian process. Breidt, F. Jay, Davis, Richard A., Trindade, A. Alexandre. Least absolute deviation estimation for all-pass time series models. The Annals of Statistics 2001. 29:919-946 Davis, Richard A., Mikosch, Thomas. Point process convergence of stochastic volatility processes with application to sample autocorrelation. Journal of Applied Probability 2001. 38A:93-104 Davis, Richard A., Mikosch, Thomas. 
Point process convergence of stochastic volatility processes with application to sample autocorrelation. Trindade, A. Alexandre, Davis, Richard A., Breidt, F. Jay. Least absolute deviation estimation for all-pass time series models. The Annals of Statistics 2001. 29:919-946 Davis, Richard A., Dunsmuir, William T. M., Wang, Ying. On autocorrelation in a Poisson regression model. Biometrika 2000. 87:491-505 Basrak, Bojan, Davis, Richard A., Mikosch, Thomas. The sample ACF of a simple bilinear process. Stochastic Processes and their Applications 1999. 83:1-14 Davis, Richard A., Dunsmuir, William T. M., Wang, Ying. Modeling time series of count data. Davis, Richard A., Mikosch, Thomas. The maximum of the periodogram of a Non-Gaussian sequence. The Annals of Probability 1999. 27:522-536 Davis, Richard A., Mikosch, Thomas. The sample autocorrelations of heavy-tailed processes with applications to ARCH. The Annals of Statistics 1998. 26:2049-2080 Breidt, F. Jay, Davis, Richard A.. Extremes of stochastic volatility models. The Annals of Applied Probability 1998. 8:664-675 Mikosch, Thomas, Davis, Richard A.. The sample autocorrelations of heavy-tailed processes with applications to ARCH. The Annals of Statistics 1998. 26:2049-2080 Davis, Richard A., Mikosch, Thomas. Gaussian likelihood-based inference for non-invertible MA$(1)$ processes with S$\alpha$S noise. Stochastic Processes and their Applications 1998. 77:99-122 Davis, Richard A., Dunsmuir, William T. M.. Least absolute deviation estimation for regression with ARMA errors. Journal of Theoretical Probability 1997. 10:481-497 Calder, Matthew, Davis, Richard A.. Introduction to Whittle (1953) ``The analysis of multiple stationary time series'' (Pkg: p141-169). Davis, Richard A., Dunsmuir, William T. M.. Maximum likelihood estimation for MA$(1)$ processes with a root on or near the unit circle. Econometric Theory 1996. 12:1-29 Resnick, Sidney I., Davis, Richard A.. Limit theory for bilinear processes with heavy-tailed noise. The Annals of Applied Probability 1996. 6:1191-1210 Davis, Richard A., Resnick, Sidney I.. Limit theory for bilinear processes with heavy-tailed noise. The Annals of Applied Probability 1996. 6:1191-1210 Chen, Changhua, Davis, Richard A., Brockwell, Peter J.. Order determination for multivariate autoregressive processes using resampling methods. Journal of Multivariate Analysis 1996. 57:175-190 Davis, Richard A.. Gauss-Newton and $M$-estimation for ARMA processes with infinite variance. Stochastic Processes and their Applications 1996. 63:75-95 Breidt, F. Jay, Davis, Richard A., Dunsmuir, William T. M.. Improved bootstrap prediction intervals for autoregressions. Journal of Time Series Analysis 1995. 16:177-200 Davis, Richard A., Hsing, Tailen. Point process and partial sum convergence for weakly dependent random variables with infinite variance. The Annals of Probability 1995. 23:879-917 Davis, Richard A., Huang, Dawei, Yao, Yi-Ching. Testing for a change in the parameter values and order of an autoregressive model. The Annals of Statistics 1995. 23:282-304 Davis, Richard A., Resnick, Sidney I.. Crossings of max-stable processes. Journal of Applied Probability 1994. 31:130-138 Davis, Richard A., Resnick, Sidney I.. Prediction of stationary max-stable processes. The Annals of Applied Probability 1993. 3:497-525 Chen, Changhua, Davis, Richard A., Brockwell, Peter J., Bai, Zhi Dong. Order determination for autoregressive processes using resampling methods. Statistica Sinica 1993. 
3:481-500 Davis, Richard A., Knight, Keith, Liu, Jian. $M$-estimation for autoregressions with infinite variance. Stochastic Processes and their Applications 1992. 40:145-180 Davis, Richard A., Rosenblatt, Murray. Parameter estimation for some time series models without contiguity. Statistics & Probability Letters 1991. 11:515-521 Davis, Richard A., Resnick, Sidney I.. Extremes of moving averages of random variables with finite endpoint. The Annals of Probability 1991. 19:312-328 Breidt, F. Jay, Davis, Richard A., Lii, Keh-Shin, Rosenblatt, Murray. Maximum likelihood estimation for noncausal autoregressive processes. Journal of Multivariate Analysis 1991. 36:175-198 Breidt, F. Jay, Davis, Richard A., Lii, Keh-Shin, Rosenblatt, Murray. Nonminimum phase non-Gaussian autoregressive processes. Proceedings of the National Academy of Sciences of the United States of America 1990. 87:179-181 Davis, Richard A., Marengo, James E.. Limit theory for the sample covariance and correlation matrix functions of a class of multivariate linear processes. Communications in Statistics: Stochastic Models 1990. 6:483-497 Davis, Richard A., Resnick, Sidney I.. Basic properties and prediction of MAX-ARMA processes. Advances in Applied Probability 1989. 21:781-803 Davis, Richard A., McCormick, William P.. Estimation for first-order autoregressive processes with positive or bounded innovations. Stochastic Processes and their Applications 1989. 31:237-250 Davis, Richard A., Mulrow, Edward, Resnick, Sidney I.. Almost sure limit sets of random samples in $R^d$. Advances in Applied Probability 1988. 20:573-599 Yao, Yi-Ching, Davis, Richard A.. The asymptotic behavior of the likelihood ratio statistic for testing a shift in mean in a sequence of independent normal variates. Sankhyā, Series A 1986. 48:339-353 Davis, Richard A.. On upper and lower extremes in stationary sequences. Davis, Richard A.. Limit laws for upper and lower extremes from stationary mixing sequences. Journal of Multivariate Analysis 1983. 13:273-286 Davis, Richard A.. Stable limits for partial sums of dependent random variables. The Annals of Probability 1983. 11:262-269 Davis, Richard A.. Limit laws for the maximum and minimum of stationary sequences. Probability Theory and Related Fields 1982. 61:31-42 Chernick, Michael R., Davis, Richard A.. Extremes in autoregressive processes with uniform marginal distributions. Statistics & Probability Letters 1982. 1:85-88 Davis, Richard A.. Maximum and minimum of one-dimensional diffusions. Stochastic Processes and their Applications 1982. 13:1-9 Davis, Richard A.. The rate of convergence in distribution of the maxima. Statistica Neerlandica 1982. 36:31-35 Davis, Richard A.. Maxima and minima of stationary sequences. The Annals of Probability 1979. 7:453-460
CommonCrawl
On the Relation between $S$-Estimators and $M$-Estimators of Multivariate Location and Covariance
Hendrik P. Lopuhaa, Ann. Statist. 17(4): 1662-1683 (December, 1989). DOI: 10.1214/aos/1176347386
We discuss the relation between $S$-estimators and $M$-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, $S$-estimators are shown to satisfy first-order conditions of $M$-estimators. We show that the influence function $\mathrm{IF}(\mathbf{x}; \mathbf{S}, F)$ of $S$-functionals exists and is the same as that of the corresponding $M$-functionals. Also, we show that $S$-estimators have a limiting normal distribution which is similar to that of $M$-estimators. Finally, we compare asymptotic variances and breakdown point of both types of estimators.
First available in Project Euclid: 12 April 2007
Primary: 62F35; Secondary: 62H12
Keywords: $M$-estimators, $S$-estimators, asymptotic normality, efficiency, influence function
Rights: Copyright © 1989 Institute of Mathematical Statistics
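For intuition about the objects being compared, here is a small, self-contained sketch of a Huber-type M-estimator of multivariate location and scatter computed by the usual fixed-point (iteratively reweighted) iteration. It is a generic illustration, not Lopuhaa's S-estimator construction; the weight function and tuning constant are arbitrary choices made for the example.

```python
import numpy as np

def m_estimate_location_scatter(X, c=3.0, iters=100, tol=1e-8):
    """Huber-type M-estimator of multivariate location/scatter via a fixed-point
    iteration: observations are down-weighted according to their Mahalanobis
    distance from the current fit.  Illustrative sketch only."""
    n, p = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    for _ in range(iters):
        R = X - mu
        d2 = np.einsum('ij,jk,ik->i', R, np.linalg.inv(S), R)   # squared Mahalanobis distances
        d = np.sqrt(np.maximum(d2, 1e-12))
        w = np.minimum(1.0, c / d)                               # Huber-style weights
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()          # weighted location update
        R_new = X - mu_new
        S_new = (w[:, None] * R_new).T @ (w[:, None] * R_new) / n  # weighted scatter update
        converged = np.linalg.norm(mu_new - mu) < tol
        mu, S = mu_new, S_new
        if converged:
            break
    return mu, S

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X[:10] += 10.0                               # a few gross outliers
mu_hat, S_hat = m_estimate_location_scatter(X)
print(mu_hat)                                # much closer to the true centre (0) than the raw column means
```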
CommonCrawl
nLab > nLab General Discussions: Some thoughts on the Poincaré conjecture for knots CommentTimeMay 1st 2018 (edited Jun 8th 2018) Format: MarkdownItexI would like to make a new attempt to describe my work on an approach to the Poincaré conjecture for knots. The arguments have essentially been stable for a couple of years, though I have a tidied up a few details here and there. See the nLab page [[Poincaré conjecture - diagrammatic formulation]] for the reformulation of the Poincaré conjecture into a statement in diagrammatic theory. It is this reformulated statement that I will discuss here; everything at the nLab page involved in the reformulation can be regarded as a black box. Begin, then, with a knot diagram $K$. Fix a point $p$ on it. Equip it with an orientation (any will do), and use this orientation to define its fundamental group and longitude. Label the arcs of $K$. By a _word in the arcs of $K$_, I shall mean a monomial $a_{1}^{\pm 1} \cdots a_{n}^{\pm 1}$, where $a_{1}$, $\ldots$, $a_{n}$ are (labels of) arcs of $K$. I will say that a word $w$ in the arcs of $K$ is _realisable_ if we can find a welded knot $K_{w}$ which is equivalent to $K$ under the welded framed Reidemeister moves (ordinary framed Reidemeister moves plus the virtual Reidemeister moves and one of the forbidden moves, namely that which allows to slide a classical arc over a virtual crossing), and which has the following property. First, if there are any occurrences of the longitude $l$ of $K$ as sub-words of $w$, then remove them all except one (any of the ways to do this may be used). For ease of notation, I will suppose that we have already done this for $w$. Secondly, we allow that we apply a permutation to $w$. Again, I will assume that we have already done this. Finally, we allow that if, for some arc $a$ of $K$, both $a$ and $a^{-1}$ occur in $w$ (not necessarily consecutively), then we can remove them from $w$. Once again, I will assume that we have made any such deletions that we wish to make. Then we ask that the following is the case. 1) There is a point $q$ on $K_{w}$ such that as we walk around $K_{w}$ exactly once, in the direction defined by the orientation, beginning at $q$ and returning to $q$, then we pass successively, in order (though this doesn't really matter, since we allow a permutation to be applied to $w$ as a preliminary step), and ignoring when we pass through virtual crossings and over a crossing, under the arcs $a_{1}$, \ldots, $a_{n}$, and under no other arcs. 2) The power of $a_{i}$ is the sign of the crossing at which we pass under $a_{i}$. Given a crossing $C$ of $K$ as follows, irrespective of the orientation of the horizontal arcs, I shall denote by $w_{C}$ the word $c^{-1}b^{-1}ab$ in the arcs of $K$. / \ | | ----- | ---- c | a | b Now, $\pi_{1}(K) / \langle l \rangle$ is isomorphic to the quotient of the free group $F(K)$ on the arcs of $K$ by the normal subgroup $N$ consisting exactly of words in the arcs of $K$ of the form $a^{-1}va$ and their inverses, where $v$ is any concatenation of copies of $l$ and of words in the arcs of $K$ of the form $w_{C}$ for various crossings $C$ of $K$, and where $a$ is any arc of $K$. My key claim is that every word of $N$ of the form $g^{-1}vg$ which contains a copy of $l$ is realisable, and moreover, if a word $w$ is equivalent to such a word of $N$ under the equivalence relation of being able to add or delete pairs $aa^{-1}$ and $a^{-1}a$, where $a$ is an arc of $K$, then $w$ is also realisable. Let us prove this. Take any $v$ as above. 
Since we delete all copies of $l$ from $v$ except one when defining realisability, and since I am assuming that there is at least one copy of $l$, we may assume that $v$ is of the form $w_{C_{1}} \cdots w_{C_{i}} \cdot l \cdot w_{C_{i+1}} \cdots w_{C_{n}}$ for some $n$. We begin at $p$. We will walk around $K$ in the same direction as our original orientation which we are using to define $\pi_{1}(K)$ and $l$. Suppose that $w_{C_{1}}$ looks as in the figure above. Take a small piece of the arc on which $p$ lies, just after $p$. Drag it, using only virtual R2 moves, so that it is near the above figure. We suppose that the arc on which $p$ lies has label $d$. / \ | | | | <----------- | ---------- c | a | ----- | | | | d | | | | \ / | b Then slide it (using two R2 moves and an R3 move) under the above crossing, so that we have the following local picture. Noam Zeilberger described this move aptly as a 'lasso move' in an earlier discussion. / \ | ------- | --- | | | | | | <----------- | ---------- | c | | a d | | | <-- | --- d | | b Note that we do not label the new arcs, i.e. we leave the labellings as they were except that there is a 'break' in the arc labelled $d$. An entirely analogous construction can be given in the case that the horizontal arcs have the opposite orientation. We now proceed in exactly the same way for $w_{C_{2}}$, using the arc labelled $d$ with an arrow on it in the above figure. And so on until we have done the same for $w_{C_{i}}$. At this point, we now encounter $l$ in our word. And we now walk all the way around the virtual knot which we have obtained so far, from where we were after carrying out the above procedure for $w_{C_{i}}$, stopping when we reach arc $d$, a little before we reach $p$. After this, we carry out the procedure above for $w_{C_{i+1}}$, ..., $w_{C_{n}}$, beginning with a little piece of arc between where we stopped and $p$. After we are finished with $w_{C_{n}}$, we simply walk to $p$. This completes the realisation of $v$. Now, take the welded knot $K_{v}$ that we have constructed to realise $v$. To realise $a^{-1}va$, where $a$ is any arc of $K$, we proceed as follows. Take a small piece of the arc $a$. Using virtual R2 moves, drag it across $K_{v}$ so that it is near the point $p$. Then apply an R2 move so that we have the following local picture. --------- | | --- | ---•--- | ---> d | p | | \ / a Walking around our new virtual knot from $p$ in the same direction as before, we obtain a realisation of $a^{-1}va$. it remains to show that if we add or delete a pair $bb^{-1}$ or $b^{-1}b$, where $b$ is an arc of $K$, from such a $a^{-1}va$, then the resulting word is also realisable. To add a pair $bb^{-1}$ between, say, $x$ and $y$ in $a^{-1}va$, then we apply the same idea that we have just seen: take a small piece of the arc labelled $b$ on $K_{a^{-1}va}$, drag it using virtual R2 moves so that it is near $x$ and $y$, and then apply an R2 move so that we have the following picture. / \ ----- / \ | | | | --- | --- | --- | --- | --- | | | | | \ / | | x b y The same argument works for adding $b^{-1}b$, just using the virtual R2 moves in a different way so that we can drag the arc $b$ over from the opposite side; and of course we could have $x^{-1}$ or $y^{-1}$ or both, and would be able to apply the same argument. Suppose now that we wish to delete a pair $bb^{-1}$ from $K_{a^{-1}va}$. This is a crucial part of the argument, and we need to be careful. The idea is simple. 
If we have a pair $bb^{-1}$, then, ignoring virtual crossings and crossings which we travel over, we must successively walk under $a$ and then under $a$ again in the opposite direction, without walking under any other arcs. This means that we have a local picture as follows, except that there may be other arcs which pass under those shown, or which cross those shown vertically. ----- | | --- | --- | --- | | | \ / b Now, in welded knot theory, even if there are arcs which cross under those shown or cross them vertically, we can slide the arc $a$ over the other depicted arc, so that we have the following local picture. --------------- ----- | | | | | | | \ / b With only virtual moves available, i.e. without the forbidden move, we would not necessarily be able to carry out this slide, because we would not be able to handle the case that there were some virtual crossings involved. This is the reason that we work in welded knot theory and not virtual knot theory. A second point of note which we must take care to address is that this slide may permute the arcs making up $a^{-1}va$, and may lead to further deletions of $b$ and $b^{-1}$ (always in pairs, though this $b$ and $b^{-1}$ might not occur consecutively) from $a^{-1}va$. But both of these operations are permitted in the definition of realisability (they are two of the three preliminary operations that may be applied). The same argument works for deleting a pair $b^{-1}b$. This completes the demonstration of the claim. Suppose that $\pi_{1}(K) / \langle l \rangle$ is trivial, so that $N$ is $F(K)$. Let $b$ be any arc of $K$. Since $N$ is all of $F(K)$, either $b$ or $b^{-1}$ is equal in $F(K)$ to a word of the form $a^{-1}va$ for some $a$ and $v$, where $v$ is any concatenation of copies of $l$ and of words in the arcs of $K$ of the form $w_{C}$ for some crossings $C$ of $K$. I now claim that for at least one arc $b$ of $K$, the word $v$ in the word $a^{-1}va$ which is equal to either $b$ or $b^{-1}$ contains at least one copy of $l$. Indeed, if this is not the case, then every arc of $K$ or its inverse is equal in $N$ to a $a^{-1}va$ where $v$ is a product of $w_{C}$'s. That is to say, every arc of $K$ or its inverse is then trivial in $\pi_{1}(K)$. But this implies that $\pi_{1}(K)$ is trivial, and no knot has a trivial fundamental group. Putting the two claims together, we obtain that, for at least one arc $b$ of $K$, the word consisting just of $b$ or of $b^{-1}$ is realisable. This could happen in two ways. One is that $b$ or $b^{-1}$ is the longitude of $K$. In this case, $K$ must be a $\pm 1$-framed unknot. The other is that there is a virtual knot $K_{b}$ which is equivalent to $K$ as a framed welded knot, and on which there is a point $q$ from which, when we walk around $K_{b}$ in a particular direction and return back to $q$, we pass only under a single arc, namely $b$ (note that no non-trivial permutations and deletions are possible in this case). But every welded knot with only one classical crossing is equivalent (as a welded knot) to a classical $\pm 1$-framed unknot. Hence $K$ itself is equivalent as a welded knot to a $\pm 1$-framed unknot. But a pair of classical knots are equivalent as welded knots if and only if they are equivalent as classical knots. We conclude that $K$ is equivalent to a $\pm 1$-framed unknot as a classical knot. If this is correct, this is the Poincaré conjecture for knots (also known as the Property P conjecture). 
In fact, since we do not use any Kirby moves, the result is a bit stronger: I believe that it also establishes the Gordon-Luecke theorem (if a Dehn surgery on a knot gives $S^{3}$, then the knot must be the unknot). I believe that the argument adapts to arbitrary links, with one caveat: we do need Kirby moves in that case, and we then need a version of the fact that classical links are welded-equivalent if and only if they are classically equivalent which includes certain kinds of Kirby moves. This is conjectured in the literature, but has not previously been known. I had tried off and on for a year or more to prove it without succeeding (I am trying to avoid the use of Waldhausen's work on 3-manifolds), but I now do think to have a proof, using not exactly the same moves as in welded knot theory (for these moves I still do not have a proof), but a modified collection for which the arguments still go through. But let's just ignore the links case, for now, I suggest; I am very happy to provide details to anybody interested. I would be delighted with any feedback. As I say, the argument has been in a stable form for getting on for two years, and I'd really like to find out whether or not this argument is essentially correct, even if it turns out to be nonsense or crucially flawed. Since I seem to have immense difficulty in writing it down in a paper, this forum may be the best place for me to communicate the argument. I am very happy to elaborate on anything or to give examples.
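As an aside for readers following the bookkeeping in the post above: the short sketch below (not part of the original post) represents words in the arcs as lists of (arc, exponent) pairs, builds crossing words of the form $w_{C} = c^{-1}b^{-1}ab$, and freely reduces a conjugate $a^{-1}va$. The crossing labels are a hypothetical trefoil-style assignment, purely for illustration.

```python
# Illustrative sketch: words in the arcs of a diagram are lists of
# (arc, exponent) pairs; w_C is the crossing word c^{-1} b^{-1} a b
# described in the post, and conjugates a^{-1} v a are freely reduced.

def w_C(a, b, c):
    """Crossing word w_C = c^{-1} b^{-1} a b, with a the over-arc and b, c
    the two under-arcs, as in the crossing word described in the post."""
    return [(c, -1), (b, -1), (a, +1), (b, +1)]

def conjugate(word, g):
    """The word g^{-1} (word) g."""
    return [(g, -1)] + list(word) + [(g, +1)]

def free_reduce(word):
    """Cancel adjacent pairs x x^{-1} and x^{-1} x (free reduction)."""
    out = []
    for letter in word:
        if out and out[-1][0] == letter[0] and out[-1][1] == -letter[1]:
            out.pop()
        else:
            out.append(letter)
    return out

# Hypothetical crossings (over-arc, under-arc, under-arc) for arcs x, y, z:
crossings = [("x", "y", "z"), ("y", "z", "x"), ("z", "x", "y")]
v = [pair for cr in crossings for pair in w_C(*cr)]
word = free_reduce(conjugate(v, "x"))
print("".join(f"{a}^{e:+d}" for a, e in word))
```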
1) There is a point qq on K wK_{w} such that as we walk around K wK_{w} exactly once, in the direction defined by the orientation, beginning at qq and returning to qq, then we pass successively, in order (though this doesn't really matter, since we allow a permutation to be applied to ww as a preliminary step), and ignoring when we pass through virtual crossings and over a crossing, under the arcs a 1a_{1}, \ldots, a na_{n}, and under no other arcs. 2) The power of a ia_{i} is the sign of the crossing at which we pass under a ia_{i}. Given a crossing CC of KK as follows, irrespective of the orientation of the horizontal arcs, I shall denote by w Cw_{C} the word c −1b −1abc^{-1}b^{-1}ab in the arcs of KK. ----- | ---- c | a Now, π 1(K)/⟨l⟩\pi_{1}(K) / \langle l \rangle is isomorphic to the quotient of the free group F(K)F(K) on the arcs of KK by the normal subgroup NN consisting exactly of words in the arcs of KK of the form a −1vaa^{-1}va and their inverses, where vv is any concatenation of copies of ll and of words in the arcs of KK of the form w Cw_{C} for various crossings CC of KK, and where aa is any arc of KK. My key claim is that every word of NN of the form g −1vgg^{-1}vg which contains a copy of ll is realisable, and moreover, if a word ww is equivalent to such a word of NN under the equivalence relation of being able to add or delete pairs aa −1aa^{-1} and a −1aa^{-1}a, where aa is an arc of KK, then ww is also realisable. Let us prove this. Take any vv as above. Since we delete all copies of ll from vv except one when defining realisability, and since I am assuming that there is at least one copy of ll, we may assume that vv is of the form w C 1⋯w C i⋅l⋅w C i+1⋯w C nw_{C_{1}} \cdots w_{C_{i}} \cdot l \cdot w_{C_{i+1}} \cdots w_{C_{n}} for some nn. We begin at pp. We will walk around KK in the same direction as our original orientation which we are using to define π 1(K)\pi_{1}(K) and ll. Suppose that w C 1w_{C_{1}} looks as in the figure above. Take a small piece of the arc on which pp lies, just after pp. Drag it, using only virtual R2 moves, so that it is near the above figure. We suppose that the arc on which pp lies has label dd. <----------- | ---------- c | a ----- | | | | d | | | | \ / | Then slide it (using two R2 moves and an R3 move) under the above crossing, so that we have the following local picture. Noam Zeilberger described this move aptly as a 'lasso move' in an earlier discussion. ------- | --- | | | | c | | a d | | | <-- | --- d | Note that we do not label the new arcs, i.e. we leave the labellings as they were except that there is a 'break' in the arc labelled dd. An entirely analogous construction can be given in the case that the horizontal arcs have the opposite orientation. We now proceed in exactly the same way for w C 2w_{C_{2}}, using the arc labelled dd with an arrow on it in the above figure. And so on until we have done the same for w C iw_{C_{i}}. At this point, we now encounter ll in our word. And we now walk all the way around the virtual knot which we have obtained so far, from where we were after carrying out the above procedure for w C iw_{C_{i}}, stopping when we reach arc dd, a little before we reach pp. After this, we carry out the procedure above for w C i+1w_{C_{i+1}}, …, w C nw_{C_{n}}, beginning with a little piece of arc between where we stopped and pp. After we are finished with w C nw_{C_{n}}, we simply walk to pp. This completes the realisation of vv. Now, take the welded knot K vK_{v} that we have constructed to realise vv. 
To realise $a^{-1}va$, where $a$ is any arc of $K$, we proceed as follows. Take a small piece of the arc $a$. Using virtual R2 moves, drag it across $K_{v}$ so that it is near the point $p$. Then apply an R2 move so that we have the following local picture.

[Figure: a small piece of the arc $a$, brought across $K_{v}$ and placed over the arc $d$ near the point $p$ by an R2 move.]

Walking around our new virtual knot from $p$ in the same direction as before, we obtain a realisation of $a^{-1}va$.

It remains to show that if we add or delete a pair $bb^{-1}$ or $b^{-1}b$, where $b$ is an arc of $K$, from such an $a^{-1}va$, then the resulting word is also realisable. To add a pair $bb^{-1}$ between, say, $x$ and $y$ in $a^{-1}va$, we apply the same idea that we have just seen: take a small piece of the arc labelled $b$ on $K_{a^{-1}va}$, drag it using virtual R2 moves so that it is near $x$ and $y$, and then apply an R2 move so that we have the following picture.

[Figure: a small piece of the arc $b$ placed, by an R2 move, over the portion of the knot walked between passing under $x$ and passing under $y$.]

The same argument works for adding $b^{-1}b$, just using the virtual R2 moves in a different way so that we can drag the arc $b$ over from the opposite side; and of course we could have $x^{-1}$ or $y^{-1}$ or both, and would be able to apply the same argument.

Suppose now that we wish to delete a pair $bb^{-1}$ from $K_{a^{-1}va}$. This is a crucial part of the argument, and we need to be careful. The idea is simple. If we have a pair $bb^{-1}$, then, ignoring virtual crossings and crossings which we travel over, we must successively walk under $b$ and then under $b$ again in the opposite direction, without walking under any other arcs. This means that we have a local picture as follows, except that there may be other arcs which pass under those shown, or which cross those shown virtually.

[Figure: the arc we are walking along passes under the arc $b$ twice in succession, with opposite signs.]

Now, in welded knot theory, even if there are arcs which cross under those shown or cross them virtually, we can slide the arc $b$ over the other depicted arc, so that we have the following local picture. With only virtual moves available, i.e. without the forbidden move, we would not necessarily be able to carry out this slide, because we would not be able to handle the case that there were some virtual crossings involved. This is the reason that we work in welded knot theory and not virtual knot theory.

A second point of note which we must take care to address is that this slide may permute the arcs making up $a^{-1}va$, and may lead to further deletions of $b$ and $b^{-1}$ (always in pairs, though this $b$ and $b^{-1}$ might not occur consecutively) from $a^{-1}va$. But both of these operations are permitted in the definition of realisability (they are two of the three preliminary operations that may be applied). The same argument works for deleting a pair $b^{-1}b$. This completes the demonstration of the claim.

Suppose that $\pi_{1}(K)/\langle l \rangle$ is trivial, so that $N$ is $F(K)$. Let $b$ be any arc of $K$. Since $N$ is all of $F(K)$, either $b$ or $b^{-1}$ is equal in $F(K)$ to a word of the form $a^{-1}va$ for some $a$ and $v$, where $v$ is any concatenation of copies of $l$ and of words in the arcs of $K$ of the form $w_{C}$ for some crossings $C$ of $K$. I now claim that for at least one arc $b$ of $K$, the word $v$ in the word $a^{-1}va$ which is equal to either $b$ or $b^{-1}$ contains at least one copy of $l$. Indeed, if this is not the case, then every arc of $K$ or its inverse is equal in $N$ to an $a^{-1}va$ where $v$ is a product of $w_{C}$'s.
That is to say, every arc of $K$ or its inverse is then trivial in $\pi_{1}(K)$. But this implies that $\pi_{1}(K)$ is trivial, and no knot has a trivial fundamental group.

Putting the two claims together, we obtain that, for at least one arc $b$ of $K$, the word consisting just of $b$ or of $b^{-1}$ is realisable. This could happen in two ways. One is that $b$ or $b^{-1}$ is the longitude of $K$. In this case, $K$ must be a $\pm 1$-framed unknot. The other is that there is a virtual knot $K_{b}$ which is equivalent to $K$ as a framed welded knot, and on which there is a point $q$ from which, when we walk around $K_{b}$ in a particular direction and return back to $q$, we pass only under a single arc, namely $b$ (note that no non-trivial permutations and deletions are possible in this case). But every welded knot with only one classical crossing is equivalent (as a welded knot) to a classical $\pm 1$-framed unknot. Hence $K$ itself is equivalent as a welded knot to a $\pm 1$-framed unknot. But a pair of classical knots are equivalent as welded knots if and only if they are equivalent as classical knots. We conclude that $K$ is equivalent to a $\pm 1$-framed unknot as a classical knot. If this is correct, this is the Poincaré conjecture for knots (also known as the Property P conjecture).

In fact, since we do not use any Kirby moves, the result is a bit stronger: I believe that it also establishes the Gordon-Luecke theorem (if a Dehn surgery on a knot gives $S^{3}$, then the knot must be the unknot). I believe that the argument adapts to arbitrary links, with one caveat: we do need Kirby moves in that case, and we then need a version of the fact that classical links are welded-equivalent if and only if they are classically equivalent which includes certain kinds of Kirby moves. This is conjectured in the literature, but has not previously been known. I had tried off and on for a year or more to prove it without succeeding (I am trying to avoid the use of Waldhausen's work on 3-manifolds), but I now do think I have a proof, using not exactly the same moves as in welded knot theory (for these moves I still do not have a proof), but a modified collection for which the arguments still go through. But let's just ignore the links case for now, I suggest; I am very happy to provide details to anybody interested.

I would be delighted with any feedback. As I say, the argument has been in a stable form for getting on for two years, and I'd really like to find out whether or not this argument is essentially correct, even if it turns out to be nonsense or crucially flawed. Since I seem to have immense difficulty in writing it down in a paper, this forum may be the best place for me to communicate the argument. I am very happy to elaborate on anything or to give examples.

I am wondering about the possibility of submitting this as a 'Publications of the nLab' article. Does anybody have any thoughts on that? I.e. is the 'publications of the nLab' project still in principle active?

It hardly got going, with only two publications [there](https://ncatlab.org/publications/published/). I doubt the 'tentatively confirmed' names on the editorial board still consider themselves in that light.
Indeed! I really like many ideas of the project, though, especially the hyperlinking and the transparency of the review process (one could even imagine taking it one step further, where there is an open discussion at the nForum). I suppose the editorial board would probably not be appropriate for the stuff in #1 anyhow.

I am thinking of trying to build up a write up of the above on the nLab, which I then produce a pdf from. Would this be better in a personal web, or would it be OK to use the main nLab, as long as the pages are indicated to be unpublished research? I'd incline to the second option myself, but am happy to follow whatever the consensus is.

Personally, I think it makes sense to keep dedicated pages on a personal web (perhaps read-only for others), so that you could preserve your personal vision as well as maintain a central repository for this project. There is always a possibility of copying over to main those parts of the project that have matured under the eyes of yourself and perhaps more especially others who are following what you are doing, and that would be sensibly assimilated into the main lab. (For example, any improvements in the exposition of known results would of course be very welcome.) You could also link, within articles on main, back to your personal pages. I seem to recall a few years ago that Emily Riehl had put some critical questions to you over the earlier version of the project, but don't know what became of that. I believe you were in contact with Lou Kauffman as well, but I don't think I ever saw anything of his own reactions. Was or is anyone else following your more recent developments? As you know, Urs maintains a lot of his projects on his personal web, so he might be able to describe in more detail how the dialectic between his web and the main lab works out in practice. I wish you luck with this project. Far be it from me to give advice, but obviously the experts will want to know what are the main new ideas which sets this apart from other attempts which bear a family resemblance to this (I seem to recall you anticipated this to some extent in the earlier version, but don't recall what you said specifically, beyond a plea to put preconceptions aside).
Thanks very much for the reply, Todd! Before I received your message, a few hours ago, I made a start at [[towards a diagrammatic proof of the Poincaré conjecture for knots]] at the main nLab. But I can easily move this to a personal web if that is preferred (and if people do not mind me having a personal web, I do not have one at the moment), it would be absolutely fine by me. I am trying to use the work as a point of departure for improving the existing nLab pages on knot theory (have edited a couple today), and for adding some new ones.

> I seem to recall a few years ago that Emily Riehl had put some critical questions to you over the earlier version of the project, but don't know what became of that. I believe you were in contact with Lou Kauffman as well, but I don't think I ever saw anything of his own reactions. Was or is anyone else following your more recent developments?

Nobody unfortunately has offered any reactions on recent versions. Lou Kauffman is kindly still interested, but I have not received any mathematical feedback. The project has moved on quite a bit from the earlier time you are referring to; the fundamental ideas have been the same all the way through, but a few technical points have been tidied up, and a few new ideas occurred along the way which helped to make it easier to write up. The current version has been stable for quite a long time, though, over 1.5 years in the knot case. I have tried to ask people to look at it, but with no success. I think the only thing I can do is submit it to a journal, which is what I plan to do (write it up on the nLab and export to a pdf).

> As you know, Urs maintains a lot of his projects on his personal web, so he might be able to describe in more detail how the dialectic between his web and the main lab works out in practice.

Absolutely, I'd be very happy to hear from Urs about what he suggests.

> I wish you luck with this project.

Thank you very much, I appreciate it!
> Far be it from me to give advice, but obviously the experts will want to know what are the main new ideas which sets this apart from other attempts which bear a family resemblance to this (I seem to recall you anticipated this to some extent in the earlier version, but don't recall what you said specifically, beyond a plea to put preconceptions aside).

Yes, definitely. I feel able to answer this. Indeed, the approach is completely different to anything I've seen, to the extent that one notable expert (whose name has not been mentioned in any of my public posts on this work) was not able to be convinced of the validity of the _logic_ of the approach, which I don't think is in any doubt. The use of virtual knot theory is for instance a novel technical aspect, as far as I know. But I'll treat this in more detail in the introduction, when I get to it.

I second Todd, this would be good to do on a private web. Also, having a private web would be useful for you, as the developer/maintainer of the main lab, to try out some things on a decent collection of mathematical material without nudging the nLab too much.
I can't speak for others on the Steering Committee, but as a member I would definitely support the SC granting you a personal web (which sounds funny, as I guess you might be the one setting it up!).

Yes, I agree with what was said above by Todd and David R.

OK, thanks all! I will move to a personal web later.
CommonCrawl
Non-dominated sorting methods for multi-objective optimization: Review and numerical comparison
Qiang Long 1, Xue Wu 1, and Changzhi Wu 2
1 School of Science, Southwest University of Science and Technology, Mianyang 621010, China
2 School of Management, Guangzhou University, Guangzhou 510006, China
Corresponding author: Changzhi Wu, [email protected]
Journal of Industrial & Management Optimization, March 2021, 17(2): 1001-1023. doi: 10.3934/jimo.2020009. Received October 2018, revised September 2019, published January 2020.

In multi-objective evolutionary algorithms (MOEAs), non-dominated sorting is one of the critical steps used to locate efficient solutions. A large percentage of the computational cost of MOEAs goes into non-dominated sorting, since it involves numerous comparisons. By now, there are more than ten different non-dominated sorting algorithms, but their numerical performance relative to one another is not yet clear. It is necessary to investigate the advantages and disadvantages of these algorithms and consequently give suggestions to users and algorithm designers. Therefore, a comprehensive numerical study of non-dominated sorting algorithms is presented in this paper. Firstly, we design a population generator. This generator can generate populations with specific features, such as population size, number of Pareto fronts, and number of points in each Pareto front. Then the non-dominated sorting algorithms are tested using populations generated with certain structures, and the results are compared with respect to number of comparisons and time consumption. Furthermore, in order to compare the performance of the sorting algorithms within MOEAs, we embed them into a specific MOEA, the dynamic sorting genetic algorithm (DSGA), and use these variations of DSGA to solve some multi-objective benchmarks. Results show that dominance degree sorting outperforms the other methods, fast non-dominated sorting performs the worst, and the other sorting algorithms perform roughly equally.

Keywords: Multi-objective optimization, non-dominated sorting, Pareto front, multi-objective evolutionary algorithm. Mathematics Subject Classification: Primary: 90C26, 90C59; Secondary: 30E1.

Citation: Qiang Long, Xue Wu, Changzhi Wu. Non-dominated sorting methods for multi-objective optimization: Review and numerical comparison. Journal of Industrial & Management Optimization, 2021, 17 (2): 1001-1023. doi: 10.3934/jimo.2020009
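The abstract refers to fast non-dominated sorting as the baseline against which the newer methods are compared. For readers unfamiliar with it, here is a minimal sketch of that baseline procedure, the O(MN²) front-peeling step popularized by NSGA-II; the function names and the plain-Python style are my own choices and are not taken from the paper.

```python
def dominates(p, q):
    """True if objective vector p Pareto-dominates q (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_non_dominated_sort(points):
    """Return a list of Pareto fronts (lists of indices), front 0 first.

    A minimal re-implementation of the O(M N^2) procedure from NSGA-II,
    included only to illustrate what 'non-dominated sorting' computes.
    """
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # indices that point i dominates
    domination_count = [0] * n              # how many points dominate point i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    next_front.append(j)
        k += 1
        fronts.append(next_front)
    return fronts[:-1]  # drop the trailing empty front

# Example: three 2-objective points; the first two are mutually non-dominated.
print(fast_non_dominated_sort([(1.0, 4.0), (2.0, 2.0), (3.0, 5.0)]))
# -> [[0, 1], [2]]
```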
Figure captions: Figure 1, Cases of dominance comparisons; Figure 2, Generate a point belonging to $\mathcal{F}_2$; Figure 3, An example of the fixed-features population generator; Figures 4–13, Time consumption and number of comparisons for series (i)–(v); Figure 14, Average time consumption for algorithms; Figure 15, Average number of comparisons for algorithms; Figure 16, Average comparison efficiency for algorithms; Figure 17, Objective function value space; Figures 18–20, Numerical performance on SCH, FON and KUR.

Table 1. Five series of populations
- Series (i): fixed $m$, various $k$, $\sum N = 200$ — $(m,k)=(3,1)$ with $N=(200)$; $(3,2)$ with $N=(100,100)$; $(3,3)$ with $N=(70,70,60)$; $(3,4)$ with $N=(50,50,50,50)$; $(3,5)$ with $N=(40,40,40,40,40)$; $(3,6)$ with $N=(33,33,33,33,33,35)$.
- Series (ii): fixed $m$, fixed $k$, various $N$ — $(m,k)=(3,5)$ with $N=(10,10,10,10,10)$, $N=(20,20,20,20,20)$ and $N=(30,30,30,30,30)$.
- Series (iii): various $m$, fixed $N$ — $(m,k)=(2,5)$ and $(4,5)$, each with $N=(20,20,20,20,20)$.
- Series (iv): fixed $m$, fixed $k$, various $N$ — $(m,k)=(3,1)$ with $N=50$, $100$, $150$ and $200$.
- Series (v): fixed $m$, various $k$, various $N$ — $m=3$ with $k=10$, $20$, $30$ and $40$, and $N_i=1$ for $i=1,\dots,k$.
- Series (vi): fixed $m$, fixed $k$, various $N$ — $(m,k)=(3,5)$, with each $N_i$ a random integer between 1 and 50.
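Table 1 lists only the structural parameters of the generated populations; the paper's own generator is not reproduced here. The sketch below is an illustrative generator of my own devising (not the authors' construction) that produces a population with a prescribed number of fronts and points per front, which is enough to exercise any non-dominated sorting routine: each front has a constant coordinate sum, and every point of front j is built from a parent in front j−1 that dominates it.

```python
import numpy as np

def generate_population(m, sizes, gap=1.0, seed=0):
    """Generate points in m-dimensional objective space with a known front structure.

    sizes[j] is the number of points that should end up in Pareto front j
    (minimization).  Points within a front share the same coordinate sum, so
    they cannot dominate one another; each point of front j is its parent in
    front j-1 shifted by gap/m in every coordinate (plus a small zero-sum
    perturbation), so the parent dominates it.
    """
    rng = np.random.default_rng(seed)
    fronts = []
    # Front 0: positive points with constant sum m (pairwise non-dominating).
    fronts.append(m * rng.dirichlet(np.ones(m), size=sizes[0]))
    for j in range(1, len(sizes)):
        parents = fronts[j - 1][rng.integers(0, len(fronts[j - 1]), size=sizes[j])]
        u = rng.uniform(-1.0, 1.0, size=(sizes[j], m))
        w = u - u.mean(axis=1, keepdims=True)   # zero-sum perturbation
        w *= 0.4 * gap / m                      # keep |w_i| < gap/m
        fronts.append(parents + gap / m + w)    # parent dominates child
    labels = np.repeat(np.arange(len(sizes)), sizes)
    return np.vstack(fronts), labels

# Example: 3 objectives, three fronts containing 5, 3 and 2 points.
pts, lab = generate_population(3, [5, 3, 2])
print(pts.shape, lab)   # (10, 3) [0 0 0 0 0 1 1 1 2 2]
```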
Table 2. Multi-objective test problems
- SCH: $n = 1$, variable bounds $[-5, 10]$; $f_1(x) = x^2$, $f_2(x) = (x-2)^2$.
- FON: $n = 3$, variable bounds $[-4, 4]$; $f_1(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i - \tfrac{1}{\sqrt{3}}\right)^2\right)$, $f_2(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i + \tfrac{1}{\sqrt{3}}\right)^2\right)$.
- KUR: $n = 3$, variable bounds $[-5, 5]$; $f_1(x) = \sum_{i=1}^{n-1}\left(-10\exp\left(-0.2\sqrt{x_i^2 + x_{i+1}^2}\right)\right)$, $f_2(x) = \sum_{i=1}^{n}\left(|x_i|^{0.8} + 5\sin^3(x_i)\right)$.
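The three benchmarks in Table 2 are simple enough to transcribe directly; the sketch below is a plain NumPy transcription of the definitions above (the function names and the NumPy dependency are my own choices, not the paper's).

```python
import numpy as np

def sch(x):
    """SCH: n = 1, x in [-5, 10]."""
    x = float(np.asarray(x).ravel()[0])
    return np.array([x**2, (x - 2.0)**2])

def fon(x):
    """FON: n = 3, each x_i in [-4, 4]."""
    x = np.asarray(x, dtype=float)
    f1 = 1.0 - np.exp(-np.sum((x - 1.0 / np.sqrt(3.0))**2))
    f2 = 1.0 - np.exp(-np.sum((x + 1.0 / np.sqrt(3.0))**2))
    return np.array([f1, f2])

def kur(x):
    """KUR: n = 3, each x_i in [-5, 5]."""
    x = np.asarray(x, dtype=float)
    f1 = np.sum(-10.0 * np.exp(-0.2 * np.sqrt(x[:-1]**2 + x[1:]**2)))
    f2 = np.sum(np.abs(x)**0.8 + 5.0 * np.sin(x)**3)
    return np.array([f1, f2])

# Sanity checks at simple points
print(sch(0.0))                # [0. 4.]
print(fon([0.0, 0.0, 0.0]))    # both objectives equal 1 - exp(-1)
print(kur([0.0, 0.0, 0.0]))    # f1 = -20 at the origin, f2 = 0
```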
CommonCrawl
Search results 101–110 of 626 for "representations".

Word meaning and lexical pragmatics
Acta Linguistica Hungarica, Volume 51, Issue 3-4. https://doi.org/10.1556/aling.51.2004.3-4.3
Author: Károly Bibok
In spite of their differences, Two-level Conceptual Semantics, Generative Lexicon Theory and Relevance Theory also have similarities with respect to treatment of the relation of word meanings and contexts. Therefore, the three theories can be considered as complementing each other in analysing word meanings in utterances. In the present paper I will outline a conception of lexical pragmatics which critically amalgamates the views of these theories and has more explanatory power than each theory does separately. Such a lexical pragmatic conception accepts lexical-semantic representations which can be radically underspecified and allow for other methods of meaning description than componential analysis. As words have underspecified meaning representations, they reach their full meanings in corresponding contexts (immediate or extended) through considerable pragmatic inference. The Cognitive Principle of Relevance regulates the way in which the utterance meaning is construed.

New estimates in the four-dimensional divisor problem with applications
Acta Mathematica Hungarica
Author: E. Krätzel
In this article we consider three problems: 1. The asymptotic behaviour of the quadratic moment of the exponential divisor function. 2. The distribution of powerful integers of type 4. 3. The average number of direct factors of a finite Abelian group. We prove new estimates for the error terms in the asymptotic representations. For this purpose new estimates in the general four-dimensional divisor problem are needed.

The motivational structure of the lexical-semantic field "public opinion" in Russian folk dialects [Мотивационная структура лексико-семантического поля «Общественное мнение» в русских народных говорах]
Studia Slavica. https://doi.org/10.1556/sslav.59.2014.2.8
Author: Татьяна Леонтьева
The dialect lexical representations of notions from the "public opinion" sphere are the subject of the research. The motivational analysis of such words allowed the author to reveal the key meanings "showing the attitude to the person", "assessment", "influence on the person", and "the person's image". The native Russian speaker bases on them the choice of the motivational feature for the words that represent the lexical-semantic field "public opinion".

The sarcophagus workshops of Aquincum and Brigetio [Die Sarkophagwerkstätten von Aquincum und Brigetio]
Acta Archaeologica Academiae Scientiarum Hungaricae. https://doi.org/10.1556/AArch.65.2014.2.6
Author: Erwin Pochmarski
This contribution deals with problems of chronology, iconography and decoration of the sarcophagi of Aquincum and Brigetio. For the chronology, the inscriptions which name the cities as municipium or colonia are more helpful than the dates of the stationing of the legio I adiutrix and the legio II adiutrix respectively. Regarding the iconography of the many sarcophagi with erotes in the fields on both sides of the inscription, the type of these representations is decisive.

Reducible cubic CNS polynomials
Periodica Mathematica Hungarica
Authors: Shigeki Akiyama, Horst Brunotte, and Attila Pethő
The concept of a canonical number system can be regarded as a natural generalization of decimal representations of rational integers to elements of residue class rings of polynomial rings. Generators of canonical number systems are CNS polynomials, which are known in the linear and quadratic cases, but whose complete description is still open. In the present note reducible CNS polynomials are treated, and the main result is the characterization of reducible cubic CNS polynomials.

Collaboration patterns in theoretical population genetics
Authors: Hildrun Kretschmer and B. Gupta
The paper points out that the characteristic properties of general social networks are reflected in co-authorship patterns of theoretical population genetics as studied from 1900 to 1980. The results are consistent with the analyses of bibliographies, where the co-authorship networks in invisible colleges have probably shown the same behavioural patterns as the non-scientific populations. The patterns of behaviour are portrayed in two-dimensional as well as three-dimensional representations of co-authorship data in theoretical population genetics.

Variability in the articulation and perception of a word
Author: Mária Gósy
The words making up a speaker's mental lexicon may be stored as abstract phonological representations, or else they may be stored as detailed acoustic-phonetic representations. The speaker's articulatory gestures intended to represent a word show relatively high variability in spontaneous speech. The aim of this paper is to explore the acoustic-phonetic patterns of the Hungarian word akkor 'then, at that time'.
Ten speakers' recorded spontaneous speech, with a total duration of 255 minutes and containing 286 occurrences of akkor, was submitted to analysis. Durational and frequency patterns were measured by means of the Praat software. The results obtained show higher variability both within and across speakers than had been expected. Both the durations of the words and those of the speech sounds, as well as the vowel formants, turned out to differ significantly across speakers. In addition, the results showed considerable within-speaker variation as well. The correspondence between variability in the objective acoustic-phonetic data and the flexibility and adaptive nature of the mental representation of a word will be discussed. For the perception experiments, two speakers from the previous experiment were selected, whose 48 words were then used as speech material. The listeners had to judge the quality of the words they heard using a five-point scale. The results confirmed that the listeners used diverse strategies and representations depending on the acoustic-phonetic parameters of the series of occurrences of akkor.

Determination of jumps of distributions by differentiated means
Authors: R. Estrada and J. Vindas
Differentiated means are defined in order to find formulas for jumps of distributions. We analyze two types of jumps occurring in the notions of distributional jump behavior and symmetric jump behavior. We start by defining what we call Riesz differentiated means for numerical series; then the differentiated means are extended to distributional evaluations for the Schwartz class of tempered distributions. The jumps of tempered distributions are completely determined by the differentiated means of the Fourier transform. We also find formulas for the jumps in terms of the asymptotic behavior of partial derivatives of harmonic representations and harmonic conjugate functions. Applications to Fourier series are given.

On the error terms for representation numbers of quadratic forms
Author: Z. Xu
Let $f$ be a primitive positive integral binary quadratic form of discriminant $-D$, and $r_f(n)$ the number of representations of $n$ by $f$ up to automorphisms of $f$. We first improve the error term $E(x)$ of $\sum_{n \leqq x} r_f(n)^{\beta}$ for any positive integer $\beta$. Next, we give an estimate of $\int_1^T |E(x)|^2 x^{-3/2}\,dx$ when $\beta = 1$.

An Extension of the Stone Duality
Authors: M. Sonia and P. Sabogal
We establish a duality between two categories, extending the Stone duality between totally disconnected compact Hausdorff spaces (Stone spaces) and Boolean rings with a unit. The first category, denoted by RHQS, has as objects the representations of Hausdorff quotients of Stone spaces and as morphisms all compatible continuous functions. The second category, denoted by BRLR, has as objects all Boolean rings with a unit endowed with a link relation and as morphisms all compatible Boolean rings with unit morphisms. Furthermore, we study connectedness from an algebraic point of view, in the context of the proposed generalized Stone duality.
CommonCrawl
Experimental investigation to study the effect of reinforcement on strength behavior of fly ash
Salman Asrar Ahmad (ORCID: orcid.org/0000-0002-5472-8043) & Malik Shoeb Ahmad

Stabilization of the subgrade soil is a primary and significant phase in highway construction. In constructing a flexible pavement subgrade, soil investigation is an important parameter, as the load is transferred to it under repetitive vehicle loads. Subgrade soils with low bearing strength are incapable of bearing heavy loads and are considered unsuitable for construction. The authors propose a solution for the weak subgrade of flexible pavements in this paper. This study aims to address weak subgrade issues by using fly ash reinforced with reinforcement bars. Unsoaked California bearing ratio (CBR) tests were performed on fly ash with square and circular reinforcement patterns positioned in the center of the loaded area. The tests were performed using reinforcement of 1 mm, 2 mm, and twisted 2 mm diameter (two 1 mm diameter bars overlapped and twisted over each other to make a 2 mm diameter twisted bar). The CBR value for plain fly ash is found to be 14.64%, and the maximum CBR values for square and circular reinforcement are 34.89% and 24.23%, respectively. The percentage increases in the CBR value for square and circular reinforcement are found to be 138.31% and 65.50%, respectively. The study found that the reinforcement spacing pattern affects the subgrade bearing capacity. As the reinforcement spacing decreases, the bearing capacity of the fly ash increases, and it also increases with the reinforcement diameter. This study is important for subgrade soil strengthening, since this fine reinforcement has increased the bearing capacity of poor soils.

In the socioeconomic development of a country, creating an adequate road network is crucial, especially in rural areas. India is striving to establish high and uniform technical and management standards and to facilitate policy development and planning at the national level to ensure the sustainable management of rural roads. Under a survey to identify core networks as part of the PMGSY (Pradhan Mantri Gram Sadak Yojana) program, approximately 1.67 lakh unconnected habitations are eligible for coverage under the program, according to the latest figures made available by the state governments. This includes building 3.71 lakh km of new network roads and restoration of 3.68 lakh km of roads [1]. Total construction of road length by year in India is given in Fig. 1a; India has a total of 5.89 million km of roads, one of the largest road networks in the world. This road network carries 64.5% of all goods across the country, and 90% of India's total passenger traffic uses the highway network to travel. Road transport has progressively increased with improved connectivity between cities, villages, and towns over the years. The total length of national highways in India was 132,500 km as of 1 March 2019. Through a series of measures, the government is working on policies to draw substantial investor interest [1]. A total of 200,000 km of national roads is targeted to be completed by 2022. A comparison of recent targets with achieved road lengths is represented in Fig. 1b. However, the construction of an extensive network of roads involves heavy financial investments through conventional means and approaches. Engineers are constantly confronted with the need to maintain and expand road infrastructure with limited financial capital.
To meet construction requirements, the traditional design of pavement and construction practice requires high-quality material. Quality materials are in short supply in many parts of the globe.

Fig. 1: a Road length sanctioned (2000–2021) [1]. b Length completed under PMGSY [1]

As these pavements will be vulnerable to damage, building concrete or asphalt directly on such weak surfaces is not practical. As the soil functions as a subgrade, the loads applied from the pavement should be transmitted adequately to the underlying layer. If used in weak soils, fly ash enhances the soil's CBR value through the interlocking phenomenon between soil and fly ash [2,3,4]. Using lime kiln dust (LKD) and class F fly ash, Mohammadinia et al. [5] stabilized the mechanical properties of recycled construction and demolition (C&D) aggregates as a feasible replacement for road bases/subbases. For light-traffic roads, granular materials stabilized with 10% S (slag) and 5% FA (fly ash) + 5% S blends have been found to be a viable, long-term option for stabilizing future road bases and subbases [6]. Replacement of poor subgrade soil and the load-carrying capacity were found to improve to a greater extent by stabilizing the fly ash with electroplating waste and other additives like cement [7, 8]. Industrial wastes, such as rice husk powder, foundry sands, foundry slag, and cement kiln dust, have shown adequate strength and durability [9]. Recycled glass (RG) and spent coffee ground (CG) geopolymers are innovative cementitious materials that use industrial wastes, such as fly ash and ground granulated blast-furnace slag, as precursors. California bearing ratio (CBR) tests on the CG + RG geopolymers show that their CBR value is higher than that of naturally occurring subgrade materials [10, 11]. Recycled plastic waste from waste bottles has been used to enhance the bearing capacity of soft soil [12]. Reclaimed plastic (RP), recycled concrete aggregates (RCA), and crushed brick (CB) have been blended with demolition aggregates to form a railway capping material. With up to 7% RP and 3% RP, the RCA blends were found to have a CBR value higher than the 50% proposed for capping materials by local authorities [13, 14]. Improving the subbase or subgrade layer is also a popular field of study. The CBR value of peat soils stabilized with polypropylene fiber increased significantly, by 15 to 22 times [15]. The CBR value of fly ash mixed with electroplating waste sludge and cement was raised considerably, resulting in significantly reduced pavement construction costs [16]. Adams et al. [17], Rao and Nasr [18], Ingle and Bhosale [19], and Gowthaman et al. [20] have carried out pioneering CBR soil reinforcement studies. The results of a series of laboratory CBR tests (soaked and unsoaked) on silty sand reinforced by randomly dispersed polypropylene fibers showed substantial improvement. Zornberg and Gupta [21] and Perkins and Ismeik [22] analyzed the results of full pavement tests carried out on several reinforced sections of saturated silty clay soils with CBR values of about 1 to 8%, using geogrids. The test results showed that the contribution of multilayer reinforcement was highest for subbase soils with CBRs of 3% or lower. There were no significant variations between different integral geogrids in the single-layer case. The geogrids with higher tensile modulus showed a strong contribution for CBRs of 3% or lower.
The decrease in the rutting rate between reinforced and unreinforced sections increases as the subgrade CBR decreases, for all geosynthetics. The road service life improvement factor increases for roads with lower CBR values and thinner structural sections. Kumar et al. [23] studied the engineering properties of reinforced silty sand. Bergado et al. [24] carried out CBR studies on compacted sand overlying soft weathered clay. Geotextiles of different rigidities placed between the clay and sand improved the specimens. The findings have shown that the use of geotextiles in soil improves the bearing potential. The load value at a given strain increases with increasing loading speed and geotextile rigidity (i.e., a rise in CBR value). The findings of the soaked California bearing ratio study of Gosavi et al. [25] indicate a significant increase in the CBR performance of black cotton soil after the use of reinforcement. The CBR value of black cotton soil rises from 42 to 55% if 1% of tissue and fiber glass is added at random. With the introduction of 2% of fibers, the rate of increase in the CBR value becomes smaller, and the total CBR value tends to decline for the different fibers.

The primary goal of this research is to evaluate the influence of reinforcement on subgrade soil and how different reinforcing patterns improve its strength. An attempt has been made to investigate an alternative subgrade reinforcing method. Several researchers have looked at the effects of geotextiles, cement, lime, and other additives on the behavior of subgrade soils. However, only a limited examination of the reinforcement of soil subgrades with fine reinforcement bars has been conducted. The goal of this study is to see how such reinforcement affects the soil subgrade.

Methods/experimental
The main objective of this research is to address a weak subgrade soil that is unsuitable for pavement construction. As a result, a waste material, fly ash combined with reinforcement, will be used, which has potential uses in highway and geotechnical applications. The widespread use of fly ash will not only help to maintain ecological balance, but will also allow industry to develop a low-cost material based on this waste for mass-scale applications.

Materials and design
Fly ash
The fly ash used in the analysis was taken from the Kasimpur thermal power plant in the vicinity of Aligarh in Uttar Pradesh, as shown in Fig. 2. The fly ash is classified as low-compressibility silt (ML). Figure 3 and Table 1 present the physical properties of the fly ash. Fly ash from the electrostatic precipitator (ESP) is continuously collected in buffer hoppers near the ESP by vacuum pumps. Dry fly ash is pneumatically transferred from the buffer hoppers to storage silos, and can then be unloaded, either dry or conditioned with hydro-mixing dust conditioners, into pneumatic tank trucks or open-bed trucks. Belt conveyors transport ash into the ash storage zone (ash mound). Bottom ash is regularly accumulated in wet hoppers, ground to sand size, and occasionally moved to one of six hydro bins to be decanted. The accumulated ash is dispatched through belt conveyors to the ash mound area or discharged to trucks by the conveyors. Dry fly ash from the hoppers was collected in plastic bags for this research.

Fig. 2: Kasimpur fly ash. Fig. 3: Grain size distribution curve of Kasimpur fly ash. Table 1: Physical properties of Kasimpur fly ash.

Reinforcing
The reinforcement used in this study, of diameters 1 mm, 2 mm, and twisted 2 mm, was made of galvanized mild steel, as shown in Fig. 4.
Micropiles used as reinforcement and structural support are generally around 300 mm in diameter [30]. The authors also had to make sure that the reinforcement would not bend while being placed in the CBR mold. Reinforcement of the required length was cut and made perfectly straight before being pushed into the CBR mold. The length of reinforcement used in this work is 45 mm. The reinforcement was arranged in circular and square patterns located at the center of the CBR mold, as shown in Fig. 5.

Fig. 4: Reinforcement. a 1 mm diameter and 45 mm length. b Twisted 2 mm diameter and 45 mm length. c 2 mm diameter and 45 mm length. Fig. 5: Top view of reinforcement arrangement in CBR mold.

Calculation of reinforcement length
If pneumatic or vibratory rollers are used, the maximum lift thickness for lightweight material is almost unlimited. Lift thickness is typically limited to 152.4 to 203.2 mm, as shown in Fig. 6. Proper compaction in lifts greater than 152.4 to 203.2 mm poses a problem. Hence, the chosen lift thickness is 152.4 to 203.2 mm [31].

$$\begin{aligned}\frac{203.2}{152.4} &= \frac{58.33}{x}\\ x &= \frac{58.33 \times 152.4}{203.2}\\ x &= 43.74\ \text{mm}\\ x &\approx 45\ \text{mm}\end{aligned}$$

where $x$ is the length of the reinforcement.

Fig. 6: Field dimension of lift thickness compared with the CBR mold.

Experimental methodology
The preparation and testing of specimens were performed according to IS: 2720 (Part 16)–1987 [32] for the California bearing ratio test. The standard Proctor compaction test was performed using the equipment and procedures defined in IS: 2720 (Part 7)–1987 [33] (equivalent to ASTM D 698-2000 [28]) to obtain the maximum dry density (MDD) and optimum moisture content (OMC), which set a benchmark for the further tests, since the water added in the various CBR tests corresponds to the OMC. The OMC of the fly ash was found to be 37%. Five kilograms of oven-dried and sieved fly ash was taken, and water equivalent to the OMC was added. The fly ash was thoroughly mixed so that the water was uniformly distributed. The spacer disc was placed at the bottom of the mold; the first layer of the wet mix was placed over it and uniformly compacted by 56 blows using a hammer of 2.6 kg with a free fall of 310 mm. Similarly, the second and top layers (using a collar) were each compacted by 56 blows. The collar was then removed and, with the aid of a steel cutting edge, excess material was shaved level with the top of the mold. Filter paper was then placed on the base plate, and the mold was turned upside down so that the former top of the specimen faced downward during the penetration test. For the specimens with different reinforcement positions, the reinforcement was inserted in the top layer of the CBR mold, and the corresponding test was carried out within the loading area of the plunger, as shown in Fig. 7.

Fig. 7: CBR test in progress.

The authors of this paper present a solution for the reinforcement of subgrade soil. The effects of circular reinforcement on the strength of the subgrade were also examined in this work; different circular arrangements were used within the center of the loaded area of the CBR mold, with the reinforcement inserted into the prepared CBR mold, and tests were performed using the following combinations: one reinforcement at the center, two reinforcements in diametrically opposite positions, four reinforcements in diametrically opposite positions, and four reinforcements in diametrically opposite positions with one at the center, as shown in Fig. 8.
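The paper reports CBR values but not the arithmetic behind them; the following sketch shows the standard reduction from a load–penetration record to a CBR percentage. The standard loads (1370 kgf at 2.5 mm and 2055 kgf at 5.0 mm penetration, per IS 2720 Part 16) are quoted from the general test standard rather than from this paper, so treat them as an assumption, and the example readings are purely hypothetical.

```python
import numpy as np

# Standard loads on crushed stone (IS 2720 Part 16), in kgf -- an assumption,
# not values quoted in this paper.
STANDARD_LOAD_KGF = {2.5: 1370.0, 5.0: 2055.0}

def cbr_value(penetration_mm, load_kgf):
    """Reduce a load-penetration record to CBR percentages.

    CBR at a reference penetration = (measured load / standard load) * 100.
    The value at 2.5 mm is normally reported; if the 5.0 mm value is higher,
    the test is repeated and, if the result persists, the 5.0 mm value is used.
    """
    pen = np.asarray(penetration_mm, dtype=float)
    load = np.asarray(load_kgf, dtype=float)
    cbr = {}
    for ref, std in STANDARD_LOAD_KGF.items():
        measured = np.interp(ref, pen, load)   # load at the reference penetration
        cbr[ref] = 100.0 * measured / std
    return cbr

# Hypothetical readings (illustrative only, not data from the paper):
pen = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]
load = [0, 35, 70, 105, 140, 200, 230, 280, 320]
print(cbr_value(pen, load))   # e.g. {2.5: ~14.6, 5.0: ~15.6}
```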
The load was applied through the CBR apparatus, and the results are displayed in the graph in Fig. 9. The authors analyze the impact of the reinforcement diameter on the strength parameters of fly ash. Reinforcements of different diameters were placed, as described above, in the center of the loaded area of the CBR mold in various configurations.

Fig. 8: CBR mold with reinforcement. a One reinforcement at center. b Two reinforcements diametrically opposite. c Four reinforcements diametrically opposite. d Four reinforcements diametrically opposite and one at center. Fig. 9: Load penetration curves of non-reinforced and reinforced fly ash.

Effect of spacing of reinforcement on the strength of fly ash
The strength of the fly ash has been improved by reinforcing it with reinforcement bars. In this paper, the authors tried various square combinations: 30 mm × 30 mm square reinforcement at spacings of 5 mm and 10 mm, and 15 mm × 15 mm square reinforcement at a spacing of 5 mm, as shown in Fig. 10.

Fig. 10: CBR mold with different reinforcement arrangements. a 15 mm × 15 mm square reinforcement at a spacing of 5 mm, b 30 mm × 30 mm square reinforcement at a spacing of 10 mm, and c 30 mm × 30 mm square reinforcement at a spacing of 5 mm.

For the present analysis, reinforcement of diameters 1 mm, 2 mm, and twisted 2 mm is used in these tests. Figure 10a shows the orientation of the reinforcement in a square pattern. Square reinforcement of 15 mm × 15 mm with a spacing of 5 mm is provided, so that the whole square is divided into nine sub-squares with 5 mm sides, and reinforcement with a diameter of 1 mm, 2 mm, or twisted 2 mm is inserted at each node of every sub-square. The load was introduced through the CBR apparatus. Figure 11 shows that the strength of fly ash with reinforcement can be substantially increased. The CBR value for fly ash was found to be 14.64% in the absence of reinforcement. After providing reinforcement as shown in Fig. 10a, the obtained CBR values are 18.58%, 23.94%, and 18.0% for diameters of 1 mm, 2 mm, and twisted 2 mm, respectively. There is a percentage increase in the CBR values, as seen in Table 2, due to the mobilization of the fly ash by the reinforcement. It has been observed from Fig. 12 that the CBR value for all three types of reinforcement increases continuously when square reinforcement is provided. Figure 10b represents the reinforcement orientation in a square pattern. Square reinforcement of 30 mm × 30 mm with a spacing of 10 mm is used, so that the whole square is divided into nine sub-squares of 10 mm sides, and reinforcement of diameters 1 mm, 2 mm, and twisted 2 mm is inserted at each node of every sub-square. The load was applied via the CBR unit, and the results are shown as a graph in Fig. 11.

Table 2: Varying CBR values with reinforcement addition. Fig. 12: Comparison of CBR value for different square reinforcements.

By using the pattern shown in Fig. 10b, the reinforcement significantly increases the strength of the fly ash, as seen in Fig. 11. The CBR values for reinforcement of diameters 1 mm, 2 mm, and twisted 2 mm are found to be 26.08%, 28.22%, and 18.68%, respectively. Moreover, Fig. 11 shows that the strength of the fly ash can be significantly enhanced with reinforcement. After providing reinforcement as shown in Fig. 10c, the CBR values for diameters 1 mm, 2 mm, and twisted 2 mm are found to be 30.41%, 34.89%, and 23.35%, respectively.
There is a percentage increase in CBR, as shown in Table 2, which is due to the mobilization of the fly ash. From Fig. 11, it can be seen that the CBR value increases steadily for all three forms of reinforcement. It was also observed that reducing the spacing between reinforcements improves the subgrade strength substantially, and the optimum spacing for both the 1 mm and 2 mm diameter reinforcement is found to be 5 mm, as shown in Fig. 12.
Effect of the diameter of reinforcement on the strength of fly ash The plunger load is applied to the specimen in the CBR mold, and the subsequent test readings were noted. In each case the reinforcement is placed in a circular pattern at the center of the loaded area of the CBR mold, with the arrangements shown in Fig. 8. The CBR value for reinforcement of diameter 1 mm is found to be 16.64%, 19.90%, 21.89%, and 22.77% with one reinforcement at the center, two reinforcements in diametrically opposite positions, four reinforcements in diametrically opposite positions, and four reinforcements in diametrically opposite positions with one at the center, respectively. For twisted reinforcement of 2 mm diameter, the obtained CBR values are 16.0%, 18.83%, 20.82%, and 23.26% for the same reinforcement patterns. For reinforcement of 2 mm diameter, the obtained CBR values are 21.36%, 21.45%, 22.28%, and 24.23% for the same reinforcement patterns. In the case of circular reinforcement, the CBR value increases for all three types of reinforcement in every arrangement. It was also found that increasing the diameter significantly increases the subgrade strength, as shown in Fig. 9, and the maximum CBR value for the 1 mm, 2 mm, and twisted 2 mm reinforcement is obtained when reinforcement is placed at the circumference and at the center, as shown in Fig. 13. There is a percentage increase in CBR, as seen in Table 2, due to fly ash mobilization. Comparison of CBR values for different circular reinforcements
The influence of reinforcement parameters such as the diameter and spacing of the reinforcing components was evaluated in an experimental investigation on unreinforced and reinforced soil samples. The following are some concluding observations: The CBR value of the fly ash alone was found to be 14.64%. The fly ash containing reinforcement of different geometries showed good bearing strength characteristics. The use of reinforcement with varying spacing and diameter in the fly ash enhances the bearing strength significantly. The spacing of the reinforcement has a considerable impact on the subgrade bearing capacity: the bearing capacity increases as the spacing decreases, and it also increases as the diameter of the reinforcement increases. The bearing strength of the fly ash increases to a great extent with reinforcement of diameters 1 mm and 2 mm; when the reinforcement was placed at 5 mm spacing, the increase in CBR reached a maximum of 107.71% and 138.31%, respectively. The strength of the subgrade in the case of circular reinforcement increases by up to 55.53%, 58.87%, and 56.50% for reinforcement of diameters 1, 2, and twisted 2 mm, respectively. This research is beneficial for strengthening subgrade soil, as this fine reinforcement has boosted the bearing capacity of weak soils.
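The percentage increases quoted in the conclusions follow directly from the reported CBR values; for example, the 5 mm-spacing square-pattern figures can be verified in a couple of lines (variable names are ours):

```python
# Percentage increase in CBR relative to the unreinforced fly ash value.
unreinforced_cbr = 14.64

def pct_increase(reinforced_cbr):
    return 100.0 * (reinforced_cbr - unreinforced_cbr) / unreinforced_cbr

# 30 mm x 30 mm square pattern at 5 mm spacing (values from the text)
print(round(pct_increase(30.41), 2))   # ~107.7% for the 1 mm diameter reinforcement
print(round(pct_increase(34.89), 2))   # ~138.3% for the 2 mm diameter reinforcement
```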
CBR: California bearing ratio MDD: Maximum dry density OMC: Optimum moisture content FA: ESP: Electrostatic precipitator ML: Low-compressibility silt PMGSY: Pradhan Mantri Gram Sadak Yojana RP: Reclaimed plastic RCA: Recycled concrete aggregates CB: Crushed brick N. R. I. D. (2021) Agency, PRADHAN MANTRI GRAM SADAK YOJANA. India [Online]. http://omms.nic.in/# Prabhakar J, Dendorkar N, Morchhale RK (2004) Influence of fly ash on strength behavior of typical soils. Constr Build Mater 18(4):263–267. https://doi.org/10.1016/j.conbuildmat.2003.11.003 Chandra S, Viladkar M, Nagrale P (2008) Mechanistic approach for fiber-reinforced. J Transport Eng Asce 134:15–23 Ghosh A, Dey U (2009) Bearing ratio of reinforced fly ash overlying soft soil and deformation modulus of fly ash. Geotext Geomembr 27:313–320. https://doi.org/10.1016/j.geotexmem.2008.12.002 Mohammadinia A, Arulrajah A, D'Amico A, Horpibulsuk S (2020) Alkali activation of lime kiln dust and fly ash blends for the stabilisation of demolition wastes. Road Mater Pavement Des 21(6):1514–1528. https://doi.org/10.1080/14680629.2018.1555095 Arulrajah A, Perera S, Wong YC, Maghool F, Horpibulsuk S (2021) Stabilization of PET plastic-demolition waste blends using fly ash and slag-based geopolymers in light traffic road bases/subbases. Constr Build Mater 284:122809. https://doi.org/10.1016/j.conbuildmat.2021.122809 Ahmad MS, Shah SS (2016) Load settlement behaviour of fly ash mixed with waste sludge and cement. Geotech Geol Eng 34(1):37–58. https://doi.org/10.1007/s10706-015-9927-z Ahmad MS (2015) Long term assessment of strength and heavy metal concentration in cement-fly ash stabilized electroplating waste sludge. Malaysian J Civ Eng 27(1). https://doi.org/10.11113/mjce.v27.15905 Edil Tuncer B, Acosta Hector A, Benson Craig H (2006) Stabilizing soft fine-grained soils with fly ash. J Mater Civ Eng 18(2):283–294. https://doi.org/10.1061/(ASCE)0899-1561(2006)18:2(283) Arulrajah A, Kua T-A, Horpibulsuk S, Mirzababaei M, Chinkulkijniwat A (2017) Recycled glass as a supplementary filler material in spent coffee grounds geopolymers. Constr Build Mater 151:18–27. https://doi.org/10.1016/j.conbuildmat.2017.06.050 Arulrajah A, Kua T-A, Suksiripattanapong C, Horpibulsuk S (2019) Stiffness and strength properties of spent coffee grounds-recycled glass geopolymers. Road Mater Pavement Des 20(3):623–638. https://doi.org/10.1080/14680629.2017.1408483 Babu GLS, Chouksey SK (2011) Stress-strain response of plastic waste mixed soil. Waste Manag 31(3):481–488. https://doi.org/10.1016/j.wasman.2010.09.018 Arulrajah A, Naeini M, Mohammadinia A, Horpibulsuk S, Leong M (2020) Recovered plastic and demolition waste blends as railway capping materials. Transp Geotech 22:100320. https://doi.org/10.1016/j.trgeo.2020.100320 Naeini M, Mohammadinia A, Arulrajah A, Horpibulsuk S, Leong M (2019) Stiffness and strength characteristics of demolition waste, glass and plastics in railway capping layers. Soils Found 59(6):2238–2253. https://doi.org/10.1016/j.sandf.2019.12.009 Huat B, Kalantari B, Prasad A (2010) Effect of polypropylene fibers on the California bearing ratio of air cured stabilized tropical peat soil. Am J Eng Appl Sci 3. https://doi.org/10.3844/ajeassp.2010.1.6 Ahmad MS, Salahuddin Shah S (2010) Load bearing strength of fly ash modified with cement and waste sludge. Int J Civ Eng 8(4):315–326 Adams C, Amofa N, Opoku-Boahen R (2014) Effect of geogrid reinforced subgrade on layer thickness design of low volume bituminous sealed road pavements. 
IRJES 3:59–67 Rao SVK, Nasr AMA (2012) Laboratory Study on the Relative Performance of Silty-Sand Soils Reinforced with Linen Fiber. Geotech Geol Eng 30(1):63–74. https://doi.org/10.1007/s10706-011-9449-2 Ingle GS, Bhosale SS (2017) Full-scale laboratory accelerated test on geotextile reinforced unpaved road. Int J Geosynthetics Ground Eng 3(4):33. https://doi.org/10.1007/s40891-017-0110-x Gowthaman S, Nakashima K, Kawasaki S (2018) A state-of-the-art review on soil reinforcement technology using natural plant fiber materials: past findings, present trends and future directions. Materials (Basel) 11(4). https://doi.org/10.3390/ma11040553 Zornberg J, Gupta R (2010) Geosynthetics in pavements: North American contributions. In: 9th International Conference on Geosynthetics - Geosynthetics: advanced solutions for a challenging world, ICG 2010, pp 379–400 Perkins SW, Ismeik M (1997) A synthesis and evaluation of geosynthetic-reinforced base layers in flexible pavements- part I. 4(6):549–604. https://doi.org/10.1680/gein.4.0106 Kumar R, Kanaujia V, Chandra D (1999) Engineering behaviour of fibre-reinforced pond ash and silty sand. Geosynth Int 6:01/01. https://doi.org/10.1680/gein.6.0162 Bergado DT, Youwai S, Hai CN, Voottipruex P (2001) Interaction of nonwoven needle-punched geotextiles under axisymmetric loading conditions. Geotext Geomembr 19:299–328. https://doi.org/10.1016/S0266-1144(01)00010-3 Gosavi M, Patil KA, Mittal S, Saran S (2004) Improvement of properties of black cotton soil subgrade through synthetic reinforcement. J Inst Eng Civ Eng Div Bureau of Indian Standards, "IS: 2720 (Part 3)-1980 Code of practice for determination of specific gravity," New Delhi, 1980. [Online]. Available: https://doczz.net/doc/7243053/2720--part-3--section-ii- Bureau of Indian Standards, "IS: 2720 (Part 4)-1985 Code of practice for grain size analysis.," New Delhi, 1994. [Online]. Available: https://ia803007.us.archive.org/35/items/gov.in.is.2720.4.1985/is.2720.4.1985.pdf American Society for Testing and Materials, "ASTM D-698 (2000), Standard test method for laboratory compaction characteristics of soil using standard effort," America, 2000. [Online]. Available: https://infostore.saiglobal.com/en-gb/standards/astm-d-698-2000-151203_saig_astm_astm_2670400/ Bureau of Indian Standards, "IS: 1498-1970 Code of practice for Identification and classification of soil," New Delhi, 1970. [Online]. Available: https://www.scribd.com/document/359132096/Is-1498-1970-Soil-Classification Armour T, Groneck P, Keeley J, Sharma S (2000) Micropile design and construction guidelines: implementation manual. Federal Highway Administration, United States Asphalt Institute, "Asphalt Pavement Thickness and Mix Design," Lexington, 2020. [Online]. Available: https://www.asphaltinstitute.org/engineering/frequently-asked-questions-faqs/asphalt-pavement-thickness-and-mixdesign/ Bureau of Indian Standards, "IS: 2720 (Part 16)-1987 Code of practice for laboratory CBR test," New Delhi, 1987. [Online]. Available: https://ia803004.us.archive.org/5/items/gov.in.is.2720.16.1987/is.2720.16.1987.pdf Bureau of Indian Standards, "IS: 2720 (Part 7)-1987 Code for determination of water content dry density relation using light compaction," New Delhi, 1980. [Online]. 
Available: https://civilengineer.co.in/wp-content/uploads/2017/03/IS-2720-PART-7-1980-INDIAN-STANDARD-METHODS-OF-TEST-FOR-SOILS-DETERMINATION-OF-WATERCONTENT-DRY-DENSITY-RELATION-USING-LIGHT-COMPACTION-SECOND-EDITION.pdf Department of Civil Engineering, Aligarh Muslim University, Aligarh, 202002, India Salman Asrar Ahmad & Malik Shoeb Ahmad Salman Asrar Ahmad Malik Shoeb Ahmad Conceptualization: Prof. MSA, methodology, formal analysis, and investigation. SAA, writing — original draft preparation and writing — review and editing. Prof. MSA, funding acquisition, resources, Department of Civil Engineering, ZHCET, AMU, and supervision. Both authors read and approved the final manuscript. Correspondence to Salman Asrar Ahmad. Ahmad, S.A., Ahmad, M.S. Experimental investigation to study the effect of reinforcement on strength behavior of fly ash. J. Eng. Appl. Sci. 69, 41 (2022). https://doi.org/10.1186/s44147-022-00096-2 Received: 14 December 2021 Subgrade soil CBR test Flexible pavement
CommonCrawl
June 2019, 9(2): 277-287. doi: 10.3934/mcrf.2019014 On a logarithmic stability estimate for an inverse heat conduction problem Aymen Jbalia, Department of Mathematics, Faculty of Sciences of Bizerte, 7021 Jarzouna Bizerte, Tunisia * Corresponding author: Aymen Jbalia Received December 2017 Revised April 2018 Published November 2018 We are concerned with an inverse problem arising in thermal imaging in a bounded domain $Ω\subset \mathbb{R}^n$, $n=2, 3$. This inverse problem consists in the determination of the heat exchange coefficient $q(x)$ appearing in the boundary of a heat equation with Robin boundary condition. Keywords: Inverse boundary coefficient problem, heat equation, Robin boundary condition, thermal imaging, logarithmic stability estimate. Mathematics Subject Classification: Primary: 65N21, 35K05, 35R30. Citation: Aymen Jbalia. On a logarithmic stability estimate for an inverse heat conduction problem. Mathematical Control & Related Fields, 2019, 9 (2) : 277-287. doi: 10.3934/mcrf.2019014
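For context only — this is a standard formulation of Robin-type inverse coefficient problems, not necessarily the exact system treated in the paper, and the data $g$ and initial state $u_0$ are generic symbols — the forward problem typically reads $$\partial_t u-\Delta u=0\ \text{ in } \Omega\times(0,T),\qquad \partial_\nu u+q(x)\,u=g\ \text{ on } \partial\Omega\times(0,T),\qquad u(\cdot,0)=u_0\ \text{ in } \Omega,$$ and the inverse problem is to recover $q$ on the boundary from additional boundary measurements of $u$. Roughly speaking, a logarithmic stability estimate bounds the error in the recovered coefficient by $C\,|\ln \varepsilon|^{-\theta}$ for some $\theta>0$, where $\varepsilon$ measures the size of the error in the boundary data.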
CommonCrawl
A coin sequence conundrum Written by Colin+ in algebra, big in finland, probability, puzzles. Zeke and Monty play a game. They repeatedly toss a coin until either the sequence tail-tail-head (TTH) or the sequence tail-head-head (THH) appears. If TTH shows up first, Zeke wins; if THH shows up first, Monty wins. What is the probability that Zeke wins? My first reaction to this question was, "It's 50-50, right? It has to be 50-50." But then a moment's pause: "If it was 50-50, you wouldn't be asking."
Of course it isn't 50-50 If you were to play the game, or simulate it, you'd find that TTH shows up first - on average - about two-thirds of the time. But that's weird! Surely TTH and THH are equally likely to show up? Evidently not. I've got a heuristic explanation of why not and a more algebraic explanation, both of which I'm quite fond of.
Heuristically At any given point in the game, the only relevant information is the result of the last two coin tosses - and it's pretty self-evident that you have equal chances of the last two results being either HH, HT, TH or TT. If you're in situation TT, Zeke must win: the next toss is either an H (in which case Zeke wins immediately) or a T (in which case we've not moved); either way, Zeke wins. If you're in situation TH, the next toss is either an H (in which case Monty wins) or a T (in which case we're in situation HT, where neither player has an obvious advantage). Looking at the two situations from which the game can be won, Zeke always wins from his situation, and Monty only wins half the time from his, so it's reasonable to conclude that Zeke has a 2-in-3 chance overall.
Algebraically There's something a little unsatisfactory and hand-wavy about that, though. It's true, but it feels somehow off. Let's do a little more analysis. Let's denote by $p_{XY}$ the probability of Zeke winning from situation XY. If we're in situation HH, we can move to situation HT or remain in HH, so $p_{HH}= \frac{1}{2} p_{HH} + \frac{1}{2} p_{HT}$ [1]. If we're in situation HT, we can move to situation TT or to TH, so $p_{HT}= \frac{1}{2} p_{TT} + \frac{1}{2} p_{TH}$ [2]. If we're in situation TH, we can move to situation THH (which is a win for Monty) or to HT, so $p_{TH}= \frac{1}{2} (0) + \frac{1}{2} p_{HT}$ [3]. If we're in situation TT, we can move to situation TTH (which is a win for Zeke) or remain in TT, so $p_{TT}= \frac{1}{2} (1) + \frac{1}{2} p_{TT}$ [4]. Four equations in four unknowns! Bring it. Equations [1] and [4] are the simplest. The first resolves to $p_{HH}=p_{HT}$, which makes sense; if you throw two heads in a row, the only different place to go next is HT. The second resolves to $p_{TT}=1$, which agrees with our heuristic approach earlier. Looking at [3], we have $p_{TH}= \frac{1}{2} p_{HT}$, so $2p_{TH} = p_{HT}$. We can substitute that, along with $p_{TT}=1$, into [2]: $2p_{TH}= \frac{1}{2} + \frac{1}{2} p_{TH}$. Rearranging gives $p_{TH} = \frac{1}{3}$, and in turn $p_{HT}=\frac{2}{3}$; $p_{HH} = \frac{2}{3}$ as well. Given that the probabilities of the first two tosses being HH, HT, TH and TT are each $\frac{1}{4}$, the probability of Zeke winning overall is $\frac{1}{4}\left(p_{HH}+p_{HT}+p_{TH}+p_{TT}\right) = \frac{1}{4}\left( \frac{2}{3} + \frac{2}{3} + \frac{1}{3} + 1 \right) = \frac{2}{3}$. If you know of a better, especially a more intuitive, argument for the answer, I'd love to hear it!
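In the meantime, if you'd rather let a computer do the arguing, a few lines of Python will brute-force it (this is just a quick sanity check, not part of either proof):

```python
import random

def play():
    """Toss a fair coin until TTH or THH shows up; return the winner."""
    last3 = ""
    while True:
        last3 = (last3 + random.choice("HT"))[-3:]
        if last3 == "TTH":
            return "Zeke"
        if last3 == "THH":
            return "Monty"

trials = 100_000
zeke = sum(play() == "Zeke" for _ in range(trials))
print(zeke / trials)   # hovers around 0.667 -- two-thirds, as promised
```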
Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.
CommonCrawl
Learning about GNU Radio - How the impedance tester works A program doing mostly what computers do best - math. Some time back, I described how I manually measure impedance on those rare occasions when I need to measure an inductor or capacitor. It's nothing really more than an application of the math involved in a voltage divider. The example circuit in the last post is nothing more than a voltage divider connected to some audio cables. Here's the drawing again: Just a voltage divider Written for the circuit above, the voltage divider formula looks like this: $V_{LineIn \text{_} Right} = \frac{Z_{DUT} * V_{LineIn \text{_} Left}}{Z_{R \text{_} series} + Z_{DUT}}$ Rearranged to find the impedance of the device under test (DUT) it looks like this: $Z_{DUT} = \frac{Z_{R \text{_} series}}{\frac{V_{LineIn \text{_} Left}}{V_{LineIn \text{_} Right}} - 1}$ To measure impedance at one frequency, you apply a signal with the desired frequency to $LineOut \text{_} Left$, measure at $LineIn \text{_} Left$ and $LineIn \text{_} Right$, do a little math and you're done. Well, not quite. The goal here is to measure the impedance of a speaker across the entire audio spectrum - from down near DC to over 20 kHz. You can do the manual measurements a few times and plot the results to get an idea of the impedance across the whole range, but if you want a really detailed plot you'll have to do that hundreds (if not thousands) of times. It'd take forever, and it'd be no fun at all. Fortunately, computers are good at math and good at doing things repeatedly. This needs both. Repeatedly doing math. The obvious way to do this is to have the computer generate sine waves at various frequencies, then measure the voltages, do the math, plot points, repeat until done. I did in fact do that many years ago with a computer controlled signal generator and a computer controlled voltmeter. It was slow, but it worked. It did what I needed until I could get hold of something better. I don't have that program any more, and couldn't use it if I did have it. It misused a Motorola R2600 communications system analyser as an audio signal generator and AC voltmeter. The better way to attack the problem is to generate a sweep that runs the whole range of frequencies in a fraction of a second. You capture the voltages over the entire period and apply a Fourier transformation to get the voltage for each frequency for a block of measurements in one go. The down side there is that you have to have the measurement synchronized with the generator. Once a long time ago, I had access to a Stac AD416 data acquisition card that could do that very trick. You sent a chirp (frequency sweep) out through the built in digital to analog converter (DAC) in a block, and it could hand you back a block of data from the analog to digital converters (ADC) that exactly contained the time period of your chirp. I "built" a very nice impedance tester with the AD416 and LabView. It worked quite well, and I used it to solve a nasty problem with impedances that we had at work. Without that synchronization, you get your chirp spread over multiple blocks of data for your Fourier spectrum analysis. That chops things up, and you get some seriously messed up measurements. Been there, done that, it isn't useful. PC sound cards don't have that ability, and implementing a work around for it is ugly. Way more work than I'm going to do in the evenings after spending all day programming. The alternative to a chirp is white noise. 
White noise (strictly speaking, band width limited white noise) contains all frequencies simultaneously. Over time, all frequencies are equally represented at the same average intensity. This is the solution that I chose for this impedance tester. The impedance tester generates white noise, and does two Fourier transforms - one for the applied signal and one for the signal as attenuated by the speaker. With the Fourier results in hand, you do the math to calculate the impedance for each frequency and plot the results. It sounds horribly complicated, but it isn't. GNU Radio (and SciPy and NumPy) all have methods to do the spectrum analysis and apply a single math function to all of the frequencies at once. GNU Radio handles getting the audio into and out of the program, and most of the rest is just boiler plate to set things up. White noise does have a draw back as a signal source, though. It is noise. By its very nature it is squiggly and jiggly and not very reliable. I mean, look at this: That's not what you want to see when trying to make accurate measurements. Noise is well behaved over time, though. The trick is to give it time by averaging a lot of measurements. GNU Radio has averaging functions built in. They can reduce that wild mess to this much cleaner plot: Not as noisy noise That's still not as clean as I want it, and its not as clean as I got it. I have a few tricks up my sleeve that GNU Radio has heard of, and some that it hasn't. One trick that I used is a median filter on the spectrum data before averaging. That greatly smooths the plot for one block of measurements. The other trick is one that GNU Radio doesn't have. It is related to a moving average filter, but isn't (quite) the same. I call it a "walking average filter." It takes the difference between an existing value and a new value and multiplies the difference by some small value (less than 1.) It then adds that product to the original value. It does much the same job and has much the same effect as a regular moving average, but it has a few advantages over the standard GNU Radio moving average. Not so important in this day and age, but a large consideration back when I first started doing this kind of thing is the lower memory usage. Rather than keep the last 100 (or whatever number) of spectrum blocks in memory, the walking average never has more than two at a time. The really nice thing about the walking average, though, is that it produces an output immediately. You see it beginning to move and converge to its average from the very beginning. The standard GNU Radio moving average doesn't produce any output until it has as many blocks in its memory as you told it to use. If you are using many large blocks of audio, it can take a very long time before you see anything at all. I tried, but failed to implement the walking average with standard GNU Radio Companion blocks in the GUI. I eventually gave up and implemented it in a Python block. That doesn't sound like it'd be very fast, but you never use plain Python for this kind of stuff. You use the NumPy library for fast math. Python pretty much just pushes pointers around and asks NumPy to do things. While I was at it, I put the median filter in the Python block. SciPy has one that suits me better than the standard one in GNU Radio. That's pretty much it. Generate white noise, capture two audio channels, do two spectrum analysis, do a bit of math, do some averaging, repeat. You might notice that nowhere in all of that do I try to calibrate anything. 
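Here's roughly what one block of processing boils down to, as a minimal NumPy/SciPy sketch. This isn't the actual Python block from my flowgraph, and the names and numbers are made up, but it's the same idea: two spectra, the divider math applied per frequency bin, a median filter, then the walking average.

```python
import numpy as np
from scipy.signal import medfilt

R_SERIES = 100.0     # series resistor value in ohms (example value)
ALPHA = 0.05         # walking average step: avg += ALPHA * (new - avg)
MEDIAN_KERNEL = 9    # median filter length applied to each spectrum

avg_z = None         # running walking-average of the impedance spectrum

def process_block(left, right):
    """left = applied noise samples, right = signal across the DUT."""
    global avg_z
    # Two spectra; only the ratio matters, so raw sample units are fine.
    v_left = np.abs(np.fft.rfft(left))
    v_right = np.abs(np.fft.rfft(right))
    # Voltage divider rearranged: Z_DUT = R_series / (V_left / V_right - 1).
    # Bins where the two channels are nearly equal will blow up; the median
    # filter and the averaging deal with most of that.
    z = R_SERIES / (v_left / (v_right + 1e-12) - 1.0)
    z = medfilt(z, MEDIAN_KERNEL)      # tame the jagged noise spectrum
    if avg_z is None:                  # walking average: useful output
        avg_z = z                      # from the very first block onward
    else:
        avg_z += ALPHA * (z - avg_z)
    return avg_z
```

The point of the walking average shows up in the last few lines: there's no history buffer at all, just the running result being nudged toward each new block.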
Despite that, the results are within 0.1 ohms when testing a resistor with the impedance tester and comparing to the measured value with an ohmmeter. There are two tricks that go into that. One trick is that it doesn't matter what units the measurements are made in. Look at the math. You have a voltage divided by a voltage. The units cancel, so it is just a ratio. The nameless units of the sound card samples work just as well as volts, so I don't have to figure out volts from samples. The other is that I am counting on the analog to digital converters on the line-in of the soundcard to be identical as far as amplification and frequency response go. They generally are, at least to the point that the differences between the two channels are smaller than all the other sources of error. I mean, I can only measure the series resistor to 0.1 ohms. That's 0.5 %. The sound card channels will be far more identical than that. Given all of the above, I wouldn't call the simple impedance tester a precision test tool. Still, it works and is precise enough to detect tiny variations in impedance - though you can't really say it is accurate in an absolute sense. As an example of how well the filtering tames the noise, here's the impedance of an eight ohm speaker. This is a little 2 inch, low power speaker I had kicking around in the junk drawer. 8 ohm speaker impedance plot Nice and smooth, and clear enough to see a couple of little resonance bumps at around 800 Hz and 2.5kHz. At any rate, that's the math and the software behind the simple impedance tester. The hardware is easier to explain, but I'll do that another time.
CommonCrawl
Intervening in gun markets: an experiment to assess the impact of targeted gun-law messaging Greg Ridgeway1, Anthony A. Braga2, George Tita3 & Glenn L. Pierce4 Journal of Experimental Criminology volume 7, pages 103–109 (2011)Cite this article The objective of this study was to assess whether targeting new gun buyers with a public safety message aimed at improving gun law awareness can modify gun purchasers' behaviors. Between May 2007 and September 2008, 2,120 guns were purchased in two target neighborhoods of the City of Los Angeles. Starting in August 2007, gun buyers initiating transactions on odd-numbered days received a letter signed by prominent law enforcement officials, indicating that law enforcement had a record of their gun purchase and that the gun buyer should properly record future transfers of the gun. The letters arrived during buyers' 10-day waiting periods, before they could legally return to the store to collect their new gun. Subsequent gun records were extracted to assess the letter's effect on legal secondary sales, reports of stolen guns, and recovery of the gun in a crime. An intent-to-treat analysis was also conducted as a sensitivity check to remedy a lapse in the letter program between May and August 2007. The letter appears to have no effect on the legal transfer rate or on the short-term rate of guns subsequently turning up in a crime. However, we found that the rate at which guns are reported stolen for those who received the letter is more than twice the rate for those who did not receive the letter (p value = 0.01). Those receiving the letter reported their gun stolen at a rate of 18 guns per 1,000 gun-years and those not receiving the letter reported their gun stolen at a rate of 7 guns per 1,000 gun-years. Of those receiving the letter, 1.9% reported their gun stolen during the study period compared to 1.0% for those who did not receive the letter. The percentage of guns reported stolen in these neighborhoods is high, indicating a high rate of true gun theft, a regular practice of using stolen-gun reports to separate the gun buyer from future misuse of the gun, or some blend of both. Simple, targeted gun law awareness campaigns can modify new gun buyers' behaviors. Additional follow-up or modifications to this initiative might be needed to impact the rate at which guns enter the illegal gun market and ultimately are recovered in crimes. It is against federal law to knowingly transfer a firearm to those prohibited from possessing a firearm. California law further requires that all gun transfers be conducted at licensed dealers and that stolen guns need to be reported. Nevertheless, current research evidence suggests that illegal diversions from legitimate commerce and theft are important sources of guns for criminals (Braga et al. 2002). While there may be promising avenues to control gun theft and informal transfers, we do not yet know whether it is possible to shut down illegal pipelines of guns to criminals or what the costs of such a shutdown would be to legitimate purchasers. As the U.S. National Research Council's Committee to Improve Research Information and Data on Firearms concluded, answering these questions is essential in understanding whether market-based approaches can reduce criminal access to guns and lower gun violence (Wellford et al. 2005). An interagency working group of law enforcement officials and academics collaborated on problem-oriented research to understand the workings of illegal gun markets in Los Angeles (Ridgeway et al. 
2008). Participants included the U.S. Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), California Department of Justice (CalDOJ), Los Angeles Police Department, Los Angeles City Attorney's Office, and others. The research suggested that one important flow of illegal guns to criminals in targeted areas involved legal purchasers who engaged in one or two "straw purchases" to provide guns to someone with a disqualifying criminal record. The interagency working group then developed a plan to deter legal purchasers from acquiring guns for criminals in the targeted areas. Among other strategies, the working group developed a mail campaign to target new gun buyers before they had an opportunity to transfer their firearm to someone else. They developed a letter that arrives in the new gun buyer's mailbox during the 10-day waiting period and reminds gun buyers of their legal obligations. The letter indicates that the firearm purchase has been documented and that, should it be used in a crime, the gun can and will be traced back to them as the first legal purchaser (see Appendix). The mail campaign was premised on the idea that straw purchasers can be deterred from illegally transferring guns. The working group posited that, because these individuals had no prior arrests or convictions that prohibited them from making a legal firearm purchase, they represented a target population that could be deterred easily. Specific-deterrence perspectives suggest that such individuals are more likely than those with criminal histories to be deterred by the prospect of increased risks for criminal sanctions (see Paternoster 1987, and Nagin 1998, for a review). A letter that clearly informs the straw purchaser of the risk of sanctions might generate that specific deterrence. In California, all firearm purchases must be conducted through a Federal Firearm Licensee (FFL)Footnote 1 and CalDOJ maintains a permanent record of those sales in the Automated Firearms System (AFS). To initiate the sale the gun dealer completes a Dealer Record of Sale (DROS), recording information about the gun, such as the make, model, manufacturer, serial number, caliber, contact information for the purchaser, and the date of the transaction.Footnote 2 AFS maintains a permanent record of subsequent events related to the gun. This includes reports of the gun's loss or theft, subsequent transfers of the gun, and the gun turning up as a "crime gun," a gun recovered by law enforcement that was in the possession of a prohibited possessor (felons, some violent misdemeanants, youths, those adjudicated mentally ill, and individuals with restraining orders) or recovered in connection with a crime. The working group identified two geographically distinct areas, LAPD's Devonshire and the 77th Street policing districts, for inclusion in the letter program. The 77th Street area is 12 mi.2 with 175,000 residents, and Devonshire is nearly 54 mi.2, with a total population of 250,000. Both areas have large numbers of residents who are legally buying guns that are ultimately being recovered as crime guns in the possession of others. Regardless of where in California the gun transaction occurred, if a purchaser's residential ZIP code was in either Devonshire or the 77th Street areaFootnote 3 then CalDOJ sent the Los Angeles City Attorney's Office details of the gun transactions on the following day. Between May 2007 and September 2008, 2,120 gun purchases were initiated in the two target areas. 
Starting in August 2007, letters were mailed to those potential gun buyers who initiated their gun purchase on an odd-numbered day. Note that this means that we have data on guns purchased between May 2007 and August 2007, but no letters were sent during this period. The study period ended in June 2009 by which time 878 gun buyers received a letter. The control cases consist of those who did not receive the letter. Because the letter was not sent on odd-numbered days between May and August 2007 due to a lapse in the letter program following the election of a new California attorney general, there is a validity threat that sales before August 2007 differ from those after August 2007. An analysis that simply excludes the 410 guns purchased during this period, however, would result in an underpowered analysis (power < 0.10). Instead we also conducted, as a sensitivity check, an analysis that considered guns purchased on any odd-numbered day, including those purchased between May and August 2007 as treated cases, and all guns purchased on even numbered days as control cases. This is an intent-to-treat analysis, common in randomized trials with lapses in compliance, that preserves some power (since it supplies 202 control cases to improve precision of the control group rates) but biases the treatment effect toward 0, resulting in a more conservative assessment. For each gun we computed an exposure time, the duration of time the gun was owned by the gun purchaser. For 94% of the gun buyers this was the time between their purchase date and the end of the study period. For the remainder, this was the time between their purchase date and when they transferred their gun, reported their gun stolen, or law enforcement recovered their gun. The exposure time ranged between a few days and 25 months, but 80% of the exposure times were between 6 and 23 months. To measure the effect of the letter, we estimated the rate ratio, RR. The rate ratio computes the rate at which incidents occur per year of exposure for those who received the letter relative to the incidence rate per year of exposure for those who did not receive the letter. For example, the RR for the letter's effect on reports of stolen guns is: $$ RR = \frac{{\frac{\text{number of stolen guns with letter}}{{\text{total years of exposure for those with the letter}}}}}{{\frac{\text{number of stolen guns with no letter}}{{\hbox{total years of exposure for those without the letter}}}}} $$ The RR can be estimated with Poisson regression. The purchase location is predictive of a gun's future (Wintemute et al. 2005). Since letters were sent on odd-numbered days, the purchase location is uncorrelated with receipt of the letter. Therefore, including dealer indicators reduces the model's residual error and improves the precision of the RR estimator without biasing the treatment effect estimate. We included indicator variables for the four largest gun dealers, each with more than 100 transactions during the study period. The model has the form: $$ {Y_i}\sim Poisson({\lambda_i}),\,\,\,\log ({\lambda_i}) = \log ({\hbox{exposure time}}) + {\beta_0} + {\beta_1}letter + {\beta_2}FFL1 + {\beta_3}FFL2 + {\beta_4}FFL3 + {\beta_5}FFL4 $$ where Y i is the outcome indicator (i.e., stolen, crime gun) and the four FFL variables are indicators for a gun being sold from a specific dealer. exp(β 1) equals RR. We found that those who received the letter reported their guns stolen at a significantly higher rate (p value = 0.01). Table 1 shows the calculation details. 
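The model above can be fit with standard statistical software. As an illustration only (this is not the authors' code; the data frame below is simulated purely to show the offset-Poisson specification, with a categorical dealer variable standing in for the four FFL indicators), the regression could be run in Python with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated data, one row per gun: outcome indicator, letter indicator,
# selling dealer, and exposure time in years. Values are made up.
rng = np.random.default_rng(0)
guns = pd.DataFrame({
    "stolen": rng.binomial(1, 0.015, size=2120),
    "letter": rng.binomial(1, 0.5, size=2120),
    "dealer": rng.choice(["FFL1", "FFL2", "FFL3", "FFL4", "other"], size=2120),
    "exposure_years": rng.uniform(0.5, 2.0, size=2120),
})

# log(exposure) enters as an offset, so exp(beta_letter) is the rate ratio.
fit = smf.glm(
    "stolen ~ letter + C(dealer)",
    data=guns,
    family=sm.families.Poisson(),
    offset=np.log(guns["exposure_years"]),
).fit()

print(np.exp(fit.params["letter"]))  # estimated rate ratio RR = exp(beta_1)
```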
The adjusted rates account for the purchase location. The rate ratio of 2.6 indicates that those who were sent the letter reported their gun stolen at more than twice the rate of those who did not receive the letter. Stolen guns were reported on average 6 months after purchase; those receiving the letter reported the gun stolen 20 days sooner on average, but there are too few stolen guns for the difference to be statistically significant. Table 1 Relative risks of stolen guns and crime guns As noted previously, between May 2007 and August 2007, we collected data on the guns, but the letter distribution lapsed until August 2007. As a sensitivity check, we conducted an intent-to-treat analysis that regarded all odd days, even those before August 2007, as "letter" days and found that the significant finding still held (p value = 0.05). We also posited that guns purchased by buyers targeted with the letter would be less likely to become crime guns, but the rates at which they became crime guns were statistically indistinguishable: 17 per 1,000 guns per year for letter recipients and 16 per 1,000 guns per year for those not receiving the letter. Roughly 1% of the guns were recovered as crime guns during the study period. Our ability to detect an impact on the likelihood of becoming crime guns may be limited by the 22-month post-intervention period. In 2005 and 2006, only 13% of crime guns in California were recovered within 2 years of the first retail sale. The results of this study suggest that in some respects, legal gun purchasers do respond to market-based interventions. Gun-law messaging increased the likelihood that new gun owners reported thefts of recently purchased firearms. Enhanced reporting of gun theft will improve our understanding of the role of theft in supplying criminals with firearms. Official data systems that record stolen guns are well known to be limited by the problem that many stolen guns are never reported to the authorities (Kennedy et al. 1996) and much of what we know about stolen guns is derived from one-time or occasional surveys of criminals (e.g., Wright and Rossi 1994). Unfortunately, the more complete reporting of theft and the notification to recent purchasers that they would be held responsible for making legal subsequent transfers did not impact the short-term likelihood that these guns were recovered in crime by law enforcement agencies. The available data do not allow us to determine whether recently purchased guns that were reported stolen were all actually stolen or, as some in the working group suggested, some proportion were falsely being reported as stolen to break the paper trail between the straw purchaser and the actual criminal owner of the gun. It is possible that the gun letter initiative could have some longer-term impacts as particular neighborhoods are saturated with letters and casual straw purchasers decide not to make additional purchases. Given the short-term impacts on gun-purchaser behavior, longer-term study of the gun letter initiative seems to be warranted. Calif. Penal Code §§12072[a][5], 12072[d] Calif. Penal Code §12077[b] We defined the 77th Street area as ZIP codes 90001, 90003, 90037, 90043, 90044, 90047, and 90062 and Devonshire as ZIP codes 91401, 91402, 91403, 91405, 91406, 91411, 91423, and 91436, excluding the parts that extend beyond the city limits, beyond the jurisdiction of the Los Angeles city attorney. Braga, A. A., Cook, P. J., Kennedy, D. M., & Moore, M. H. (2002). The illegal supply of firearms. In M. 
Tonry (Ed.), Crime and justice: A review of research, 29 (pp. 319–352). Chicago: University of Chicago Press. Kennedy, D. M., Piehl, A. M., & Braga, A. A. (1996). Youth violence in Boston: Gun markets, serious youth offenders, and a use-reduction strategy. Law and Contemporary Problems, 59, 147–196. Nagin, D. S. (1998). Criminal deterrence research at the outset of the twenty-first century. In M. Tonry (Ed.), Crime and justice: A review of research, 23 (pp. 1–42). Chicago: University of Chicago Press. Paternoster, R. (1987). The deterrent effect of the perceived certainty and severity of punishment: a review of the evidence and issues. Justice Q, 4, 173–218. Ridgeway, G., Pierce, G. L., Braga, A. A., Tita, G., Wintemute, G., & Roberts, W. (2008). Strategies for disrupting illegal firearm markets: A case study of Los Angeles. TR-512-NIJ Santa Monica: RAND Corporation. Available at http://www.rand.org/pubs/technical_reports/TR512/. Wellford, C., Pepper, J. V., & Petrie, C. (Eds.). (2005). Firearms and violence: A critical review. Committee to Improve Research Information and Data on Firearms. Washington, DC: National Academies Press. Wintemute, G. J., Cook, P. J., & Wright, M. A. (2005). Risk factors among handgun retailers for frequent and disproportionate sales of guns used in violent and firearm related crimes. Injury Prevention, 11, 357–363. Wright, J. D., & Rossi, P. H. (1994). Armed and considered dangerous: A survey of felons and their firearms (2nd ed.). Hawthorne: Aldine de Gruyter. Peter Shutan, assistant supervising attorney for the gang division of the Los Angeles city attorney's office, coordinated the letter campaign with the California Department of Justice's Firearm Division. Paul Seave, director of the Office of Gang and Youth Violence Policy for California, was instrumental in engaging key parties and facilitating the letter program. Denise Stearns at ATF's Southern California Regional Crime Gun Center analyzed and recorded the histories of each of the guns in the study. National Institute of Justice grant 2001-IJ-CS-0028 funded this research. This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. RAND Corporation, Santa Monica, CA, USA Greg Ridgeway Rutgers University, Newark, NJ and Harvard University, Cambridge, MA, USA Anthony A. Braga University of California, Irvine, CA, USA George Tita Northeastern University, Boston, MA, USA Glenn L. Pierce Correspondence to Greg Ridgeway. Appendix: The gun letter Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. Ridgeway, G., Braga, A.A., Tita, G. et al. Intervening in gun markets: an experiment to assess the impact of targeted gun-law messaging. J Exp Criminol 7, 103–109 (2011). https://doi.org/10.1007/s11292-010-9113-5 Crime guns Illegal gun markets Stolen guns
CommonCrawl
An ensemble-based convolutional neural network model powered by a genetic algorithm for melanoma diagnosis Part of a collection: Special Issue on Effective and Efficient Deep Learning Based Solutions S. I. : Effective and Efficient Deep Learning Eduardo Pérez1,2 & Sebastián Ventura ORCID: orcid.org/0000-0003-4216-63781,2,3 Neural Computing and Applications (2021)Cite this article Melanoma is one of the main causes of cancer-related deaths. The development of new computational methods as an important tool for assisting doctors can lead to early diagnosis and effectively reduce mortality. In this work, we propose a convolutional neural network architecture for melanoma diagnosis inspired by ensemble learning and genetic algorithms. The architecture is designed by a genetic algorithm that finds optimal members of the ensemble. Additionally, the abstract features of all models are merged and, as a result, additional prediction capabilities are obtained. The diagnosis is achieved by combining all individual predictions. In this manner, the training process is implicitly regularized, showing better convergence, mitigating the overfitting of the model, and improving the generalization performance. The aim is to find the models that best contribute to the ensemble. The proposed approach also leverages data augmentation, transfer learning, and a segmentation algorithm. The segmentation can be performed without training and with a central processing unit, thus avoiding a significant amount of computational power, while maintaining its competitive performance. To evaluate the proposal, an extensive experimental study was conducted on sixteen skin image datasets, where state-of-the-art models were significantly outperformed. This study corroborated that genetic algorithms can be employed to effectively find suitable architectures for the diagnosis of melanoma, achieving in overall 11% and 13% better prediction performances compared to the closest model in dermoscopic and non-dermoscopic images, respectively. Finally, the proposal was implemented in a web application in order to assist dermatologists and it can be consulted at http://skinensemble.com. Melanoma is the most serious form of skin cancer that begins in cells known as melanocytes. Melanoma has an increasing incidence, where just in Europe were estimated 144,200 cases and 20,000 deaths in 2018 [1], whereas in USA, 106,110 new cases of invasive melanoma will be diagnosed (62,260 in men and 43,850 in women) and 7180 deaths are expected in 2021 (4600 men and 2580 women) [2]. The lesion is first diagnosed through an initial clinical screening, and then potentially through a dermoscopic analysis, biopsy and histopathological examination [3]. Despite the expertise of dermatologists, early diagnosis of melanoma remains a challenging task since it is presented in many different shapes, sizes and colors even between samples in the same category [4]. Providing a comprehensive set of tools is necessary for simplifying diagnosis and assisting dermatologists in their decision-making processes [3]. Several automated computer image analysis strategies have been used as tools for medical practitioners to provide accurate lesion diagnostics, including descriptor-based methods [5, 6] and convolutional neural networks (CNNs) [3, 7, 8]. Descriptor-based methods require the previous extraction of handcrafted features [9], which rely on the expertise of dermatologists and introduce a margin of error. 
By contrast, CNN models can automatically learn high-level features from raw images [3], thus allowing for the development of applications in a shorter timeframe. Furthermore, Nasr-Esfahani et al. [10] showed that CNN models can overcome handcrafted features-based methods. The authors obtained 7% and 19% better sensitivity performance compared to color-based descriptor and texture-based descriptor, respectively. Recently, Brinker et al. [11] demonstrated that CNN models can match the prediction performance of 145 dermatologists. CNN models have shown to be effective in solving several complex problems [12, 13]. However, they still present several issues which hamper their accuracy in diagnosing skin conditions. CNN models can learn from a wide variety of nonlinear data points. As such, they are prone to overfitting on datasets with small numbers of samples per category, thus attaining a poor generalization capacity. It is noteworthy that so far, most of the existing public skin datasets only encompass a few hundreds or thousands of images. This can limit the learning capacity of CNN models. On the other hand, CNN models are sensitive to some characteristics in data, such as large inter-class similarities and intra-class variances, variations in viewpoints, changes in lighting conditions, occlusions, and background clutter [14]. These days, the majority of skin datasets are made up of dermoscopic images reviewed by expert dermatologists. However, bear in mind that there is an increased tendency to collect images taken by common digital cameras [15]. The above can reduce invasive treatments and their associated expenses in addition to augmenting the development of modern tools for cheaper and better melanoma diagnoses. Finally, CNN models are approximately invariant with regard to small translations to the input, but they are not rotation, color or lighting-invariant [16, 17]. Invariance is an important concept in the area of image recognition. It means that if you take the input and transform it, the representation you get is the same as the representation of the original. Due to the vast variability in the morphology of moles, this is an important issue to resolve in order to attain a more effective melanoma diagnosis, thus allowing a model to detect rotations or changes to proportion and adapt itself in a way that the learned representation is the same. Several techniques can be applied to overcome some of these issues, but the most proven include data augmentation [18], transfer learning [3], ensemble learning [19] and, more recently, generative adversarial networks [20] and multi-task learning [21]. However, in most cases, researchers rely on their expertise to select which techniques to apply, and there is no specific pattern to follow that will definitively produce a model with a high level of reliability. Furthermore, most research does not follow a standard experimental study and only includes a limited number of datasets. As a consequence of the above, in this work, a novel approach for diagnosing melanoma via the use of images is proposed. First, the proposal is inspired by ensemble learning and is built via a genetic algorithm. The genetic algorithm is designed to find the optimal members of such ensembles while considering the entire training phase of the possible members. In addition, the abstract features of all models in the ensemble are merged, resulting in an additional prediction. Next, these individual predictions are combined and a final diagnosis is made. 
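As a rough illustration of this final combination step (not the authors' exact implementation), the per-model melanoma probabilities can simply be averaged and thresholded; the probability values below are hypothetical placeholders for three ensemble members plus the extra prediction.

import numpy as np

# Hypothetical melanoma probabilities for one image: three ensemble
# members plus the extra prediction from the merged abstract features.
member_probs = np.array([0.81, 0.64, 0.72, 0.77])

# Soft voting: average the probabilities and threshold at 0.5.
final_prob = member_probs.mean()                 # 0.735
diagnosis = 'melanoma' if final_prob >= 0.5 else 'benign'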
This approach can be seen as a double ensemble, in which all predictive components are double related. The aim is to find the models that best contribute to the ensemble, rather than the individual level. As a result, the state of each CNN model that best trains, generalizes and suits with the other CNN models is selected [22]. In this manner, the training process is implicitly regularized, which has shown better convergence, mitigates the overfitting of the model and improves the generalization performance [19, 23]. To the best of our knowledge, this is the first attempt at intelligently constructing ensembles of CNN models by following a genetic algorithm to better solve the challenge of diagnosing melanoma through the use of images. Second, a novel lesion segmentation method, which is capable of efficiently obtaining reliable segmentation masks without prior information through the use of just a CPU, is applied. Bear in mind that state-of-the-art biomedical segmentation methods commonly require a prior training stage and the use of GPUs. In addition, common techniques such as transfer learning and data augmentation have been applied to further improve performance. To evaluate the suitability of this proposal, an extensive experimental study was conducted on sixteen melanoma-image datasets, enabling a better analysis of the effectiveness of the model. The results showed that the proposed approach achieved promising results and was competitive compared to six state-of-the-art CNN models which have previously been used for diagnosing melanoma [3, 10, 24,25,26]. These works were summarized in Pérez et al. [8], and the most relevant are explained in the next section. To perform a fair comparison, the above architectures were assessed in Sect. 5.5 by using the same large number of datasets and a quantification of the results was included. Finally, a web application was developed in order to support dermatologists in decision making. The rest of this work is organized as follows: Sect. 2 briefly presents the state of the art in solving melanoma diagnosis problems mainly by using CNN models; Sect. 3 describes the proposed ensemble convolutional architecture; Sect. 4 describes the genetic algorithm application; the analysis and discussion of the results are portrayed in Sect. 5; and finally, Sect. 6 outlines conclusions and future works. The popularity of CNN models increased when AlexNet [27] won the well-known ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012, reducing the top-5 labels error rate from 26.1% to 15.3%. Since then, CNN models are widely applied in image classification tasks [28,29,30]. However, most authors apply other sophisticated techniques over CNN models to achieve a better performance in melanoma diagnosis, such as segmentation [31], data augmentation [18], transfer learning techniques [3], and CNN-based ensembles [19]. Data augmentation is employed to add new data to the input space, which helps reducing overfitting [32] and obtaining transformation-invariant models [17]. This technique is usually performed by means of random transformations [33]. Also, most of the datasets available for melanoma diagnosis lack of balance between the categories, so data augmentation can help to tackle the imbalance issue [34]. For example, Esteva et al. [3] showed the suitability of CNN models as a powerful tool for melanoma diagnosis. The authors augmented the images by a factor of 720\(\times\) using basic transformations, such as rotation, flips and crops. 
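Transformations of this kind are straightforward to express with standard tooling; the snippet below is only an illustrative sketch using Keras' ImageDataGenerator, and the parameter values are placeholders rather than those used by Esteva et al. or in the present work.

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation pipeline: random rotations, flips and a mild
# zoom (a stand-in for crops); color-based transforms are avoided so that
# the color information relevant to melanoma diagnosis is preserved.
augmenter = ImageDataGenerator(rotation_range=180,
                               horizontal_flip=True,
                               vertical_flip=True,
                               zoom_range=0.1,
                               fill_mode='reflect')

# Dummy batch standing in for real skin images (8 RGB images, 224x224).
images = np.random.rand(8, 224, 224, 3)
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

# flow() yields randomly transformed copies of the batch on the fly.
augmented_images, augmented_labels = next(augmenter.flow(images, labels, batch_size=8))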
Also, they compared the performance of one CNN model to 21 board-certified dermatologists on biopsy-proven clinical images. The results showed that the CNN model achieved a performance on par with experts. Pérez et al. [8] showed that 12 CNN models achieve better performance when applying data augmentation. For example, Xception [35] increased its average performance by 76%. Transfer learning has been successfully applied in image classification tasks [3, 30, 36, 37]. For example, Shin et al. [30] applied it specifically in thoraco abdominal lymph node detection and interstitial lung disease classification, in both cases the weights obtained after training with ImageNet were beneficial. Esteva et al. [3] used Google's InceptionV3 [38] architecture pretrained on ImageNet. The authors removed the final classification layer and then they re-trained with 129,450 skin lesions images. Pérez et al. [8] studied the impact of applying transfer learning in skin images, where MobileNet [39] increased its average performance by 42%. On the other hand, ensemble learning [40] has also shown to be effective in solving complex problems [41, 42]. The combination of several classifiers built from different hypothesis spaces can reach better results than a single classifier [43]. For example, Mahbod et al. [19] proposed an ensemble-based approach where two different CNN architectures were trained with skin lesion images. The results showed to be competitive compared to the state-of-the-art methods for melanoma diagnosis. Harangi et al. [23] proposed an ensemble composed by the well-known CNN architectures AlexNet [27], VGG and InceptionV1 [44]. The ensemble was assessed on the International Symposium on Biomedical Imaging (ISBI) 2017 challenge and obtained very competitive results. However, the ensemble models are usually built trusting in prior knowledge. In order to achieve better performance, not only new strategies to train the models have been developed, but also efforts have been made to improve the input data. Skin lesion segmentation plays an important role in melanoma diagnosis. It isolates the region of interest and significantly improves the performance of the model. This is a highly complex task, and it is important because some areas not related to the lesion can lead CNN models to misclassify samples. However, some authors decided not to use some of these preprocessing techniques. Mahbod et al. [19] ignored complex preprocessing steps, as well segmentation methods, but applied basic data augmentation techniques to prevent overfitting. Nevertheless, if irrelevant information is removed from the image, the models could be able to achieve better performance. Since 2016, The International Skin Imaging CollaborationFootnote 1 (ISIC) project annually organizes a challenge in which more than 180 teams have already participated. From 2016 to 2018, there was a special task about lesion segmentation. In ISIC-2016, considering the top performances, the use of segmentation obtained 8% better sensitivity compared to its non-use. Several segmentation methods can be found. For example, Ronneberger et al. [45] designed a CNN model (U-Net) for biomedical image segmentation. The model relies on data augmentation to use the available labeled images more efficiently. The U-Net architecture achieved high performance on different biomedical segmentation applications, such as neuronal structures in electron microscopic recordings and skin lesion segmentation [46, 47]. 
However, U-Net requires a costly training process using GPU and more important, a prior knowledge from data already segmented by expert dermatologists. Furthermore, Alom et al. [48] proposed a recurrent residual U-Net model (R2U-Net). The model was tested on blood vessel segmentation in retinal images, skin cancer segmentation, and lung lesion segmentation. The results showed better performance compared to the U-Net model. Huang et al. [31] proposed a new segmentation method based on end-to-end object scale-oriented fully convolutional networks. The authors achieved 92.5% of sensitivity and outperformed all CNN models in their study, which was 1.4% better compared to the winner in ISIC-2016. Considering the above, it would be interesting to design a deep learning model that combines features from different approaches such as segmentation, data augmentation, transfer learning and ensemble learning. We hypothesized that evolutionary optimization methods could be an effective approach to find the optimal combination of CNN models [49, 50], considering all states from all the models. Evolutionary methods have proven to be useful in solving many complex problems, finding and optimizing architectures of neural networks [51], and mining imbalanced data [52]. Regarding segmentation methods, in this work it is applied an extension of the Chan-Vese segmentation algorithm [53] and it is evaluated using specialized datasets. To augment data, it is important to perform a data augmentation both on training and test phases [18], which can increase the performance significantly. Ensemble-based convolutional architecture First of all, the source datasets were preprocessed following an extension of the Chan-Vese segmentation algorithm, as shown in Fig. 1. This algorithm is designed to segment objects without clearly defined boundaries and is based on techniques of curve evolution, Mumford-Shah [54] functional for segmentation and level sets. Chan-Vese has been previously applied in skin image segmentation [55], being an effective starting point to accurately segment skin images. First, Chan-Vese is applied on each input image and as a result, a mask with positive and negative values is obtained. After that, the positive pixels within 40% of the center are selected (cluster of pixels P). After using several recognized skin lesion detection applications such as SkinVisionFootnote 2, we realized that most of them demand that images must be centered in the lesion in order to perform an accurate diagnosisFootnote 3 [56]. In addition, after reviewing a large number of skin image datasets, it was noticed that most images are centered in the lesion, which is an advantage when applying segmentation. Third, all positive clusters (Q) that intersect P are merged, obtaining a new segmentation mask M, \(M = Q_1 \cup Q_2 \cup \text {...} \cup Q_n, P \cap Q_i \ne \emptyset\). Finally, the segmented image is obtained after applying the mask M on the original image. In Sect. 5, the proposed segmentation method is compared to other state-of-the-art biomedical segmentation methods to corroborate its effectiveness. Next, the segmented images are used as input to the architecture described below. Preprocessing steps applied before training. U-Net score is the average between Dice coefficient and Jaccard similarity, which have been used before as evaluation metrics in the segmentation task of the ISIC-2018 contest. Input image taken from the DERM-LIB dataset Example architecture of the proposed ensemble model. 
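For concreteness, the cluster-merging procedure described above can be sketched as follows. This is a schematic re-implementation based solely on the textual description (using scikit-image's chan_vese and SciPy's connected-component labelling); the authors' actual code, the exact interpretation of the "40% of the center" window, and any post-processing may differ.

import numpy as np
from scipy import ndimage
from skimage.color import rgb2gray
from skimage.segmentation import chan_vese

def segment_lesion(image_rgb):
    """Schematic version of the extended Chan-Vese preprocessing."""
    gray = rgb2gray(image_rgb)

    # Step 1: Chan-Vese returns a binary mask (positive/negative regions).
    cv_mask = chan_vese(gray)

    # Step 2: keep positive pixels inside a central window covering 40% of
    # each image dimension (one reading of "40% of the center"); this is P.
    h, w = gray.shape
    center = np.zeros_like(cv_mask)
    ch, cw = int(0.4 * h), int(0.4 * w)
    center[(h - ch) // 2:(h + ch) // 2, (w - cw) // 2:(w + cw) // 2] = True
    seed = cv_mask & center

    # Step 3: merge every positive connected component Q_i intersecting P
    # to obtain the final mask M.
    components, _ = ndimage.label(cv_mask)
    keep = np.unique(components[seed])
    keep = keep[keep > 0]
    mask = np.isin(components, keep)

    # Step 4: apply the mask M to the original image.
    return image_rgb * mask[..., None]

# Usage (hypothetical path): segmented = segment_lesion(skimage.io.imread(path))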
Figure shows how a new ensemble is obtained by selecting those CNN models represented by \(g^j_k\) \(>0\). In (a) the chromosome contains the CNN models and their respective epochs; b the ensemble is composed of DenseNet (epoch 33), NASNet (epoch 32) and Xception (epoch 11); c a late fusion approach is used to merge all the representations Let us say \(\Phi\) is a model with m independent CNN models, which learn the representations from the same feature space, and an extra prediction block (stacking of dense layers) that yields an extra prediction. The prediction block is obtained by first concatenating all the representations learned by the individual CNN models. Figure 2 shows a specific example of the proposal. An image i is passed as input to the jth CNN model of \(\Phi\), and a chromosome indicates which tuples of CNN models and their weights determine the ensemble's architecture. Each CNN outputs the learned representation (\(r^{j}_{i}\)) and a partial prediction \(\hat{o}^{j}_{i}\) for the label of this sample by considering the weights required in the chromosome, e.g., the weights from DenseNet trained until epoch 33 (\(g_1=33\)). Thereafter, the representations \(r^{j}_{i}\) learned by each CNN model are flattened, then concatenated and then passed to the prediction block of the model. A late fusion approach is used to concatenate all the representations learned by the CNN models (Fig. 2c), which has proven to obtain better performance compared to merely combining the individual predictions [57]. Nevertheless, in Sect. 5.5 the prediction block is assessed versus its non-use. Thereafter, we freeze the weights from the individual CNN models and only the prediction block is trained during 20 epochs, providing an additional prediction (\(\hat{o}^{m+1}_{i}\)) for the label of the sample. In this work, the prediction block is composed of Dense(512 ReLUs) \(\rightarrow\) Dense(256 ReLUs) \(\rightarrow\) Dense(128 ReLUs) \(\rightarrow\) Dense(1 unit - sigmoid). Then, \(\Phi\) predicts the label of an image by using a soft-voting procedure based on the individual predictions. Regarding efficiency, each CNN model was trained just once and its weights along every epoch of interest were stored in a hard drive. Once the prediction for a given training sample is computed, the losses produced by each CNN model and the prediction block of \(\Phi\) are calculated. The loss produced by the jth CNN model of \(\Phi\) on the ith training image (denoted as \(\mathcal {L}^{j}(i)\)) is computed by means of a binary cross-entropy. The goal is to iteratively minimize the prediction errors on the training samples. Mini-batch Gradient Descent (MGD) [58] can be applied to solve this optimization problem, since the first derivative of the loss function is well-defined. This algorithm consists in randomly splitting the training set in small batches in order to calculate the model error. The above method has several advantages, such as the model update frequency is higher than batch gradient descent which allows for a more robust convergence, avoiding local minima; batch-based updates provide a computationally more efficient process than stochastic gradient descent and the split in small batches allows the efficiency of not having all training data in memory. Ensemble model via genetic algorithm In this work, a genetic algorithm (GA) was designed to find the optimal members of an ensemble model. Next, the different components and steps of the proposed method are explained in detail. 
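As a reference point for the chromosome encoding that follows, the late-fusion architecture described above can be sketched in Keras. The two member networks shown are placeholders (in the proposal the members and their epoch-specific weights are chosen by the genetic algorithm), and only the dense prediction block is left trainable, as in the text.

from keras.applications import DenseNet201, Xception
from keras.layers import Concatenate, Dense, Flatten, Input
from keras.models import Model

inp = Input(shape=(224, 224, 3))

# Placeholder ensemble members; weights=None keeps the sketch self-contained,
# whereas the proposal loads the epoch-specific weights stored on disk.
members = [DenseNet201(include_top=False, weights=None, input_tensor=inp),
           Xception(include_top=False, weights=None, input_tensor=inp)]

# Freeze the member networks: only the prediction block is trained (20 epochs).
for member in members:
    for layer in member.layers:
        layer.trainable = False

# Late fusion: flatten and concatenate the learned representations.
fused = Concatenate()([Flatten()(member.output) for member in members])

# Prediction block from the text: Dense(512) -> Dense(256) -> Dense(128) -> sigmoid.
x = Dense(512, activation='relu')(fused)
x = Dense(256, activation='relu')(x)
x = Dense(128, activation='relu')(x)
extra_prediction = Dense(1, activation='sigmoid')(x)

ensemble = Model(inputs=inp, outputs=extra_prediction)
ensemble.compile(optimizer='sgd', loss='binary_crossentropy')

At prediction time, the output of this block is averaged with the members' own sigmoid outputs by soft voting, as described above.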
Individuals and chromosome codification Let say that the population of the GA has q individuals \(\{I_1, I_2, \ldots , I_q\}\), where the j-th individual (\(I_j\)) has a chromosome encoded as a list of integer values (from 0 to 150), as shown in Fig. 2a. A chromosome is composed of K genes, where \(g^j_k\) represents the k-th gene of the individual \(I_j\). Each gene index k is independent of the others and is related to a specific CNN model. Its value means the epoch of the CNN model to be considered as part of the ensemble; if \(g^j_k>0\), the CNN model is selected; otherwise, it is not selected. Consequently, using this encoding, each individual of the population represents a full solution to the problem. Fitness function The fitness function used to evaluate the individual \(I_j\) can be calculated as $$\begin{aligned} f_j= \frac{1}{n \times p}\sum _{i=1}^{n} \left[ \mathcal {L}_{m+1}^{j}(i) + \sum _{k=1}^{m} \mathcal {L}_k^{j}(i)\right] \,\textit{ if }\, g^j_k>0, \end{aligned}$$ where \(\mathcal {L}_k^{j}(i)\) calculates the loss values for the ith image in the CNN model with index k, and \(\mathcal {L}_{m+1}^{j}(i)\) means the loss value of the prediction block; p means the number of times that \(g^j_k>0\) is fulfilled; m represents the total number of CNN models encoded in the chromosome; and n is the total number of images. In summary, for each individual, the fitness function calculates the average loss value between the chosen CNN models and the prediction block; lower average means a more desirable individual. Creation of the initial population Maintaining a diverse population, especially in the early iterations of the algorithm, is crucial to ensure a good exploration of the search space. The training epochs (e) and the CNN models (m) determine the total number of possible combinations in the search space. To guarantee the diversity, the chromosome of each individual \(I_j\) of the population is randomly created, but repeated individuals are not allowed. In this manner, it is possible to avoid the early convergence of the method to local minima. In this work, we have considered an individual as repeated when all genes \(g_k^j\) in the chromosome are identical. For example, let us say \(g_k^A\) and \(g_k^B\) are genes from chromosomes A and B, respectively; k means the index related to a CNN model (e.g., DenseNet, InceptionV3 and Xception) and m means the maximum number of CNN models in each ensemble. A and B are identical if for every \(k={1,2,...,m}\), \(g_1^A=g_1^B\), \(g_2^A=g_2^B\), \(g_k^A=g_k^B\),..., \(g_m^A=g_m^B\). Having said the above, the individuals are repeated if they use a different CNN in any of their genes, i.e., two models are considered different if they have different architectures or the same architecture but with different weights. Parent selection The parents are selected by a tournament selection procedure to create the intermediate population [59]. A tournament size equal 2 was used in this work; the smaller the tournament size, the lower the selection pressure and the search space can be widely considered. To achieve this, two individuals are randomly selected. Then, the individuals are compared and the best individual is selected with replacement, i.e., this individual could be selected in further rounds. This process is repeated until the number of individuals is completed. As the generations increase, the algorithm can focus more on promising regions of the search space. Genetic operators Figure 3 shows the genetic operators applied. 
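A compact sketch of this encoding and of the fitness computation, together with the crossover and mutation operators detailed in the next paragraph, is given below. The per-epoch loss values are assumed to be precomputed and stored for every CNN model, as the text indicates, and all names are illustrative.

import random

# One integer gene per CNN model; the gene value is the training epoch whose
# stored weights join the ensemble, and 0 means the model is not selected.
NUM_MODELS, MAX_EPOCH = 6, 150

def random_chromosome():
    return [random.randint(0, MAX_EPOCH) for _ in range(NUM_MODELS)]

def fitness(chromosome, member_losses, block_loss):
    # member_losses[k][e] is assumed to hold the mean training loss of CNN
    # model k at epoch e (averaged over the n images); block_loss is the mean
    # loss of the prediction block for this ensemble.  Lower is better.
    selected = [(k, e) for k, e in enumerate(chromosome) if e > 0]
    p = len(selected)
    if p == 0:
        return float('inf')          # an empty ensemble is not a valid solution
    total = block_loss + sum(member_losses[k][e] for k, e in selected)
    return total / p

def flat_crossover(parent_a, parent_b):
    # Gene-wise flat crossover adapted to integers: each child gene is drawn
    # uniformly from the interval spanned by the parents' genes at that locus.
    return [random.randint(min(a, b), max(a, b))
            for a, b in zip(parent_a, parent_b)]

def one_point_mutation(chromosome):
    # Replace one randomly chosen gene by a new epoch drawn from [0, MAX_EPOCH].
    child = list(chromosome)
    child[random.randrange(NUM_MODELS)] = random.randint(0, MAX_EPOCH)
    return child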
First, a custom Flat crossover [60, 61] was performed with a crossover rate \(p_c\), where its general mechanism is adapted for an integer representation scheme. Flat crossover is applied in each pair of genes that are located in the same locus and represents the same CNN model. As a result, a random integer value is chosen from the interval \([g^1_{k}, g^2_{k}]\). Once the new offspring is generated, an one-point mutator operator is applied with a probability \(p_m\). The mutation operator randomly selects a gene, then, the value of epoch is changed by a new value selected from a range of possible epochs, e.g., [0; 150]. Finally, the offspring was generated. It is noteworthy that valid individuals are always obtained after performing these operators. Example of the genetic operators applied in the genetic algorithm Population update In this work, a generational elitist algorithm [62] was used to update the population passed from one generation to the next one. As a result, the best individual in the last generation is the best individual of the evolution. To achieve this, the population in each generation keeps all new individuals, as long as the best parent is not better than all the children. In such cases, the best parent replaces the worst child. The worst child is determined by sorting the individuals of the new population according to the fitness value. After replacing the worst child, the new population replaces the previous one. At the end of the algorithm, the best individual in the last generation will be the best ensemble. In this work, the number of individuals generated in each generation is 100 and the population size is kept constant in order to alleviate the computational cost of CNN. However, these values could be tuned depending on the context, including the possibility of increasing/decreasing them. Regarding efficiency, a map with each explored individual and its corresponding fitness value is cached. Integration of the genetic algorithm in the training phase Figure 4 shows how the training phase was conducted. First, the source images are preprocessed by using the method explained above. Second, given a set of CNN models the initial population of the GA is created. For example, if we have an individual with \(g_1^j=50\) and another one is \(g_1^{j+z}=51\) (\(z>0\)), the last one is obtained after training \(g_1^j\) for one more epoch. In general, each CNN model was trained just one time when needed and its weights along every demanded epoch were stored in a hard drive. In the worst case, we train each CNN model 150 epochs once. Third, each ensemble represented by a chromosome is trained for n epochs, bearing in mind that only the prediction block is updated. Then, the fitness of each individual of the current population is calculated by considering the loss obtained in the training images by the members of the ensemble. Four, the parents are selected, then, a crossover and a mutation are performed, and after that a new population is created. Five, the population of the GA is updated. Steps from three to five are repeated until one of the stopping criteria is satisfied. In this work, we have applied the most frequently used stopping criterion which is a specified maximum number of generations. In addition, we stopped the search when an individual had achieved the top performance. Finally, the ensemble model \(\Phi\) is obtained. This section describes the experimental study carried out in this work. 
First, the datasets and the experimental protocol are portrayed, and then, the experimental results and a discussion of them are presented. Table 1 shows a summary of the benchmark datasets. UDA [66], MSK [67], HAM10000 [65] and BCN20000 [63] datasets are included in the ISIC repository. The images are composed strictly of melanocytic lesions that are biopsy-proven and annotated as malignant or benign. Also, images from BCN20000 would be considered hard-to-diagnose and had to be excised and histopathologically diagnosed. PH2Footnote 4 [69] comprises dermoscopic images, clinical diagnosis and the identification of several dermoscopic structures. The Dermofit Image LibraryFootnote 5 [64] gathers 1,300 focal high-quality skin lesion images under standardised conditions. Each image has a diagnosis based on expert opinion and like PH2, it includes a binary segmentation mask that denotes the lesion area. MED-NODEFootnote 6 [68] collects 170 non-dermoscopic images from common digital cameras; this type of image is very important to prove the models with data from affordable devices. SDC-198Footnote 7 [70] contains 6,584 real-world images from 198 categories to encourage further research and its application in real-life scenarios. Finally, DERM7PTFootnote 8 [21] is a benchmark dataset composed of clinical and dermoscopic images, allowing to assess how different it is to use dermoscopic images versus images taken with digital cameras. Table 1 Summary of the benchmark datasets Only the images labeled as melanoma and nevus were considered, being in total 36,703 images. Most datasets present a high imbalance ratio (ImbR), up to ten times in the case of MSK-3, commonly hampering the learning process. The intra-class (IntraC) and inter-class (InterC) metrics show the average distances between images belonging to different classes, as well as between images belonging to the same class. Both metrics were computed using the Euclidean distance; each image i was represented as a vector. Then, the ratio (DistR) between these metrics showed that both distances are similar, which commonly indicates a high degree of overlapping between classes. Finally, the silhouette score (Silho) [71] was calculated, representing how similar an image is to its own cluster compared to other clusters. The results indicated that images were not well matched to their own cluster, and even samples belonging to different clusters are close in the feature space. Experimental settings Firstly, the proposed segmentation algorithm was applied over PH2 and DERM-LIB, where both datasets were manually segmented by expert dermatologists and they are commonly used as benchmarks due its quality [64, 69]. The aim was to find how much the method is close to expert segmentation. Also, we compared the segmentation algorithm to U-Net and R2U-Net, which are CNN architectures designed for biomedical image segmentation. Bear in mind that the above CNN architectures need prior training. In order to obtain the segmentation mask of an image X, the CNN architectures were trained during 150 epochs with 10% of images as validation set and the rest as training set. The best model obtained during validation was applied on the image X and a segmentation mask was obtained. The above procedure was applied in both PH2 and DERM-LIB datasets. The second phase aimed at analyzing the GA's hyperparameters on the model's performance. The model was dubbed as Genetic Algorithm Programming-based Ensemble CNN model (GAPE-CNN). 
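For reference, the overlap statistics summarized in Table 1 can be computed along the following lines; this is a sketch under the stated assumptions (each image flattened to a vector, Euclidean distance, and DistR taken as intra- over inter-class distance), not the exact script used to build the table.

import numpy as np
from sklearn.metrics import pairwise_distances, silhouette_score

def overlap_statistics(images, labels):
    # images: array of shape (n, h, w, c); labels: binary array of length n.
    X = images.reshape(len(images), -1)          # each image as a flat vector
    y = np.asarray(labels)

    d = pairwise_distances(X, metric='euclidean')
    same = y[:, None] == y[None, :]
    off_diag = ~np.eye(len(y), dtype=bool)

    intra = d[same & off_diag].mean()            # mean distance within classes
    inter = d[~same].mean()                      # mean distance between classes
    return {'IntraC': intra,
            'InterC': inter,
            'DistR': intra / inter,              # values close to 1 suggest overlap
            'Silho': silhouette_score(X, y, metric='euclidean')}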
It should be noted that the main aim of this work was to perform a preliminary study to assess the effectiveness that can be attained by using the proposed architecture in melanoma diagnosis. Consequently, only the hyperparameter values listed in Table 2 values were considered, because including more settings requires high computational resources during training CNN models. Three different number of generations were evaluated, where larger values can lead to a larger number of possible ensembles, but increasing the training cost. Three crossover probabilities were tested, where higher values increase the probability that the genetic information of parents can be combined to generate new individuals. Three mutation probabilities were also evaluated, where higher values allow to escape from local minima and add more exploration of the search space. Table 2 Basic configuration used In the third phase, the proposal was compared to the following state-of-the-art CNN models that have previously been used in melanoma diagnosis: InceptionV3 [3], DenseNet [25], VGG16 [10], MobileNet [24], Xception [26] and NASNetMobile [72, 73]. Table 2 shows the configuration used to train all the models: the \(\text {learning rate}\) \((\alpha )\) was equal to 0.01 and it was reduced by a factor of 0.2 if an improvement in predictive performance was not observed during 10 epochs; the weights of the networks were initialized using Xavier method [74] in those cases where transfer learning was not present, e.g., the prediction block and baseline CNN models; a batch of size 8 was used due the medium size of the used datasets and the models were trained along 150 epochs. Mini-batch gradient descent was used for training the models, which is one of the most used optimizers for training CNNs. Despite its simplicity, it performs well across a variety of applications [75] and has been successfully applied for training networks in melanoma diagnosis [7, 18, 76]. In this work, a tuning process was not carried out and so the results could not be conferred to an over-adjustment. The datasets utilized in this work correspond to binary classification problems, so the cost function used for training the models was defined as the average of the binary cross-entropy along all training samples. Data augmentation technique was mainly applied to tackle the imbalance problem in melanoma diagnosis by applying and combining random rotation-based, flip-based and crop-based transformations over the original images. Bear in mind that color-based transformations were not considered in order not to alter the color space, which is important for the diagnosis of melanoma [77]. As a result, the changes performed during data augmentation do not change the labels of the samples. The data augmentation process was assessed in both training and test data, which can increase the performance significantly [18]. After splitting a dataset into training and test sets, training data were balanced by creating new images until the number of melanoma images was equal to the normal ones, and the generated training images were considered as independent from the original ones. On the other hand, test data were expanded by randomly augmenting each test image at least ten times, but the generated images remained related to the original ones. 
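A minimal Keras sketch of this training configuration is given below; the base architecture is one of those listed in Table 2, the output layer is added for the binary melanoma/nevus problem, and the values mirror the description above rather than the authors' exact scripts (weights='imagenet' would enable the transfer-learning setting, while None keeps the sketch self-contained).

from keras.applications import MobileNet
from keras.callbacks import ReduceLROnPlateau
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import SGD

base = MobileNet(include_top=False, weights=None, input_shape=(224, 224, 3))
out = Dense(1, activation='sigmoid')(GlobalAveragePooling2D()(base.output))
model = Model(base.input, out)

# Settings from the text: mini-batch gradient descent (SGD) with lr = 0.01,
# reduced by a factor of 0.2 after 10 epochs without improvement, binary
# cross-entropy loss, batch size 8, 150 training epochs.
model.compile(optimizer=SGD(lr=0.01), loss='binary_crossentropy',
              metrics=['accuracy'])
lr_schedule = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=10)
# model.fit(x_train, y_train, batch_size=8, epochs=150,
#           validation_data=(x_val, y_val), callbacks=[lr_schedule])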
Consequently, given an original test image X, the classes' probabilities for X and its related set of images \(S_X\) were averaged to yield the final prediction; so any CNN model performs like an ensemble one, where the final probabilities for a test image was computed using a soft-voting strategy. In order to evaluate quantitatively the segmentation method, several performance metrics are considered, including accuracy (ACC), F1-score, Dice coefficient (DC), Jaccard similarity (JS), and the U-Net score (UNS), which is the average between DC and JS. Accuracy and F1-score are calculated using Eqs. 2, 3 and Dice coefficient and Jaccard similarity are calculated using Eqs. 4 and 5. $$\begin{aligned}&ACC = \frac{TP+TN}{TP+TN+FP+FN}, \end{aligned}$$ $$\begin{aligned}&F1 = \frac{TP}{TP+\frac{1}{2}(FP+FN)}, \end{aligned}$$ $$\begin{aligned}&DC = 2\frac{|GT \cap SR|}{|GT| + |SR|}, \end{aligned}$$ $$\begin{aligned}&JS = \frac{|GT \cap SR|}{|GT \cup SR|}, \end{aligned}$$ $$\begin{aligned}&UN = \frac{DC+JS}{2}, \end{aligned}$$ where TP, FP, TN and FN represent well-selected pixels, mis-selected pixels, well-discarded pixels and mis-discarded pixels, respectively; GT and SR mean the ground truth pixels and the segmentation result, respectively. The comparison between the segmentation methods is performed using UNS, summarizing DC and JS. The above metrics have been used before in the segmentation task of the ISIC-2018 contest. Regarding the evaluation process of the CNN models, a 3-times 10-fold cross validation process was performed on the datasets, and the results were averaged across all fold executions. In each fold, Matthews Correlation Coefficient (MCC) was used to measure the predictive performance of the models. MCC is widely used in Bioinformatics as a performance metric [78], and it is specially designed to analyze the predictive performance on unbalanced data. MCC is computed as: $$\begin{aligned} \small { MCC=\frac{t_p\times t_n - f_p\times f_n}{\sqrt{(t_p + f_p)(t_p + f_n)(t_n + f_p)(t_n + f_n)}}, } \end{aligned}$$ where \(t_p\), \(t_n\), \(f_p\), and \(f_n\) are the number of true positives, true negative, false positives, and false negatives, respectively. MCC value is always in the range \([-1,1]\), where 1 represents a perfect prediction, 0 indicates a performance similar to a random prediction, and -1 an inverse prediction. Finally, non-parametric statistical tests were used to detect whether there was any significant difference in predictive performance. Friedman's test [79] was conducted in cases where a multiple comparison was carried out, Hommel's post hoc test [80] was employed to perform a multiple comparison with a control method, Shaffer post hoc test [81] was employed to perform pairwise comparisons, and finally Wilcoxon Signed-Rank test [82] was performed in those cases where only two individual methods were compared. All hypothesis testing was conducted at 95% confidence. Software and hardware The experimental study (training and test) was executed with Ubuntu 18.04, four GPUs NVIDIA Geforce RTX 2080-Ti with 11 GB DDR6 each one and four GPUs NVIDIA Geforce RTX 1080-Ti with 11 GB DDR5X each one. All the experiments were implemented in Python v3.6, and the CNN models were developed by using Keras framework v2.2.4 [83] as high level API, and TensorFlow v1.12 [84] as backend. In addition, the proposals were implemented in a web-based application in order to assist dermatologists in decision making. 
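For completeness, the segmentation and classification metrics defined above translate directly into code; the following plain-NumPy sketch omits edge-case handling (e.g., empty masks) for brevity.

import numpy as np

def segmentation_scores(gt, sr):
    # Dice coefficient, Jaccard similarity and their average (the U-Net score)
    # for two binary masks, following the definitions above.
    gt, sr = gt.astype(bool), sr.astype(bool)
    inter = np.logical_and(gt, sr).sum()
    dc = 2.0 * inter / (gt.sum() + sr.sum())
    js = inter / np.logical_or(gt, sr).sum()
    return dc, js, (dc + js) / 2.0

def mcc(tp, tn, fp, fn):
    # Matthews Correlation Coefficient, in [-1, 1].
    num = tp * tn - fp * fn
    den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return num / den if den else 0.0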
Amazon web servicesFootnote 9 were used in deployment time as platform, providing secure and resizable compute capacity in the cloud. Regarding efficiency, we converted all tensorflow models to Tensorflow Lite models. Tensorflow Lite is a lightweight library for deploying models using a minimum of computational resources. Table 3 shows the minimum amount of hard drive space, RAM and inference time required by each CNN model. The time was measured when processing one sample. Bear in mind that these values were analyzed in deployment time by using CPU and not GPU. As expected, MobileNet was the lightest-weight model which required the least amount of resources. In addition, the highest difference between using or not Tensorflow Lite is regarding RAM—this technology saves more than ten times compared to default deployment. As a result the models only needed 510 MB of RAM and one CPU core to perform inference. In this work was selected basic Amazon EC2 t2.micro, with one GB of RAM and one CPU core (\(\$5.26\)/month), saving \(\$36.72/\)month compared to Amazon EC2 t2.large (\(\$41.98\)/month), which is the closest architecture capable of supporting more than five GB of RAM. Table 3 Summary of the computational resources in deployment time; "D" and "L" mean using default tensorflow-cpu and tensorflow-lite; HDD includes both architecture and weights In this section, the main results are presented. Additional material for supporting this work can be found at the available web pageFootnote 10. Segmentation methods Table 4 shows the segmentation performance obtained from a small sample (594 images in total). Both datasets contain high-quality images and manual segmentation performed by expert dermatologists. Each image was evaluated applying UNS and other metrics. The proposed segmentation method obtained the best average performance in both datasets with 81% and 80% UNS in PH2 and DERM-LIB, respectively. In addition, Fig. 5 shows Friedman's test ranking, where the proposed method outperformed both U-Net and R2U-Net. Then, Friedman's test rejected the null hypothesis with a p-value < 2.2E-16 in DERM-LIB dataset. The proposal was ranked first, and afterward, the Shaffer's post hoc test was conducted, where the proposal achieved significantly better performance compared to U-Net and R2U-Net. In addition, U-Net significantly outperformed R2U-Net. On the other hand, in PH2 no significant differences were encountered between the three methods, the Friedman's test did not reject the null hypothesis with a p-value equal to 6.476E-1. (The test was conducted with two degrees of freedom, and the Friedman's statistic was equal to 86.898E-2.) However, the proposal was ranked first and it attained 111% and 74% less variance compared to U-Net and R2U-Net, respectively. In this way, the CNN models can focus on relevant pixels, so easing the learning of better abstract and discriminative features for melanoma diagnosis. Also, the proposal did not require prior training, which is a clear advantage compared to those based on CNN models. In addition, the proposed segmentation method can be used with only a CPU, avoiding a significant amount of computational power for training like those using GPU. Finally, R2U-Net obtained a better performance compared to U-Net in PH2, but the opposite occurred in DERMLIB. A larger number of datasets are needed to validate the differences between both methods regarding skin lesion diagnosis. 
Table 4 Preprocessing performance obtained in DERM-LIB dataset Table 5 Average MCC values obtained by using six state-of-the-art CNN models. All pairwise comparisons between the proposed method and the state-of-the-art biomedical segmentation methods in DERM-LIB. The null hypothesis was rejected with adjusted p-value < 2.2E-16 Each sub-figure shows the methods ordered from left to right according to the ranking computed by Friedman's test; the proposed segmentation method achieved the best performance in all CNN models. The lines located summarizes the significant differences encountered by the Shaffer's post hoc test, in such a way that groups of models that are not significantly different (at α = 0.05) are connected by a line In addition, we corroborated that the CNN models trained with segmented data were able to achieve better performance. All models achieved the best performance using segmented data, and on average the best ones were MobileNet, DenseNet, and InceptionV3, in that order. Despite MobileNet was designed with efficiency in mind, it managed to overcome more complex CNN models. Nevertheless, DenseNet and InceptionV3 surpassed MobileNet in BCN20000, which is the largest and one of the more complex datasets. Skin image datasets are commonly small, which is where MobileNet usually achieved its best performance [85, 86]. Although these results could be also caused by the use of default hyperparameters, all CNN models shared the same condition in order to avoid advantages between them. On the other hand, the number of epochs could be another cause. This is traditionally large when training CNN models, often hundreds or thousands, allowing the learning algorithm to run until the error from the model has been sufficiently minimized. It is common to find examples in the literature and in tutorials where the number of epochs is set to 500, 1000, and larger. However, in this work we used 150 epochs, mainly because of the number of available images, and even then all models and the proposal achieved competitive performances. UDA-2 was the most challenging dataset. On average, the models achieved only 44% MCC; UDA-2 has the lowest Silhouette value (0.020), indicating a high overlapping level between classes and increasing the difficulty. The overall best performance was achieved in DERM-LIB, PH2 and HAM10000 datasets with a 90%, 85% and 79% MCC, respectively. The above datasets have the three highest Silhouette values, meaning that they have a low overlapping level between images, which makes easier the task. Table 5 and Fig. 6 summarize the results obtained after training the six CNN models with the segmentation method. The models in Fig. 6 are ordered from left to right according to the ranking computed by Friedman's test, where the proposed segmentation method achieved the best performance in all datasets, followed by TDA. Friedman's test rejected the null hypothesis on all CNN models. It was observed that overall all CNN models using the segmented data presented better results compared to its non-use, proving to be suitable for melanoma diagnosis. Shaffer's post hoc test found that InceptionV3 and Xception significantly improved their performance when segmented data was used compared to the baseline and transfer-learning combined with data augmentation. All baseline CNN models were significantly surpassed by the other techniques. Results showed the proposed segmentation method was able to improve the performance in all CNN models considered in this work. 
In the following sections, all CNN models are compared using segmented data, which already proved to achieve the best performance. Analyzing the impact of three main hyperparameters First of all, the advantages of using the extra prediction block were analyzed. The best epochs from each CNN model were combined and all possible ensembles were evaluated. In the end, 57 ensembles were obtained after discarding the empty set and the one-element sets. Then, the best one was selected as baseline (BL) and included in the comparison. In addition, it was obtained another baseline model applying the same above procedure by simply combining the predictions of the CNN models without using the prediction block (BLs). Table 6 shows the average MCC values on test sets comparing BL versus BLs. Results show that models using the extra prediction block obtained the best performance in all datasets compared to BLs. Finally, the Wilcoxon's test rejected the null hypothesis with a p-value equal to 2.189E-4, confirming the benefit and effectiveness of using the prediction block for combining the features from the individual CNN models. Henceforth, the rest of the experimental study was executed using the proposal architecture. Table 6 Average MCC values on test sets; BL represents the best ensemble obtained by combining the individual best models from each architecture and BLs represents the same as BL, but simply merging the predictions without using the extra prediction block Table 7 Average MCC values on test sets; g, \(p_c\) and \(p_m\) represent generations, crossover rate and mutation rate, respectively Table 7 shows the average MCC results attained by the proposed genetic algorithm using different values of g, \(p_c\) and \(p_m\). For each dataset, the best MCC value is highlighted in bold typeface. Overall, all models obtained a high performance, being the lowest and the highest average performance 96.9% and 97.8%, respectively. The models attained their best performance in MSK-3, PH2, DERM-LIB and SDC-198 datasets. Regarding the parameters that control the GA, the results showed the higher the number of generations, the better was the predictive performance. A higher number of generations could imply more exploration and higher possibilities to create offspring with a more accurate set of CNN models. Furthermore, the best results peaked when using a high mutation probability, denoting that a higher mutation probability could imply a better exploration of good sets of epochs. Finally, it was observed that 250 generations are good enough to obtain a performance which is significantly similar to the top performance obtained by using 400 generations. The Rank column shows the average ranking computed by Friedman's test, and this ranking found that \(g=400\), \(p_c=90\%\) and \(p_m=30\%\) were the best settings for the GA, obtaining on average the best predictive performance. The Friedman's test rejected the null hypothesis with a p-value equal to 8.771E-15; Friedman's statistic was equal to 124.28 with 26 degrees of freedom. Afterward, the Hommel's post hoc test was conducted by considering the GA as the control method, which significantly outperformed 50% of all other considered configurations. Next, the best GA is compared to several state-of-the-art CNN models that have previously been used in melanoma diagnosis and at the same time are the baseline to build the ensemble. 
Comparing with state-of-the-art CNN models Table 8 shows the fold changes between the proposed model and each of the state-of-the-art CNN models. The BL ensemble overcame all individual state-of-the-art models, except in DERM-LIB, where DenseNet201 and MobileNet surpassed it by a small margin. It should be noticed that these models are the same considered by the GA to build the ensemble, so this comparison plays another role, which is to corroborate that ensemble learning is more suitable for melanoma diagnosis tasks compared to individual models. The results were very promising since the proposal GAPE-CNN achieved the highest MCC values in all datasets. It is noteworthy that the proposal achieved a predictive performance 165% and 130% higher than Xception and InceptionV3 on UDA-2 dataset, respectively. Also, it achieved high performance in the two largest datasets. The best average predictive performance was observed on DERM-LIB and PH2 datasets, where it was obtained 98% and 96% MCC values, respectively. The lowest overall predictive performance was observed on the dataset UDA-2 with 56% of MCC. However, the proposal was at least 62% and 21% better than all the individual CNN models and the BL, respectively. Overall, the worst performance was attained by VGG16. Table 8 Average MCC values of the different models on each dataset Table 9 Average MCC values on test sets; BL represents the best ensemble obtained by combining the individual best model from each architecture and GAPE-CNN represents our proposal Table 9 summarizes the average MCC values on test sets comparing the baseline versus GAPE-CNN. The proposal surpassed BL in all datasets, and finally, the Wilcoxon's test rejected the null hypothesis with a p-value equal to 2.189E-4, confirming the benefit and effectiveness of using genetic algorithms for learning the set of CNN models to build an ensemble. Each independent CNN model provides a partial prediction, which is aggregated to yield a final decision. The architecture follows an ensemble approach, which has demonstrated to be an effective way to improve the learning process in many real-world problems [87]. Furthermore, the proposed model applies transfer learning and data augmentation techniques. Data are augmented not only at training stage, but also at test stage, thus allowing to attain a better predictive performance. The data augmentation process on both phases has shown to be an effective way to improve melanoma diagnosis [88], and also it is an excellent approach to cope with the imbalance data issue, and the high inter- and intra-class variability present in most skin image datasets. Dermoscopic versus non-dermoscopic images Figure 7 shows the average performance attained by the models by grouping the datasets in dermoscopic and non-dermoscopic ones. The results showed that the proposed architecture attained the best performance whatever the type of image, denoting the effectiveness of the approach. All CNN models attained their best performance using dermoscopic images by a slight margin. NASNetMobile was the architecture most benefited from using dermoscopic images, with an improvement of about 8%. The biggest improvement was found comparing with VGG; the proposal outperformed VGG considering dermoscopic and non-dermoscopic images in 31% and 30%, respectively. The second best performance behind the proposal was achieved by the BL in both types of images; the proposal surpassed it by 5% in both types of images. 
To sum up, the results obtained through the experimental study revealed that GAPE-CNN was effective for diagnosing melanoma, attaining better predictive performance with respect to the state-of-the-art models. Average MCC values on test sets by grouping the datasets in dermoscopic (green bars) and non-dermoscopic (orange bars); "D": DenseNet201; "I": InceptionV3; "M": MobileNet; "N": NASNetMobile; "V": VGG16; "X": Xception; "B": base line; "G": our proposal It is clear that automatic melanoma diagnosis via deep learning models is a challenging task, mainly due to a lack of data and differences even between samples from the same category. We addressed these problems via a series of contributions. First, to preprocess data, we validated and applied an extension of the Chan-Vese algorithm. The segmentation masks indicated that the proposal achieved a better performance compared to state-of-the-art segmentation methods. Also, the results showed that all CNN models improved their performance by using segmented data. Second, the training and testing data were enriched using data augmentation, reducing overfitting and obtaining transformation-invariant models. Third, we increased the performance by applying transfer learning from the pre-trained ImageNet. Also, transfer learning alleviated the requirement for a large number of training data. Finally, to further improve the discriminative power of CNN models, we proposed a novel ensemble method based on a genetic algorithm, which finds an optimal set of CNN models. An extensive experimental study was conducted on sixteen image datasets, demonstrating the utility and effectiveness of the proposed approach, attaining a high predictive performance even in datasets with complex properties. Results also showed the proposed ensemble model is competitive with regard to state-of-the-art computational methods. Future works will conduct more extensive experiments to validate the full potential of the proposed architecture, for example by considering a wide set of hyperparameters to be tuned as well as a larger number of datasets. Finally, it is noteworthy that our approach benefits from combining different CNN models—abstract features are combined in the extra prediction block, and individual predictions from the ensemble and the mentioned block are combined to obtain a diagnosis. This approach is not restricted to melanoma diagnosis problems and could be applied on other real-world problems in the future. https://www.isic-archive.com. https://www.skinvision.com/. https://www.firstderm.com/ai-dermatology/. https://www.fc.up.pt/addi/ph2%20database.html. https://bit.ly/3gi6rKR. http://www.cs.rug.nl/imaging/databases/melanoma_naevi/. https://bit.ly/2TdZWQ6. http://derm.cs.sfu.ca. https://aws.amazon.com/. http://www.uco.es/kdis/ensemble-based-cnn-melanoma/.. Ferlay J, Colombet M, Soerjomataram I, Dyba T, Randi G, Bettio M, Gavin A, Visser O, Bray F (2018) Cancer incidence and mortality patterns in Europe: estimates for 40 countries and 25 major cancers in 2018. Eur J Cancer 103:356–387 American Cancer Society: Cancer Facts and Figures (2021). https://bit.ly/3gNDBVr. Consulted on June 22, 2021 Esteva A, Kuprel B, Novoa R, Ko J, Swetter S, Blau H, Thrun S (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639):115–118 Geller AC, Swetter SM, Brooks K, Demierre MF, Yaroch AL (2007) Screening, early detection, and trends for melanoma: current status (2000–2006) and future directions. 
Genetic characterisation of PPARG, CEBPA and RXRA, and their influence on meat quality traits in cattle Daniel Estanislao Goszczynski1,3, Juliana Papaleo Mazzucco2, María Verónica Ripoli1, Edgardo Leopoldo Villarreal2, Andrés Rogberg-Muñoz1, Carlos Alberto Mezzadra2, Lilia Magdalena Melucci2 & Guillermo Giovambattista1 Peroxisome proliferator-activated receptor gamma (PPARG), CCAAT/enhancer binding protein alpha (CEBPA) and retinoid X receptor alpha (RXRA) are nuclear transcription factors that play important roles in regulation of adipogenesis and fat deposition. The objectives of this study were to characterise the variability of these three candidate genes in a mixed sample panel composed of several cattle breeds with different meat quality, validate single nucleotide polymorphisms (SNPs) in a local crossbred population (Angus - Hereford - Limousin) and evaluate their effects on meat quality traits (backfat thickness, intramuscular fat content and fatty acid composition), supporting the association tests with bioinformatic predictive studies. Globally, nine SNPs were detected in the PPARG and CEBPA genes within our mixed panel, including a novel SNP in the latter. Three of these nine, along with seven other SNPs selected from the Single Nucleotide Polymorphism database (SNPdb), including SNPs in the RXRA gene, were validated in the crossbred population (N = 260). After validation, five of these SNPs were evaluated for genotype effects on fatty acid content and composition. Significant effects were observed on backfat thickness and different fatty acid contents (P < 0.05). Some of these SNPs caused slight differences in mRNA structure stability and/or putative binding sites for proteins. PPARG and CEBPA showed low to moderate variability in our sample panel. Variations in these genes, along with RXRA, may explain part of the genetic variation in fat content and composition. Our results may contribute to knowledge about genetic variation in meat quality traits in cattle and should be evaluated in larger independent populations. Fat content and composition are considered major economically important traits in livestock, since variations in these two factors affect several meat properties [1]. These traits are the result of several biological processes, such as adipogenesis, lipolysis and fatty acid-transfer. Therefore, a part of the variability produced by these processes may be attributed to the genetic variants of the pathway members. Peroxisome proliferator-activated receptor gamma (PPARG), CCAAT/enhancer binding protein alpha (CEBPA) and retinoid X receptor alpha (RXRA) are important nuclear transcription factors involved in numerous cellular processes [2], and are considered key molecules in regulation of adipogenesis. PPARG and CEBPA are induced early in the signaling pathway, they work together to trigger the process and regulate each other [3]. PPARG acts as heterodimer with RXRA, which belongs to a family of nuclear receptors that act as homodimers and heterodimers. In view of their roles, the genetic control of adipogenesis by PPARG, CEBPA and RXRA may be important and helpful for animal improvement. During the last years, SNPs in PPARG and CEBPA have been associated with a group of meat quality traits in Chinese and Korean cattle, including tenderness, backfat thickness, water holding capacity, fatty acid composition, weight at slaughter and marbling, among others [4–9]. 
However, those works have been performed almost entirely using Asian cattle under feedlot conditions and the results might not be necessarily comparable with researches performed with other breeds under pasture-based feeding. For instance, these two conditions may activate specific metabolic pathways governed by different genes. Nowadays, most of the exported beef in the world is produced on pasture-based systems [10], as in countries like Argentina, Brazil, New Zealand, Paraguay and Uruguay, among others. In this context, we searched for gene variants in PPARG and CEBPA in a sample set composed of nine cattle breeds with different meat quality. Then, we validated some of these SNPs, along with SNPs in the RXRA gene, in a local Angus-Hereford-Limousin crossbred population (N = 260) fed on pasture-based conditions. We used this population to evaluate the association of the SNPs with intramuscular fat content (IF), backfat thickness (BT) and fatty acid composition. Finally, we analysed the molecular effects of these SNPs through bioinformatic predictive tools. Animal samples and DNA extraction Two groups of samples were collected: the first group comprised blood samples from 43 unrelated purebred animals (Angus, n = 5; Brahman, n = 5; Creole, n = 5; Hereford, n = 5; Holstein, n = 5; Limousin, n = 4; Nellore, n = 4; Shorthorn, n = 5; Wagyu, n = 5), which were used to identify polymorphisms in the bovine PPARG and CEBPA genes. The second group comprised 260 steers (15–29 month-old), born between 2006 and 2010, which were used to perform population analyses, including SNP validation and association tests. This group of animals had been used in previous studies to evaluate crossbreeding systems under pasture grazing with strategic supplementation at the Experimental Station of the National Institute of Agricultural Technology (INTA, Balcarce, Argentina; Coordinates: 37°49S 58°15 W). Steers included: purebred Angus -A- (n = 44) and Hereford -H- (n = 26) steers, their crossbreeds F1 and F2 -½AH- (n = 95), reciprocal backcrosses -¾A and ¾ H- (n = 54), and steers produced by mating Limousin -L- sires with F1 crossbred cows -LX- (n = 41) (Additional file 1: Table S1). Fifty-four sires were used, including 17 A sires (1-16 steers), 18 H sires (1-11 steers), 8 AH sires (1-7 steers), 8 HA sires (1-7 steers) and 4 L sires (1-34 steers). L sires were mated only with AH and HA cows and, every year, some of the A and H sires were mated with more than one genetic group. All animals grazed a sown pasture (predominantly Lolium multiflorum, Dactylis glomerata, Bromus catarthicus, Trifolium repens and Trifolium pratense). They were slaughtered in eight groups and meat blocks were taken from the 13th rib to perform association studies (Additional file 2: Table S1). The decision to sample this experimental population instead of other commercial cattle populations was based on the availability of reliable information in terms of phenotypic data, management and genetic background of the animals. DNA was isolated from blood lymphocytes using Wizard® Genomic DNA purification kit (Promega, Madison, WI, USA) following the instructions of the supplier, and from meat samples as previously described in [11]. Re-sequencing study of the bovine PPARG and CEBPA genes To amplify the coding regions of the PPARG and CEBPA genes, ten pairs of primers were designed according to the DNA sequences available in GenBank [Gene IDs: 281677, 281993] (Additional file 2: Table S2). 
PCR reactions were performed, verified, purified and sequenced as described in [12]. Sequences were aligned using CLUSTAL-X 2.1 [13] and variants were defined by direct comparison with the bovine reference sequences. SNP selection and genotyping As the crossbred population used to validate SNPs and perform association tests included animals from Angus, Hereford and Limousin (pure or crossbred), only some of the SNPs detected in the re-sequencing stage were considered for further validation. In other words, many of the SNPs detected by re-sequencing showed no variation in the Taurine breeds and were not considered, since they would probably show no variation in the crossbred population. For this reason, additional SNPs were selected from dbSNP [14] to have a better covering of the length of the genes using markers with proven variation in Taurine breeds. At this stage, the addition of another candidate gene was decided, and SNPs in the RXRA gene were also selected from dbSNP to be validated and tested for associations. Genotyping was performed by the Neogen genotyping service (USA) using the Sequenom platform [15]. Meat quality measurement Fatty acid content and composition measurements were gathered from blocks of meat obtained from the 260 animals of the crossbred population to perform association studies. These blocks, corresponding to the Longissimus dorsi muscle (13th rib) were extracted from the carcass 24 h after slaughter. Backfat Thickness (BT) was measured perpendicular to the outer surface at a point three quarters of the length of the Longissimus dorsi muscle from the end of the loin bone, and expressed in millimetres. The Intramuscular Fat (IF) and fatty acid composition were then measured as described in [12]. The measured fatty acids were: myristic acid (C14:0); myristoleic acid (C14:1); palmitic acid (C16:0); palmitoleic acid (C16:1); stearic acid (C18:0); oleic acid (C18:1 cis-9); linoleic acid (C18:2 cis-9,12); γ-linolenic acid (C18:3 cis-6,9,12); α-linolenic acid (C18:3 cis-9,12,15); total saturated fatty acids (SFA); total monounsaturated fatty acids (MUFA); and proportion between omega-6 and omega-3 fatty acids (Ω6/Ω3). The fatty acid contents were expressed as percentage of total fatty acids per sample. C20:0 and other long-chain fatty acids were not included in the analysis since their percentages were lower than 0.5 %. The means, standard deviations, minimum and maximum values of all these measurements were included in Additional file 3: Table S3. Statistical Analysis and association with meat quality Haplotypes and linkage disequilibrium (LD) among SNPs were estimated and visualized on HAPLOVIEW v4.2 [16] using the four gamete rule and the solid spine of LD. Allele frequencies and Hardy-Weinberg equilibrium (HWE) were analysed using GENEPOP software [17]. The 95 % confidence intervals for allele frequencies were computed using the binomial distribution implemented in R with the binom.confint function (http://cran.r-project.org/web/packages/binom/). Values for unbiased expected (he) and observed (ho) heterozygosity were calculated using ARLEQUIN v3.5 [18]. The association of genotypes with BT, IF and fatty acid composition was evaluated using mixed models. 
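Before turning to those models, the simpler population-genetic summaries described above (allele frequencies, their binomial confidence intervals and the HWE test) can be sketched in R, which the authors already used for the confidence intervals via binom.confint. This is only an illustrative sketch: the genotype counts below are hypothetical, the actual analyses were run with GENEPOP, ARLEQUIN and the binom package as described in the text, and the chi-square test shown here is a simplified stand-in for GENEPOP's HWE test.

```r
# Illustrative R sketch (not the authors' script). Genotype counts for a single
# hypothetical biallelic SNP; the numbers below are invented for demonstration.
geno      <- c(AA = 150, AB = 90, BB = 20)             # hypothetical genotype counts
n_animals <- sum(geno)                                  # number of genotyped animals
n_b       <- as.numeric(geno["AB"] + 2 * geno["BB"])    # copies of the minor (B) allele
n_alleles <- 2 * n_animals                              # total number of allele copies

# Allele frequencies
q <- n_b / n_alleles                                    # minor allele frequency (MAF)
p <- 1 - q

# 95% confidence interval for the allele frequency from the binomial
# distribution, using the binom.confint function cited in the text
library(binom)
binom.confint(x = n_b, n = n_alleles, conf.level = 0.95, methods = "exact")

# Observed and unbiased expected heterozygosity
ho <- as.numeric(geno["AB"]) / n_animals
he <- (n_alleles / (n_alleles - 1)) * 2 * p * q

# Simple chi-square test for Hardy-Weinberg equilibrium (1 df); a simplified
# stand-in for the exact tests implemented in GENEPOP
expected <- n_animals * c(AA = p^2, AB = 2 * p * q, BB = q^2)
chi2  <- sum((geno - expected)^2 / expected)
p_hwe <- pchisq(chi2, df = 1, lower.tail = FALSE)
round(c(MAF = q, Ho = ho, He = he, chi2 = chi2, p_HWE = p_hwe), 4)
```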
BT and IF were analysed according to the following model:
$$ Y_{ijkl} = \mu + GG_j + SG_k + a Z1_i + d Z2_i + \beta W_i + S_l + e_{ijkl} $$
where Y_ijkl is the observed value of the phenotypic variable, μ is the intercept, GG_j is the fixed effect of the jth genetic group, SG_k is the fixed effect of the kth slaughter group, a is the additive effect of the SNP, Z1_i is the incidence variable for the additive effect (0 for one of the homozygous genotypes, 1 for the heterozygous genotype and 2 for the alternative homozygous genotype), d is the dominance effect of the SNP, Z2_i is the incidence variable for the dominance effect (0 for both homozygous genotypes and 1 for the heterozygous genotype), βW_i is the animal weight covariate for the ith animal, S_l is the random effect of the lth sire, and e_ijkl is the random error. The same single-trait model was used for the fatty acid composition variables, but using ether extract instead of animal weight as the covariate. All statistical analyses were performed using the MIXED procedure of SAS software [19]. When the additive or dominance effects of the SNP were statistically significant (P < 0.05), the substitution effect (α) was calculated from the frequencies of the major and minor alleles (p and q, respectively) using the following equation [20]:
$$ \alpha = a + d(q - p) $$
The variance explained by the SNP (σ²_SNP) was also estimated for each SNP–trait test as follows:
$$ \sigma_{SNP}^2 = 100 \times \frac{RMS - FMS}{RMS} $$
where RMS is the residual of the reduced model (SNP effect excluded) and FMS is the residual of the full model (SNP effect included). After all trait–SNP tests were performed, the false discovery rate (FDR) for multiple comparisons was controlled with the Benjamini–Hochberg method [21].
Bioinformatic analyses
The SNPs were also analysed with different bioinformatic prediction tools. For synonymous SNPs, changes in codon usage frequency were analysed with the Codon Usage Database [22]. For SNPs located in 5' UTR regions, the complete 5' UTR fragments of the RNA sequences were run on the Mfold Web Server [23] to compare stability among variants. These fragments were also analysed using RBPDB [24], the database of RNA-binding protein (RBP) specificities, considering a threshold of 0.8 to identify putative RBP binding sites. Finally, variations located in promoter regions were analysed with PhysBinder [25], considering all available human and murine models and the "average" threshold to predict putative transcription factor binding sites.
Re-sequencing study
A total of seven SNPs were identified in the PPARG gene using the panel of nine breeds. All of them had been previously reported and most showed very low frequencies. In fact, no homozygous genotypes were detected for any of the alternative alleles. Three of the SNPs were located in UTR regions: rs207671117 (5' UTR), rs211388309 (3' UTR) and rs207724742 (3' UTR); the first in Angus and Hereford, and the other two in Brahman and Nellore. The remaining four SNPs (rs207739706, rs41610552, rs110194439 and rs42661651) were detected in non-coding regions in different breeds (Table 1).
Table 1 Genetic variants detected in the bovine PPARG and CEBPA genes.
Variants were identified by re-sequencing a mixed sample panel (N = 43) composed of cattle breeds with different meat quality (Angus, Brahman, Creole, Hereford, Holstein, Limousin, Nellore, Shorthorn, Wagyu) The haplotype and LD analysis, estimated with the four gamete rule, showed two blocks: a small one (3' end), composed by two completely linked SNPs (rs42661651 and rs110194439), and a big one that included the other five SNPs and consisted of five haplotypes, with three of them in very low frequencies (Fig. 1). The same study, but estimated with solid spine of LD, showed one big block constituted by all the SNPs, with three haplotypes in very low frequencies. Haplotypes (upper part) and linkage disequilibrium (lower part) among SNPs in the PPARG gene estimated in a mixed sample panel (n = 43). Blocks were estimated using the four gamete rule (a) and solid spine of LD (b). In both cases, r2 values are indicated inside the boxes and blocks are indicated in thick lines Only two SNPs were detected in the CEBPA gene. These SNPs were located in the coding region of the gene, which comprised only one exon, and one of them had no previous reports in dbSNP. This novel SNP (ss1751108604) was detected in both Zebuine breeds (Brahman and Nellore) and the Japanese breed Wagyu. This SNP caused an amino acid change involving two neutral and polar residues (Ser139Asn). The other one was a synonymous SNP (rs110793792), widely distributed among the breeds, with the exception of Nellore and Shorthorn, which showed different homozygous genotypes for the mutation and no variability within the samples (Table 1). Nine mutations were detected in total in this first stage. In other terms, we detected one SNP every 423 bp over 3809 bp analysed. Considering subspecies distribution, our variation values translate to one SNP every 762 bp for Bos taurus and one SNP every 544 bp for B. indicus. As expected, variability was higher in the Zebuine group. A few years ago, the Bovine HapMap Consortium [26] obtained one SNP every 714 bp for Angus or Holstein, and one SNP every 285 bp for Brahman. Therefore, the variability obtained in this work was similar in the case of Taurine breeds but lower for Zebuine breeds. On the other hand, the variability observed here was lower than that reported lately by our group for the LIPE gene in this same sample panel, where a SNP was detected every 123 bp [12]. This is consistent with the roles these genes play in lipid metabolism, since PPARG and CEBPA are key regulators in the first stages of fat deposition, among other processes, and LIPE codifies for an enzyme with very specific functions in lipid hydrolysis. In this scenario, LIPE may be subject to a less selective pressure than PPARG and CEBPA. In the case of PPARG, mutations were generally detected in low frequencies, which generated large linkage blocks with few haplotypes carrying most of the variation. Allelic and genotypic frequencies A group of SNPs was selected either from the results of the re-sequencing study or dbSNP to perform validation. In the case of PPARG, two SNPs were selected from the re-sequencing study (rs207671117 and rs41610552) and two other SNPs were selected from dbSNP. These two SNPs from the dbSNP had been previously associated with a group of meat quality traits: rs42016945, located upstream of PPARG-2 and also part of the 5' UTR of splice-variant 1 (PPARG-1), and rs109613657, which caused an amino acid change in the seventh exon (Glu448His). 
For CEBPA, we selected rs110793792 from the re-sequencing stage, since the novel SNP (ss1751108604) showed no variability in the Taurine breeds despite its novelty, and rs210446561 from the dbSNP. We also selected four SNPs from the RXRA gene. Since this gene was included in the study after the re-sequencing stage and there were no previous reports of associations with meat quality traits to our knowledge, the SNPs were chosen directly from dbSNP. These SNPs were rs209839910 (Pro108Ser), located in the second exon, rs136289117 (synonymous), located in the ninth exon, and rs133517803 and rs207774429, which may be located in the first intron or a putative promoter region for the splice-variant 3. Regarding the SNPs in PPARG, rs207671117 showed low variability in subpopulations A, ¾ H, ½AH and LX (Minimum Allele Frequency [MAF] ≥ 0.02), and showed no variability in H and ¾A. On the other hand, SNPs rs41610552 and rs42016945 showed moderate allele frequencies among the subpopulations (MAF ≥ 0.12). Surprisingly, SNP rs109613657 showed no variability at all (Table 2). The HWE test showed no significant deviations from the theoretical proportions, with the exception of rs42016945 in subpopulation LX (P < 0.05) (Table 3). The unbiased expected heterozygosity (he) of the two balanced SNPs (rs41610552 and rs42016945) varied between 0.22 (¾ H) and 0.49 (LX). Observed heterozygosity (ho) varied between 0.25 (¾ H) and 0.55 (LX). When linkage disequilibrium was analysed, rs207671117 and rs42016945 showed a small block with three haplotypes, two of them with more than 95 % of the haplotype frequencies (Additional file 4: Figure S1). Table 2 Observed allele frequencies and 95 % confidence intervals for SNPs in the PPARG, CEBPA and RXRA genes in an Argentinean crossbred population (Angus-Hereford-Limousin). A: purebred Angus; H. purebred Hereford; ¾A: 75 % Angus steers; ¾ H: 75 % Hereford steers; ½AH: 50 % Angus -50 % Hereford steers; LX: Limousine crossbred steers. N: number of animals genotyped efficiently for that SNP per genetic group. SNP rs109613657 (PPARG) showed no variability Table 3 Unbiased expected heterozygosity (he), observed heterozygosity (ho) and Hardy-Weinberg Equilibrium (HWE) p-values for SNPs in the PPARG, CEBPA and RXRA genes in an Argentinean crossbred population (Angus-Hereford-Limousin). A: purebred Angus; H. purebred Hereford; ¾A: 75 % Angus steers; ¾ H: 75 % Hereford steers; ½AH: 50 % Angus -50 % Hereford steers; LX: Limousine crossbred steers SNP rs110793792, located in CEBPA, could not be genotyped by the Sequenom platform, reason why it was discarded from the analysis. The other SNP from this gene, rs210446561, was genotyped efficiently in only 143 samples. The analysis showed higher frequencies for allele C, with MAF ≥ 0.09 in the subpopulations (Table 2). The HWE test showed no deviations from the theoretical proportions. Values for he ranged from 0.16 (LX) to 0.38 (A), and ho ranged from 0.17 (LX) to 0.50 (A) (Table 3). Two of the SNPs from RXRA, rs207774429 and rs133517803, showed relatively balanced allele frequencies (MAF ≥ 0.06). It is worth mentioning that rs207774429, as happened with rs210446561 (CEBPA), was genotyped efficiently in only 153 samples. The other two SNPs, rs136289117 and rs209839910, showed very low variation and were not considered for association (Table 2). According to the HWE test, significant deviations were observed for rs207774429 in the whole population (P < 0.01) and rs133517803 in subpopulation ¾A (P < 0.01). 
The remaining SNPs showed no significant deviations from the theoretical proportions. Three of the SNPs in this gene were part of a linkage block constituted by three haplotypes, with two of them accounting for over 97 % of the haplotype frequencies (Additional file 4: Figure S1). As we already mentioned, some of the SNPs selected for validation were not efficiently genotyped by Sequenom, but the rest showed different allele frequencies among the genetic groups. In general, the highest or lowest frequencies were observed in the LX subpopulation, which is historically and productively the most different breed, since Limousin is a European continental breed, and Angus and Hereford are Scottish. Deviations from the HWE proportions were observed mainly in crossbreeds, as expected. These deviations would also suggest the violation of some of Hardy-Weinberg assumptions and the existence of phenomena like selective mating, small population size, endogamy, and especially migration, considering the original purpose of the population, i.e., the evaluation of crossbreeding systems. Association study When individual tests were performed, several genotype effects were observed on the evaluated traits. The least square means for each genotype, percentages of phenotypic variance explained by the SNPs, and substitution effects were estimated for all the evaluated traits and presented in Table 4. Table 4 Association of SNPs in the PPARG, CEBPA and RXRA genes with meat quality traits in an Argentinean crossbred cattle population: least square means and standard deviation (s.d.) of the genotypic classes based on the individual polymorphisms, additive and dominance effects, substitution effect and percentage of phenotypic variance explained by the SNP (σ SNP 2 ). N: number of samples; n.e.: non estimable; C18:0: stearic acid (%); C18:1 cis-9: oleic acid (%);C18:3 cis-6,9,12: ϒ-linolenic acid (%); C18:3 cis-9,12,15: α-linolenic acid (%); MUFA: monounsaturated fatty acids (%); Ω-6/Ω-3: omega-6/omega-3 proportion; BT: backfat thickness of beef (mm) Regarding SNPs in PPARG, rs207671117 showed a significant effect on Ω6/Ω3 (P < 0.05), but small genetic variation in general. An additive effect on BT was detected (P < 0.05) for rs42016945, while dominance effects were significant (P < 0.05) for rs41610552 on C18:1 cis-9 and rs42016945 on C18:0. For these SNPs, the percentages of phenotypic variance explained ranged from 0.31 to 2.09 %. SNP rs133517803 (RXRA) showed significant effects on several measures: additive effects were detected on C18:1 cis-9; C18:3 cis-6,9,12; C18:3 cis-9,12,15; MUFA and BT; while a dominance effect was detected on C18:1 cis-9; C18:3 cis-9,12,15 and MUFA. For these traits, the SNP explained from 0.35 to 1.77 % of the phenotypic variance (Table 4). No significant effects were observed on the other traits. It is worth mentioning that none of these effects reached the threshold for statistical significance in the FDR control by means of the Benjamini-Hochberg method considering 70 tests. The explained percentages of variance were interestingly high for oleic acid and MUFA compared with other traits such as BT and stearic acid. In particular, the percentage of phenotypic variance explained by SNP rs133517803 in RXRA for oleic acid, and subsequently for MUFA, was high, which may suggest an important role of this gene in the oleic acid deposition in muscle. If we consider the first approach, our results were partially consistent with previous reports from other authors. Sevane et al. 
[9] reported associations for SNP rs42016945 with several Ω-3 fatty acids. Here, rs42016945 showed a significant effect on fatty acid composition, but on C18:0 instead. We also found a possible effect on BT, which was interesting and expectable given the nature of PPARG. None of the mutations in PPARG reported in Chinese and Korean breeds [5, 7] was detected in our panel, probably due to different breed origins, although it is important to mention the finding of significant effects on BT in Chinese cattle, which is concordant with our findings in the European breeds. SNP rs110793792 (CEBPA) had been associated with BT and marbling in Chinese breeds [8], but these effects could not be tested in our population due to a genotyping problem. To our knowledge, there are no previous works reporting associations for RXRA, so this work may provide evidence that SNPs in this gene may be helpful for animal improvement by means of marker-assisted selection programs. Bioinformatic predictions Since SNP rs207671117 was located in the 5' UTR of PPARG, we analysed the possible effects on mRNA stability and putative RBP binding sites. Two highly similar structures were obtained running the UTR sequences of the alternative variants in The Mfold Web Server, but the structure for allele A (ΔG = -42.60 kcal/mol) seemed slightly more stable than that for allele G (ΔG = -42.20 kcal/mol). When RBP binding sites were analysed (threshold = 0.8), we found that this SNP was located one base away from a putative binding site for protein FUS (SCORE = 7.36), which is important in maintaining genomic integrity. These same studies were performed for rs42016945 and we found that the structures provided by Mfold were quite different despite the similarity of energy values. The structure for allele G (ΔG = -27.70 kcal/mol) seemed more stable than structure for allele A (ΔG = -26.90 kcal/mol). According to the analysis on RBPDB, no sites for RBP were identified promptly at the mutated site, but the SNP was immediately next to a NONO (non-POU domain-containing octamer-binding protein) binding site (SCORE = 8.95). NONO is a protein involved in numerous nuclear processes like unwinding, recombination, DNA binding and regulation of splicing. SNPs rs133517803 (RXRA) was located in a possible alternative promoter region. Therefore, we searched for putative transcription factor binding sites through PhysBinder. According to this tool, rs133517803 was located over an ESRRB (estrogen-related receptor beta) binding site (threshold = 308), whose role is still not clear. PPARG and CEBPA showed low to moderate variability in our mixed sample panel. Variations in these genes, along with RXRA, may explain part of the phenotypic variation in fat content and composition of meat, especially SNPs in RXRA, which explained an important part of the variation in the highly heritable oleic acid percentage and MUFA. The molecular bases of the phenotypic differences may be partially explained by changes in RNA structures, RBP binding sites, codon usage frequencies and TF binding sites. The SNPs we analysed should be evaluated in independent populations with in-vitro and in-vivo analyses to explain the mechanisms by which these polymorphisms may be involved in the traits. Shahidi F. Lipid-derived flavors in meat products. In: Kerry J, Kerry J, Ledward D, editors. Meat processing: improving meat quality. Cambridge: Woodhead Publishing Limited; 2002. p. 105–21. Du M, Yin J, Zhu MJ. 
Cellular signaling pathways regulating the initial stage of adipogenesis and marbling of skeletal muscle. Meat Sci. 2010;86(1):103–9. Hausman GJ, Dodson MV, Ajuwon K, Azain M, Barnes KM, Guan LL, Jiang Z, Poulos SP, Sainz RD, Smith S, Spurlock M, Novakofski J, Fernyhough ME, Bergen WG. Board-invited review: the biology and regulation of preadipocytes and adipocytes in meat animals. J Anim Sci. 2009;87(4):1218–46. Barendse W. Haplotype Analysis Improved Evidence for Candidate Genes for Intramuscular Fat Percentage from a Genome Wide Association Study of Cattle. PLoS One. 2011;6(12):e29601. doi:10.1371/journal.pone.0029601. Fan YY, Zan LS, Fu CZ, Tian WQ, Wang HB, Liu YY, Xin YP. Three novel SNPs in the coding region of PPARγ gene and their associations with meat quality traits in cattle. Mol Biol Rep. 2011;38(1):131–7. He H, Liu X, Gu Y, Liu Y, Yang J. Effect of genetic variation of CEBPA gene on body measurement and carcass traits of Qinchuan cattle. Mol Biol Rep. 2011;38:4965–9. Oh D, Lee Y, Lee C, Chung E, Yeo J. Association of bovine fatty acid composition with missense nucleotide polymorphism in exon 7 of peroxisome proliferator-activated receptor gamma gene. Anim Genet. 2011;43(4):474. Wang H, Zan LS, Wang HB, Song FB. A novel SNP of the C/EBPα gene associated with superior meat quality in indigenous Chinese cattle. Gen Mol Res. 2011;10(3):2069–77. Sevane N, Armstrong E, Cortés O, Wiener P, Pong Wong R, Dunner S, Gemqual Consortium. Association of bovine meat quality traits with genes included in the PPARG and PPARGC1A networks. Meat Sci. 2013;94:328–35. USDA (United States Department of Agriculture). Livestock and Poultry: World Markets and Trade. Foreign Agricultural Service. 2015. http://apps.fas.usda.gov/psdonline/circulars/livestock_poultry.pdf. Accessed 18 March 2016. Giovambattista G, Ripoli MV, Lirón JP, Villegas Castagnasso EE, Peral-García P, Lojo MM. DNA typing in a cattle stealing case. J Forensic Sci. 2001;46(6):1484–6. Goszczynski DE, Mazzucco JP, Ripoli MV, Villarreal EL, Rogberg-Muñoz A, Mezzadra CA, Melucci LM, Giovambattista G. Characterisation of the bovine gene LIPE and possible influence on fatty acid composition of meat. Meta Gene. 2014;16(2):746–60. Larkin MA, Blackshields G, Brown NP, Chenna R, McGettigan PA, McWilliam H, Valentin F, Wallace IM, Wilm A, Lopez R, Thompson JD, Gibson TJ, Higgins DG. Clustal W and Clustal X version 2.0. Bioinformatics. 2007;23:2947–8. The Single Nucleotide Polymorphism Database (dbSNP). http://www.ncbi.nlm.nih.gov/snp. Accessed 6 January 2016. Sequenom, Inc. https://www.sequenom.com. Accessed 6 January 2016. Barrett JC, Fry B, Maller J, Daly MJ. Haploview: analysis and visualization of LD and haplotype maps. Bioinformatics. 2005;21:263–5. Rousset F. GENEPOP'007: a complete re-implementation of the GENEPOP software for Windows and Linux. Mol Ecol Res. 2008;8:103–6. Schneider S, Roessli D, Excoffier L. Arlequin, a software for Population Genetics Data Analysis. University of Geneva: Ver 2.0. Genetics and Biometry Lab, Department of Anthropology; 2000. SAS software. Copyright, SAS Institute Inc., Cary, NC, USA. Falconer DS, Mackay TFC. Introduction to Quantitative Genetics. Harlow: Addison Wesley Longman Limited; 1996. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Statist Soc B. 1995;57:289–300. The Codon Usage Database. http://www.kazusa.or.jp/codon/. Bos taurus [gbmam]: 13374. Accessed 6 January 2016. Zuker M. 
Mfold web server for nucleic acid folding and hybridization prediction. Nucleic Acids Res. 2003;31(13):3406–15. Cook KB, Kazan H, Zuberi K, Morris Q, Hughes TR. RBPDB: a database of RNA-binding specificities. Nucleic Acids Res. 2011;39:D301–8. Broos S, Soete A, Hooghe B, Moran R, van Roy F, De Bleser P. PhysBinder: Improving the prediction of transcription factor binding sites by flexible inclusion of biophysical properties. Nucleic Acids Res. 2013;41:W531–4. Bovine HapMap Consortium. Genome-wide survey of SNP variation uncovers the genetic structure of cattle breeds. Science. 2009;324(5926):528–32. This research was funded with grants provided by ANPCYT (PICT 08-04156; PICTR2002-0017), INTA (PNPA-1126033; PNCAR-334), UNMdP (AGR456/14; AGR393/12; AGR330/10; AGR270/08; AGR202/05; AGR137/01), CONICET (PIP2010-11220090100379) and UNLP (ID V206/12, JI 9861/3/11). Instituto de Genética Veterinaria "Ing. Fernando Noel Dulout" (IGEVET), CONICET, Facultad de Ciencias Veterinarias, Universidad Nacional de La Plata, CC 296, La Plata, B1900AVW, Argentina Daniel Estanislao Goszczynski , María Verónica Ripoli , Andrés Rogberg-Muñoz & Guillermo Giovambattista Unidad Integrada INTA Balcarce-Facultad de Ciencias Agrarias, Universidad Nacional de Mar del Plata, Balcarce, Argentina Juliana Papaleo Mazzucco , Edgardo Leopoldo Villarreal , Carlos Alberto Mezzadra & Lilia Magdalena Melucci Fellow of the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina Search for Daniel Estanislao Goszczynski in: Search for Juliana Papaleo Mazzucco in: Search for María Verónica Ripoli in: Search for Edgardo Leopoldo Villarreal in: Search for Andrés Rogberg-Muñoz in: Search for Carlos Alberto Mezzadra in: Search for Lilia Magdalena Melucci in: Search for Guillermo Giovambattista in: Correspondence to Daniel Estanislao Goszczynski. All of the authors conceived and supervised the whole study. JPM, ELV, CAM and LMM bred the animals and collected the phenotypic data. DEG, MVR and GG performed DNA sequencing and SNP genotyping. DEG, ARM and LMM analysed the data. DEG, MVR, ARM and GG drafted the manuscript together. All authors read and approved the final manuscript. Genetic structure of the crossbred population used to perform validation and association studies. N: number of samples; A: purebred Angus; H: purebred Hereford; ¾A: 75 % Angus steers; ¾ H: 75 % Hereford steers; ½AH: 50 % Angus- 50 % Hereford steers; L: Limousin sire; LX: Limousin crossbred steers. (DOC 43 kb) Primers used to amplify and re-sequence the PPARG and CEBPA genes in a panel composed of 43 samples from nine cattle breeds with different meat quality. (DOC 42 kb) Fat content and composition in the local crossbred population (Angus-Hereford-Limousin). BT was measured in millimeters, IF was expressed as the amount of fat in 100 g of fresh muscle excluding the external adipose tissue, and the fatty acid content was expressed as percentage of total fatty acids. (DOC 43 kb) Additional file 4: Figure S1. Haplotypes (upper part) and linkage disequilibrium (lower part) among SNPs in the PPARG (A) and RXRA (B) genes in 260 samples from an Argentinean crossbred population (Angus-Hereford-Limousin, N = 260). Blocks were defined with the solid spine of LD method and indicated in thick lines. r2 values are indicated inside the boxes of the linkage scheme. PPARG5UTR represents SNP rs207671117. (TIF 512 kb) Goszczynski, D.E., Mazzucco, J.P., Ripoli, M.V. et al. 
Genetic characterisation of PPARG, CEBPA and RXRA, and their influence on meat quality traits in cattle. J Anim Sci Technol 58, 14 (2016) doi:10.1186/s40781-016-0095-3
Smart Alex Answers

These pages provide the answers to the Smart Alex questions at the end of each chapter of Discovering Statistics Using IBM SPSS Statistics (5th edition).

Task 1.1
What are (broadly speaking) the five stages of the research process?
1. Generating a research question: through an initial observation (hopefully backed up by some data).
2. Generate a theory to explain your initial observation.
3. Generate hypotheses: break your theory down into a set of testable predictions.
4. Collect data to test the theory: decide on what variables you need to measure to test your predictions and how best to measure or manipulate those variables.
5. Analyse the data: look at the data visually and by fitting a statistical model to see if it supports your predictions (and therefore your theory). At this point you should return to your theory and revise it if necessary.

Task 1.2
What is the fundamental difference between experimental and correlational research?
In a word, causality. In experimental research we manipulate a variable (predictor, independent variable) to see what effect it has on another variable (outcome, dependent variable). This manipulation, if done properly, allows us to compare situations where the causal factor is present to situations where it is absent. Therefore, if there are differences between these situations, we can attribute cause to the variable that we manipulated. In correlational research, we measure things that naturally occur and so we cannot attribute cause but instead look at natural covariation between variables.

Task 1.3
What is the level of measurement of the following variables?
The number of downloads of different bands' songs on iTunes: This is a discrete ratio measure. It is discrete because you can download only whole songs, and it is ratio because it has a true and meaningful zero (no downloads at all).
The names of the bands downloaded: This is a nominal variable. Bands can be identified by their name, but the names have no meaningful order. The fact that Norwegian black metal band 1349 called themselves 1349 does not make them better than British boy-band has-beens 911; the fact that 911 were a bunch of talentless idiots does, though.
Their positions in the iTunes download chart: This is an ordinal variable. We know that the band at number 1 sold more than the band at number 2 or 3 (and so on) but we don't know how many more downloads they had. So, this variable tells us the order of magnitude of downloads, but doesn't tell us how many downloads there actually were.
The money earned by the bands from the downloads: This variable is continuous and ratio. It is continuous because money (pounds, dollars, euros or whatever) can be broken down into very small amounts (you can earn fractions of euros even though there may not be an actual coin to represent these fractions).
The weight of drugs bought by the band with their royalties: This variable is continuous and ratio. If the drummer buys 100 g of cocaine and the singer buys 1 kg, then the singer has 10 times as much.
The type of drugs bought by the band with their royalties: This variable is categorical and nominal: the name of the drug tells us something meaningful (crack, cannabis, amphetamine, etc.) but has no meaningful order.
The phone numbers that the bands obtained because of their fame: This variable is categorical and nominal too: the phone numbers have no meaningful order; they might as well be letters.
A bigger phone number did not mean that it was given by a better person.
The gender of the people giving the bands their phone numbers: This variable is categorical: the people dishing out their phone numbers could fall into one of several categories based on how they self-identify when asked about their gender (their gender identity could be fluid). Taking a very simplistic view of gender, the variable might contain categories of male, female, and non-binary.
The instruments played by the band members: This variable is categorical and nominal too: the instruments have no meaningful order but their names tell us something useful (guitar, bass, drums, etc.).
The time they had spent learning to play their instruments: This is a continuous and ratio variable. The amount of time could be split into infinitely small divisions (nanoseconds even) and there is a meaningful true zero (no time spent learning your instrument means that, like 911, you can't play at all).

Task 1.4
Say I own 857 CDs. My friend has written a computer program that uses a webcam to scan my shelves in my house where I keep my CDs and measure how many I have. His program says that I have 863 CDs. Define measurement error. What is the measurement error in my friend's CD counting device?
Measurement error is the difference between the true value of something and the numbers used to represent that value. In this trivial example, the measurement error is 6 CDs. In this example we know the true value of what we're measuring; usually we don't have this information, so we have to estimate this error rather than knowing its actual value.

Task 1.5
Sketch the shape of a normal distribution, a positively skewed distribution and a negatively skewed distribution.
[Sketches: normal distribution, positive skew, negative skew]

Task 1.6
In 2011 I got married and we went to Disney Florida for our honeymoon. We bought some bride and groom Mickey Mouse hats and wore them around the parks. The staff at Disney are really nice and upon seeing our hats would say 'congratulations' to us. We counted how many times people said congratulations over 7 days of the honeymoon: 5, 13, 7, 14, 11, 9, 17. Calculate the mean, median, sum of squares, variance and standard deviation of these data.
First compute the mean:
\[ \begin{aligned} \overline{X} &= \frac{\sum_{i=1}^{n} x_i}{n} \\ &= \frac{5+13+7+14+11+9+17}{7} \\ &= \frac{76}{7} \\ &= 10.86 \end{aligned} \]
To calculate the median, first let's arrange the scores in ascending order: 5, 7, 9, 11, 13, 14, 17. The median will be the (n + 1)/2th score. There are 7 scores, so this will be the 8/2 = 4th. The 4th score in our ordered list is 11.
To calculate the sum of squares, first take the mean from each score, then square this difference, and finally add up these squared values:

Score | Error (score − mean) | Error squared
5 | −5.86 | 34.34
13 | 2.14 | 4.58
7 | −3.86 | 14.90
14 | 3.14 | 9.86
11 | 0.14 | 0.02
9 | −1.86 | 3.46
17 | 6.14 | 37.70

So, the sum of squared errors is:
\[ \begin{aligned} SS &= 34.34 + 4.58 + 14.90 + 9.86 + 0.02 + 3.46 + 37.70 \\ &= 104.86 \end{aligned} \]
The variance is the sum of squared errors divided by the degrees of freedom:
\[ \begin{aligned} s^2 &= \frac{SS}{N - 1} \\ &= \frac{104.86}{6} \\ &= 17.48 \end{aligned} \]
The standard deviation is the square root of the variance:
\[ \begin{aligned} s &= \sqrt{s^2} \\ &= \sqrt{17.48} \\ &= 4.18 \end{aligned} \]
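Although the book's examples are built around IBM SPSS Statistics, these hand calculations are easy to cross-check with a few lines of R; the object name congrats below is just for illustration.

```r
# Quick cross-check of the Task 1.6 hand calculations in R
congrats <- c(5, 13, 7, 14, 11, 9, 17)   # times people said 'congratulations'

mean(congrats)                        # 10.857... (the 10.86 above)
median(congrats)                      # 11
ss <- sum((congrats - mean(congrats))^2)
ss                                    # 104.857... (the sum of squared errors)
ss / (length(congrats) - 1)           # 17.48 (variance); identical to var(congrats)
sqrt(var(congrats))                   # 4.18 (standard deviation); identical to sd(congrats)
```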
Task 1.7
In this chapter we used an example of the time taken for 21 heavy smokers to fall off a treadmill at the fastest setting (18, 16, 18, 24, 23, 22, 22, 23, 26, 29, 32, 34, 34, 36, 36, 43, 42, 49, 46, 46, 57). Calculate the sums of squares, variance and standard deviation of these data.
To calculate the sum of squares, take the mean from each value, then square this difference. Finally, add up these squared values (the values in the final column). The sum of squared errors is a massive 2685.24.

Score | Mean | Difference | Difference squared
18 | 32.19 | −14.19 | 201.36
24 | 32.19 | −8.19 | 67.08
32 | 32.19 | −0.19 | 0.04
34 | 32.19 | 1.81 | 3.28
36 | 32.19 | 3.81 | 14.52

(Only a selection of rows is shown; every score is treated in the same way.)

The variance is the sum of squared errors divided by the degrees of freedom (\(N-1\)). There were 21 scores and so the degrees of freedom were 20. The variance is, therefore:
\[ \begin{aligned} s^2 &= \frac{SS}{N - 1} \\ &= \frac{2685.24}{20} \\ &= 134.26 \end{aligned} \]
The standard deviation is the square root of the variance:
\[ \begin{aligned} s &= \sqrt{s^2} \\ &= \sqrt{134.26} \\ &= 11.59 \end{aligned} \]

Task 1.8
Sports scientists sometimes talk of a 'red zone', which is a period during which players in a team are more likely to pick up injuries because they are fatigued. When a player hits the red zone it is a good idea to rest them for a game or two. At a prominent London football club that I support, they measured how many consecutive games the 11 first team players could manage before hitting the red zone: 10, 16, 8, 9, 6, 8, 9, 11, 12, 19, 5. Calculate the mean, standard deviation, median, range and interquartile range.
First we need to compute the mean:
\[ \begin{aligned} \overline{X} &= \frac{\sum_{i=1}^{n} x_i}{n} \\ &= \frac{10+16+8+9+6+8+9+11+12+19+5}{11} \\ &= \frac{113}{11} \\ &= 10.27 \end{aligned} \]
Then the standard deviation, which we do as follows:

Score | Error (score − mean) | Error squared
10 | −0.27 | 0.07
16 | 5.73 | 32.80
8 | −2.27 | 5.17
9 | −1.27 | 1.62
6 | −4.27 | 18.26
8 | −2.27 | 5.17
9 | −1.27 | 1.62
11 | 0.73 | 0.53
12 | 1.73 | 2.98
19 | 8.73 | 76.17
5 | −5.27 | 27.80

\[ \begin{aligned} SS &= 0.07 + 32.80 + 5.17 + 1.62 + 18.26 + 5.17 + 1.62 + 0.53 + 2.98 + 76.17 + 27.80 \\ &= 172.18 \end{aligned} \]
The variance is the sum of squared errors divided by the degrees of freedom:
\[ \begin{aligned} s^2 &= \frac{SS}{N - 1} \\ &= \frac{172.18}{10} \\ &= 17.22 \end{aligned} \]
The standard deviation is the square root of the variance:
\[ \begin{aligned} s &= \sqrt{s^2} \\ &= \sqrt{17.22} \\ &= 4.15 \end{aligned} \]
To calculate the median, range and interquartile range, first let's arrange the scores in ascending order: 5, 6, 8, 8, 9, 9, 10, 11, 12, 16, 19.
The median: The median will be the (\(n + 1\))/2th score. There are 11 scores, so this will be the 12/2 = 6th. The 6th score in our ordered list is 9 games. Therefore, the median number of games is 9.
The lower quartile: This is the median of the lower half of scores. If we split the data at 9 (the 6th score), there are 5 scores below this value. The median of 5 = 6/2 = 3rd score. The 3rd score is 8, the lower quartile is therefore 8 games.
The upper quartile: This is the median of the upper half of scores. If we split the data at 9 again (not including this score), there are 5 scores above this value. The median of 5 = 6/2 = 3rd score above the median. The 3rd score above the median is 12; the upper quartile is therefore 12 games.
The range: This is the highest score (19) minus the lowest (5), i.e. 14 games.
The interquartile range: This is the difference between the upper and lower quartile: 12 − 8 = 4 games.
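The same kind of cross-check works for the red-zone data. The steps below mirror the 'median of each half' rule used in the worked answer, because R's quantile() function uses a different default rule and can give slightly different quartiles.

```r
# Cross-check of Task 1.8 in R (red-zone data)
games <- c(10, 16, 8, 9, 6, 8, 9, 11, 12, 19, 5)

mean(games)                     # 10.27
sd(games)                       # 4.15
median(games)                   # 9
sorted  <- sort(games)
lower_q <- median(sorted[1:5])  # 8  (median of the five scores below the median)
upper_q <- median(sorted[7:11]) # 12 (median of the five scores above the median)
diff(range(games))              # 14 (range)
upper_q - lower_q               # 4  (interquartile range)
```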
Task 1.9
Celebrities always seem to be getting divorced. The (approximate) lengths of some celebrity marriages (in days) are: 240 (J-Lo and Cris Judd), 144 (Charlie Sheen and Donna Peele), 143 (Pamela Anderson and Kid Rock), 72 (Kim Kardashian, if you can call her a celebrity), 30 (Drew Barrymore and Jeremy Thomas), 26 (Axl Rose and Erin Everly), 2 (Britney Spears and Jason Alexander), 150 (Drew Barrymore again, but this time with Tom Green), 14 (Eddie Murphy and Tracy Edmonds), 150 (Renee Zellweger and Kenny Chesney), 1657 (Jennifer Aniston and Brad Pitt). Compute the mean, median, standard deviation, range and interquartile range for these lengths of celebrity marriages.
\[ \begin{aligned} \overline{X} &= \frac{\sum_{i=1}^{n} x_i}{n} \\ &= \frac{240+144+143+72+30+26+2+150+14+150+1657}{11} \\ &= \frac{2628}{11} \\ &= 238.91 \end{aligned} \]

Score | Error (score − mean) | Error squared
240 | 1.09 | 1.19
144 | −94.91 | 9007.91
72 | −166.91 | 27858.95
2 | −236.91 | 56126.35
1657 | 1418.09 | 2010979.25

(Only a selection of rows is shown; every score is treated in the same way.)

\[ \begin{aligned} SS &= 1.19 + 9007.74 + 9198.55 + 27858.64 + 43643.01 + 45330.28 + 56125.92 + 7904.83 + 50584.10 + 7904.83 + 2010981.83 \\ &= 2268540.92 \end{aligned} \]
The variance is the sum of squared errors divided by the degrees of freedom:
\[ \begin{aligned} s^2 &= \frac{SS}{N - 1} \\ &= \frac{2268540.92}{10} \\ &= 226854.09 \end{aligned} \]
The standard deviation is the square root of the variance:
\[ \begin{aligned} s &= \sqrt{s^2} \\ &= \sqrt{226854.09} \\ &= 476.29 \end{aligned} \]
To calculate the median, range and interquartile range, first let's arrange the scores in ascending order: 2, 14, 26, 30, 72, 143, 144, 150, 150, 240, 1657.
The median: The median will be the (n + 1)/2th score. There are 11 scores, so this will be the 12/2 = 6th. The 6th score in our ordered list is 143. The median length of these celebrity marriages is therefore 143 days.
The lower quartile: This is the median of the lower half of scores. If we split the data at 143 (the 6th score), there are 5 scores below this value. The median of 5 = 6/2 = 3rd score. The 3rd score is 26, the lower quartile is therefore 26 days.
The upper quartile: This is the median of the upper half of scores. If we split the data at 143 again (not including this score), there are 5 scores above this value. The median of 5 = 6/2 = 3rd score above the median. The 3rd score above the median is 150; the upper quartile is therefore 150 days.
The range: This is the highest score (1657) minus the lowest (2), i.e. 1655 days.
The interquartile range: This is the difference between the upper and lower quartile: 150 − 26 = 124 days.
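And again for the celebrity marriage data, this time including the extreme score; the quartiles are computed as the medians of the lower and upper halves, as in the worked answer.

```r
# Cross-check of Task 1.9 in R (lengths of celebrity marriages, in days)
marriages <- c(240, 144, 143, 72, 30, 26, 2, 150, 14, 150, 1657)

mean(marriages)                  # 238.91
sd(marriages)                    # 476.29
median(marriages)                # 143
sorted  <- sort(marriages)
lower_q <- median(sorted[1:5])   # 26  (lower quartile)
upper_q <- median(sorted[7:11])  # 150 (upper quartile)
diff(range(marriages))           # 1655 (range)
upper_q - lower_q                # 124 (interquartile range)
```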
Let's now calculate the standard deviation excluding Jennifer Aniston and Brad Pitt's marriage. Part of the table of squared differences is reproduced below:

| Score | Difference | Difference squared |
|-------|------------|--------------------|
| 240 | 142.9 | 20420.41 |
| 144 | 46.9 | 2199.61 |
| 72 | −25.1 | 630.01 |
| 30 | −67.1 | 4502.41 |
| 2 | −95.1 | 9044.01 |

\[ \begin{aligned} SS &= 20420.41 + 2199.61 + 2106.81 + 630.01 + 4502.41 + 5055.21 + 9044.01 + 2798.41 + 6905.61 + 2798.41 \\ &= 56460.90 \end{aligned} \]

The variance is the sum of squared errors divided by the degrees of freedom:

\[ \begin{aligned} s^2 &= \frac{SS}{N - 1} \\ &= \frac{56460.90}{9} \\ &= 6273.43 \end{aligned} \]

The standard deviation is the square root of the variance:

\[ \begin{aligned} s &= \sqrt{s^2} \\ &= \sqrt{6273.43} \\ &= 79.21 \end{aligned} \]

From these calculations we can see that the variance and standard deviation, like the mean, are both greatly influenced by extreme scores. When Jennifer Aniston and Brad Pitt's marriage was included in the calculations (see Smart Alex Task 9), the variance and standard deviation were much larger: 226854.09 and 476.29 respectively.

To calculate the median, range and interquartile range, let's again arrange the scores in ascending order, but this time excluding Jennifer Aniston and Brad Pitt's marriage: 2, 14, 26, 30, 72, 143, 144, 150, 150, 240.

The median: The median will be the (n + 1)/2th score. There are now 10 scores, so this will be the 11/2 = 5.5th. Therefore, we take the average of the 5th and 6th scores. The 5th score is 72 and the 6th is 143; the median is therefore 107.5 days.

The lower quartile: This is the median of the lower half of scores. If we split the data at 107.5 (this value is not itself in the data set), there are 5 scores below it. The median of 5 = 6/2 = 3rd score. The 3rd score is 26; the lower quartile is therefore 26 days.

The upper quartile: This is the median of the upper half of scores. If we split the data at 107.5 again, there are 5 scores above this value. The median of 5 = 6/2 = 3rd score above the median. The 3rd score above the median is 150; the upper quartile is therefore 150 days.

The range: This is the highest score (240) minus the lowest (2), i.e. 238 days. You'll notice that without the extreme score the range drops dramatically from 1655 to 238 – less than a sixth of its previous size.

The interquartile range: This is the difference between the upper and lower quartile: 150 − 26 = 124 days of marriage. This is the same as the value we got when Jennifer Aniston and Brad Pitt's marriage was included, which demonstrates the advantage of the interquartile range over the range: it isn't affected by extreme scores at either end of the distribution.
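Purely as an aside, the influence of the extreme score is easy to see if you compute the same summary statistics with and without it. The helper function below is my own sketch (it simply mirrors the hand calculations above) and isn't part of the original task.

```python
import numpy as np

days = np.array([240, 144, 143, 72, 30, 26, 2, 150, 14, 150, 1657])

def summarise(x):
    x = np.sort(x)
    n = len(x)
    q1 = np.median(x[: n // 2])          # median of the lower half of scores
    q3 = np.median(x[(n + 1) // 2:])     # median of the upper half of scores
    return (round(x.mean(), 2), round(x.std(ddof=1), 2),
            np.median(x), x[-1] - x[0], q3 - q1)

print(summarise(days))        # with Aniston & Pitt:  mean 238.91, SD 476.29, median 143,   range 1655, IQR 124
print(summarise(days[:-1]))   # without them:         mean 97.1,   SD 79.2,   median 107.5, range 238,  IQR 124
# The mean, SD and range change dramatically; the median and IQR barely move.
```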
Why do we use samples?

We are usually interested in populations, but because we cannot collect data from every human being (or whatever) in the population, we collect data from a small subset of the population (known as a sample) and use these data to infer things about the population as a whole.

What is the mean and how do we tell if it's representative of our data?

The mean is a simple statistical model of the centre of a distribution of scores: a hypothetical estimate of the 'typical' score. We use the variance, or standard deviation, to tell us whether the mean is representative of our data. The standard deviation is a measure of how much error there is associated with the mean: a small standard deviation indicates that the mean is a good representation of our data.

What's the difference between the standard deviation and the standard error?

The standard deviation tells us how much observations in our sample differ from the mean value within our sample. The standard error tells us not how well the sample mean represents the sample itself, but how well the sample mean represents the population mean. The standard error is the standard deviation of the sampling distribution of a statistic. For a given statistic (e.g. the mean) it tells us how much variability there is in this statistic across samples from the same population. Large values, therefore, indicate that a statistic from a given sample may not be an accurate reflection of the population from which the sample came.

In Chapter 1 we used an example of the time in seconds taken for 21 heavy smokers to fall off a treadmill at the fastest setting (18, 16, 18, 24, 23, 22, 22, 23, 26, 29, 32, 34, 34, 36, 36, 43, 42, 49, 46, 46, 57). Calculate the standard error and 95% confidence interval for these data.

If you did the tasks in Chapter 1, you'll know that the mean is 32.19 seconds:

\[ \begin{aligned} \overline{X} &= \frac{\sum_{i=1}^{n} x_i}{n} \\ &= \frac{16+(2\times18)+(2\times22)+(2\times23)+24+26+29+32+(2\times34)+(2\times36)+42+43+(2\times46)+49+57}{21} \\ &= \frac{676}{21} \\ &= 32.19 \end{aligned} \]

We also worked out that the sum of squared errors was 2685.24; the variance was 2685.24/20 = 134.26; and the standard deviation is the square root of the variance, so was \(\sqrt{134.26} = 11.59\). The standard error will be:

\[ SE = \frac{s}{\sqrt{N}} = \frac{11.59}{\sqrt{21}} = 2.53\]

The sample is small, so to calculate the confidence interval we need to find the appropriate value of t. First we need to calculate the degrees of freedom, \(N − 1\). With 21 data points, the degrees of freedom are 20. For a 95% confidence interval we can look up the value in the column labelled 'Two-Tailed Test', '0.05' in the table of critical values of the t-distribution (Appendix). The corresponding value is 2.09. The confidence interval is, therefore, given by:

Lower boundary of confidence interval = \(\overline{X}-(2.09\times SE)\) = 32.19 – (2.09 × 2.53) = 26.90

Upper boundary of confidence interval = \(\overline{X}+(2.09\times SE)\) = 32.19 + (2.09 × 2.53) = 37.48
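If you'd like to verify the standard error and confidence interval without the t-table, here's a short Python sketch using scipy (again, my own illustration; the tiny differences from the hand calculation come from using the unrounded critical value of 2.086 rather than 2.09).

```python
import numpy as np
from scipy import stats

times = np.array([18, 16, 18, 24, 23, 22, 22, 23, 26, 29, 32,
                  34, 34, 36, 36, 43, 42, 49, 46, 46, 57])

mean = times.mean()
se = times.std(ddof=1) / np.sqrt(len(times))      # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(times) - 1)    # two-tailed 95% critical value

print(f"mean = {mean:.2f}, SE = {se:.2f}, t(20) = {t_crit:.2f}")
print(f"95% CI = [{mean - t_crit*se:.2f}, {mean + t_crit*se:.2f}]")
# mean = 32.19, SE = 2.53, t(20) = 2.09, 95% CI ≈ [26.91, 37.47]
```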
What do the sum of squares, variance and standard deviation represent? How do they differ?

All of these measures tell us something about how well the mean fits the observed sample data. Large values (relative to the scale of measurement) suggest the mean is a poor fit of the observed scores, and small values suggest a good fit. They are also, therefore, measures of dispersion, with large values indicating a spread-out distribution of scores and small values showing a more tightly packed distribution. These measures all represent the same thing, but differ in how they express it. The sum of squared errors is a 'total' and is, therefore, affected by the number of data points. The variance is the 'average' variability but in units squared. The standard deviation is the average variation but converted back to the original units of measurement. As such, the size of the standard deviation can be compared to the mean (because they are in the same units of measurement).

What is a test statistic and what does it tell us?

A test statistic is a statistic for which we know how frequently different values occur. The observed value of such a statistic is typically used to test hypotheses, or to establish whether a model is a reasonable representation of what's happening in the population.

What are Type I and Type II errors?

A Type I error occurs when we believe that there is a genuine effect in our population, when in fact there isn't. A Type II error occurs when we believe that there is no effect in the population when, in reality, there is.

What is statistical power?

Power is the ability of a test to detect an effect of a particular size (a value of 0.8 is a good level to aim for).

Figure 2.16 shows two experiments that looked at the effect of singing versus conversation on how much time a woman would spend with a man. In both experiments the means were 10 (singing) and 12 (conversation), the standard deviations in all groups were 3, but the group sizes were 10 per group in the first experiment and 100 per group in the second. Compute the values of the confidence intervals displayed in the figure.

Experiment 1: In both groups, because they have a standard deviation of 3 and a sample size of 10, the standard error will be:

\[ SE = \frac{s}{\sqrt{N}} = \frac{3}{\sqrt{10}} = 0.95\]

The sample is small, so to calculate the confidence interval we need to find the appropriate value of t. First we need to calculate the degrees of freedom, \(N − 1\). With 10 data points, the degrees of freedom are 9. For a 95% confidence interval we can look up the value in the column labelled 'Two-Tailed Test', '0.05' in the table of critical values of the t-distribution (Appendix). The corresponding value is 2.26. The confidence interval for the singing group is, therefore, given by:

Lower boundary of confidence interval = \(\overline{X}-(2.26\times SE)\) = 10 – (2.26 × 0.95) = 7.85

Upper boundary of confidence interval = \(\overline{X}+(2.26\times SE)\) = 10 + (2.26 × 0.95) = 12.15

For the conversation group:

Lower boundary of confidence interval = \(\overline{X}-(2.26\times SE)\) = 12 – (2.26 × 0.95) = 9.85

Upper boundary of confidence interval = \(\overline{X}+(2.26\times SE)\) = 12 + (2.26 × 0.95) = 14.15

Experiment 2: In both groups, because they have a standard deviation of 3 and a sample size of 100, the standard error will be:

\[ SE = \frac{s}{\sqrt{N}} = \frac{3}{\sqrt{100}} = 0.3\]

The sample is large, so to calculate the confidence interval we need to find the appropriate value of z. For a 95% confidence interval we should look up the value of 0.025 in the column labelled Smaller Portion in the table of the standard normal distribution (Appendix). The corresponding value is 1.96. The confidence interval for the singing group is, therefore, given by:

Lower boundary of confidence interval = \(\overline{X}-(1.96\times SE)\) = 10 – (1.96 × 0.3) = 9.41

Upper boundary of confidence interval = \(\overline{X}+(1.96\times SE)\) = 10 + (1.96 × 0.3) = 10.59

For the conversation group:

Lower boundary of confidence interval = \(\overline{X}-(1.96\times SE)\) = 12 – (1.96 × 0.3) = 11.41

Upper boundary of confidence interval = \(\overline{X}+(1.96\times SE)\) = 12 + (1.96 × 0.3) = 12.59

Figure 2.17 shows a similar study to above, but the means were 10 (singing) and 10.01 (conversation), the standard deviations in both groups were 3, and each group contained 1 million people. Compute the values of the confidence intervals displayed in the figure.

In both groups, because they have a standard deviation of 3 and a sample size of 1,000,000, the standard error will be:

\[ SE = \frac{s}{\sqrt{N}} = \frac{3}{\sqrt{1000000}} = 0.003\]

The sample is large, so to calculate the confidence interval we need to find the appropriate value of z. For a 95% confidence interval we should look up the value of 0.025 in the column labelled Smaller Portion in the table of the standard normal distribution (Appendix). The corresponding value is 1.96.
The confidence interval for the singing group is, therefore, given by:

Lower boundary of confidence interval = \(\overline{X}-(1.96\times SE)\) = 10 – (1.96 × 0.003) = 9.99412

Upper boundary of confidence interval = \(\overline{X}+(1.96\times SE)\) = 10 + (1.96 × 0.003) = 10.00588

For the conversation group:

Lower boundary of confidence interval = \(\overline{X}-(1.96\times SE)\) = 10.01 – (1.96 × 0.003) = 10.00412

Upper boundary of confidence interval = \(\overline{X}+(1.96\times SE)\) = 10.01 + (1.96 × 0.003) = 10.01588

Note: these values will look slightly different from the graph because the exact means were 10.00147 and 10.01006, but we rounded them to 10 and 10.01 to make life a bit easier. If you use the exact values you'd get, for the singing group:

Lower boundary of confidence interval = 10.00147 – (1.96 × 0.003) = 9.99559

Upper boundary of confidence interval = 10.00147 + (1.96 × 0.003) = 10.00735

and for the conversation group:

Lower boundary of confidence interval = 10.01006 – (1.96 × 0.003) = 10.00418

Upper boundary of confidence interval = 10.01006 + (1.96 × 0.003) = 10.01594

In Chapter 1 (Task 8) we looked at an example of how many games it took a sportsperson before they hit the 'red zone'. Calculate the standard error and confidence interval for those data.

We worked out in Chapter 1 that the mean was 10.27, the standard deviation 4.15, and there were 11 sportspeople in the sample. The standard error will be:

\[ SE = \frac{s}{\sqrt{N}} = \frac{4.15}{\sqrt{11}} = 1.25\]

The sample is small, so to calculate the confidence interval we need to find the appropriate value of t. First we need to calculate the degrees of freedom, \(N − 1\). With 11 data points, the degrees of freedom are 10. For a 95% confidence interval we can look up the value in the column labelled 'Two-Tailed Test', '0.05' in the table of critical values of the t-distribution (Appendix). The corresponding value is 2.23. The confidence interval is, therefore, given by:

Lower boundary of confidence interval = \(\overline{X}-(2.23\times SE)\) = 10.27 – (2.23 × 1.25) = 7.48

Upper boundary of confidence interval = \(\overline{X}+(2.23\times SE)\) = 10.27 + (2.23 × 1.25) = 13.06

At a rival club to the one I support, they similarly measured the number of consecutive games it took their players before they reached the red zone. The data are: 6, 17, 7, 3, 8, 9, 4, 13, 11, 14, 7. Calculate the mean, standard deviation, and confidence interval for these data.

First we need to compute the mean:

\[ \begin{aligned} \overline{X} &= \frac{\sum_{i=1}^{n} x_i}{n} \\ &= \frac{6+17+7+3+8+9+4+13+11+14+7}{11} \\ &= \frac{99}{11} \\ &= 9.00 \end{aligned} \]

Then the standard deviation. For example, the row of the table of squared differences for the score of 3 is:

| Score | Difference | Difference squared |
|-------|------------|--------------------|
| 3 | −6 | 36 |

The sum of squared errors is:

\[ \begin{aligned} SS &= 9 + 64 + 4 + 36 + 1 + 0 + 25 + 16 + 4 + 25 + 4 \\ &= 188 \end{aligned} \]

The variance is the sum of squared errors divided by the degrees of freedom:

\[ \begin{aligned} s^2 &= \frac{SS}{N - 1} \\ &= \frac{188}{10} \\ &= 18.8 \end{aligned} \]

The standard deviation is the square root of the variance:

\[ \begin{aligned} s &= \sqrt{s^2} \\ &= \sqrt{18.8} \\ &= 4.34 \end{aligned} \]

There were 11 sportspeople in the sample, so the standard error will be:

\[ SE = \frac{s}{\sqrt{N}} = \frac{4.34}{\sqrt{11}} = 1.31\]

Lower boundary of confidence interval = \(\overline{X}-(2.23\times SE)\) = 9 – (2.23 × 1.31) = 6.08

Upper boundary of confidence interval = \(\overline{X}+(2.23\times SE)\) = 9 + (2.23 × 1.31) = 11.92
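The pattern across Figures 2.16 and 2.17 (and across the two football clubs) is simply that the confidence interval narrows as the standard error shrinks. The sketch below makes that explicit; note that the switch from t to z at N = 30 is just a rule of thumb I've used to mirror the small-sample/large-sample distinction in the text, not a hard rule.

```python
import numpy as np
from scipy import stats

def ci_half_width(sd, n, level=0.95):
    """Half-width of a confidence interval for a mean."""
    se = sd / np.sqrt(n)
    if n < 30:                                     # small sample: use t
        crit = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
    else:                                          # large sample: use z
        crit = stats.norm.ppf(1 - (1 - level) / 2)
    return crit * se

for n in (10, 100, 1_000_000):
    print(n, round(ci_half_width(3, n), 4))
# 10 -> 2.146 (t = 2.26), 100 -> 0.588 (z = 1.96), 1000000 -> 0.0059
```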
In Chapter 1 (Task 9) we looked at the lengths in days of some celebrity marriages. Here are the lengths in days of nine marriages, one being mine and the other eight being those of some of my friends and family (in all but one case up to the day I'm writing this, which is 8 March 2012, but in the 91-day case it was the entire duration – this isn't my marriage, in case you're wondering): 210, 91, 3901, 1339, 662, 453, 16672, 21963, 222. Calculate the mean, standard deviation and confidence interval for these data.

First the mean:

\[ \begin{aligned} \overline{X} &= \frac{\sum_{i=1}^{n} x_i}{n} \\ &= \frac{210+91+3901+1339+662+453+16672+21963+222}{9} \\ &= \frac{45513}{9} \\ &= 5057 \end{aligned} \]

Compute the standard deviation as follows. Part of the table of squared differences is reproduced below:

| Score | Difference | Difference squared |
|-------|------------|--------------------|
| 210 | −4847 | 23493409 |
| 91 | −4966 | 24661156 |
| 3901 | −1156 | 1336336 |
| 1339 | −3718 | 13823524 |
| 16672 | 11615 | 134908225 |

\[ \begin{aligned} SS &= 23493409 + 24661156 + 1336336 + 13823524 + 19316025 + 21196816 + 134908225 + 285812836 + 23377225 \\ &= 547925552 \end{aligned} \]

The variance is the sum of squared errors divided by the degrees of freedom:

\[ \begin{aligned} s^2 &= \frac{SS}{N - 1} \\ &= \frac{547925552}{8} \\ &= 68490694 \end{aligned} \]

The standard deviation is the square root of the variance:

\[ \begin{aligned} s &= \sqrt{s^2} \\ &= \sqrt{68490694} \\ &= 8275.91 \end{aligned} \]

The standard error is:

\[ SE = \frac{s}{\sqrt{N}} = \frac{8275.91}{\sqrt{9}} = 2758.64\]

The sample is small, so to calculate the confidence interval we need to find the appropriate value of t. First we need to calculate the degrees of freedom, \(N − 1\). With 9 data points, the degrees of freedom are 8. For a 95% confidence interval we can look up the value in the column labelled 'Two-Tailed Test', '0.05' in the table of critical values of the t-distribution (Appendix). The corresponding value is 2.31. The confidence interval is, therefore, given by:

Lower boundary of CI = \(\overline{X}-(2.31\times SE)\) = 5057 – (2.31 × 2758.64) = −1315.46

Upper boundary of CI = \(\overline{X}+(2.31\times SE)\) = 5057 + (2.31 × 2758.64) = 11429.46

What is an effect size and how is it measured?

An effect size is an objective and standardized measure of the magnitude of an observed effect. Measures include Cohen's d, the odds ratio and Pearson's correlation coefficient, r. Cohen's d, for example, is the difference between two means divided by either the standard deviation of the control group, or by a pooled standard deviation.

In Chapter 1 (Task 8) we looked at an example of how many games it took a sportsperson before they hit the 'red zone', then in Chapter 2 we looked at data from a rival club. Compute and interpret Cohen's d for the difference in the mean number of games it took players to become fatigued in the two teams mentioned in those tasks.

Cohen's d is defined as:

\[\hat{d} = \frac{\bar{X_1}-\bar{X_2}}{s}\]

There isn't an obvious control group, so let's use a pooled estimate of the standard deviation:

\[ \begin{aligned} s_p &= \sqrt{\frac{(N_1-1) s_1^2+(N_2-1) s_2^2}{N_1+N_2-2}} \\ &= \sqrt{\frac{(11-1)4.15^2+(11-1)4.34^2}{11+11-2}} \\ &= \sqrt{\frac{360.23}{20}} \\ &= 4.24 \end{aligned} \]

Therefore, Cohen's d is:

\[\hat{d} = \frac{10.27-9}{4.24} = 0.30\]

Therefore, the second team fatigued in fewer matches than the first team by about 1/3 of a standard deviation. By the benchmarks that we probably shouldn't use, this is a small to medium effect, but I guess if you're managing a top-flight sports team, fatiguing 1/3 of a standard deviation faster than one of your opponents could make quite a substantial difference to your performance and team rotation over the season.
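As a quick arithmetic check (not part of the book's workflow), Cohen's d with a pooled standard deviation can be computed directly from the summary statistics:

```python
import numpy as np

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Pooled standard deviation: each group's variance weighted by its df
    s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / s_pooled

# Team 1: mean 10.27, SD 4.15, n = 11; Team 2: mean 9.00, SD 4.34, n = 11
print(round(cohens_d(10.27, 4.15, 11, 9.00, 4.34, 11), 2))   # ≈ 0.30
```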
Calculate and interpret Cohen's d for the difference in the mean duration of the celebrity marriages in Chapter 1 (Task 9) and me and my friends' marriages (Chapter 2, Task 13).

Cohen's d is defined as:

\[\hat{d} = \frac{\bar{X_1}-\bar{X_2}}{s}\]

There isn't an obvious control group, so let's use a pooled estimate of the standard deviation:

\[ \begin{aligned} s_p &= \sqrt{\frac{(N_1-1) s_1^2+(N_2-1) s_2^2}{N_1+N_2-2}} \\ &= \sqrt{\frac{(11-1)476.29^2+(9-1)8275.91^2}{11+9-2}} \\ &= \sqrt{\frac{550194093}{18}} \\ &= 5528.68 \end{aligned} \]

Therefore, Cohen's d is:

\[\hat{d} = \frac{5057-238.91}{5528.68} = 0.87\]

My friends' and family's marriages are 0.87 standard deviations longer than those of the sample of celebrities. By the benchmarks that we probably shouldn't use, this is a large effect.

What are the problems with null hypothesis significance testing?

We can't conclude that an effect is important because the p-value from which we determine significance is affected by sample size. Therefore, the word 'significant' is meaningless when referring to a p-value. The null hypothesis is never true. If the p-value is greater than .05 then we can decide to reject the alternative hypothesis, but this is not the same thing as the null hypothesis being true: a non-significant result tells us that the effect is not big enough to be found, but it doesn't tell us that the effect is zero. A significant result does not tell us that the null hypothesis is false (see text for details). It encourages all-or-nothing thinking: if p < 0.05 then an effect is significant, but if p > 0.05 it is not. So, p = 0.0499 is significant but p = 0.0501 is not, even though these ps differ by only 0.0002.

What is the difference between a confidence interval and a credible interval?

A 95% confidence interval is set so that before the data are collected there is a long-run probability of 0.95 (or 95%) that the interval will contain the true value of the parameter. This means that in 100 random samples, the intervals will contain the true value in 95 of them but won't in 5. Once the data are collected, your sample is either one of the 95% that produces an interval containing the true value, or one of the 5% that does not. In other words, having collected the data, the probability of the interval containing the true value of the parameter is either 0 (it does not contain it) or 1 (it does contain it), but you do not know which. A credible interval is different in that it reflects the plausible probability that the interval contains the true value: for example, a 95% credible interval has a 0.95 probability of containing the true value.

What is a meta-analysis?

Meta-analysis is where effect sizes from different studies testing the same hypothesis are combined to get a better estimate of the size of the effect in the population.

What does a Bayes factor tell us?

The Bayes factor is the ratio of the probability of the data given the alternative hypothesis to that of the data given the null hypothesis.
A Bayes factor less than 1 supports the null hypothesis (it suggests the data are more likely given the null hypothesis than the alternative hypothesis); conversely, a Bayes factor greater than 1 suggests that the observed data are more likely given the alternative hypothesis than the null. Values between 1 and 3 are considered evidence for the alternative hypothesis that is 'barely worth mentioning', values between 3 and 10 are considered to indicate evidence for the alternative hypothesis that 'has substance', and values greater than 10 are strong evidence for the alternative hypothesis.

Various studies have shown that students who use laptops in class often do worse on their modules (Payne-Carter, Greenberg, & Walker, 2016; Sana, Weston, & Cepeda, 2013). Table 3.1 shows some fabricated data that mimic what has been found. What is the odds ratio for passing the exam if the student uses a laptop in class compared to if they don't?

Table 3.1 (reproduced): Number of people who passed or failed an exam classified by whether they take their laptop to class

| | Laptop | No Laptop | Sum |
|------|--------|-----------|-----|
| Pass | 24 | 49 | 73 |
| Fail | 16 | 11 | 27 |
| Sum | 40 | 60 | 100 |

First we compute the odds of passing when a laptop is used in class:

\[ \begin{aligned} \text{Odds}_{\text{pass when laptop is used}} &= \frac{\text{Number of laptop users passing exam}}{\text{Number of laptop users failing exam}} \\ &= \frac{24}{16} \\ &= 1.5 \end{aligned} \]

Next we compute the odds of passing when a laptop is not used in class:

\[ \begin{aligned} \text{Odds}_{\text{pass when laptop is not used}} &= \frac{\text{Number of students without laptops passing exam}}{\text{Number of students without laptops failing exam}} \\ &= \frac{49}{11} \\ &= 4.45 \end{aligned} \]

The odds ratio is the ratio of the two odds that we have just computed:

\[ \begin{aligned} \text{Odds Ratio} &= \frac{\text{Odds}_{\text{pass when laptop is used}}}{\text{Odds}_{\text{pass when laptop is not used}}} \\ &= \frac{1.5}{4.45} \\ &= 0.34 \end{aligned} \]

The odds of passing when using a laptop are 0.34 times those when a laptop is not used. If we take the reciprocal of this, we could say that the odds of passing when not using a laptop are 2.97 times those when a laptop is used.

From the data in Table 3.1 (reproduced), what is the conditional probability that someone used a laptop given that they passed the exam, p(laptop|pass)? What is the conditional probability that someone didn't use a laptop in class given they passed the exam, p(no laptop|pass)?

The conditional probability that someone used a laptop given they passed the exam is 0.33, or a 33% chance:

\[p(\text{laptop|pass})=\frac{p(\text{laptop ∩ pass})}{p(\text{pass})}=\frac{{24}/{100}}{{73}/{100}}=\frac{0.24}{0.73}=0.33\]

The conditional probability that someone didn't use a laptop in class given they passed the exam is 0.67, or a 67% chance:

\[p(\text{no laptop|pass})=\frac{p(\text{no laptop ∩ pass})}{p(\text{pass})}=\frac{{49}/{100}}{{73}/{100}}=\frac{0.49}{0.73}=0.67\]
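Here's a small Python sketch of the odds and conditional probability calculations for this table (an illustration only; the variable names are mine):

```python
# Counts from Table 3.1
pass_laptop, fail_laptop = 24, 16
pass_no_laptop, fail_no_laptop = 49, 11

odds_laptop = pass_laptop / fail_laptop              # 1.5
odds_no_laptop = pass_no_laptop / fail_no_laptop     # 4.45
odds_ratio = odds_laptop / odds_no_laptop            # 0.34

p_laptop_given_pass = pass_laptop / (pass_laptop + pass_no_laptop)        # 0.33
p_no_laptop_given_pass = pass_no_laptop / (pass_laptop + pass_no_laptop)  # 0.67

print(round(odds_ratio, 2),
      round(p_laptop_given_pass, 2),
      round(p_no_laptop_given_pass, 2))
```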
Using the data in Table 3.1 (reproduced), what are the posterior odds of someone using a laptop in class (compared to not using one) given that they passed the exam?

The posterior odds are the ratio of the posterior probability for one hypothesis to another. In this example it would be the ratio of the probability that a person used a laptop given that they passed (which we have already calculated above to be 0.33) to the probability that they did not use a laptop in class given that they passed (which we have already calculated above to be 0.67). The value turns out to be 0.49, which means that the probability that someone used a laptop in class if they passed the exam is about half of the probability that someone didn't use a laptop in class given that they passed the exam.

\[\text{posterior odds}= \frac{p(\text{hypothesis 1|data})}{p(\text{hypothesis 2|data})} = \frac{p(\text{laptop|pass})}{p(\text{no laptop|pass})} = \frac{0.33}{0.67} = 0.49\]

No answer required.

What are these icons shortcuts to?

* This icon displays a list of the last 12 dialog boxes that you used.
* This icon opens the Go To dialog box so that you can skip to a particular variable.
* This icon produces descriptive statistics for the currently selected variable or variables in the data editor.
* This icon inserts a new case (row) in the data editor.
* This icon produces a list of variables in the data editor and summary information about each one.
* In the syntax window this icon runs the currently selected syntax.
* This icon opens the split file dialog box, which is used to repeat SPSS procedures on different groups/categories separately.
* This icon toggles between value labels and numeric codes in the data editor.

The data below show the score (out of 20) for 20 different students, some of whom are male and some female, and some of whom were taught using positive reinforcement (being nice) and others who were taught using punishment (electric shock). Enter these data into SPSS and save the file as Method of Teaching.sav. (Clue: the data should not be entered in the same way that they are laid out below.)

The data can be found in the file method_of_teaching.sav and should look like this: method_of_teaching.sav. Or with the value labels off, like this:

Thinking back to Labcoat Leni's Real Research 3.1, Oxoby also measured the minimum acceptable offer; these MAOs (in dollars) are below (again, these are approximations based on the graphs in the paper). Enter these data into the SPSS data editor and save this file as Oxoby (2008) MAO.sav.

* Bon Scott group: 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5
* Brian Johnson group: 0, 1, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 1

The data can be found in the file oxoby_2008_moa.sav and should look like this: oxoby_2008_moa.sav

According to some highly unscientific research done by a UK department store chain and reported in Marie Claire magazine (http://ow.ly/9Dxvy), shopping is good for you: they found that the average woman spends 150 minutes and walks 2.6 miles when she shops, burning off around 385 calories. In contrast, men spend only about 50 minutes shopping, covering 1.5 miles. This was based on strapping a pedometer on a mere 10 participants. Although I don't have the actual data, some simulated data based on these means are below. Enter these data into SPSS and save them as Shopping Exercise.sav.

The data can be found in the file shopping_exercise.sav and should look like this: shopping_exercise.sav

I was taken by two news stories. The first was about a Sudanese man who was forced to marry a goat after being caught having sex with it (http://ow.ly/9DyyP). I'm not sure he treated the goat to a nice dinner in a posh restaurant before taking advantage of her, but either way you have to feel sorry for the goat. I'd barely had time to recover from that story when another appeared about an Indian man forced to marry a dog to atone for stoning two dogs and stringing them up in a tree 15 years earlier (http://ow.ly/9DyFn).
Why anyone would think it's a good idea to enter a dog into matrimony with a man with a history of violent behaviour towards dogs is beyond me. Still, I wondered whether a goat or dog made a better spouse. I found some other people who had been forced to marry goats and dogs and measured their life satisfaction and, also, how much they like animals. Enter these data into SPSS and save as Goat or Dog.sav. The data can be found in the file goat_or_dog.sav and should look like this: goat_or_dog.sav One of my favourite activities, especially when trying to do brain-melting things like writing statistics books, is drinking tea. I am English, after all. Fortunately, tea improves your cognitive function, well, in old Chinese people at any rate (Feng, Gwee, Kua, & Ng, 2010). I may not be Chinese and I'm not that old, but I nevertheless enjoy the idea that tea might help me think. Here's some data based on Feng et al.'s study that measured the number of cups of tea drunk and cognitive functioning in 15 people. Enter these data in SPSS and save the file as Tea Makes You Brainy 15.sav. The data can be found in the file tea_makes_you_brainy_15.sav and should look like this: tea_makes_you_brainy_15.sav Statistics and maths anxiety are common and affect people's performance on maths and stats assignments; women in particular can lack confidence in mathematics (Field, 2010). Zhang, Schmader, and Hall (2013) did an intriguing study in which students completed a maths test in which some put their own name on the test booklet, whereas others were given a booklet that already had either a male or female name on. Participants in the latter two conditions were told that they would use this other person's name for the purpose of the test. Women who completed the test using a different name performed better than those who completed the test using their own name. (There were no such effects for men.) The data below are a random subsample of Zhang et al.'s data. Enter them into SPSS and save the file as Zhang (2013) subsample.sav The correct format is as in the file zhang_2013_subsample.sav on the companion website. The data editor should look like this: Zhan_2013_subsample.sav What is a coding variable? A variable in which numbers are used to represent group or category membership. An example would be a variable in which a score of 1 represents a person being female, and a 0 represents them being male. What is the difference between wide and long format data? Long format data are arranged such that scores on an outcome variable appear in a single column and rows represent a combination of the attributes of those scores (for example, the entity from which the scores came, when the score was recorded etc.). In long format data, scores from a single entity can appear over multiple rows where each row represents a combination of the attributes of the score (e.g., levels of an independent variable or time point at which the score was recorded etc.) In contrast, Wide format data are arranged such that scores from a single entity appear in a single row and levels of independent or predictor variables are arranged over different columns. As such, in designs with multiple measurements of an outcome variable within a case the outcome variable scores will be contained in multiple columns each representing a level of an independent variable, or a timepoint at which the score was observed. 
Columns can also represent attributes of the score or entity that are fixed over the duration of data collection (e.g., participant sex, employment status, etc.).

Using the data from Chapter 4 (which you should have saved, but if you didn't, re-enter it), plot and interpret an error bar chart showing the mean number of friends for students and lecturers.

First of all access the chart builder and select a simple bar chart. The y-axis needs to be the dependent variable, or the thing you've measured, or more simply the thing for which you want to display the mean. In this case it would be number of friends, so select this variable from the variable list and drag it into the y-axis drop zone. The x-axis should be the variable by which we want to split the data. To plot the means for the students and lecturers, select the variable Group from the variable list and drag it into the drop zone for the x-axis. Then add error bars by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

The error bar chart will look like this:

We can conclude that, on average, students had more friends than lecturers.

Using the same data, plot and interpret an error bar chart showing the mean alcohol consumption for students and lecturers.

Access the chart builder and select a simple bar chart. The y-axis needs to be the thing we've measured, which in this case is alcohol consumption, so select this variable from the variable list and drag it into the y-axis drop zone. The x-axis should be the variable by which we want to split the data. To plot the means for the students and lecturers, select the variable Group from the variable list and drag it into the drop zone for the x-axis. Add error bars by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

We can conclude that, on average, students and lecturers drank similar amounts, but the error bars tell us that the mean is a better representation of the population for students than for lecturers (there is more variability in lecturers' drinking habits compared to students').

Using the same data, plot and interpret an error line chart showing the mean income for students and lecturers.

Access the chart builder and select a simple line chart. The y-axis needs to be the thing we've measured, which in this case is income, so select this variable from the variable list and drag it into the y-axis drop zone. The x-axis should again be students vs. lecturers, so select the variable Group from the variable list and drag it into the drop zone for the x-axis. Add error bars by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

The error line chart will look like this:

We can conclude that, on average, students earn less than lecturers, but the error bars tell us that the mean is a better representation of the population for students than for lecturers (there is more variability in lecturers' income compared to students').

Using the same data, plot and interpret an error line chart showing the mean neuroticism for students and lecturers.

Access the chart builder and select a simple line chart. The y-axis needs to be the thing we've measured, which in this case is neurotic, so select this variable from the variable list and drag it into the y-axis drop zone. The x-axis should again be students vs. lecturers, so select the variable Group from the variable list and drag it into the drop zone for the x-axis.
Add error bars by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

We can conclude that, on average, students are slightly less neurotic than lecturers.

Using the same data, plot and interpret a scatterplot with regression lines of alcohol consumption and neuroticism grouped by lecturer/student.

Access the chart builder and select a grouped scatterplot. It doesn't matter which way around we plot these variables, so let's select alcohol consumption from the variable list and drag it into the y-axis drop zone, and then drag neurotic from the variable list into the x-axis drop zone. We then need to split the scatterplot by our grouping variable (lecturers or students), so select Group and drag it to the drop zone that sets the colour of the points. The completed chart builder dialog box will look like this:

Click on OK to produce the graph. To fit the regression lines double-click on the graph in the SPSS Viewer to open it in the SPSS Chart Editor. Then click on the fit line icon in the chart editor to open the properties dialog box. In this dialog box, ask for a linear model to be fitted to the data (this should be set by default). Click on Apply to fit the lines:

We can conclude that for lecturers, as neuroticism increases so does alcohol consumption (a positive relationship), but for students the opposite is true: as neuroticism increases, alcohol consumption decreases. Note that SPSS has scaled this graph oddly because neither axis starts at zero; as a bit of extra practice, why not edit the two axes so that they start at zero? You can do this by first double-clicking on the x-axis to activate the properties dialog box and then, in the custom box, setting the minimum to be 0 instead of 5. Repeat this process for the y-axis. The resulting graph will look like this:

Using the same data, plot and interpret a scatterplot matrix with regression lines of alcohol consumption, neuroticism and number of friends.

Access the chart builder and select a scatterplot matrix. We have to drag all three variables into the drop zone. Select the first variable (Friends) by clicking on it with the mouse. Now, hold down the Ctrl (Cmd on a Mac) key on the keyboard and click on a second variable (Alcohol). Finally, hold down the Ctrl (or Cmd) key and click on a third variable (Neurotic). Once the three variables are selected, click on any one of them and then drag them into the drop zone. The completed dialog box will look like this:

Click on OK to produce the graph. To fit the regression lines double-click on the graph in the SPSS Viewer to open it in the SPSS Chart Editor. Then click on the fit line icon in the Chart Editor to open the properties dialog box. In this dialog box, ask for a linear model to be fitted to the data (this should be set by default). Click on Apply to fit the lines. The resulting graph looks like this:

We can conclude that there is no relationship (flat line) between the number of friends and alcohol consumption; there was a negative relationship between how neurotic a person was and their number of friends (line slopes downwards); and there was a slight positive relationship between how neurotic a person was and how much alcohol they drank (line slopes upwards).

Using the Zhang (2013) subsample.sav data from Chapter 3 (see Smart Alex's task), plot a clustered error bar chart of the mean test accuracy as a function of the type of name participants completed the test under (x-axis) and whether they were male or female (different-coloured bars).
To graph these data we need to select a clustered bar chart in the chart builder. First we need to select Test Accuracy (%) and drag it into the y-axis drop zone. Next we need to select Name Condition and drag it into the x-axis drop zone. Finally, we select Participant Sex and drag it into the drop zone that clusters the bars by colour. The two sexes will now be displayed as different-coloured bars. Add error bars by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

The resulting graph looks like this:

The graph shows that, on average, males did better on the test than females when using their own name (the control) but also when using a fake female name. However, for participants who did the test under a fake male name, the women did better than the men.

Using the Method Of Teaching.sav data from Chapter 3, plot a clustered error line chart of the mean score when electric shocks were used compared to being nice, and plot males and females as different-coloured lines.

To graph these data we need to select a multiple line chart in the chart builder. In the variable list select the method of teaching variable and drag it into the x-axis drop zone. Then highlight and drag the variable representing score on SPSS homework into the y-axis drop zone. Next, highlight and drag the grouping variable Sex into the drop zone that sets the line colour. The two groups will now be displayed as different-coloured lines. Add error bars by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

We can see that when the being nice method of teaching is used, males and females have comparable scores on their SPSS homework, with females scoring slightly higher than males on average, although their scores are also more variable than the males' scores (as indicated by the longer error bar). However, when an electric shock is used, males score higher than females but there is more variability in the males' scores than the females' for this method (as seen by the longer error bar for males than for females). Additionally, the graph shows that females score higher when the being nice method is used compared to when an electric shock is used, but the opposite is true for males. This suggests that there may be an interaction effect of sex.

Using the Shopping Exercise.sav data from Chapter 3, plot two error bar graphs comparing men and women (x-axis): one for the distance walked, and the other for the time spent shopping.

Let's first do the graph for distance walked. In the chart builder double-click on the icon for a simple bar chart, then select the Distance Walked… variable from the variable list and drag it into the y-axis drop zone. The x-axis should be the variable by which we want to split the data. To plot the means for males and females, select the variable Participant Sex from the variable list and drag it into the drop zone for the x-axis. Finally, add error bars to your bar chart by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

Looking at the graph above, we can see that, on average, females walk longer distances while shopping than males.

Next we need to do the graph for time spent shopping. In the chart builder double-click on the icon for a simple bar chart. Select the Time Spent… variable from the variable list and drag it into the y-axis drop zone. The x-axis should be the variable by which we want to split the data. To plot the means for males and females, select the variable Participant Sex from the variable list and drag it into the drop zone for the x-axis.
Finally, add error bars to your bar chart by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

The graph shows that, on average, females spend more time shopping than males. The females' scores are more variable than the males' scores (longer error bar).

Using the Goat or Dog.sav data from Chapter 3, plot two error bar graphs comparing scores when married to a goat or a dog (x-axis): one for the animal liking variable, and the other for life satisfaction.

Let's first do the graph for the animal liking variable. In the chart builder double-click on the icon for a simple bar chart, then select the Love of Animals variable from the variable list and drag it into the y-axis drop zone. The x-axis should be the variable by which we want to split the data. To plot the means for men married to a goat and men married to a dog, select the variable Type of Animal Wife from the variable list and drag it into the drop zone for the x-axis. Finally, add error bars to your bar chart by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

The graph shows that the mean love of animals was the same for men married to a goat as for those married to a dog.

Next we need to do the graph for life satisfaction. In the chart builder double-click on the icon for a simple bar chart. Select the Life Satisfaction variable from the variable list and drag it into the y-axis drop zone. The x-axis should be the variable by which we want to split the data. To plot the means for men married to a goat and men married to a dog, select the variable Type of Animal Wife from the variable list and drag it into the drop zone for the x-axis. Finally, add error bars to your bar chart by selecting the error bars option in the Element Properties dialog box. The finished chart builder will look like this:

The graph shows that, on average, life satisfaction was higher in men who were married to a dog compared to men who were married to a goat.

Using the same data as above, plot a scatterplot of animal liking scores against life satisfaction (plot scores for those married to dogs or goats in different colours).

Access the chart builder and select a grouped scatterplot. It doesn't matter which way around we plot these variables, so let's select Life Satisfaction from the variable list and drag it into the y-axis drop zone, and then drag Love of Animals from the variable list into the drop zone for the x-axis. We then need to split the scatterplot by our grouping variable (dogs or goats), so select Type of Animal Wife and drag it to the drop zone that sets the colour of the points. The completed chart builder dialog box will look like this:

Click on OK to produce the graph. Let's fit some regression lines to make the graph easier to interpret. To do this, double-click on the graph in the SPSS viewer to open it in the SPSS chart editor. Then click on the fit line icon in the chart editor to open the properties dialog box. In this dialog box, ask for a linear model to be fitted to the data (this should be set by default). Click on Apply to fit the lines:

We can conclude that for men married to both goats and dogs, as love of animals increases so does life satisfaction (a positive relationship). However, this relationship is more pronounced for goats than for dogs (steeper regression line for goats than for dogs).

Using the Tea Makes You Brainy 15.sav data from Chapter 3, plot a scatterplot showing the number of cups of tea drunk (x-axis) against cognitive functioning (y-axis).

In the chart builder double-click on the icon for a simple scatterplot.
Select the cognitive functioning variable from the variable list and drag it into the drop zone. The horizontal axis should display the independent variable (the variable that predicts the outcome variable). In this case is it is the number of cups of tea drunk, so click on this variable in the variable list and drag it into the drop zone for the x-axis ( ). The completed dialog box will look like this: Click on to produce the graph. Let's fit a regression line to make the graph easier to interpret. To do this, double-click on the graph in the SPSS Viewer to open it in the SPSS Chart Editor. Then click on in the Chart Editor to open the properties dialog box. In this dialog box, ask for a linear model to be fitted to the data (this should be set by default). Click on to fit the line. The resulting graph should look like this: The scatterplot (and near-flat line especially) tells us that there is a tiny relationship (practically zero) between the number of cups of tea drunk per day and cognitive function. Using the Notebook.sav data, check the assumptions of normality and homogeneity of variance for the two films (ignore sex). Are the assumptions met? The dialog box from the explore function should look like this (you can use the default options): The resulting output looks like this: The skewness statistics gives rise to a z-score of −0.320/0.512 = –0.63 for The Notebook, and 0.04/0.512 = 0.08 for a documentary about notebooks. These show no significant skewness. For kurtosis these values are −0.281/0.992 = –0.28 for The Notebook, and –1.024/0.992 = –1.03 for a documentary about notebooks, which again are both non-significant. More important their values are close to zero. The Q-Q plots confirm these findings: for both films the expected quantile points are close to those that would be expected from a normal distribution (i.e. the dots fall close to the diagonal line). The K-S tests show no significant deviation from normality for both films. We could report that arousal scores for The Notebook, D(20) = 0.13, p = 0.20, and a documentary about notebooks, D(20) = 0.10, p = 0.20, were both not significantly different from a normal distribution. Therefore, if we believe these sorts of tests then we can assume normality in the sample data. However, the sample is small and these tests would have been very underpowered to detect a deviation from normal, so my conclusion here is based more on the Q-Q plots. In terms of homogeneity of variance, again Levene's test will be underpowered, and I prefer to ignore this test altogether, but if you're the sort of person who doesn't ignore it, it shows that the variances of arousal for the two films were not significantly different, F(1, 38) = 1.90, p = 0.753. The file SPSSExam.sav contains data on students' performance on an SPSS exam. Four variables were measured: exam (first-year SPSS exam scores as a percentage), computer (measure of computer literacy in percent), lecture (percentage of SPSS lectures attended) and numeracy (a measure of numerical ability out of 15). There is a variable called uni indicating whether the student attended Sussex University (where I work) or Duncetown University. Compute and interpret descriptive statistics for exam, computer, lecture and numeracy for the sample as a whole. To see the distribution of the variables, we can use the frequencies command. 
Place all four variables (exam, computer, lecture and numeracy) in the Variable(s) box in the dialog box: Click and select measures of central tendency (mean, mode, median), variability (range, standard deviation, variance, quartile splits) and shape (kurtosis and skewness). Click and select a frequency distribution of scores with a normal curve. The output shows the table of descriptive statistics for the four variables in this example. From this table, we can see that, on average, students attended nearly 60% of lectures, obtained 58% in their SPSS exam, scored only 51% on the computer literacy test, and only 5 out of 15 on the numeracy test. In addition, the standard deviation for computer literacy was relatively small compared to that of the percentage of lectures attended and exam scores. These latter two variables had several modes (multimodal). The output provides tabulated frequency distributions of each variable (not reproduced here). These tables list each score and the number of times that it is found within the data set. In addition, each frequency value is expressed as a percentage of the sample (in this case the frequencies and percentages are the same because the sample size was 100). Also, the cumulative percentage is given, which tells us how many cases (as a percentage) fell below a certain score. So, for example, we can see that 66% of numeracy scores were 5 or less, 74% were 6 or less, and so on. Looking in the other direction, we can work out that only 8% (\(100−92%\)) got scores greater than 8. The histograms show us several things. The exam scores are very interesting because this distribution is quite clearly not normal; in fact, it looks suspiciously bimodal (there are two peaks, indicative of two modes). This observation corresponds with the earlier information from the table of descriptive statistics. It looks as though computer literacy is fairly normally distributed (a few people are very good with computers and a few are very bad, but the majority of people have a similar degree of knowledge) as is the lecture attendance. Finally, the numeracy test has produced very positively skewed data (the majority of people did very badly on this test and only a few did well). This corresponds to what the skewness statistic indicated. Descriptive statistics and histograms are a good way of getting an instant picture of the distribution of your data. This snapshot can be very useful: for example, the bimodal distribution of SPSS exam scores instantly indicates a trend that students are typically either very good at statistics or struggle with it (there are relatively few who fall in between these extremes). Intuitively, this finding fits with the nature of the subject: statistics is very easy once everything falls into place, but before that enlightenment occurs it all seems hopelessly difficult! Calculate and interpret the z-scores for skewness for all variables. For the SPSS exam scores, the z-score of skewness is −0.107/0.241 = −0.44. For numeracy, the z-score of skewness is 0.961/0.241 = 3.99. For computer literacy, the z-score of skewness is −0.174/0.241 = −0.72. For lectures attended, the z-score of skewness is −0.422/0.241 = −1.75. It is pretty clear then that the numeracy scores are significantly positively skewed (p < .05) because the z-score is greater than 1.96, indicating a pile-up of scores on the left of the distribution (so most students got low scores). 
For the other three variables, the skewness is non-significant, p > .05, because the values lie between −1.96 and 1.96.

Calculate and interpret the z-scores for kurtosis for all variables.

For SPSS exam scores, the z-score of kurtosis is −1.105/0.478 = −2.31, which is significant, p < 0.05, because it lies outside −1.96 and 1.96. For computer literacy, the z-score of kurtosis is 0.364/0.478 = 0.76, which is non-significant, p > 0.05, because it lies between −1.96 and 1.96. For lectures attended, the z-score of kurtosis is −0.179/0.478 = −0.37, which is non-significant, p > 0.05, because it lies between −1.96 and 1.96. For numeracy, the z-score of kurtosis is 0.946/0.478 = 1.98, which is significant, p < 0.05, because it lies outside −1.96 and 1.96.
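If you prefer to compute these z-scores programmatically, here's a short sketch that takes the skewness and kurtosis values and their standard errors from the SPSS output described above (the dictionary is just a convenient container I've made up):

```python
# Skewness and kurtosis values reported in the SPSS output (N = 100)
values = {            # variable: (skewness, kurtosis)
    "exam":     (-0.107, -1.105),
    "computer": (-0.174,  0.364),
    "lecture":  (-0.422, -0.179),
    "numeracy": ( 0.961,  0.946),
}
se_skew, se_kurt = 0.241, 0.478   # standard errors reported by SPSS

def label(z):
    return "significant" if abs(z) > 1.96 else "non-significant"

for name, (skew, kurt) in values.items():
    z_skew, z_kurt = skew / se_skew, kurt / se_kurt
    print(f"{name:9s} z(skew) = {z_skew:5.2f} ({label(z_skew)}), "
          f"z(kurtosis) = {z_kurt:5.2f} ({label(z_kurt)})")
```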
Use the split file command to look at and interpret the descriptive statistics for numeracy and exam.

If we want to obtain separate descriptive statistics for each of the universities, we can split the file and then proceed using the frequencies command. In the split file dialog box select the option Organize output by groups. Drag Uni into the box labelled Groups Based on and click OK. Once you have split the file, use the frequencies command.

The output is split into two sections: first the results for students at Duncetown University, then the results for those attending Sussex University. From these tables it is clear that Sussex students scored higher on both their SPSS exam and the numeracy test than their Duncetown counterparts. In fact, looking at the means reveals that, on average, Sussex students scored an amazing 36% more on the SPSS exam than Duncetown students, and had higher numeracy scores too (what can I say, my students are the best).

Descriptive statistics for Duncetown University

Descriptive statistics for Sussex University

The histograms of these variables split according to the university attended show numerous things. The first interesting thing to note is that for exam marks, the distributions are both fairly normal. This seems odd because the overall distribution was bimodal. However, it starts to make sense when you consider that for Duncetown the distribution is centred around a mark of about 40%, but for Sussex the distribution is centred around a mark of about 76%. This illustrates how important it is to look at distributions within groups. If we were interested in comparing Duncetown to Sussex it wouldn't matter that overall the distribution of scores was bimodal; all that's important is that each group comes from a normal distribution, and in this case it appears to be true. When the two samples are combined, these two normal distributions create a bimodal one (one of the modes being around the centre of the Duncetown distribution, and the other being around the centre of the Sussex data!). For numeracy scores, the distribution is slightly positively skewed (there is a larger concentration at the lower end of scores) in both the Duncetown and Sussex groups. Therefore, the overall positive skew observed before is due to the mixture of universities.

Repeat Task 5 but for the computer literacy and percentage of lectures attended.

The SPSS output is split into two sections: first, the results for students at Duncetown University, then the results for those attending Sussex University. From these tables it is clear that Sussex and Duncetown students scored similarly on computer literacy (both means are very similar). Sussex students attended slightly more lectures (63.27%) than their Duncetown counterparts (56.26%). The histograms are also split according to the university attended. All of the distributions look fairly normal. The only exception is the computer literacy scores for the Sussex students. This is a fairly flat distribution apart from a huge peak between 50 and 60%. It's slightly heavy-tailed (right at the very ends of the curve the bars come above the line) and very pointy. This suggests positive kurtosis. If you examine the values of kurtosis you will find that there is significant (p < 0.05) positive kurtosis: 1.38/0.662 = 2.08, which falls outside of −1.96 and 1.96.

Conduct and interpret a K-S test for numeracy and exam.

The Kolmogorov–Smirnov (K-S) test can be accessed through the explore command. First, drag exam and numeracy to the box labelled Dependent List. It is also possible to select a factor (or grouping variable) by which to split the output (so if you drag Uni to the box labelled Factor List, output will be produced for each group, a bit like the split file command). Click Plots and select Normality plots with tests. The output containing the K-S test looks like this:

For both numeracy and SPSS exam scores, the K-S test is highly significant, indicating that both distributions are not normal. This result is likely to reflect the bimodal distribution found for exam scores, and the positively skewed distribution observed in the numeracy scores. However, these tests confirm that these deviations were significant. (But bear in mind that the sample is fairly big.) We can report that the percentages on the SPSS exam, D(100) = 0.10, p = 0.012, and the numeracy scores, D(100) = 0.15, p < .001, were both significantly non-normal.

As a final point, bear in mind that when we looked at the exam scores for separate groups, the distributions seemed quite normal; if we'd asked for separate tests for the two universities (by dragging Uni into the box labelled Factor List) the K-S test would have been different. If you try this out, you'll get this output:

Note that the percentages on the SPSS exam are not significantly different from normal within the two groups. This point is important because if our analysis involves comparing groups, then what's important is not the overall distribution but the distribution in each group. Because tests like K-S are at the mercy of sample size, it's also worth looking at the Q-Q plots. These plots confirm that both variables (overall) are not normal because the dots deviate substantially from the line. (Incidentally, the deviation is greater for the numeracy scores, which is consistent with numeracy having the more significant K-S statistic.)

Conduct and interpret a Levene's test for numeracy and exam.

Let's begin this example by reminding ourselves that Levene's test is basically pointless (see the book!). Nevertheless, if you insist on consulting it, Levene's test is obtained using the explore dialog box. Drag the variables exam and numeracy to the box labelled Dependent List. To compare variances across the two universities we need to drag the variable Uni to the box labelled Factor List. Levene's test is non-significant for the SPSS exam scores, indicating either that the variances are not significantly different (i.e. they are similar and the homogeneity of variance assumption is tenable) or that the test is underpowered to detect a difference. For the numeracy scores, Levene's test is significant, indicating that the variances are significantly different (i.e., the homogeneity of variance assumption has been violated). We could report that for the percentage on the SPSS exam, the variances for Duncetown and Sussex University students were not significantly different, F(1, 98) = 2.58, p = 0.111, but for numeracy scores the variances were significantly different, F(1, 98) = 7.37, p = 0.008.
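Outside SPSS you could run the same kind of check with scipy. The sketch below uses made-up placeholder scores (in practice you would load the real values from SPSSExam.sav), so the printed numbers won't match the output discussed above; it only illustrates the mechanics.

```python
import numpy as np
from scipy import stats

# Placeholder scores standing in for the two universities' exam marks
rng = np.random.default_rng(1)
duncetown = rng.normal(40, 10, 50)
sussex = rng.normal(76, 13, 50)

# Levene's test, centring on the group means as SPSS does
F, p = stats.levene(duncetown, sussex, center='mean')
print(f"Levene: F(1, {len(duncetown) + len(sussex) - 2}) = {F:.2f}, p = {p:.3f}")

# The same test after a natural log transformation of the scores
F_log, p_log = stats.levene(np.log(duncetown), np.log(sussex), center='mean')
print(f"Levene (log-transformed): F = {F_log:.2f}, p = {p_log:.3f}")
```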
We could report that for the percentage on the SPSS exam, the variances for Duncetown and Sussex University students were not significantly different, F(1, 98) = 2.58, p = 0.111, but for numeracy scores the variances were significantly different, F(1, 98) = 7.37, p = 0.008.

Transform the numeracy scores (which are positively skewed) using one of the transformations described in this chapter. Do the data become normal?

Reproduced below are histograms of the original scores and the same scores after all three transformations discussed in the book: None of these histograms are particularly normal. With the usual strong caveats that I apply to significance tests of normality (read the book!), here's the output from the K–S tests: All of these tests are significant, suggesting (to the extent to which the K-S test tells us anything useful) that although the square root transformation does the best job of normalizing the data, none of these transformations work.

Use the explore command to see what effect a natural log transformation would have on the four variables measured in SPSSExam.sav.

The completed dialog box should look like this (click the appropriate buttons to select the options you need): The output shows Levene's test on the log-transformed scores. Compare this table to the one in Task 8 (which was conducted on the untransformed SPSS exam scores and numeracy). To recap Task 8, for the untransformed scores Levene's test was non-significant for the SPSS exam scores (p = 0.111) indicating that the variances were not significantly different (i.e., the homogeneity of variance assumption is tenable). However, for the numeracy scores, Levene's test was significant (p = 0.008) indicating that the variances were significantly different (i.e. the homogeneity of variance assumption was violated). For the log-transformed scores, the problem has been reversed: Levene's test is now significant for the SPSS exam scores (p < 0.001) but is no longer significant for the numeracy scores (p = 0.647). This reiterates my point from the book chapter that transformations are often not a magic solution to problems in the data.

A psychologist was interested in the cross-species differences between men and dogs. She observed a group of dogs and a group of men in a naturalistic setting (20 of each). She classified several behaviours as being dog-like (urinating against trees and lampposts, attempts to copulate with anything that moved, and attempts to lick their own genitals). For each man and dog she counted the number of dog-like behaviours displayed in a 24-hour period. It was hypothesized that dogs would display more dog-like behaviours than men. Analyze the data in MenLikeDogs.sav with a Mann–Whitney test.

The output tells us that z is –0.15 (standardized test statistic), and we had 20 men and 20 dogs so the total number of observations was 40. The effect size is, therefore: \[ r = \frac{-0.15}{\sqrt{40}} = -0.02\] This represents a tiny effect (it is close to zero), which tells us that there truly isn't much difference between dogs and men. We could report something like: Men (Mdn = 27) and dogs (Mdn = 24) did not significantly differ in the extent to which they displayed dog-like behaviours, U = 194.5, p = 0.881, r = −0.02.

Both Ozzy Osbourne and Judas Priest have been accused of putting backward masked messages on their albums that subliminally influence poor unsuspecting teenagers into doing things like blowing their heads off with shotguns. A psychologist was interested in whether backward masked messages could have an effect.
He created a version of Britney Spears' 'Baby one more time' that contained the masked message 'deliver your soul to the dark lord' repeated in the chorus. He took this version, and the original, and played one version (randomly) to a group of 32 people. Six months later he played them whatever version they hadn't heard the time before. So each person heard both the original and the version with the masked message, but at different points in time. The psychologist measured the number of goats that were sacrificed in the week after listening to each version. Test the hypothesis that the backward message would lead to more goats being sacrificed using a Wilcoxon signed-rank test (DarkLord.sav). The output tells us that z is 2.094 (standardized test statistic), and we had 64 observations (although we only used 32 people and tested them twice, it is the number of observations, not the number of people, that is important here). The effect size is, therefore: \[r = \frac{2.094}{\sqrt{64}} = 0.26\] This value represents a medium effect (it is close to Cohen's benchmark of 0.3), which tells us that the effect of whether or a subliminal message was present was a substantive effect. We could report something like: The number of goats sacrificed after hearing the message (Mdn = 9) was significantly less than after hearing the normal version of the song (Mdn = 11), T = 294.50, p = 0.036, r = 0.26. A media researcher was interested in the effect of television programmes on domestic life. She hypothesized that through 'learning by watching', certain programmes encourage people to behave like the characters within them. She exposed 54 couples to three popular TV shows after which the couple were left alone in the room for an hour. The experimenter measured the number of times the couple argued. Each couple viewed all TV shows but at different points in time (a week apart) and in a counterbalanced order. The TV shows were EastEnders (which portrays the lives of extremely miserable, argumentative, London folk who spend their lives assaulting each other, lying and cheating), Friends (which portrays unrealistically considerate and nice people who love each other oh so very much—but I love it anyway), and a National Geographic programme about whales (this was a control). Test the hypothesis with Friedman's ANOVA *(Eastenders.sav). The mean ranks were highest after watching EastEnders. From the chi-square test statistic we can conclude that the type of programme watched significantly affected the subsequent number of arguments (because the significance value is less than 0.05). To see where the differences lie we look at pairwise comparisons. The output of the pairwise comparisons shows that the test comparing Friends to EastEnders is significant (as indicated by the yellow line); however, the other two comparisons were both non-significant (as indicated by the black lines). The table below the diagram confirms this and tells us the significance values of the three comparisons. The significance value of the comparison between Friends and EastEnders is 0.037, which is below the criterion of 0.05, therefore we can conclude that EastEnders led to significantly more arguments than Friends. The effect seems to reflect the fact that* EastEnders* makes people argue more. For the first comparison (Friends vs. 
National Geographic) z is –0.529, and because this is based on comparing two groups each containing 54 observations, we have 108 observations in total (remember that it isn't important that the observations come from the same people). The effect size is, therefore: \[ r_{\text{Friends}-\text{National Geographic}} = \frac{-0.529}{\sqrt{108}} = -0.05\] This represents virtually no effect (it is close to zero). Therefore, Friends had very little effect in creating arguments compared to the control. For the second comparison (Friends compared to EastEnders) z is 2.502, and this was again based on 108 observations. The effect size is: \[ r_{\text{Friends}-\text{EastEnders}} = \frac{2.502}{\sqrt{108}} = 0.24\] This tells us that the effect of EastEnders relative to Friends was a small to medium effect. For the third comparison (EastEnders vs. National Geographic) z is 1.973, and this was again based on 108 observations. The effect size is: \[ r_{\text{National Geographic}-\text{EastEnders}} = \frac{1.973}{\sqrt{108}} = 0.19\] This also represents a small to medium effect. We could report all of this as follows: The number of arguments that couples had was significantly affected by the programme they had just watched, \(\chi^\text{2}\)(2) = 7.59, p = 0.023. Pairwise comparisons with adjusted p-values showed that watching EastEnders significantly increased the number of arguments compared to watching Friends (p = 0. 037, r = 0.24). However, there were no significant differences in number of arguments when watching Friends compared to the control programme (National Geographic), p = 1.00, r = -0.05. Finally, EastEnders did not significantly increase the number of arguments compared to the control programme; however, there was a small to medium effect (p = 0.146, r = 0.19). A researcher was interested in preventing coulrophobia (fear of clowns) in children. She did an experiment in which different groups of children (15 in each) were exposed to positive information about clowns. The first group watched adverts in which Ronald McDonald is seen cavorting with children and singing about how they should love their mums. A second group was told a story about a clown who helped some children when they got lost in a forest (what a clown was doing in a forest remains a mystery). A third group was entertained by a real clown, who made balloon animals for the children. A final, control, group had nothing done to them at all. Children rated how much they liked clowns from 0 (not scared of clowns at all) to 5 (very scared of clowns). Use a Kruskal–Wallis test to see whether the interventions were successful (coulrophobia.sav). We can conclude that the type of information presented to the children about clowns significantly affected their fear ratings of clowns. The boxplot in the output above gives us an indication of the direction of the effects, but to see where the significant differences lie we need to look at the pairwise comparisons. The test comparing the story and advert groups, and the test comparing the exposure and the advert groups were significant (yellow connecting lines). However, none of the other comparisons were significant (black connecting lines). The table below the diagram confirms this, and tells us the significance values of the comparisons. The significance value of the comparison between exposure and advert is 0.004, and between story and advert is 0.001, both of which are below the common criterion of 0.05. 
Therefore, we can conclude that hearing a story and exposure to a clown significantly decreased fear beliefs compared to watching the advert (I know the direction of the effects by looking at the boxplot). There was no significant difference between the story and exposure conditions in children's fear beliefs. Finally, none of the interventions significantly decreased fear beliefs compared to the control condition.

For the first comparison (story vs. exposure) z is –0.305, and because this is based on comparing two groups each containing 15 observations, we have 30 observations in total. The effect size is: \[ r_{\text{story}-\text{exposure}} = \frac{-0.305}{\sqrt{30}} = -0.06\] This represents a very small effect, which tells us that there was very little difference between the story and exposure conditions. For the second comparison (story vs. control) z is –1.518, and this was again based on 30 observations. The effect size is: \[ r_{\text{story}-\text{control}} = \frac{-1.518}{\sqrt{30}} = -0.28\] This represents a small to medium effect. Therefore, although non-significant, the effect of stories relative to the control was a fairly substantive effect. For the next comparison (story vs. advert) z is 3.714, and this was again based on 30 observations. The effect size is: \[ r_{\text{story}-\text{advert}} = \frac{3.714}{\sqrt{30}} = 0.68\] This represents a large effect. Therefore, the effect of stories relative to adverts was a substantive effect. For the next comparison (exposure vs. control) z is –1.213, and this was again based on 30 observations. The effect size is: \[ r_{\text{exposure}-\text{control}} = \frac{-1.213}{\sqrt{30}} = -0.22\] This represents a small effect. Therefore, there was a small effect of exposure relative to the control. For the next comparison (exposure vs. advert) z is 3.410, and this was again based on 30 observations. The effect size is: \[ r_{\text{exposure}-\text{advert}} = \frac{3.410}{\sqrt{30}} = 0.62\] This represents a large effect. Therefore, the effect of exposure relative to adverts was a substantive effect. For the final comparison (adverts vs. control) z is 2.197, and this was again based on 30 observations. The effect size is, therefore: \[ r_{\text{control}-\text{advert}} = \frac{2.197}{\sqrt{30}} = 0.40\] This represents a medium to large effect. Therefore, although non-significant, the effect of adverts relative to the control was a substantive effect.

We could report something like: Children's fear beliefs about clowns were significantly affected by the format of information given to them, H(3) = 17.06, p = 0.001. Pairwise comparisons with adjusted p-values showed that fear beliefs were significantly higher after the adverts compared to the story, U = 23.17, p = 0.001, r = 0.68, and exposure, U = 21.27, p = 0.004, r = 0.62. However, fear beliefs were not significantly different after the stories, U = −9.47, p = 0.774, r = −0.28, exposure, U = −7.56, p = 1.000, r = −0.22, or adverts, U = 13.70, p = 0.168, r = 0.40, relative to the control. Finally, fear beliefs were not significantly different after the stories relative to exposure, U = −1.90, p = 1.000, r = −0.06. We can conclude that stories and exposure to a clown produced small to medium (but non-significant) reductions in fear beliefs relative to the control, and that the advert condition showed a medium to large (but non-significant) difference from the control; future work with larger samples might be appropriate.
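If you want to run the Kruskal–Wallis test from syntax rather than the dialog boxes, the legacy nonparametric procedure below gives the overall test (the pairwise comparisons discussed above come from the newer nonparametric tests dialogs). This is only a sketch: I'm assuming the outcome and grouping variables in coulrophobia.sav are called beliefs and infotype, with the four groups coded 1 to 4 — check the names and codes in your file.

```
* Kruskal-Wallis test comparing fear beliefs across the four intervention groups.
NPAR TESTS
  /K-W=beliefs BY infotype(1 4)
  /STATISTICS=DESCRIPTIVES.
```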
Test whether the number of offers was significantly different in people listening to Bon Scott compared to those listening to Brian Johnson (Oxoby (2008) Offers.sav). Compare your results to those reported by Oxoby (2008). We need to conduct a Mann–Whitney test because we want to compare scores in two independent samples: participants who listened to Bon Scott vs. those who listened to Brian Johnson. Let's calculate an effect size, r: \[ r_{\text{Bon}-\text{Brian}} = \frac{1.850}{\sqrt{36}} = 0.31\] This represents a medium effect: when listening to Brian Johnson people proposed higher offers than when listening to Bon Scott, suggesting that they preferred Brian Johnson to Bon Scott. Although this effect has some substance, it was not significant, which shows that a fairly substantial effect size can be non-significant in a small sample. We could report something like: Offers made by people listening to Bon Scott (Mdn = 3.0) were not significantly different from offers by people listening to Brian Johnson (Mdn = 4.0), U = 218.50, z = 1.85, p = 0.074, r = 0.31. I've reported the median for each condition because this statistic is more appropriate than the mean for non-parametric tests. You'll can get these values by running descriptive statistics, or you could report the mean ranks instead of the median. We could also choose to report Wilcoxon's test rather than the Mann–Whitney U-statistic as follows: Offers made by people listening to Bon Scott (M = 15.36) were not significantly different from offers by people listening to Brian Johnson (M = 21.64), Ws = 389 Repeat the analysis above, but using the minimum acceptable offer (Oxoby (2008) MAO.sav). We again conduct a Mann–Whitney test. This is because we are comparing two independent samples (those who listened to Brian Johnson and those who listened to Bon Scott). Let's calculate the effect size, r: \[ r_{\text{Bon}-\text{Brian}} = \frac{-2.476}{\sqrt{36}} = -0.41\] This represents a medium effect. looking at the mean ranks in the output above, we can see that people accepted lower offers when listening to Brian Johnson than when listening to Bon Scott. We could report something like: The minimum acceptable offer was significantly higher in people listening to Bon Scott (Mdn = 4.0) than in people listening to Brian Johnson (Mdn = 3.0), U = 88.00, z = 2.48, p = 0.019, r = 0.41, suggesting that people preferred Brian Johnson to Bon Scott. The minimum acceptable offer was significantly higher in people listening to Bon Scott (M = 22.61) than in people listening to Brian Johnson (M = 14.39), Ws = 259.00, z = 2.48, p = 0.019, r = 0.41, suggesting that people preferred Brian Johnson to Bon Scott. Using the data in Shopping Exercise.sav test whether men and women spent significantly different amounts of time shopping? We need to conduct a Mann–Whitney test because we are comparing two independent samples (men and women). \[ r_{\text{men}-\text{women}} = \frac{1.776}{\sqrt{10}} = 0.56\] This represents a large effect, which highlights how large effects can be non-significant in small samples. The mean ranks show that women spent more time shopping than men. We could report the analysis as follows: Men (Mdn = 37.0) and women (Mdn = 160.0) did not significantly differ in the length of time they spent shopping, U = 21.00, z = 1.78, p = 0.095, r = 0.56. I've reported the median for each condition (this statistic is more appropriate than the mean for non-parametric tests). Alternatively you can report the mean ranks. 
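(A syntax note: the Mann–Whitney tests in this and the following tasks can also be run with the legacy nonparametric procedure. The sketch below is for the shopping data and assumes the grouping variable is called sex, coded 1 = male and 2 = female, and that the outcomes are called time and distance — substitute the real names from Shopping Exercise.sav.)

```
* Mann-Whitney tests comparing men and women on shopping time and distance walked.
NPAR TESTS
  /M-W=time distance BY sex(1 2)
  /STATISTICS=DESCRIPTIVES.
```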
If you choose to report Wilcoxon's test rather than the Mann–Whitney U-statistic you would do so as follows: Men (M = 3.8) and women (M = 7.2) did not significantly differ in the length of time they spent shopping, Ws = 36.00, z = 1.78, p = 0.095, r = 0.56.

Using the same data, test whether men and women walked significantly different distances while shopping.

Again, we conduct a Mann–Whitney test because – yes, you guessed it – we are once again comparing two independent samples (men and women). The effect size is: \[ r_{\text{men}-\text{women}} = \frac{1.15}{\sqrt{10}} = 0.36\] This represents a medium effect, which highlights how substantial effects can be non-significant in small samples. The mean ranks show that women travelled greater distances while shopping than men (but not significantly so). We could report this analysis as follows: Men (Mdn = 1.36) and women (Mdn = 1.96) did not significantly differ in the distance walked while shopping, U = 18.00, z = 1.15, p = 0.310, r = 0.36. If we reported the mean ranks (instead of the median) and Wilcoxon's test (rather than the Mann–Whitney U-statistic), we could do so as follows: Men (M = 4.4) and women (M = 6.6) did not significantly differ in the distance walked while shopping, Ws = 33.00, z = 1.15, p = 0.310, r = 0.36.

Using the data in Goat or Dog.sav test whether people married to goats and dogs differed significantly in their life satisfaction.

To answer this question we run a Mann–Whitney test. The reason for choosing this test is that we are comparing two independent groups (men could be married to a goat or a dog, not both – that would be weird). \[ r_{\text{goat}-\text{dog}} = \frac{3.011}{\sqrt{20}} = 0.67\] This represents a very large effect. Looking at the mean ranks in the output above, we can see that men who were married to dogs had a higher life satisfaction than those married to goats – well, they do say that dogs are man's best friend. We could report the analysis as: Men who were married to dogs (Mdn = 63) had significantly higher levels of life satisfaction than men who were married to goats (Mdn = 44), U = 87.00, z = 3.01, p = 0.002, r = 0.67. Men who were married to dogs (M = 15.38) had significantly higher levels of life satisfaction than men who were married to goats (M = 7.25), Ws = 123.00, z = 3.01, p = 0.002, r = 0.67.

Use the SPSSExam.sav data to test whether students at the Universities of Sussex and Duncetown differed significantly in their SPSS exam scores, their numeracy, their computer literacy, and the number of lectures attended.

To answer this question run a Mann–Whitney test. The reason for choosing this test is that we are comparing two unrelated groups (students who attended Sussex University and students who attended Duncetown University). Let's calculate the effect size, r, for the difference between Duncetown and Sussex universities for each outcome variable: \[ \begin{aligned} \ r_{\text{SPSS exam}} &= \frac{8.412}{\sqrt{100}} = 0.84 \\ \ r_{\text{computer literacy}} &= \frac{0.980}{\sqrt{100}} = 0.10 \\ \ r_{\text{lectures attended}} &= \frac{1.434}{\sqrt{100}} = 0.14 \\ \ r_{\text{numeracy}} &= \frac{2.35}{\sqrt{100}} = 0.24 \\ \end{aligned} \] We could report the analysis as: Students from Sussex University (Mdn = 75) scored significantly higher on their SPSS exam than students from Duncetown University (Mdn = 38), U = 2,470.00, z = 8.41, p < 0.001, r = 0.84. Sussex students (Mdn = 5) were also significantly more numerate than those at Duncetown University (Mdn = 4), U = 1,588.00, z = 2.35, p = 0.019, r = 0.24.
However, Sussex students (Mdn = 54) were not significantly more computer literate than Duncetown students (Mdn = 49), U = 1,392.00, z = 0.980, p = 0.327, r = 0.10, nor did Sussex students (Mdn = 65.75) attend significantly more lectures than Duncetown students (Mdn = 60.50), U = 1,458.00, z = 1.43, p = 0.152, r = 0.14. Sussex students are just more intelligent, naturally :-)

Use the DownloadFestival.sav data to test whether hygiene levels changed significantly over the three days of the festival.

Conduct a Friedman's ANOVA because we want to compare more than two (day 1, day 2 and day 3) related samples (the same participants were used across the three days of the festival). The hygiene levels significantly decreased over the three days of the music festival, \(\chi^\text{2}\)(2) = 86.54, p < 0.001. However, pairwise comparisons with adjusted p-values revealed that while hygiene scores significantly decreased between days 1 and 2 (p < 0.001, r = 0.54), and days 1 and 3 (p < 0.001, r = 0.47), they did not significantly decrease between days 2 and 3 (p = 0.677, r = 0.08). \[ \begin{aligned} \ r_{\text{day 1}-\text{day 2}} &= \frac{8.544}{\sqrt{246}} = 0.54 \\ \ r_{\text{day 1}-\text{day 3}} &= \frac{7.332}{\sqrt{246}} = 0.47 \\ \ r_{\text{day 2}-\text{day 3}} &= \frac{-1.211}{\sqrt{246}} = -0.08 \\ \end{aligned} \]

A student was interested in whether there was a positive relationship between the time spent doing an essay and the mark received. He got 45 of his friends and timed how long they spent writing an essay (hours) and the percentage they got in the essay (essay). He also translated these grades into their degree classifications (grade): in the UK, a student can get a first-class mark (the best), an upper-second-class mark, a lower second, a third, a pass or a fail (the worst). Using the data in the file EssayMarks.sav find out what the relationship was between the time spent doing an essay and the eventual mark in terms of percentage and degree class (draw a scatterplot too).

We're interested in looking at the relationship between hours spent on an essay and the grade obtained. We could create a scatterplot of hours spent on the essay (x-axis) and essay mark (y-axis). I've chosen to highlight the degree classification grades using different colours. The resulting scatterplot looks like this: We should check whether the data are parametric using the explore menu to look at the distributions of scores. The resulting output is as follows: The histograms both look fairly normal. Also, the Kolmogorov–Smirnov and Shapiro–Wilk statistics are non-significant for both variables, which indicates that they are normally distributed (or that the tests are underpowered). On balance, we can probably use Pearson's correlation coefficient. The result of this analysis is: I chose a two-tailed test because it is never really appropriate to conduct a one-tailed test (see the book chapter). I also requested the bootstrapped confidence intervals even though the data were normal because they are robust. The results in the table above indicate that the relationship between time spent writing an essay and grade awarded was not significant, Pearson's r = 0.27, 95% BCa CI [0.023, 0.517], p = 0.077.

The second part of the question asks us to do the same analysis but when the percentages are recoded into degree classifications. The degree classifications are ordinal data (not interval): they are ordered categories.
So we shouldn't use Pearson's test statistic, but Spearman's and Kendall's correlation coefficients instead: In both cases the correlation is non-significant. There was no significant relationship between degree grade classification for an essay and the time spent doing it, ρ = –0.19, p = 0.204, and τ = –0.16, p = 0.178. Note that the direction of the relationship has reversed compared to the Pearson correlation on the raw percentages. This has happened because the essay marks were recoded as 1 (first), 2 (upper second), 3 (lower second), and 4 (third), so high grades were represented by low numbers. This example illustrates one of the drawbacks of taking continuous data (like percentages) and transforming them into categorical data: when you do, you lose information and often statistical power!

Using the Notebook.sav data, find out the size of the relationship between the participant's sex and arousal.

Sex is a categorical variable with two categories, therefore, we need to quantify this relationship using a point-biserial correlation. The resulting output table is as follows: I used a two-tailed test because one-tailed tests should never really be used. I have also asked for the bootstrapped confidence intervals as they are robust. There was no significant relationship between biological sex and arousal because the p-value is larger than 0.05 and the bootstrapped confidence intervals cross zero, \(r_\text{pb}\) = –0.20, 95% BCa CI [–0.47, 0.07], p = 0.266.

Using the notebook data again, quantify the relationship between the film watched and arousal.

There was a significant relationship between the film watched and arousal, \(r_\text{pb}\) = –0.87, 95% BCa CI [–0.92, –0.80], p < 0.001. Looking at how the groups were coded, you should see that The Notebook had a code of 1, and the documentary about notebooks had a code of 2, therefore the negative coefficient reflects the fact that as film goes up (changes from 1 to 2) arousal goes down. Put another way, as the film changes from The Notebook to a documentary about notebooks, arousal decreases. So The Notebook gave rise to the greater arousal levels.

As a statistics lecturer I am interested in the factors that determine whether a student will do well on a statistics course. Imagine I took 25 students and looked at their grades for my statistics course at the end of their first year at university: first, upper second, lower second and third class (see Task 1). I also asked these students what grade they got in their high school maths exams. In the UK, GCSEs are school exams taken at age 16 that are graded A, B, C, D, E or F (an A grade is the best). The data for this study are in the file grades.sav. To what degree does GCSE maths grade correlate with first-year statistics grade?

Let's look at these variables. In the UK, GCSEs are school exams taken at age 16 that are graded A, B, C, D, E or F. These grades are categories that have an order of importance (an A grade is better than all of the lower grades). In the UK, a university student can get a first-class mark, an upper second, a lower second, a third, a pass or a fail. These grades are categories, but they have an order to them (an upper second is better than a lower second). When you have categories like these that can be ordered in a meaningful way, the data are said to be ordinal. The data are not interval, because a first-class degree encompasses a 30% range (70–100%), whereas an upper second only covers a 10% range (60–70%). When data have been measured at only the ordinal level they are said to be non-parametric and Pearson's correlation is not appropriate.
Therefore, the Spearman correlation coefficient is used. In the file, the scores are in two columns: one labelled stats and one labelled gcse. Each of the categories described above has been coded with a numeric value. In both cases, the highest grade (first class or A grade) has been coded with the value 1, with subsequent categories being labelled 2, 3 and so on. Note that for each numeric code I have provided a value label (just like we did for coding variables). In the question I predicted that better grades in GCSE maths would correlate with better degree grades for my statistics course. This hypothesis is directional and so a one-tailed test could be selected; however, in the chapter I advised against one-tailed tests so I have done two-tailed: The SPSS output shows the Spearman correlation on the variables stats and gcse. The output shows a matrix giving the correlation coefficient between the two variables (0.455), underneath is the significance value of this coefficient (0.022) and then the sample size (25). I also requested the bootstrapped confidence intervals (–0.008, 0.758). The significance value for this correlation coefficient is less than 0.05; therefore, it can be concluded that there is a significant relationship between a student's grade in GCSE maths and their degree grade for their statistics course. However, the bootstrapped confidence interval crosses zero, suggesting that the effect in the population could be zero. It is worth remembering that if we were to rerun the analysis we would get different results for the bootstrap confidence interval. I have rerun the analysis, and the resulting output is below. You can see that this time the confidence interval does not cross zero (0.041, 0.755), which suggests that there is likely to be a positive effect in the population (as GCSE grades improve, there is a corresponding improvement in degree grades for statistics). The p-value is only just significant (0.022), although the correlation coefficient is fairly large (0.455). This situation demonstrates that it is important to replicate studies. Finally, it is good to check that the value of N corresponds to the number of observations that were made. If it doesn't then data may have been excluded for some reason. We could also look at Kendall's correlation. The output is much the same as for Spearman's correlation. The value of Kendall's coefficient is less than Spearman's (it has decreased from 0.455 to 0.354), but it is still statistically significant (because the p-value of 0.029 is less than 0.05). The bootstrapped confidence intervals do not cross zero (0.029, 0.625) suggesting that there is likely to be a positive relationship in the population. We cannot assume that the GCSE grades caused the degree students to do better in their statistics course. We could report these results as follows: Bias corrected and accelerated bootstrap 95% CIs are reported in square brackets. There was a positive relationship between a person's statistics grade and their GCSE maths grade, \(r_\text{s}\) = 0.46, 95% BCa CI [0.04, 0.76], p = 0.022. There was a positive relationship between a person's statistics grade and their GCSE maths grade, τ = 0.35, 95% BCa CI [0.03, 0.65], p = 0.029. (Note that I've quoted Kendall's τ here.) In the book we saw some data relating to people's ratings of dishonest acts and the likeableness of the perpetrator (for a full description see the book). Compute the Spearman correlation between ratings of dishonesty and likeableness of the perpetrator. 
The data are in HonestyLab.sav. The relationship between ratings of dishonesty and likeableness of the perpetrator was significant because the p-value is less than 0.05 (p = 0.000) and the bootstrapped confidence intervals do not cross zero (0.766, 0.896). The value of Spearman's correlation coefficient is quite large and positive (0.844), indicating a large positive effect: the more likeable the perpetrator was, the more positively their dishonest acts were viewed. We could report the results as follows: Bias corrected and accelerated bootstrap 95% CIs are reported in square brackets. There was a positive relationship between the likeableness of a perpetrator and how positively their dishonest acts were viewed, \(r_\text{s}\) = 0.84, 95% BCa CI [0.77, 0.90], p < 0.001. We looked at data from people who had been forced to marry goats and dogs and measured their life satisfaction and, also, how much they like animals (Goat or Dog.sav). Is there a significant correlation between life satisfaction and the type of animal to which a person was married? Wife is a categorical variable with two categories (goat or dog). Therefore, we need to look at this relationship using a point-biserial correlation. The resulting table is as follows: I used a two-tailed test because one-tailed tests should never really be used (see book chapter for more explanation). I have also asked for the bootstrapped confidence intervals as they are robust. As you can see there, was a significant relationship between type of animal wife and life satisfaction because our p-value is less than 0.05 and the bootstrapped confidence intervals do not cross zero, \(r_\text{pb}\) = 0.63, BCa CI [0.34, 0.84], p = 0.003. Looking at how the groups were coded, you should see that goat had a code of 1 and dog had a code of 2, therefore this result reflects the fact that as wife goes up (changes from 1 to 2) life satisfaction goes up. Put another way, as wife changes from goat to dog, life satisfaction increases. So, being married to a dog was associated with greater life satisfaction. Repeat the analysis above taking account of animal liking when computing the correlation between life satisfaction and the animal to which a person was married. We can conduct a partial correlation between life satisfaction and the animal to which a person was married while 'adjusting' for the effect of liking animals. The output for the partial correlation above is a matrix of correlations for the variables wife and life satisfaction but controlling for the effect of animal liking. Note that the top and bottom of the table contain identical values, so we can ignore one half of the table. First, notice that the partial correlation between wife and life satisfaction is 0.701, which is greater than the correlation when the effect of animal liking is not controlled for (r = 0.630). The correlation has become more statistically significant (its p-value has decreased from 0.003 to 0.001) and that the confidence interval [0.389, 0.901] still doesn't contain zero. In terms of variance, the value of \(R^2\) for the partial correlation is 0.491, which means that type of animal wife now shares 49.1% of the variance in life satisfaction (compared to 39.7% when animal liking was not controlled). Running this analysis has shown us that type of wife alone explains a large portion of the variation in life satisfaction. In other words, the relationship between wife and life satisfaction is not due to animal liking. 
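For reference, the point-biserial and partial correlations in this task can be reproduced from syntax roughly as follows (assuming the variables in Goat or Dog.sav are called wife, life_satisfaction and animal — these names are my guess, so check your file):

```
* Point-biserial correlation between type of wife (goat vs dog) and life satisfaction.
CORRELATIONS
  /VARIABLES=wife life_satisfaction
  /PRINT=TWOTAIL NOSIG.
* The same relationship, adjusting for how much the person likes animals.
PARTIAL CORR
  /VARIABLES=wife life_satisfaction BY animal
  /SIGNIFICANCE=TWOTAIL
  /STATISTICS=CORR.
```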
We looked at data based on findings that the number of cups of tea drunk was related to cognitive functioning (Feng et al., 2010). The data are in the file Tea Makes You Brainy 15.sav. What is the correlation between tea drinking and cognitive functioning? Is there a significant effect? Because the numbers of cups of tea and cognitive function are both interval variables, we can conduct a Pearson's correlation coefficient. If we request bootstrapped confidence intervals then we don't need to worry about checking whether the data are normal because they are robust. I chose a two-tailed test because it is never really appropriate to conduct a one-tailed test (see the book chapter). The results in the table above indicate that the relationship between number of cups of tea drunk per day and cognitive function was not significant. We can tell this because our p-value is greater than 0.05, and the bootstrapped confidence intervals cross zero, indicating that the effect in the population could be zero (i.e. no effect). Pearson's r = 0.078, 95% BCa CI [–0.39, 0.54], p = 0.783. The research in the previous task was replicated but in a larger sample (N = 716), which is the same as the sample size in Feng et al.'s research (Tea Makes You Brainy 716.sav). Conduct a correlation between tea drinking and cognitive functioning. Compare the correlation coefficient and significance in this large sample, with the previous task What statistical point do the results illustrate? The output for the Pearson's correlation is: We can see that although the value of Pearson's r has not changed, it is still very small (0.078), the relationship between the number of cups of tea drunk per day and cognitive function is now just significant (p = 0.038) and the confidence intervals no longer cross zero (0.010, 0.145) – though the lower confidence interval is very close to zero, suggesting that the effect in the population could still be very close to zero. This example indicates one of the downfalls of significance testing; you can get significant results when you have large sample sizes even if the effect is very small. Basically, whether you get a significant result or not is entirely subject to the sample size. In Chapter 6 we looked at hygiene scores over three days of a rock music festival (Download Festival.sav). Using Spearman's correlation, were hygiene scores on day 1 of the festival significantly correlated with those on day 3? The hygiene scores on day 1 of the festival correlated significantly with hygiene scores on day 3. The value of Spearman's correlation coefficient is 0.344, which is a positive value suggesting that the smellier you are on day 1, the smellier you will be on day 3, \(r_\text{s}\) = 0.34, 95% BCa CI [0.14, 0.52], p < 0.001. Using the data in Shopping Exercise.sav find out if there is a significant relationship between the time spent shopping and the distance covered. The variables Time and Distance are both interval. Therefore, we can conduct a Pearson's correlation. I chose a two-tailed test because it is never really appropriate to conduct a one-tailed test (see the book chapter). The output indicates that there was a significant positive relationship between time spent shopping and distance covered. We can tell that the relationship was significant because the p-value is smaller than 0.05. More important, the robust confidence intervals do not cross zero (0.480, 0.960), suggesting that the effect in the population is unlikely to be zero. 
Also, our value for Pearson's r is very large (0.83) indicating a large effect. Pearson's r = 0.83, 95% BCa CI [0.48, 0.96], p = 0.003. What effect does accounting for the participant's sex have on the relationship between the time spent shopping and the distance covered? To answer this question, we need to conduct a partial correlation between the time spent shopping (interval variable) and the distance covered (interval variable) while 'adjusting' for the effect of sex (dicotomous variable). The partial correlation between Time and Distance is 0.820, which is slightly smaller than the correlation when the effect of sex is not controlled for (r = 0.830). The correlation has become slightly less statistically significant (its p-value has increased from 0.003 to 0.007). In terms of variance, the value of \(R^2\) for the partial correlation is 0.672, which means that time spent shopping now shares 67.2% of the variance in distance covered when shopping (compared to 68.9% when not adjusted for sex). Running this analysis has shown us that time spent shopping alone explains a large portion of the variation in distance covered. We looked at data based on findings that the number of cups of tea drunk was related to cognitive functioning (Feng, Gwee, Kua, & Ng, 2010). Using a linear model that predicts cognitive functioning from tea drinking, what would cognitive functioning be if someone drank 10 cups of tea? Is there a significant effect? (Tea Makes You Brainy 716.sav) The basic output from SPSS Statistics is as follows: Looking at the output below, we can see that we have a model that significantly improves our ability to predict cognitive functioning. The positive standardized beta value (0.078) indicates a positive relationship between number of cups of tea drunk per day and level of cognitive functioning, in that the more tea drunk, the higher your level of cognitive functioning. We can then use the model to predict level of cognitive functioning after drinking 10 cups of tea per day. The first stage is to define the model by replacing the b-values in the equation below with the values from the Coefficients output. In addition, we can replace the X and Y with the variable names so that the model becomes: \[ \begin{aligned} \text{Cognitive functioning}_i &= b_0 + b_1 \text{Tea drinking}_i \\ \ &= 49.22 +(0.460 \times \text{Tea drinking}_i) \end{aligned} \] We can predict cognitive functioning, by replacing Tea drinking in the equation with the value 10: \[ \begin{aligned} \text{Cognitive functioning}_i &= 49.22 +(0.460 \times \text{Tea drinking}_i) \\ &= 49.22 +(0.460 \times 10) \\ &= 53.82 \end{aligned} \] Therefore, if you drank 10 cups of tea per day, your level of cognitive functioning would be 53.82. Estimate a linear model for the pubs.sav data predicting mortality from the number of pubs. Try repeating the analysis but bootstrapping the confidence intervals. The key output from SPSS Statistics is as follows: Looking at the output, we can see that the number of pubs significantly predicts mortality, t(6) = 3.33, p = 0.016. The positive beta value (0.806) indicates a positive relationship between number of pubs and death rate in that, the more pubs in an area, the higher the rate of mortality (as we would expect). The value of \(R^2\) tells us that number of pubs accounts for 64.9% of the variance in mortality rate – that's over half! 
Looking at the table labelled Bootstrap for Coefficients we can see that the bootstrapped confidence intervals are both positive values – they do not cross zero (8.229, 100.00) – so, assuming this interval is one of the 95% that contain the population value, we can be reasonably confident that there is a positive and non-zero relationship between number of pubs in an area and its mortality rate.

We encountered data (HonestyLab.sav) relating to people's ratings of dishonest acts and the likeableness of the perpetrator. Run a linear model with bootstrapping to predict ratings of dishonesty from the likeableness of the perpetrator.

Looking at the output we can see that the likeableness of the perpetrator significantly predicts ratings of dishonest acts, t(98) = 14.80, p < 0.001. The positive standardized beta value (0.83) indicates a positive relationship between likeableness of the perpetrator and ratings of dishonesty, in that, the more likeable the perpetrator, the more positively their dishonest acts were viewed (remember that dishonest acts were measured on a scale from 0 = appalling behaviour to 10 = it's OK really). The value of \(R^2\) tells us that likeableness of the perpetrator accounts for 69.1% of the variance in the rating of dishonesty, which is over half. Looking at the table labelled Bootstrap for Coefficients, we can see that the bootstrapped confidence intervals do not cross zero (0.818, 1.072), so, assuming this interval is one of the 95% that contain the population value, we can be reasonably confident that there is a non-zero relationship between the likeableness of the perpetrator and ratings of dishonest acts.

A fashion student was interested in factors that predicted the salaries of catwalk models. She collected data from 231 models (Supermodel.sav). For each model she asked them their salary per day (salary), their age (age), their length of experience as models (years), and their industry status as a model, expressed as their percentile position as rated by a panel of experts (beauty). Use a linear model to see which variables predict a model's salary. How valid is the model?

The first parts of the output are as follows: To begin with, a sample size of 231 with three predictors seems reasonable because this would easily detect medium to large effects (see the diagram in the chapter). Overall, the model accounts for 18.4% of the variance in salaries and is a significant fit to the data (F(3, 227) = 17.07, p < .001). The adjusted \(R^2\) (0.17) shows only a small amount of shrinkage from the unadjusted value (0.184), suggesting that the model should cross-validate reasonably well. In terms of the individual predictors we could report:

| Predictor | b | Std. Error | t | p |
|---|---|---|---|---|
| (Intercept) | −60.890 | 16.497 | −3.691 | < 0.001 |
| age | 6.234 | 1.411 | 4.418 | < 0.001 |
| years | −5.561 | 2.122 | −2.621 | 0.009 |
| beauty | −0.196 | 0.152 | −1.289 | 0.199 |

It seems as though salaries are significantly predicted by the age of the model. This is a positive relationship (look at the sign of the beta), indicating that as age increases, salaries increase too. The number of years spent as a model also seems to significantly predict salaries, but this is a negative relationship indicating that the more years you've spent as a model, the lower your salary. This finding seems very counter-intuitive, but we'll come back to it later. Finally, the attractiveness of the model doesn't seem to predict salaries significantly. If we wanted to write the regression model, we could write it as:

\[ \widehat{\text{salary}}_i = -60.89 + (6.23 \times \text{age}_i) - (5.56 \times \text{years}_i) - (0.20 \times \text{beauty}_i) \]

The next part of the question asks whether this model is valid.
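Before looking at the diagnostics, here is a syntax sketch of the model together with the casewise diagnostics, residual plots and collinearity statistics that are interpreted below (the variable names salary, age, years and beauty are those given in the task description):

```
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA COLLIN TOL
  /DEPENDENT salary
  /METHOD=ENTER age years beauty
  /CASEWISE PLOT(ZRESID) OUTLIERS(2)
  /SCATTERPLOT=(*ZRESID ,*ZPRED)
  /RESIDUALS HISTOGRAM(ZRESID) NORMPROB(ZRESID).
```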
There are six cases that have a standardized residual greater than 3, and two of these are fairly substantial (case 5 and 135). We have 5.19% of cases with standardized residuals above 2, so that's as we expect, but 3% of cases with residuals above 2.5 (we'd expect only 1%), which indicates possible outliers. Normality of errors The histogram reveals a skewed distribution, indicating that the normality of errors assumption has been broken. The normal P–P plot verifies this because the dashed line deviates considerably from the straight line (which indicates what you'd get from normally distributed errors). Homoscedasticity and independence of errors The scatterplot of ZPRED vs. ZRESID does not show a random pattern. There is a distinct funnelling, indicating heteroscedasticity. Multicollinearity For the age and experience variables in the model, VIF values are above 10 (or alternatively, tolerance values are all well below 0.2), indicating multicollinearity in the data. In fact, the correlation between these two variables is around .9! So, these two variables are measuring very similar things. Of course, this makes perfect sense because the older a model is, the more years she would've spent modelling! So, it was fairly stupid to measure both of these things! This also explains the weird result that the number of years spent modelling negatively predicted salary (i.e. more experience = less salary!): in fact if you do a simple regression with experience as the only predictor of salary you'll find it has the expected positive relationship. This hopefully demonstrates why multicollinearity can bias the regression model. All in all, several assumptions have not been met and so this model is probably fairly unreliable. A study was carried out to explore the relationship between Aggression and several potential predicting factors in 666 children who had an older sibling. Variables measured were Parenting_Style (high score = bad parenting practices), Computer_Games (high score = more time spent playing computer games), Television (high score = more time spent watching television), Diet (high score = the child has a good diet low in harmful additives), and Sibling_Aggression (high score = more aggression seen in their older sibling). Past research indicated that parenting style and sibling aggression were good predictors of the level of aggression in the younger child. All other variables were treated in an exploratory fashion. Analyse them with a linear model (Child Aggression.sav). We need to conduct this analysis hierarchically, entering parenting style and sibling aggression in the first step (forced entry): and the remaining variables in a second step (stepwise): The key output is as follows: Based on the final model (which is actually all we're interested in) the following variables predict aggression: Parenting style (b = 0.062, \(\beta\) = 0.194, t = 4.93, p < 0.001) significantly predicted aggression. The beta value indicates that as parenting increases (i.e. as bad practices increase), aggression increases also. Sibling aggression (b = 0.086, \(\beta\)= 0.088, t = 2.26, p = 0.024) significantly predicted aggression. The beta value indicates that as sibling aggression increases (became more aggressive), aggression increases also. Computer games (b = 0.143, \(\beta\) = 0.037, t= 3.89, p < .001) significantly predicted aggression. The beta value indicates that as the time spent playing computer games increases, aggression increases also. 
Good diet (b = –0.112, \(\beta\) = –0.118, t = –2.95, p = 0.003) significantly predicted aggression. The beta value indicates that as the diet improved, aggression decreased. The only factor that did not significantly predict aggression was television (b if entered = 0.032, t = 0.72, p = 0.475). Based on the standardized beta values, the most substantive predictor of aggression was actually parenting style, followed by computer games, diet and then sibling aggression.

\(R^2\) is the squared correlation between the observed values of aggression and the values of aggression predicted by the model. The values in this output tell us that sibling aggression and parenting style in combination explain 5.3% of the variance in aggression. When computer game use is factored in as well, 7% of variance in aggression is explained (i.e. an additional 1.7%). Finally, when diet is added to the model, 8.2% of the variance in aggression is explained (an additional 1.2%). Even with all four of these predictors in the model, more than 90% of the variance in aggression remains unexplained.

The histogram and P-P plots suggest that errors are (approximately) normally distributed: The scatterplot helps us to assess both homoscedasticity and independence of errors. The scatterplot of ZPRED vs. ZRESID does show a random pattern and so indicates no violation of the independence of errors assumption. Also, the errors on the scatterplot do not funnel out, indicating homoscedasticity of errors, thus no violations of these assumptions.

Repeat the analysis in Labcoat Leni's Real Research 9.1 using bootstrapping for the confidence intervals. What are the confidence intervals for the regression parameters?

To recap, the dialog boxes to run the analysis are as follows (see also the Labcoat Leni answers). First, enter Grade, Age and Gender into the model: In a second block, enter NEO_FFI (extroversion): In the final block, enter NPQC_R (narcissism): We can activate bootstrapping with these options:

Facebook status update frequency

The main benefit of the bootstrap confidence intervals and significance values is that they do not rely on assumptions of normality or homoscedasticity, so they give us an accurate estimate of the true population value of b for each predictor. The bootstrapped confidence intervals in the output do not affect the conclusions reported in Ong et al. (2011). Ong et al.'s prediction was still supported in that, after controlling for age, grade and gender, narcissism significantly predicted the frequency of Facebook status updates over and above extroversion, b = 0.066 [0.025, 0.107], p = 0.003.

Facebook profile picture rating

Similarly, the bootstrapped confidence intervals for the second regression are consistent with the conclusions reported in Ong et al. (2011). That is, after adjusting for age, grade and gender, narcissism significantly predicted the Facebook profile picture ratings over and above extroversion, b = 0.173 [0.106, 0.230], p = 0.001.

Coldwell, Pike and Dunn (2006) investigated whether household chaos predicted children's problem behaviour over and above parenting. From 118 families they recorded the age and gender of the youngest child (child_age and child_gender). They measured dimensions of the child's perceived relationship with their mum: (1) warmth/enjoyment (child_warmth), and (2) anger/hostility (child_anger). Higher scores indicate more warmth/enjoyment and anger/hostility respectively.
They measured the mum's perceived relationship with her child, resulting in dimensions of positivity (mum_pos) and negativity (mum_neg). Household chaos (chaos) was assessed. The outcome variable was the child's adjustment (sdq): the higher the score, the more problem behaviour the child was reported to be displaying. Conduct a hierarchical linear model in three steps: (1) enter child age and gender; (2) add the variables measuring parent-child positivity, parent-child negativity, parent-child warmth, parent-child anger; (3) add chaos. Is household chaos predictive of children's problem behaviour over and above parenting? (Coldwell et al. (2006).sav). To summarize the dialog boxes to run the analysis, first, enter child_age and child_gender into the model and set sdq as the outcome variable: In a new block, add child_anger, child_warmth, mum_pos and mum_neg into the model: In a final block, add chaos to the model: Set some basic options such as these: From the output we can conclude that household chaos significantly predicted younger sibling's problem behaviour over and above maternal parenting, child age and gender, t(88) = 2.09, p = 0.039. The positive standardized beta value (0.218) indicates that there is a positive relationship between household chaos and child's problem behaviour. In other words, the higher the level of household chaos, the more problem behaviours the child displayed. The value of \(R^2\) (0.11) tells us that household chaos accounts for 11% of the variance in child problem behaviour. Task 10.1 Is arachnophobia (fear of spiders) specific to real spiders or will pictures of spiders evoke similar levels of anxiety? Twelve arachnophobes were asked to play with a big hairy tarantula with big fangs and an evil look in its eight eyes and at a different point in time were shown only pictures of the same spider. The participants' anxiety was measured in each case. Do a t-test to see whether anxiety is higher for real spiders than pictures (Big Hairy Spider.sav). Compute the test We have 12 arachnophobes who were exposed to a picture of a spider (Picture) and on a separate occasion a real live tarantula (Real). Their anxiety was measured in each condition (half of the participants were exposed to the picture before the real spider while the other half were exposed to the real spider first). I have already described how the data are arranged, and so we can move straight onto doing the test itself. First, we need to access the main dialog box by selecting Analyze > Compare Means > Paired-Samples T Test …. Once the dialog box is activated, select the pair of variables to be analysed (Real and Picture) by clicking on one and holding down the Ctrl key (Cmd on a Mac) while clicking on the other. Drag these variables to the box labelled Paired Variables (or click ). To run the analysis click . Main dialog box for paired-samples t-test SPSS Statistics output The resulting output contains three tables. The first contains summary statistics for the two experimental conditions. For each condition we are told the mean, the number of participants (N) and the standard deviation of the sample. In the final column we are told the standard error. The second table contains the Pearson correlation between the two conditions. For these data the experimental conditions yield a fairly large, but not significant, correlation coefficient, r = 0.545, p = 0.067. The final table tells us whether the difference between the means of the two conditions was significant;y different from zero. 
First, the table tells us the mean difference between scores. The table also reports the standard deviation of the differences between the means and, more important, the standard error of the differences between participants' scores in each condition. The test statistic, t, is calculated by dividing the mean of differences by the standard error of differences (t = −7/2.8311 = −2.47). The size of t is compared against known values (under the null hypothesis) based on the degrees of freedom. When the same participants have been used, the degrees of freedom are the sample size minus 1 (df = N − 1 = 11). SPSS uses the degrees of freedom to calculate the exact probability that a value of t at least as big as the one obtained could occur if the null hypothesis were true (i.e., there was no difference between these means). This probability value is in the column labelled Sig. The two-tailed probability for the spider data is very low (p = 0.031) and significant because 0.031 is smaller than the widely-used criterion of 0.05. The fact that the t-value is a negative number tells us that the first condition (the picture condition) had a smaller mean than the second (the real condition) and so the real spider led to greater anxiety than the picture. Therefore, we can conclude that exposure to a real spider caused significantly more reported anxiety in arachnophobes than exposure to a picture, t(11) = −2.47, p = .031. Finally, this output contains a 95% confidence interval for the mean difference. Assuming that this sample's confidence interval is one of the 95 out of 100 that contains the population value, we can say that the true mean difference lies between −13.231 and −0.769. The importance of this interval is that it does not contain zero (i.e., both limits are negative) because this tells us that the true value of the mean difference is unlikely to be zero. Calculating the effect size We can compute the effect size from the value of t and the df from the output: \[ r = \sqrt{\frac{-2.473^2}{-2.473^2 + 11}} = \sqrt{\frac{6.116}{17.116}} = 0.60 \] This represents a very large effect. Therefore, as well as being statistically significant, this effect is large and probably a substantive finding. Reporting the analysis We could report the result as: On average, participants experienced significantly greater anxiety with real spiders (M = 47.00, SE = 3.18) than with pictures of spiders (M = 40.00, SE = 2.68), t(11) = −2.47, p = 0.031, r = 0.60. Plot an error bar graph of the data in Task 1 (remember to adjust for the fact that the data are from a repeated measures design.) (2) Step 1: Calculate the mean for each participant To correct the repeated-measures error bars, we need to use the compute command. To begin with, we need to calculate the average anxiety for each participant and so we use the mean function. Access the main compute dialog box by selecting Transform > Compute Variable. Enter the name Mean into the box labelled Target Variable and then in the list labelled Function group select Statistical and then in the list labelled Functions and Special Variables select Mean. Transfer this command to the command area by clicking on . When the command is transferred, it appears in the command area as MEAN(?,?); the question marks should be replaced with variable names (which can be typed manually or transferred from the variables list). So replace the first question mark with the variable picture and the second one with the variable real. The completed dialog box should look like the one below. 
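(If you would rather skip the dialog box, the same new variable can be created with a single line of syntax; this uses the two column names picture and real from the data file, as above.)

```
* Mean anxiety across the two conditions, computed for each participant.
COMPUTE Mean = MEAN(picture, real).
EXECUTE.
```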
Click on to create this new variable, which will appear as a new column in the data editor. Using the compute function to calculate the mean of two columns

Step 2: Calculate the grand mean

Access the descriptives command by selecting Analyze > Descriptive Statistics > Descriptives …. The dialog box shown below should appear. The descriptives command is used to get basic descriptive statistics for variables, and by clicking a second dialog box is activated. Select the variable Mean from the list and drag it to the box labelled Variable(s) (or click ). Then use the Options dialog box to specify only the mean (you can leave the default settings as they are, but it is only the mean in which we are interested). If you run this analysis the output should provide you with some self-explanatory descriptive statistics for each of the three variables (assuming you selected all three). You should see that we get the mean of the picture condition, and the mean of the real spider condition, but it's the final variable we're interested in: the mean of the picture and spider condition. The mean of this variable is the grand mean, and you can see from the summary table that its value is 43.50. We will use this grand mean in the following calculations. Main dialog box for descriptive statistics Options for descriptive statistics Output for descriptive statistics

Step 3: Calculate the adjustment factor

Next, we equalize the means between participants (i.e., adjust the scores in each condition such that when we take the mean score across conditions, it is the same for all participants). To do this, we calculate an adjustment factor by subtracting each participant's mean score from the grand mean. We can use the compute function to do this calculation for us. Activate the compute dialog box, give the target variable a name (I suggest Adjustment) and then use the command '43.5-mean'. This command will take the grand mean (43.5) and subtract from it each participant's average anxiety level: Calculating the adjustment factor

This process creates a new variable in the data editor called Adjustment. The scores in the Adjustment column represent the difference between each participant's mean anxiety and the mean anxiety level across all participants. You'll notice that some of the values are positive, and these participants are ones who were less anxious than average. Other participants were more anxious than average and they have negative adjustment scores. We can now use these adjustment values to eliminate the between-subject differences in anxiety.

Step 4: Create adjusted values for each variable

So far, we have calculated the difference between each participant's mean score and the mean score of all participants (the grand mean). This difference can be used to adjust the existing scores for each participant. First we need to adjust the scores in the picture condition. Once again, we can use the compute command to make the adjustment. Activate the compute dialog box in the same way as before, and then name our new variable Picture_Adjusted. All we are going to do is to add each participant's score in the picture condition to their adjustment value. Select the variable picture and drag it to the command area (or click ), then click on and drag the variable Adjustment to the command area (or click ).
The completed dialog box is: Adjusting the values of picture

Now do the same thing for the variable real: create a variable called Real_Adjusted that contains the values of real added to the value in the Adjustment column: Adjusting the values of real

Now, the variables Real_Adjusted and Picture_Adjusted represent the anxiety experienced in each condition, adjusted so as to eliminate any between-subject differences. You can plot an error bar graph using the chart builder. The finished dialog box will look like this: Completed chart builder dialog box

The resulting error bar graph is shown below. The error bars don't overlap which suggests that the groups are significantly different (although we knew this already from the previous task). Error bar graph of the adjusted values of Big Hairy Spider.sav

'Pop psychology' books sometimes spout nonsense that is unsubstantiated by science. As part of my plan to rid the world of pop psychology I took 20 people in relationships and randomly assigned them to one of two groups. One group read the famous popular psychology book Women are from Bras and men are from Penis, and the other read Marie Claire. The outcome variable was their relationship happiness after their assigned reading. Were people happier with their relationship after reading the pop psychology book? (Penis.sav).

The output for this example should be: We can compute an effect size as follows: \[ r = \sqrt{\frac{-2.125^2}{-2.125^2 + 18}} = \sqrt{\frac{4.52}{22.52}} = 0.45 \] Or Cohen's d. Let's use a pooled estimate of the standard deviation: \[ \begin{aligned} s_p &= \sqrt{\frac{(N_1-1) s_1^2+(N_2-1) s_2^2}{N_1+N_2-2}} \\ &= \sqrt{\frac{(10-1)4.110^2+(10-1)4.709^2}{10+10-2}} \\ &= \sqrt{\frac{351.60}{18}} \\ &= 4.42 \end{aligned} \] \[\hat{d} = \frac{20-24.20}{4.42} = -0.95\] This means that reading the self-help book reduced relationship happiness by about one standard deviation, which is a fairly big effect. We could report this result as: On average, the reported relationship happiness after reading Marie Claire (M = 24.20, SE = 1.49), was significantly higher than after reading Women are from bras and men are from penis (M = 20.00, SE = 1.30), t(17.68) = −2.12, p = 0.048, \(\hat{d} = -0.95\).

Twaddle and Sons, the publishers of Women are from Bras and men are from Penis, were upset about my claims that their book was as useful as a paper umbrella. They ran their own experiment (N = 500) in which relationship happiness was measured after participants had read their book and after reading one of mine (Field & Hole, 2003). (Participants read the books in counterbalanced order with a six-month delay.) Was relationship happiness greater after reading their wonderful contribution to pop psychology than after reading my tedious tome about experiments? (Field&Hole.sav).

We can compute an effect size, r, as follows: \[ r = \sqrt{\frac{-2.706^2}{-2.706^2 + 499}} = \sqrt{\frac{7.32}{506.32}} = 0.12 \] Or Cohen's d. Let's use Field and Hole as the control: \[\hat{d} = \frac{20.02-18.49}{8.992} = 0.17\] We can adjust this estimate for the repeated-measures design: \[\hat{d}_D = \frac{\hat{d}}{\sqrt{1-r}} = \frac{0.17}{\sqrt{1-0.117}} = 0.18\] Therefore, although this effect is highly statistically significant, the size of the effect is very small and represents a trivial finding. In this example, it would be tempting for Twaddle and Sons to conclude that their book produced significantly greater relationship happiness than our book.
In fact, many researchers would write conclusions like this: On average, the reported relationship happiness after reading Field and Hole (2003) (M = 18.49, SE = 0.402), was significantly lower than after reading Women are from bras and men are from penis (M = 20.02, SE = 0.446), t(499) = 2.71, p = 0.007, \(\hat{d}_D = 0.18\). In other words, reading Women are from bras and men are from penis produces significantly greater relationship happiness than that book by smelly old Field and Hole. However, to reach such a conclusion is to confuse statistical significance with the importance of the effect. By calculating the effect size we've discovered that although the difference in happiness after reading the two books is statistically different, the size of effect that this represents is very small. A more correct interpretation might, therefore, be: On average, the reported relationship happiness after reading Field and Hole (2003) (M = 18.49, SE = 0.402), was significantly lower than after reading Women are from bras and men are from penis (M = 20.02, SE = 0.446), t(499) = 2.71, p = 0.007, \(\hat{d}_D = 0.18\). However, the effect size was small, revealing that this finding was not substantial in real terms. Of course, this latter interpretation would be unpopular with Twaddle and Sons who would like to believe that their book had a huge effect on relationship happiness.

We looked at data from people who had been forced to marry goats and dogs and measured their life satisfaction as well as how much they like animals (Goat or Dog.sav). Conduct a t-test to see whether life satisfaction depends upon the type of animal to which a person was married. \[ r = \sqrt{\frac{-3.446^2}{-3.446^2 + 18}} = \sqrt{\frac{11.87}{29.87}} = 0.63 \] Or Cohen's d. Let's use a pooled estimate of the standard deviation: \[ \begin{aligned} s_p &= \sqrt{\frac{(N_1-1) s_1^2+(N_2-1) s_2^2}{N_1+N_2-2}} \\ &= \sqrt{\frac{(12-1)15.509^2+(8-1)11.103^2}{12+8-2}} \\ &= \sqrt{\frac{3508.756}{18}} \\ &= 13.96 \end{aligned} \] Cohen's d is: \[\hat{d} = \frac{38.17-60.13}{13.96} = -1.57\] As well as being statistically significant, this effect is very large and so represents a substantive finding. We could report: On average, the life satisfaction of men married to dogs (M = 60.13, SE = 3.93) was significantly higher than that of men who were married to goats (M = 38.17, SE = 4.48), t(17.84) = −3.69, p = 0.002, \(\hat{d} = -1.57\).

Fit a linear model to the data in Task 5 to see whether life satisfaction is significantly predicted from the type of animal to which a person was married. What do you notice about the t-value and significance in this model compared to Task 5? The output from the linear model should be: Compare this output with the one from the previous Task: the values of t and p are the same. (Technically, t is different because for the linear model it is a positive value and for the t-test it is negative. However, the sign of t merely reflects which way around you coded the dog and goat groups. The linear model, by default, has coded the groups the opposite way around to the t-test.) The main point I wanted to make here is that whether you run these data through the regression or t-test menus, the results are identical.

In an earlier chapter we looked at hygiene scores over three days of a rock music festival (Download Festival.sav). Do a paired-samples t-test to see whether hygiene scores on day 1 differed from those on day 3.
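(For reference, the test can also be run from a syntax window. The command below is only a sketch and assumes that the day 1 and day 3 hygiene scores are stored in variables called day1 and day3; check the names in your copy of the data file.)

* Paired-samples t-test comparing hygiene on day 1 with hygiene on day 3.
T-TEST PAIRS=day1 WITH day3 (PAIRED)
  /CRITERIA=CI(.95)
  /MISSING=ANALYSIS.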
We can compute the effect size r as follows: \[ r = \sqrt{\frac{-10.587^2}{-10.587^2 + 122}} = \sqrt{\frac{112.08}{234.08}} = 0.69 \] Or Cohen's d. Let's use day 1 as the control: \[\hat{d} = \frac{0.9765-1.6515}{0.6439} = -1.048\] \[\hat{d}_D = \frac{\hat{d}}{\sqrt{1-r}} = \frac{-1.048}{\sqrt{1-0.458}} = -1.424\] This represents a very large effect. Therefore, as well as being statistically significant, this effect is large and represents a substantive finding. We could report: On average, hygiene scores significantly decreased from day 1 (M = 1.65, SE = 0.06) to day 3 (M = 0.98, SE = 0.06) of the Download music festival, t(122) = 10.59, p < .001, \(\hat{d}_D = -1.42\).

Analyse the data from Task 1 of the earlier chapter (whether men and dogs differ in their dog-like behaviours) using an independent t-test with bootstrapping. Do you reach the same conclusions? (MenLikeDogs.sav).

We would conclude that men and dogs do not significantly differ in the amount of dog-like behaviour they engage in. The output also shows the results of bootstrapping. The confidence interval ranged from −5.25 to 7.87, which implies (assuming that this confidence interval is one of the 95% containing the true effect) that the difference between means in the population could be negative, positive or even zero. In other words, it's possible that the true difference between means is zero. Therefore, this bootstrap confidence interval confirms our conclusion that men and dogs do not differ in amount of dog-like behaviour. \[ r = \sqrt{\frac{0.363^2}{0.363^2 + 38}} = \sqrt{\frac{0.132}{38.13}} = 0.06 \] Or Cohen's d. Let's use a pooled estimate of the standard deviation: \[ \begin{aligned} s_p &= \sqrt{\frac{(N_1-1) s_1^2+(N_2-1) s_2^2}{N_1+N_2-2}} \\ &= \sqrt{\frac{(20-1)9.90^2+(20-1)10.98^2}{20+20-2}} \\ &= \sqrt{\frac{4152.838}{38}} \\ &= 10.45 \end{aligned} \] Cohen's d is: \[\hat{d} = \frac{26.85-28.05}{10.45} = -0.115\] On average, men (M = 26.85, SE = 2.23) engaged in less dog-like behaviour than dogs (M = 28.05, SE = 2.37). However, this difference, 1.2, BCa 95% CI [−5.25, 7.87], was not significant, t(37.60) = 0.36, p = 0.72, \(\hat{d} = -0.12\).

Analyse the data on whether the type of music you hear influences goat sacrificing (DarkLord.sav), using a paired-samples t-test with bootstrapping. Do you reach the same conclusions?

The bootstrap confidence interval ranges from −4.19 to −0.72. It does not cross zero, suggesting (if we assume that it is one of the 95% of confidence intervals that contain the true value) that the effect in the population is unlikely to be zero. Therefore, this bootstrap confidence interval confirms our conclusion that there is a significant difference between the number of goats sacrificed when listening to the song containing the backward message compared to when listening to the song played normally. \[ r = \sqrt{\frac{-2.76^2}{-2.76^2 + 31}} = \sqrt{\frac{7.62}{38.62}} = 0.44 \] Or Cohen's d. Let's use the no message group as the control: \[\hat{d} = \frac{9.16-11.50}{4.385} = -0.534\] This represents a fairly large effect. We could report: Fewer goats were sacrificed after hearing the backward message (M = 9.16, SE = 0.62) than after hearing the normal version of the Britney song (M = 11.50, SE = 0.80). This difference, −2.34, BCa 95% CI [−4.19, −0.72], was significant, t(31) = 2.76, p = 0.015, \(\hat{d} = -0.53\).
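(As an aside, bootstrapping is normally switched on from the Bootstrap dialog box, but if the bootstrapping add-on is available it can also be requested in syntax by placing a BOOTSTRAP command immediately before the test. The lines below are only a sketch: the variable names message and nomessage are assumptions, so check what the two conditions are actually called in DarkLord.sav.)

* Bootstrap the paired-samples t-test with BCa confidence intervals.
BOOTSTRAP
  /SAMPLING METHOD=SIMPLE
  /VARIABLES TARGET=message nomessage
  /CRITERIA CILEVEL=95 CITYPE=BCA NSAMPLES=1000
  /MISSING USERMISSING=EXCLUDE.
T-TEST PAIRS=message WITH nomessage (PAIRED)
  /CRITERIA=CI(.95).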
Task 10.10 Test whether the number of offers was significantly different in people listening to Bon Scott than in those listening to Brian Johnson, using an independent t-test and bootstrapping. Do your results differ from Oxoby (2008)? (Oxoby (2008) Offers.sav).

The bootstrap confidence interval ranged from −1.399 to −0.045, which does not cross zero, suggesting (if we assume that it is one of the 95% of confidence intervals that contain the true value) that the effect in the population is unlikely to be zero. \[ r = \sqrt{\frac{-2.007^2}{-2.007^2 + 34}} = \sqrt{\frac{4.028}{38.028}} = 0.33 \] Or Cohen's d. Let's use a pooled estimate of the standard deviation: \[ \begin{aligned} s_p &= \sqrt{\frac{(N_1-1) s_1^2+(N_2-1) s_2^2}{N_1+N_2-2}} \\ &= \sqrt{\frac{(18-1)0.970^2+(18-1)1.179^2}{18 + 18 -2}} \\ &= \sqrt{\frac{39.626}{34}} \\ &= 1.08 \end{aligned} \] Cohen's d is: \[\hat{d} = \frac{4.00-3.28}{1.08} = 0.667\] Well, that's pretty spooky: the difference between Bon Scott and Brian Johnson turns out to be the number of the beast. Who'd have thought it. We could report these results as: On average, more offers were made when listening to Brian Johnson (M = 4.00, SE = 0.23) than Bon Scott (M = 3.28, SE = 0.28). This difference, −0.72, BCa 95% CI [−1.45, −0.05], was only borderline significant, t(34) = 2.01, p = 0.053; however, it produced a medium effect, \(\hat{d} = -0.67\).

McNulty et al. (2008) found a relationship between a person's Attractiveness and how much Support they give their partner among newlyweds. The data are in McNulty et al. (2008).sav. Is this relationship moderated by gender (i.e., whether the data were from the husband or wife)?

Specifying the model

Make sure you have the PROCESS tool installed (installation details are in the book). Access the PROCESS dialog box using Analyze > Regression > PROCESS. Remember that you can move variables in the dialog box by dragging them, or selecting them and clicking . We need to specify three variables: Drag the outcome variable (Support) to the box labelled Outcome Variable (Y). Drag the predictor variable (Attractiveness) to the box labelled Independent Variable (X). Drag the moderator variable (Gender) to the box labelled M Variable(s). The models tested by PROCESS are listed in the drop-down box labelled Model Number. Simple moderation analysis is represented by model 1, so activate this drop-down list and select . The finished dialog box looks like this: Click on and set these options: Because our data file has variables with names longer than 8 characters, click on and set the option to allow long names: Back in the main dialog box, click to run the analysis.

Interpreting the output

The first part of the output contains the main moderation analysis. Moderation is shown up by a significant interaction effect, and in this case the interaction is highly significant, b = 0.105, 95% CI [0.047, 0.164], t = 3.57, p < 0.001, indicating that the relationship between attractiveness and support is moderated by gender: To interpret the moderation effect we can examine the simple slopes, which are shown in the next part of the output. Essentially, the output shows the results of two different regressions: the regression for attractiveness as a predictor of support (1) when the value of gender is 0 (i.e., low); because husbands were coded as zero, this represents the value for husbands; and (2) when the value of gender is 1 (i.e., high); because wives were coded as 1, this represents the value for wives.
We can interpret these two regressions as we would any other: we're interested in the value of b (called Effect in the output), and its significance. From what we have already learnt about regression we can interpret the two models as follows: When gender is low (male), there is a significant negative relationship between attractiveness and support, b = −0.060, 95% CI [−0.100, −0.020], t = −2.95, p = 0.004. When gender is high (female), there is a significant positive relationship between attractiveness and support, b = 0.046, 95% CI [0.003, 0.088], t = 2.12, p = 0.036. These results tell us that the relationship between attractiveness of a person and amount of support given to their spouse is different for men and women. Specifically, for women, as attractiveness increases the level of support that they give to their husbands increases, whereas for men, as attractiveness increases the amount of support they give to their wives decreases:

Produce the simple slopes graphs for Task 1.

If you set the options that I suggested in task 1, your output should contain the values that you need to plot: Create a data file with a variable that codes Attractiveness as low, mean or high, a variable that codes Gender as husbands or wives, and a variable that contains the values of Support from the output. The data file will look like this: Use the chart builder to draw a line chart with Attractiveness on the x-axis, Support on the y-axis and different coloured lines for Gender. The dialog box will look like this: The resulting graph confirms our results from the simple slopes analysis in the previous task. The direction of the relationship between attractiveness and support is different for men and women: the two regression lines slope in different directions. Specifically, for husbands (blue line) the relationship is negative (the regression line slopes downwards), whereas for wives (green line) the relationship is positive (the regression line slopes upwards). Additionally, the fact that the lines cross indicates a significant interaction effect (moderation). So basically, we can conclude that the relationship between attractiveness and support is positive for wives (more attractive wives give their husbands more support), but negative for husbands (more attractive husbands give their wives less support than unattractive ones). Although they didn't test moderation, this mimics the findings of McNulty et al. (2008).

McNulty et al. (2008) also found a relationship between a person's Attractiveness and their relationship Satisfaction among newlyweds. Using the same data as in Tasks 1 and 2, find out if this relationship is moderated by gender.

Make sure you have the PROCESS tool installed (installation details are in the book). Access the PROCESS dialog box using Analyze > Regression > PROCESS. Remember that you can move variables in the dialog box by dragging them, or selecting them and clicking . We need to specify three variables: Drag the outcome variable (Relationship Satisfaction) to the box labelled Outcome Variable (Y). The first part of the output contains the main moderation analysis. Moderation is shown up by a significant interaction effect, and in this case the interaction is not significant, b = 0.547, 95% CI [−0.594, 1.687], t = 0.95, p = 0.345, indicating that the relationship between attractiveness and relationship satisfaction is not significantly moderated by gender:

In this chapter we tested a mediation model of infidelity for Lambert et al.'s data using Baron and Kenny's regressions.
Repeat this analysis but using Hook_Ups as the measure of infidelity.

Baron and Kenny suggested that mediation is tested through three regression models: A regression predicting the outcome (Hook_Ups) from the predictor variable (Consumption). A regression predicting the mediator (Commitment) from the predictor variable (Consumption). A regression predicting the outcome (Hook_Ups) from both the predictor variable (Consumption) and the mediator (Commitment). These models test the four conditions of mediation: (1) the predictor variable (Consumption) must significantly predict the outcome variable (Hook_Ups) in model 1; (2) the predictor variable (Consumption) must significantly predict the mediator (Commitment) in model 2; (3) the mediator (Commitment) must significantly predict the outcome (Hook_Ups) variable in model 3; and (4) the predictor variable (Consumption) must predict the outcome variable (Hook_Ups) less strongly in model 3 than in model 1.

Model 1: Predicting infidelity from consumption Dialog box for model 1: Output for model 1:

Model 2: Predicting commitment from consumption Dialog box for model 2:

Model 3: Predicting Infidelity from Consumption and Commitment

Is there evidence for mediation?

The output from model 1 shows that pornography consumption significantly predicts hook-ups, b = 1.58, 95% CI [0.72, 2.45], t = 3.64, p < .001. As pornography consumption increases, the number of hook-ups increases also. The output from model 2 shows that pornography consumption significantly predicts relationship commitment, b = −0.47, 95% CI [−0.89, −0.05], t = −2.21, p = .028. As pornography consumption increases commitment declines. The output from model 3 shows that relationship commitment significantly predicts hook-ups, b = −0.62, 95% CI [−0.87, −0.37], t = −4.90, p < .001. As relationship commitment increases the number of hook-ups decreases. The relationship between pornography consumption and infidelity is stronger in model 1, b = 1.58, than in model 3, b = 1.28. As such, the four conditions of mediation have been met.

Repeat the analysis in Task 4 but using the PROCESS tool to estimate the indirect effect and its confidence interval.

Drag the outcome variable (Hook_Ups) to the box labelled Outcome Variable (Y). Drag the predictor variable (LnConsumption) to the box labelled Independent Variable (X). Drag the mediator variable (Commitment) to the box labelled M Variable(s). The models tested by PROCESS are listed in the drop-down box labelled Model Number. Simple mediation analysis is represented by model 4 (the default). If the drop-down list is not already set to then select this option. The finished dialog box looks like this:

The first part of the output shows us the results of the simple regression of commitment predicted from pornography consumption. Pornography consumption significantly predicts relationship commitment, b = −0.47, t = −2.21, p = 0.028. The \(R^2\) value tells us that pornography consumption explains 2% of the variance in relationship commitment, and the fact that the b is negative tells us that the relationship is negative also: as consumption increases, commitment declines (and vice versa): The next part of the output shows the results of the regression of number of hook-ups predicted from both pornography consumption and commitment.
We can see that pornography consumption significantly predicts number of hook-ups even with relationship commitment in the model, b = 1.28, t = 3.05, p = 0.003; relationship commitment also significantly predicts number of hook-ups, b = −0.62, t = −4.90, p < .001. The \(R^2\) value tells us that the model explains 14.0% of the variance in number of hook-ups. The negative b for commitment tells us that as commitment increases, number of hook-ups declines (and vice versa), but the positive b for consumption indicates that as pornography consumption increases, the number of hook-ups increases also. These relationships are in the predicted direction:

The next part of the output shows the total effect of pornography consumption on number of hook-ups (outcome). When relationship commitment is not in the model, pornography consumption significantly predicts the number of hook-ups, b = 1.57, t = 3.61, p < .001. The \(R^2\) value tells us that the model explains 5.22% of the variance in number of hook-ups. As is the case when we include relationship commitment in the model, pornography consumption has a positive relationship with number of hook-ups (as shown by the positive b-value):

The next part of the output is the most important because it displays the results for the indirect effect of pornography consumption on number of hook-ups (i.e. the effect via relationship commitment). We're told the effect of pornography consumption on the number of hook-ups when relationship commitment is included as a predictor as well (the direct effect). The first bit of new information is the Indirect Effect of X on Y, which in this case is the indirect effect of pornography consumption on the number of hook-ups. We're given an estimate of this effect (b = 0.292) as well as a bootstrapped standard error and confidence interval. As we have seen many times before, 95% confidence intervals contain the true value of a parameter in 95% of samples. Therefore, we tend to assume that our sample isn't one of the 5% that does not contain the true value and use them to infer the population value of an effect. In this case, assuming our sample is one of the 95% that 'hits' the true value, we know that the true b-value for the indirect effect falls between 0.035 and 0.636. This range does not include zero, and remember that b = 0 would mean 'no effect whatsoever'; therefore, the fact that the confidence interval does not contain zero means that there is likely to be a genuine indirect effect. Put another way, relationship commitment is a mediator of the relationship between pornography consumption and the number of hook-ups.

The rest of the output contains various standardized forms of the indirect effect. In each case they are accompanied by a bootstrapped confidence interval. As with the unstandardized indirect effect, if the confidence intervals don't contain zero then we can be confident that the true effect size is different from 'no effect'. In other words, there is mediation. All of the effect size measures have confidence intervals that don't include zero, so whichever one we look at we can be fairly confident that the indirect effect is greater than 'no effect'. Focusing on the most useful of these effect sizes, the standardized b for the indirect effect, its value is b = .042, 95% BCa CI [.005, .090]. Although it is better to interpret the bootstrap confidence intervals than formal tests of significance, the Sobel test suggests a significant indirect effect, b = 0.292, z = 1.98, p = .048.
You could report the results as: There was a significant indirect effect of pornography consumption on the number of hook-ups through relationship commitment, b = 0.292, BCa CI [0.035, 0.636]. This represents a relatively small effect, standardized indirect effect \(ab_{\text{CS}}\) = 0.042, 95% BCa CI [0.005, 0.090].

We looked at data from people who had been forced to marry goats and dogs and measured their life satisfaction as well as how much they like animals (Goat or Dog.sav). Fit a linear model predicting life satisfaction from the type of animal to which a person was married. Write out the final model.

The relevant part of the output is as follows: Looking at the coefficients, we can see that type of animal wife significantly predicted life satisfaction because the p-value is less than 0.05 (0.003). The positive standardized beta value (0.630) indicates a positive relationship between type of animal wife and life satisfaction. Remember that goat was coded as 0 and dog was coded as 1, therefore as type of animal wife increased from goat to dog, life satisfaction also increased. In other words, men who were married to dogs were more satisfied than those who were married to goats. By replacing the b-values in the equation for the linear model (see the book), the specific model is: \[ \begin{aligned} \text{Life satisfaction}_i &= b_0 + b_1\text{type of animal wife}_i\\ &= 38.17 + 21.96 \times\text{type of animal wife}_i \end{aligned} \]

Repeat the analysis in Task 6 but include animal liking in the first block, and type of animal in the second block. Do your conclusions about the relationship between type of animal and life satisfaction change?

The completed dialog box for block 1 should look like this: Looking at the coefficients from the final model, we can see that both love of animals, t(17) = 3.21, p = 0.005, and type of animal wife, t(17) = 4.06, p = 0.001, significantly predicted life satisfaction. This means that even after adjusting for the effect of love of animals, type of animal wife still significantly predicted life satisfaction. \(R^2\) is the squared correlation between the observed values of life satisfaction and the values of life satisfaction predicted by the model. The values in this output tell us that love of animals explains 26.2% of the variance in life satisfaction. When type of animal wife is factored in as well, 62.5% of variance in life satisfaction is explained (i.e., an additional 36.3%).

Using the GlastonburyDummy.sav data, for which we have already fitted the model, comment on whether you think the model is reliable and generalizable.

The completed main dialog box should look like this: Click and set these options: Back in the main dialog box click to fit the model. This question asks whether this model is valid. Based on the output below:

Residuals: There are no cases that have a standardized residual greater than 3. If you look at the casewise diagnostics table, you can see that there were 5 cases out of a total of 123 (for day 3) with standardized residuals above 2. As a percentage this would be 5/123 × 100 = 4.07%, so that's as we would expect. There was only 1 case out of 123 with residuals above 2.5, which as a percentage would be 1/123 × 100 = 0.81% (and we'd expect 1%), which indicates the data are consistent with what we'd expect.

Normality of errors: The histogram looks reasonably normally distributed, indicating that the normality of errors assumption has probably been met.
The normal P–P plot verifies this because the dashed line doesn't deviate much from the straight line (which indicates what you'd get from normally distributed errors).

Homoscedasticity and independence of errors: The scatterplot of ZPRED vs. ZRESID does look a bit odd with categorical predictors, but essentially we're looking for the height of the lines to be about the same (indicating the variability at each of the three levels is the same). This is true, indicating homoscedasticity.

Multicollinearity: For all variables in the model, VIF values are below 10 (or alternatively, tolerance values are all well above 0.2) indicating no multicollinearity in the data. All in all, the model looks fairly reliable (but you should check for influential cases).

Tablets like the iPad are very popular. A company owner was interested in how to make his brand of tablets more desirable. He collected data on how cool people perceived a product's advertising to be (Advert_Cool), how cool they thought the product was (Product_Cool), and how desirable they found the product (Desirability). Test his theory that the relationship between cool advertising and product desirability is mediated by how cool people think the product is (Tablets.sav). Am I showing my age by using the word 'cool'?

Drag the outcome variable (Desirability) to the box labelled Outcome Variable (Y). Drag the predictor variable (Advert_Cool) to the box labelled Independent Variable (X). Drag the mediator variable (Product_Cool) to the box labelled M Variable(s).

The first part of the output shows us the results of the simple regression of how cool the product is perceived as being predicted from cool advertising. This output is interpreted just as we would interpret any regression: we can see that how cool people perceive the advertising to be significantly predicts how cool they think the product is, b = 0.20, t = 2.98, p = .003. The \(R^2\) value tells us that cool advertising explains 3.59% of the variance in how cool they think the product is, and the fact that the b is positive tells us that the relationship is positive also: the more 'cool' people think the advertising is, the more 'cool' they think the product is (and vice versa):

The next part of the output shows the results of the regression of Desirability predicted from both how cool people think the product is and how cool people think the advertising is. We can see that cool advertising significantly predicts product desirability even with Product_Cool in the model, b = 0.19, t = 3.12, p = .002; Product_Cool also significantly predicts product desirability, b = 0.25, t = 4.37, p < .001. The \(R^2\) value tells us that the model explains 12.97% of the variance in product desirability. The positive bs for Product_Cool and Advert_Cool tell us that as adverts and products increase in how cool they are perceived to be, product desirability increases also (and vice versa). These relationships are in the predicted direction:

The next part of the output shows the total effect of cool advertising on product desirability (outcome). You will get this bit of the output only if you selected Total effect model. The total effect is the effect of the predictor on the outcome when the mediator is not present in the model. When Product_Cool is not in the model, cool advertising significantly predicts product desirability, b = .24, t = 3.88, p < .001. The \(R^2\) value tells us that the model explains 5.96% of the variance in product desirability.
As is the case when we include Product_Cool in the model, Advert_Cool has a positive relationship with product desirability (as shown by the positive b-value):

The next part of the output is the most important because it displays the results for the indirect effect of cool advertising on product desirability (i.e. the effect via Product_Cool). First, we're again told the effect of cool advertising on the product desirability in isolation (the total effect). Next, we're told the effect of cool advertising on the product desirability when Product_Cool is included as a predictor as well (the direct effect). The first bit of new information is the Indirect Effect of X on Y, which in this case is the indirect effect of cool advertising on the product desirability. We're given an estimate of this effect (b = 0.049) as well as a bootstrapped standard error and confidence interval. As we have seen many times before, 95% confidence intervals contain the true value of a parameter in 95% of samples. Therefore, we tend to assume that our sample isn't one of the 5% that does not contain the true value and use them to infer the population value of an effect. In this case, assuming our sample is one of the 95% that 'hits' the true value, we know that the true b-value for the indirect effect falls between .0140 and .1012. This range does not include zero, and remember that b = 0 would mean 'no effect whatsoever'; therefore, the fact that the confidence interval does not contain zero means that there is likely to be a genuine indirect effect. Put another way, Product_Cool is a mediator of the relationship between cool advertising and product desirability.

The rest of the output contains various standardized forms of the indirect effect. In each case they are accompanied by a bootstrapped confidence interval. As with the unstandardized indirect effect, if the confidence intervals don't contain zero then we tend to assume that the true effect size is different from 'no effect'. In other words, there is mediation. All of the effect size measures have confidence intervals that don't include zero, so whichever one we look at we can assume that the indirect effect is greater than 'no effect'. Focusing on the most useful of these effect sizes, the standardized b for the indirect effect, its value is b = 0.051, 95% BCa CI [0.014, 0.104]. Although it is better to interpret the bootstrap confidence intervals than formal tests of significance, the Sobel test suggests a significant indirect effect, b = 0.049, z = 2.42, p = .016.

We could report the results as: There was a significant indirect effect of how cool people think a product's advertising is on the desirability of the product through how cool they think the product is, b = 0.049, BCa CI [0.014, 0.101]. This represents a relatively small effect, standardized indirect effect \(ab_{\text{CS}}\) = 0.051, 95% BCa CI [0.014, 0.104].

To test how different teaching methods affected students' knowledge I took three statistics modules where I taught the same material. For one module I wandered around with a large cane and beat anyone who asked daft questions or got questions wrong (punish). In the second I encouraged students to discuss things that they found difficult and gave anyone working hard a nice sweet (reward). In the final course I neither punished nor rewarded students' efforts (indifferent). I measured the students' exam marks (percentage). The data are in the file Teach.sav.
Fit a model with planned contrasts to test the hypotheses that: (1) reward results in better exam results than either punishment or indifference; and (2) indifference will lead to significantly better exam results than punishment.

The first part of the output shows the table of descriptive statistics from the one-way ANOVA; we're told the means, standard deviations and standard errors of the means for each experimental condition. The means should correspond to those plotted in the graph. These diagnostics are important for interpretation later on. It looks as though marks are highest after reward and lowest after punishment:

The next part of the output is the main ANOVA summary table. We should routinely look at the robust Fs. Because the observed significance value is less than 0.05 we can say that there was a significant effect of teaching style on exam marks. However, at this stage we still do not know exactly what the effect of the teaching style was (we don't know which groups differed).

Because there were specific hypotheses I specified some contrasts. The next part of the output shows the codes I used. The first contrast compares reward (coded with −2) against punishment and indifference (both coded with 1). The second contrast compares punishment (coded with 1) against indifference (coded with −1). Note that the codes for each contrast sum to zero, and that in contrast 2, reward has been coded with a 0 because it is excluded from that contrast. It is safest to interpret the part of the table labelled Does not assume equal variances. The t-test for the first contrast tells us that reward was significantly different from punishment and indifference (it's significantly different because the value in the column labelled Sig. is less than 0.05). Looking at the means, this tells us that the average mark after reward was significantly higher than the average mark for punishment and indifference combined. The second contrast (together with the descriptive statistics) tells us that the marks after punishment were significantly lower than after indifference (again, significantly different because the value in the column labelled Sig. is less than 0.05). As such we could conclude that reward produces significantly better exam grades than punishment and indifference, and that punishment produces significantly worse exam marks than indifference. So lecturers should reward their students, not punish them.

Compute the effect sizes for the previous task.

The outputs provide us with three measures of variance: the between-group effect (\(\text{SS}_\text{M}\)), the residual (within-group) mean square (\(\text{MS}_\text{R}\)) and the total amount of variance in the data (\(\text{SS}_\text{T}\)).
We can use these to calculate omega squared (\(\omega^2\)): \[ \begin{aligned} \omega^2 &= \frac{\text{SS}_\text{M} - df_\text{M} \times \text{MS}_\text{R}}{\text{SS}_\text{T} + \text{MS}_\text{R}} \\ &= \frac{1205.067 - 2 \times 28.681}{1979.467 + 28.681}\\ &= \frac{1147.705}{2008.148}\\ &= 0.57 \end{aligned} \] For the contrasts the effect sizes will be (I'm using t and df corrected for variances): \[ \begin{aligned} r_\text{contrast} &= \sqrt{\frac{t^2}{t^2 + df}} \\ r_\text{contrast 1} &= \sqrt{\frac{-6.593^2}{-6.593^2 + 21.696}} = 0.82\\ r_\text{contrast 2} &= \sqrt{\frac{-2.308^2}{-2.308^2 + 14.476}} = 0.52\\ \end{aligned} \]

We could report these analyses (including task 1) as (I'm reporting the Welch F): There was a significant effect of teaching style on exam marks, F(2, 17.34) = 32.24, p < 0.001, \(\omega^2\) = 0.57. Planned contrasts revealed that reward produced significantly better exam grades than punishment and indifference, t(21.70) = −6.59, p < 0.001, r = 0.82, and that punishment produced significantly worse exam marks than indifference, t(14.48) = −2.31, r = 0.52.

Children wearing superhero costumes are more likely to harm themselves because of the unrealistic impression of invincibility that these costumes could create. For example, children have reported to hospital with severe injuries because of trying 'to initiate flight without having planned for landing strategies' (Davies, Surridge, Hole, & Munro-Davies, 2007). I can relate to the imagined power that a costume bestows upon you; indeed, I have been known to dress up as Fisher by donning a beard and glasses and trailing a goat around on a lead in the hope that it might make me more knowledgeable about statistics. Imagine we had data (Superhero.sav) about the severity of injury (on a scale from 0, no injury, to 100, death) for children reporting to the accident and emergency department at hospitals, and information on which superhero costume they were wearing (hero): Spiderman, Superman, the Hulk or a teenage mutant ninja turtle. Fit a model to test the hypothesis that different costumes give rise to more severe injuries.

The means suggest that children wearing a Ninja Turtle costume had the least severe injuries (M = 26.25), whereas children wearing a Superman costume had the most severe injuries (M = 60.33): In the ANOVA output (we should routinely look at the robust Fs), the observed significance value is much less than 0.05 and so we can say that there was a significant effect of superhero costume on injury severity. However, at this stage we still do not know exactly what the effect of superhero costume was (we don't know which groups differed). Because there were no specific hypotheses, only that the groups would differ, we can't look at planned contrasts but we can conduct some post hoc tests. I am going to use Gabriel's post hoc test because the group sizes are slightly different (Spiderman, N = 8; Superman, N = 6; Hulk, N = 8; Ninja Turtle, N = 8). The output tells us that wearing a Superman costume was significantly different from wearing either a Hulk or Ninja Turtle costume in terms of injury severity, but that none of the other groups differed significantly. The post hoc test has shown us which differences between means are significant; however, if we want to see the direction of the effects we can look back to the means in the table of descriptives (Output 7).
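(If you prefer syntax, the whole Superhero analysis can be run with the ONEWAY command. The lines below are only a sketch: the costume variable is called hero as described above, but the name of the severity variable, injury, is a guess, so check it against the data file.)

* One-way ANOVA with descriptives, robust F tests and Gabriel's post hoc procedure.
ONEWAY injury BY hero
  /STATISTICS=DESCRIPTIVES WELCH BROWNFORSYTHE
  /POSTHOC=GABRIEL ALPHA(0.05).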
We can conclude that wearing a Superman costume resulted in significantly more severe injuries than wearing either a Hulk or a Ninja Turtle costume. We can calculate \(\omega^2\) as follows: \[ \begin{aligned} \omega^2 &= \frac{\text{SS}_\text{M} - df_\text{M} \times \text{MS}_\text{R}}{\text{SS}_\text{T} + \text{MS}_\text{R}} \\ &= \frac{4180.617 - 3 \times 167.561}{8537.20 + 167.561}\\ &= \frac{3677.934}{8704.761}\\ &= 0.42 \end{aligned} \]

We could report the analysis as follows: There was a significant effect of superhero costume on severity of injury, F(3, 13.02) = 7.10, p = 0.005, \(\omega^2\) = 0.42. Gabriel's post hoc tests revealed that wearing a Superman costume resulted in significantly more severe injuries compared to wearing a Hulk (p = 0.008) or a Ninja Turtle (p < 0.001) costume, but not a Spiderman costume (p = 0.70). Injuries were not significantly different when wearing a Spiderman costume compared to a Hulk (p = 0.907) or a Ninja Turtle (p = 0.136) costume. Injuries were not significantly different when wearing a Hulk compared to a Ninja Turtle costume (p = 0.650).

In Chapter 7 there are some data looking at whether eating soya meals reduces your sperm count. Analyse these data with a linear model (ANOVA). What's the difference between what you find and what was found in Chapter 7? Why do you think this difference has arisen?

A boxplot of the data suggests that (1) scores within conditions are skewed; and (2) variability in scores is different across groups. The table of descriptive statistics suggests that as soya intake increases, sperm counts decrease as predicted: The next part of the output is the main ANOVA summary table. We should routinely look at the robust Fs. Note that the Welch test agrees with the non-parametric test in Chapter 7 in that the significance of F is below the 0.05 threshold. However, the Brown-Forsythe F is non-significant (it is just above the threshold). This illustrates the relative superiority (with respect to power) of the Welch procedure. The unadjusted F is also not significant. If we were using the unadjusted F then we would conclude that, because the observed significance value is greater than 0.05, there was no significant effect of soya intake on men's sperm count. This may seem strange because if you read Chapter 7, from where this example came, the Kruskal–Wallis test produced a significant result. The reason for this difference is that the data violate the assumptions of normality and homogeneity of variance. As I mention in Chapter 7, although parametric tests have more power to detect effects when their assumptions are met, when their assumptions are violated non-parametric tests have more power! This example was arranged to prove this point: because the parametric assumptions are violated, the non-parametric tests produced a significant result and the parametric test did not because, in these circumstances, the non-parametric test has the greater power. Also, the Welch F, which does adjust for these violations, yields a significant result.

Mobile phones emit microwaves, and so holding one next to your brain for large parts of the day is a bit like sticking your brain in a microwave oven and pushing the 'cook until well done' button. If we wanted to test this experimentally, we could get six groups of people and strap a mobile phone on their heads, then by remote control turn the phones on for a certain amount of time each day.
After six months, we measure the size of any tumour (in mm³) close to the site of the phone antenna (just behind the ear). The six groups experienced 0, 1, 2, 3, 4 or 5 hours per day of phone microwaves for six months. Do tumours significantly increase with greater daily exposure? The data are in Tumour.sav.

The error bar chart of the mobile phone data (shown below) displays the mean size of brain tumour in each condition, and the funny 'I' shapes show the confidence interval of these means. Note that in the control group (0 hours), the mean size of the tumour is virtually zero (we wouldn't actually expect them to have a tumour) and the error bar shows that there was very little variance across samples - this almost certainly means we cannot assume equal variances.

The first part of the output shows the table of descriptive statistics from the one-way ANOVA; we're told the means, standard deviations and standard errors of the means for each experimental condition. The means should correspond to those plotted in the graph. These diagnostics are important for interpretation later on.

The next part of the output is the main ANOVA summary table. We should routinely look at the robust Fs. Because the observed significance of Welch's F is less than 0.05 we can say that there was a significant effect of mobile phones on the size of tumour. However, at this stage we still do not know exactly what the effect of the phones was (we don't know which groups differed). Because there were no specific hypotheses I just carried out post hoc tests and stuck to my favourite Games–Howell procedure (because variances were unequal). It is clear from the output that each group of participants is compared to all of the remaining groups. First, the control group (0 hours) is compared to the 1, 2, 3, 4 and 5 hour groups and reveals a significant difference in all cases (all the values in the column labelled Sig. are less than 0.05). In the next part of the table, the 1 hour group is compared to all other groups. Again all comparisons are significant (all the values in the column labelled Sig. are less than 0.05). In fact, all of the comparisons appear to be highly significant except the comparison between the 4 and 5 hour groups, which is non-significant because the value in the column labelled Sig. is larger than 0.05.

We can calculate omega squared (\(\omega^2\)) as follows: \[ \begin{aligned} \omega^2 &= \frac{\text{SS}_\text{M} - df_\text{M} \times \text{MS}_\text{R}}{\text{SS}_\text{T} + \text{MS}_\text{R}} \\ &= \frac{450.664 - 5 \times 0.334}{488.758 + 0.334}\\ &= \frac{448.994}{489.092}\\ &= 0.92 \end{aligned} \]

We could report the main finding as follows: The results show that using a mobile phone significantly affected the size of brain tumour found in participants, F(5, 44.39) = 414.93, p < 0.001, \(\omega^2\) = 0.92. The effect size indicated that the effect of phone use on tumour size was substantial. Games–Howell post hoc tests revealed significant differences between all groups (p < 0.001 for all tests) except between 4 and 5 hours (p = 0.984).

Using the Glastonbury data from Chapter 11 (GlastonburyFestival.sav), fit a model to see if the change in hygiene (change) is significant across people with different musical tastes (music). Compare the results to those described in Chapter 11.

The first part of the output is the main ANOVA table.
We could say that the change in hygiene scores was significantly different across the different musical groups, F(3, 119) = 3.27, p = 0.024: Compare this table to the one in Chapter 11, in which we analysed these data as a regression (reproduced below): The tables are exactly the same!

What about the contrasts? The table below shows the codes I used to get simple contrasts that compare each group to the no affiliation group, and the subsequent contrasts: And here's what we got when we ran the same analysis as a linear model with the groups dummy coded (see Chapter 11): Again they are the same (the values of the contrast match the unstandardized B, and the standard errors, t-values and p-values match): Contrast 1 matches exactly the No Affiliation vs. Indie Kid dummy variable from the linear model. Contrast 2 matches exactly the No Affiliation vs. Metaller dummy variable from the linear model. Contrast 3 matches exactly the No Affiliation vs. Crusty dummy variable from the linear model. This should, I hope, re-emphasize to you that regression and ANOVA are the same analytic system.

Labcoat Leni 7.2 describes an experiment (Çetinkaya & Domjan, 2006) on quails with fetishes for terrycloth objects. There were two outcome variables (time spent near the terrycloth object and copulatory efficiency) that we didn't analyse. Read Labcoat Leni 7.2 to get the full story then fit a model with Bonferroni post hoc tests on the time spent near the terrycloth object.

The first part of the output tells us that the group (fetishistic, non-fetishistic or control group) had a significant effect on the time spent near the terrycloth object. The authors report the unadjusted F, although I would recommend using Welch's F (not that it affects the conclusions from this model). To find out exactly what's going on we can look at our post hoc tests. The authors reported this analysis in their paper as follows: A one-way ANOVA indicated significant group differences, F(2, 56) = 91.38, p < 0.05, \(\eta_\text{p}^2\) = 0.76. Subsequent pairwise comparisons (with the Bonferroni correction) revealed that fetishistic male quail stayed near the CS longer than both the nonfetishistic male quail (mean difference = 10.59; 95% CI = 4.16, 17.02; p < 0.05) and the control male quail (mean difference = 29.74 s; 95% CI = 24.12, 35.35; p < 0.05). In addition, the nonfetishistic male quail spent more time near the CS than did the control male quail (mean difference = 19.15 s; 95% CI = 13.30, 24.99; p < 0.05). (pp. 429–430) These results show that male quails do show fetishistic behaviour (the time spent with the terrycloth). Note that the 'CS' is the terrycloth object. Look at the output to see from where the values reported in the paper come.

Repeat the analysis in Task 7 but using copulatory efficiency as the outcome.

The first part of the output tells us that the group (fetishistic, non-fetishistic or control group) had a significant effect on copulatory efficiency. The authors report the unadjusted F, although I would recommend using Welch's F (not that it affects the conclusions from this model). A one-way ANOVA yielded a significant main effect of groups, F(2, 56) = 6.04, p < 0.05, \(\eta_\text{p}^2\) = 0.18.
Paired comparisons (with the Bonferroni correction) indicated that the nonfetishistic male quail copulated with the live female quail (US) more efficiently than both the fetishistic male quail (mean difference = 6.61; 95% CI = 1.41, 11.82; p < 0.05) and the control male quail (mean difference = 5.83; 95% CI = 1.11, 10.56; p < 0.05). The difference between the efficiency scores of the fetishistic and the control male quail was not significant (mean difference = 0.78; 95% CI = –5.33, 3.77; p > 0.05). (p. 430) These results show that male quails do show fetishistic behaviour (the time spent with the terrycloth – see Task 7 above) and that this affects their copulatory efficiency (they are less efficient than those that don't develop a fetish, but it's worth remembering that they are no worse than quails that had no sexual conditioning – the controls). If you look at Labcoat Leni's box then you'll also see that this fetishistic behaviour may have evolved because the quails with fetishistic behaviour manage to fertilize a greater percentage of eggs (so their genes are passed on). A sociologist wanted to compare murder rates (Murder) each month in a year at three high-profile locations in London (Street). Fit a model with bootstrapping on the post hoc tests to see in which streets the most murders happened. The data are in Murder.sav. Looking at the means we can see that Rue Morgue had the highest mean number of murders (M = 2.92) and Ruskin Avenue had the smallest mean number of murders (M = 0.83). These means will be important in interpreting the post hoc tests later. The next part of the output shows us the F-statistic for predicting mean murders from location. We should routinely look at the robust Fs. For all tests, because the observed significance value is less than 0.05 we can say that there was a significant effect of street on the number of murders. However, at this stage we still do not know exactly which streets had significantly more murders (we don't know which groups differed). I'd favour reporting the Welch F. Because there were no specific hypotheses I just carried out post hoc tests and stuck to my favourite Games–Howell procedure (because variances were unequal). It is clear from the output that each street is compared to all of the remaining streets. If we look at the values in the column labelled Sig. we can see that the only significant comparison was between Ruskin Avenue and Rue Morgue (p = 0.024); all other comparisons were non-significant because all the other values in this column are greater than 0.05. However, Acacia Avenue and Rue Morgue were close to being significantly different (p = 0.089). The question asked us to bootstrap the post hoc tests and this has been done. The columns of interest are the ones containing the BCa 95% confidence intervals (lower and upper limits). We can see that the difference between Ruskin Avenue and Rue Morgue remains significant after bootstrapping the confidence intervals; we can tell this because the confidence intervals do not cross zero for this comparison. Surprisingly, it appears that the difference between Acacia Avenue and Rue Morgue is now significant after bootstrapping the confidence intervals, because again the confidence intervals do not cross zero. This seems to contradict the p-values in the previous output; however, the p-value was close to being significant (p = 0.089). 
The mean values in the table of descriptives tell us that Rue Morgue had a significantly higher number of murders than Ruskin Avenue and Acacia Avenue; however, Acacia Avenue did not differ significantly in the number of murders compared to Ruskin Avenue. We can calculate the effect size, \(\omega^2\), as follows: \[ \begin{aligned} \omega^2 &= \frac{\text{SS}_\text{M} - df_\text{M} \times \text{MS}_\text{R}}{\text{SS}_\text{T} + \text{MS}_\text{R}} \\ &= \frac{29.167 - 2 \times 2.328}{106.00 + 2.328}\\ &= \frac{24.511}{108.328}\\ &= 0.23 \end{aligned} \] We could report the main finding as: The results show that the streets measured differed significantly in the number of murders, F(2, 19.29) = 4.60, p = 0.023, \(\omega^2\) = 0.23. Games–Howell post hoc tests with 95% bias-corrected confidence intervals on the mean differences revealed that Rue Morgue experienced a significantly greater number of murders than either Ruskin Avenue, 95% BCa CI [0.76, 3.42], or Acacia Avenue, 95% BCa CI [0.17, 3.13]. However, Acacia Avenue and Ruskin Avenue did not differ significantly in the number of murders that had occurred, 95% BCa CI [–0.38, 1.24]. Access the ANCOVA dialog box by selecting Analyze > General Linear Model > Univariate … Remember that you can move variables in the dialog box by dragging them, or by selecting them and clicking the arrow button. A few years back I was stalked. You'd think they could have found someone a bit more interesting to stalk, but apparently times were hard. It could have been a lot worse, but it wasn't particularly pleasant. I imagined a world in which a psychologist tried two different therapies on different groups of stalkers (25 stalkers in each group – this variable is called group). To the first group he gave cruel-to-be-kind therapy (every time the stalkers followed him around, or sent him a letter, the psychologist attacked them with a cattle prod). The second therapy was psychodyshamic therapy, in which stalkers were hypnotized and regressed into their childhood to discuss their penis (or lack of penis), their father's penis, their dog's penis, the seventh penis of a seventh penis, and any other penis that sprang to mind. The psychologist measured the number of hours stalking in one week both before (stalk1) and after (stalk2) treatment (Stalker.sav). Analyse the effect of therapy on stalking behaviour after therapy, covarying for the amount of stalking behaviour before therapy. First, conduct an ANOVA to test whether the number of hours spent stalking before therapy (our covariate) is independent of the type of therapy (our predictor variable). Your completed dialog box should look like: The output shows that the main effect of group is not significant, F(1, 48) = 0.06, p = 0.804, which shows that the average level of stalking behaviour before therapy was roughly the same in the two therapy groups. In other words, the mean number of hours spent stalking before therapy is not significantly different in the cruel-to-be-kind and psychodyshamic therapy groups. This result is good news for using stalking behaviour before therapy as a covariate in the analysis. To conduct the ANCOVA, access the main dialog box and: Drag the outcome variable (stalk2) to the box labelled Dependent Variable. Drag the predictor variable (group) to the box labelled Fixed Factor(s). Drag the covariate (stalk1) to the box labelled Covariate(s).
Your completed dialog box should look like this: Click to access the options dialog box, and select these options: The output shows that the covariate significantly predicts the outcome variable, so the hours spent stalking after therapy depend on the extent of the initial problem (i.e. the hours spent stalking before therapy). More interesting is that after adjusting for the effect of initial stalking behaviour, the effect of therapy is significant. To interpret the results of the main effect of therapy we look at the adjusted means, which tell us that stalking behaviour was significantly lower after the therapy involving the cattle prod than after psychodyshamic therapy (after adjusting for baseline stalking). To interpret the covariate create a graph of the time spent stalking after therapy (outcome variable) and the initial level of stalking (covariate) using the chart builder: The resulting graph shows that there is a positive relationship between the two variables: that is, high scores on one variable correspond to high scores on the other, whereas low scores on one variable correspond to low scores on the other. Compute effect sizes for Task 1 and report the results. The effect sizes for the main effect of group can be calculated as follows: \[ \begin{aligned} \eta_p^2 &= \frac{\text{SS}_\text{group}}{\text{SS}_\text{group} + \text{SS}_\text{residual}} \\ &= \frac{480.27}{480.27+4111.722}\\ &= 0.10 \end{aligned} \] And for the covariate: \[ \begin{aligned} \eta_p^2 &= \frac{\text{SS}_\text{stalk1}}{\text{SS}_\text{stalk1} + \text{SS}_\text{residual}} \\ &= \frac{4414.598}{4414.598+4111.722} \\ &= 0.52 \end{aligned} \] The main effect of therapy was significant, F(1, 47) = 5.49, p = 0.02, \(\eta_p^2\) = 0.10, indicating that the time spent stalking was lower after using a cattle prod (M = 55.30, SE = 1.87) than after psychodyshamic therapy (M = 61.50, SE = 1.87). The covariate was also significant, F(1, 47) = 50.46, p < 0.001, partial \(\eta_p^2\) = 0.52, indicating that level of stalking before therapy had a significant effect on level of stalking after therapy (there was a positive relationship between these two variables). A marketing manager tested the benefit of soft drinks for curing hangovers. He took 15 people and got them drunk. The next morning as they awoke, dehydrated and feeling as though they'd licked a camel's sandy feet clean with their tongue, he gave five of them water to drink, five of them Lucozade (a very nice glucose-based UK drink) and the remaining five a leading brand of cola (this variable is called drink). He measured how well they felt (on a scale from 0 = I feel like death to 10 = I feel really full of beans and healthy) two hours later (this variable is called well). He measured how drunk the person got the night before on a scale of 0 = as sober as a nun to 10 = flapping about like a haddock out of water on the floor in a puddle of their own vomit (HangoverCure.sav). Fit a model to see whether people felt better after different drinks when covarying for how drunk they were the night before. First let's check that the predictor variable (drink) and the covariate (drunk) are independent. To do this we can run a one-way ANOVA. Your completed dialog box should look like: The output shows that the main effect of drink is not significant, F(2, 12) = 1.36, p = 0.295, which shows that the average level of drunkenness the night before was roughly the same in the three drink groups. 
This result is good news for using the variable drunk as a covariate in the analysis. Drag the outcome variable (well) to the box labelled Dependent Variable. Drag the predictor variable (drink) to the box labelled Fixed Factor(s). Drag the covariate (drunk) to the box labelled Covariate(s). Click to access the contrasts dialog box. In this example, a sensible set of contrasts would be simple contrasts comparing each experimental group with the control group, water. Select simple from the drop-down list and specify the first category as the reference category. The final dialog box should look like this: The output shows that the covariate significantly predicts the outcome variable, so the drunkenness of the person influenced how well they felt the next day. What's more interesting is that after adjusting for the effect of drunkenness, the effect of drink is significant. The parameter estimates for the model (selected in the options dialog box) are computed having parameterized the variable drink using two dummy coding variables that compare each group against the last (the group coded with the highest value in the data editor, in this case the cola group). This reference category (labelled drink=3 in the output) is coded with a 0 for both dummy variables; drink=2 represents the difference between the group coded as 2 (Lucozade) and the reference category (cola); and drink=1 represents the difference between the group coded as 1 (water) and the reference category (cola). The beta values literally represent the differences between the means of these groups and so the significances of the t-tests tell us whether the group means differ significantly. From these estimates we could conclude that the cola and water groups have similar means whereas the cola and Lucozade groups have significantly different means. The contrasts compare level 2 (Lucozade) against level 1 (water) as a first comparison, and level 3 (cola) against level 1 (water) as a second comparison. These results show that the Lucozade group felt significantly better than the water group (contrast 1), but that the cola group did not differ significantly from the water group (p = 0.741). These results are consistent with the regression parameter estimates (note that contrast 2 is identical to the regression parameters for drink=1 in the previous output). The adjusted group means should be used for interpretation. The adjusted means show that the significant difference between the water and the Lucozade groups reflects people feeling better in the Lucozade group (than the water group). To interpret the covariate create a graph of the outcome (well, y-axis) against the covariate (drunk, x-axis) using the chart builder: The resulting graph shows that there is a negative relationship between the two variables: that is, high scores on one variable correspond to low scores on the other. The more drunk you got, the less well you felt the following day.
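Because ANCOVA is just the general linear model, the same analysis can be reproduced as a regression outside SPSS. Below is a minimal R sketch, assuming HangoverCure.sav has been exported to a data frame called `hangover` with columns `drink` (coded 1 = water, 2 = Lucozade, 3 = cola), `drunk` and `well` (the data frame name is my own):

```r
# ANCOVA as a linear model: the covariate (drunk) sits alongside the
# categorical predictor (drink), with water as the reference category
hangover$drink <- factor(hangover$drink, levels = c(1, 2, 3),
                         labels = c("Water", "Lucozade", "Cola"))
m <- lm(well ~ drunk + drink, data = hangover)
summary(m)  # t-tests for each dummy variable (cf. the simple contrasts)
anova(m)    # sequential F-tests: drink is adjusted for the covariate
```

Note that `anova()` uses sequential (Type I) sums of squares, so the F for the covariate is not adjusted for group membership; SPSS's default Type III table may therefore differ slightly for the covariate, although the substantive conclusions should not change.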
The effect sizes for the main effect of drink can be calculated as follows: \[ \begin{aligned} \eta_p^2 &= \frac{\text{SS}_\text{drink}}{\text{SS}_\text{drink} + \text{SS}_\text{residual}} \\ &= \frac{3.464}{3.464+4.413}\\ &= 0.44 \end{aligned} \] And for the covariate: \[ \begin{aligned} \eta_p^2 &= \frac{\text{SS}_\text{drunk}}{\text{SS}_\text{drunk} + \text{SS}_\text{residual}} \\ &= \frac{11.187}{11.187+4.413} \\ &= 0.72 \end{aligned} \] We could also calculate effect sizes for the model parameters using the t-statistics, which have \(N−2\) degrees of freedom, where N is the total sample size (in this case 15). Therefore we get: \[ \begin{aligned} r &= \sqrt{\frac{t^2}{t^2 + df}} \\ r_\text{cola vs. water} &= \sqrt{\frac{(-0.338)^2}{(-0.338)^2+13}} = 0.09 \\ r_\text{cola vs. Lucozade} &= \sqrt{\frac{2.233^2}{2.233^2+13}} = 0.53 \\ \end{aligned} \] The covariate, drunkenness, was significantly related to how ill the person felt the next day, F(1, 11) = 27.89, p < 0.001, \(\eta_p^2\) = 0.72. There was also a significant effect of the type of drink on how well the person felt after adjusting for how drunk they were the night before, F(2, 11) = 4.32, p = 0.041, \(\eta_p^2\) = 0.44. Planned contrasts revealed that having Lucozade significantly improved how well you felt compared to having cola, t(13) = 2.23, p = 0.018, r = 0.53, but having cola was no better than having water, t(13) = –0.34, p = 0.741, r = 0.09. We can conclude that cola and water have the same effect on hangovers but that Lucozade seems significantly better at curing hangovers than cola. The highlight of the elephant calendar is the annual elephant soccer event in Nepal (google search it). A heated argument burns between the African and Asian elephants. In 2010, the president of the Asian Elephant Football Association, an elephant named Boji, claimed that Asian elephants were more talented than their African counterparts. The head of the African Elephant Soccer Association, an elephant called Tunc, issued a press statement that read 'I make it a matter of personal pride never to take seriously any remark made by something that looks like an enormous scrotum'. I was called in to settle things. I collected data from the two types of elephants (elephant) over a season and recorded how many goals each elephant scored (goals) and how many years of experience the elephant had (experience). Analyse the effect of the type of elephant on goal scoring, covarying for the amount of football experience the elephant has (Elephant Football.sav). First, let's check that the predictor variable (elephant) and the covariate (experience) are independent. To do this we can run a one-way ANOVA. Your completed dialog box should look like: The output shows that the main effect of elephant is not significant, F(1, 118) = 1.38, p = 0.24, which shows that the average level of prior football experience was roughly the same in the two elephant groups. This result is good news for using the variable experience as a covariate in the analysis. Drag the outcome variable (goals) to the box labelled Dependent Variable. Drag the predictor variable (elephant) to the box labelled Fixed Factor(s). Drag the covariate (experience) to the box labelled Covariate(s). The output shows that the experience of the elephant significantly predicted how many goals they scored, F(1, 117) = 9.93, p = 0.002. After adjusting for the effect of experience, the effect of elephant is also significant.
In other words, African and Asian elephants differed significantly in the number of goals they scored. The adjusted means tell us, specifically, that African elephants scored significantly more goals than Asian elephants after adjusting for prior experience, F(1, 117) = 8.59, p = 0.004. To interpret the covariate create a graph of the outcome (goals, y-axis) against the covariate (experience, x-axis) using the chart builder: The resulting graph shows that there is a positive relationship between the two variables: the more prior football experience the elephant had, the more goals they scored in the season. In Chapter 4 (Task 6) we looked at data from people who had been forced to marry goats and dogs and measured their life satisfaction and, also, how much they like animals (Goat or Dog.sav). Fit a model predicting life satisfaction from the type of animal to which a person was married and their animal liking score (covariate). First, check that the predictor variable (wife) and the covariate (animal) are independent. To do this we can run a one-way ANOVA. Your completed dialog box should look like: The output shows that the main effect of wife is not significant, F(1, 18) = 0.06, p = 0.81, which shows that the average level of love of animals was roughly the same in the two types of animal wife group. This result is good news for using the variable love of animals as a covariate in the analysis. Drag the outcome variable (life_satisfaction) to the box labelled Dependent Variable. Drag the predictor variable (wife) to the box labelled Fixed Factor(s). Drag the covariate (animal) to the box labelled Covariate(s). The output shows that love of animals significantly predicted life satisfaction, F(1, 17) = 10.32, p = 0.005. After adjusting for the effect of love of animals, the effect of the type of animal wife is also significant. In other words, life satisfaction differed significantly in those married to goats compared to those married to dogs. The adjusted means tell us, specifically, that life satisfaction was significantly higher in those married to dogs, F(1, 17) = 16.45, p = 0.001. (My spaniel would like it on record that this result is obvious because, as he puts it, 'dogs are fucking cool'.) To interpret the covariate create a graph of the outcome (life_satisfaction, y-axis) against the covariate (animal, x-axis) using the chart builder: The resulting graph shows that there is a positive relationship between the two variables: the greater one's love of animals, the greater one's life satisfaction. The effect sizes for the main effect of wife can be calculated as follows: \[ \begin{aligned} \eta_p^2 &= \frac{\text{SS}_\text{wife}}{\text{SS}_\text{wife} + \text{SS}_\text{residual}} \\ &= \frac{2112.099}{2112.099+2183.140}\\ &= 0.49 \end{aligned} \] And for the covariate: \[ \begin{aligned} \eta_p^2 &= \frac{\text{SS}_\text{animal}}{\text{SS}_\text{animal} + \text{SS}_\text{residual}} \\ &= \frac{1325.402}{1325.402+2183.140} \\ &= 0.38 \end{aligned} \] We could report the model as follows: The covariate, love of animals, was significantly related to life satisfaction, F(1, 17) = 10.32, p = 0.005, \(\eta_p^2\) = 0.38. There was also a significant effect of the type of animal wife after adjusting for love of animals, F(1, 17) = 16.45, p = 0.001, \(\eta_p^2\) = 0.49, indicating that life satisfaction was significantly higher for men who were married to dogs (M = 59.56, SE = 4.01) than for men who were married to goats (M = 38.55, SE = 3.27). Compare your results for Task 6 to those for the corresponding task in Chapter 11.
What differences do you notice and why? Let's remind ourselves of the output from Smart Alex Task 7, Chapter 11, in which we conducted a hierarchical regression predicting life satisfaction from the type of animal wife, and the effect of love of animals. Animal liking was entered in the first block, and type of animal wife in the second block: Looking at the coefficients from model 2, we can see that both love of animals, t(17) = 3.21, p = 0.005, and type of animal wife, t(17) = 4.06, p = 0.001, significantly predicted life satisfaction. In other words, after adjusting for the effect of love of animals, type of animal wife significantly predicted life satisfaction. Now, let's look again at the output from Task 6 (above), in which we conducted an ANCOVA predicting life satisfaction from the type of animal to which a person was married and their animal liking score (covariate): The covariate, love of animals, was significantly related to life satisfaction, F(1, 17) = 10.32, p = 0.005, \(\eta_p^2\) = 0.38. There was also a significant effect of the type of animal wife after adjusting for love of animals, F(1, 17) = 16.45, p = 0.001, \(\eta_p^2\) = 0.49, indicating that life satisfaction was significantly higher for men who were married to dogs (M = 59.56, SE = 4.01) than for men who were married to goats (M = 38.55, SE = 3.27). The conclusions are the same, but more than that: the p-values for both effects are identical. This is because there is a direct relationship between t and F. In fact, \(F = t^2\). Let's compare the ts and Fs of our two effects. For love of animals, when we ran the analysis as 'regression' we got t = 3.213; if we square this value we get \(t^2 = 3.213^2 = 10.32\), which is the value of F that we got when we ran the model as 'ANCOVA'. For the type of wife, when we ran the analysis as 'regression' we got t = 4.055; if we square this value we get \(t^2 = 4.055^2 = 16.44\), which is the value of F that we got when we ran the model as 'ANCOVA'. Basically, this Task is all about showing you that despite the menu structure in SPSS creating false distinctions between models, when you do 'ANCOVA' and 'regression' you are simply using the general linear model and accessing it via different menus. In an earlier chapter we compared the number of mischievous acts (mischief2) in people who had invisibility cloaks to those without (cloak). Imagine we also had information about the baseline number of mischievous acts in these participants (mischief1). Fit a model to see whether people with invisibility cloaks get up to more mischief than those without when factoring in their baseline level of mischief (Invisibility Baseline.sav). First, check that the predictor variable (cloak) and the covariate (mischief1) are independent. To do this we can run a one-way ANOVA. Your completed dialog box should look like: The output shows that the main effect of cloak is not significant, F(1, 78) = 0.14, p = 0.71, which shows that the average level of baseline mischief was roughly the same in the two cloak groups. This result is good news for using baseline mischief as a covariate in the analysis. Drag the outcome variable (mischief2) to the box labelled Dependent Variable. Drag the predictor variable (cloak) to the box labelled Fixed Factor(s). Drag the covariate (mischief1) to the box labelled Covariate(s). The output shows that baseline mischief significantly predicted post-intervention mischief, F(1, 77) = 7.40, p = 0.008.
After adjusting for baseline mischief, the effect of cloak is also significant. In other words, mischief levels after the intervention differed significantly in those who had an invisibility cloak and those who did not. The adjusted means tell us, specifically, that mischief was significantly higher in those with invisibility cloaks, F(1, 77) = 11.33, p = 0.001. To interpret the covariate create a graph of the outcome (mischief2, y-axis) against the covariate (mischief1, x-axis) using the chart builder: The resulting graph shows that there is a positive relationship between the two variables: the greater one's mischief level before the cloaks were assigned to participants, the greater one's mischief after the cloaks were assigned to participants. The effect sizes for the main effect of cloak can be calculated as follows: \[ \begin{aligned} \eta_p^2 &= \frac{\text{SS}_\text{cloak}}{\text{SS}_\text{cloak} + \text{SS}_\text{residual}} \\ &= \frac{35.166}{35.166+239.081}\\ &= 0.13 \end{aligned} \] And for the covariate: \[ \begin{aligned} \eta_p^2 &= \frac{\text{SS}_\text{mischief1}}{\text{SS}_\text{mischief1} + \text{SS}_\text{residual}} \\ &= \frac{22.972}{22.972+239.081} \\ &= 0.09 \end{aligned} \] The covariate, baseline number of mischievous acts, was significantly related to the number of mischievous acts after the cloak of invisibility manipulation, F(1, 77) = 7.40, p = 0.008, \(\eta_p^2\) = 0.09. There was also a significant effect of wearing a cloak of invisibility after adjusting for baseline number of mischievous acts, F(1, 77) = 11.33, p = 0.001, \(\eta_p^2\) = 0.13, indicating that the number of mischievous acts was higher in those who were given a cloak of invisibility (M = 10.13, SE = 0.26) than in those who were not (M = 8.79, SE = 0.30). Access the main dialog box for factorial designs by selecting Analyze > General Linear Model > Univariate … I've wondered whether musical taste changes as you get older: my parents, for example, after years of listening to relatively cool music when I was a kid, hit their mid-forties and developed a worrying obsession with country and western. This possibility worries me immensely because if the future is listening to Garth Brooks and thinking 'oh boy, did I underestimate Garth's immense talent when I was in my twenties', then it is bleak indeed. To test the idea I took two groups (age): young people (which I arbitrarily decided was under 40 years of age) and older people (above 40 years of age). I split each of these groups of 45 into three smaller groups of 15 and assigned them to listen to Fugazi, ABBA or Barf Grooks (music). Each person rated the music (liking) on a scale ranging from +100 (this is sick) through 0 (indifference) to −100 (I'm going to be sick). Fit a model to test my idea (Fugazi.sav). To fit the model, access the main dialog box and: Drag the outcome variable (liking) to the box labelled Dependent Variable. Drag the predictor variables (age and music) to the box labelled Fixed Factor(s). Click to access the Post Hoc dialog box, and select these options: The output shows that the main effect of music is significant, F(2, 84) = 105.62, p < 0.001, as is the interaction, F(2, 84) = 400.98, p < 0.001, but the main effect of age is not, F(1, 84) = 0.002, p = 0.966. Let's look at these effects in turn. The graph of the main effect of music shows that the significant effect is likely to reflect the fact that ABBA were rated (overall) much more positively than the other two artists.
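Before turning to SPSS's table, note that a comparable set of Bonferroni-corrected comparisons can be obtained in R. A minimal sketch, assuming Fugazi.sav has been exported to a data frame called `fugazi` with columns `liking` and `music` (the data frame name is my own):

```r
# Bonferroni-adjusted pairwise comparisons of liking across the three artists
fugazi$music <- factor(fugazi$music)  # value labels depend on how the file is coded
pairwise.t.test(fugazi$liking, fugazi$music, p.adjust.method = "bonferroni")
```

The p-values will not match SPSS exactly, because SPSS's post hoc tests use the error term from the full factorial model whereas `pairwise.t.test()` pools only across the music groups, but the pattern of significant differences should be similar.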
The table of post hoc tests tells us more: First, ratings of Fugazi are compared to ABBA, which reveals a significant difference (the value in the column labelled Sig. is less than 0.05), and then to Barf Grooks, which reveals no significant difference (the significance value is greater than 0.05). In the next part of the table, ratings of ABBA are compared first to Fugazi (which repeats the finding in the previous part of the table) and then to Barf Grooks, which reveals a significant difference (the significance value is below 0.05). The final part of the table compares Barf Grooks to Fugazi and ABBA, but these results repeat findings from the previous sections of the table. The main effect of music, therefore, reflects that ABBA were rated significantly more highly than both Fugazi and Barf Grooks. The main effect of age was not significant, and the graph shows that when you ignore the type of music that was being rated, older people and younger people, on average, gave almost identical ratings. The interaction effect is shown in the plot of the data split by type of music and age. Ratings of Fugazi are very different for the two age groups: the older group rated it very low, but the younger people rated it very highly. A reverse trend is found if you look at the ratings for Barf Grooks: the youngsters give it low ratings, while the wrinkly ones love it. For ABBA the groups agreed: both old and young rated them highly. The interaction effect reflects the fact that there are age differences for some bands (Fugazi, Barf Grooks) but not others (ABBA) and that the age difference for Fugazi is in the opposite direction to that for Barf Grooks. Compute omega squared for the effects in Task 1 and report the results of the analysis. First we use the mean squares and degrees of freedom in the summary table and the sample size per group to compute the variance estimate, \(\hat{\sigma}^2\), for each effect: \[ \begin{aligned} \hat{\sigma}_\alpha^2 &= \frac{(a-1)(\text{MS}_A-\text{MS}_\text{R})}{nab} = \frac{(3-1)(40932.033-387.541)}{15×3×2} = 900.99 \\ \hat{\sigma}_\beta^2 &= \frac{(b-1)(\text{MS}_B-\text{MS}_\text{R})}{nab} = \frac{(2-1)(0.711-387.541)}{15×3×2} = -4.30 \\ \hat{\sigma}_{\alpha\beta}^2 &= \frac{(a-1)(b-1)(\text{MS}_{A \times B}-\text{MS}_\text{R})}{nab} = \frac{(3-1)(2-1)(155395.078-387.541)}{15×3×2} = 3444.61 \\ \end{aligned} \] We next need to estimate the total variability, and this is the sum of these variance estimates plus the residual mean square: \[ \begin{aligned} \hat{\sigma}_\text{total}^2 &= \hat{\sigma}_\alpha^2 + \hat{\sigma}_\beta^2 + \hat{\sigma}_{\alpha\beta}^2 + \text{MS}_\text{R} \\ &= 900.99-4.30+3444.61+387.54 \\ &= 4728.84 \\ \end{aligned} \] The effect size is then the variance estimate for the effect in which you're interested divided by the total variance estimate: \[ \omega_\text{effect}^2 = \frac{\hat{\sigma}_\text{effect}^2}{\hat{\sigma}_\text{total}^2} \] For the main effect of music we get: \[ \omega_\text{music}^2 = \frac{\hat{\sigma}_\text{music}^2}{\hat{\sigma}_\text{total}^2} = \frac{900.99}{4728.84} = 0.19 \] For the main effect of age we get: \[ \omega_\text{age}^2 = \frac{\hat{\sigma}_\text{age}^2}{\hat{\sigma}_\text{total}^2} = \frac{-4.30}{4728.84} = -0.001 \] For the interaction of music and age we get: \[ \omega_{\text{music} \times \text{age}}^2 = \frac{\hat{\sigma}_{\text{music} \times \text{age}}^2}{\hat{\sigma}_\text{total}^2} = \frac{3444.61}{4728.84} = 0.73 \] We could report (remember if you're using APA format to drop the leading zeros before p-values and \(\omega^2\), for example
report p = .035 instead of p = 0.035): The results show that the type of music listened to significantly affected the ratings of that music, F(2, 84) = 105.62, p < .001, \(\omega^2 = 0.19\). Bonferroni post hoc tests revealed that ABBA were rated significantly higher than both Fugazi and Barf Grooks (p < 0.001 in both cases). The main effect of age on the ratings of the music was not significant, F(1, 84) = 0.002, p = .966, \(\omega^2 = –0.001\). The music by age interaction was significant, F(2, 84) = 400.98, p < 0.001, \(\omega^2 = 0.73\), indicating that different types of music were rated differently by the two age groups. Specifically, Fugazi were rated more positively by the young group (M = 66.20, SD = 19.90) than the old (M = –75.87, SD = 14.37); ABBA were rated fairly equally by the young (M = 64.13, SD = 16.99) and old groups (M = 59.93, SD = 19.98); Barf Grooks was rated less positively by the young group (M = –71.47, SD = 23.17) than by the old (M = 74.27, SD = 22.29). These findings indicate that there is no hope: the minute you hit 40 you will suddenly start to love country and western music and will delete all of your Fugazi music files (don't worry, it didn't happen to me!). In Chapter 5 we used some data that related to male and female arousal levels when watching The Notebook or a documentary about notebooks (Notebook.sav). Fit a model to test whether men and women differ in their reactions to different types of films. Drag the outcome variable (arousal) to the box labelled Dependent Variable. Drag the predictor variables (sex and film) to the box labelled Fixed Factor(s). The output shows that the main effect of sex is significant, F(1, 36) = 7.292, p = 0.011, as is the main effect of film, F(1, 36) = 141.87, p < 0.001, and the interaction, F(1, 36) = 4.64, p = 0.038. Let's look at these effects in turn. The graph of the main effect of sex shows that the significant effect is likely to reflect the fact that males experienced higher levels of psychological arousal in general than females (when the type of film is ignored). The main effect of the film was also significant, and the graph shows that when you ignore the biological sex of the participant, psychological arousal was higher during the notebook than during a documentary about notebooks. The interaction effect is shown in the plot of the data split by type of film and sex of the participant. Psychological arousal is very similar for men and women during the documentary about notebooks (it is low for both sexes). However, for the notebook men experienced greater psychological arousal than women. The interaction is likely to reflect that there is a difference between men and women for one type of film (the notebook) but not the other (the documentary about notebooks).
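The \(\omega^2\) arithmetic used for the Fugazi data above, and again for these data below, is easy to wrap in a small helper. Below is a minimal R sketch following the formulas used in this chapter (the function name is my own); plugging in the mean squares from the Notebook output reproduces the hand calculation that follows:

```r
# Omega-squared variance components for a two-way independent design
# a, b = numbers of levels of the two predictors; n = participants per cell
omega_factorial <- function(n, a, b, ms_a, ms_b, ms_ab, ms_r) {
  var_a  <- (a - 1) * (ms_a - ms_r) / (n * a * b)
  var_b  <- (b - 1) * (ms_b - ms_r) / (n * a * b)
  var_ab <- (a - 1) * (b - 1) * (ms_ab - ms_r) / (n * a * b)
  var_total <- var_a + var_b + var_ab + ms_r
  c(omega_a = var_a / var_total,
    omega_b = var_b / var_total,
    omega_ab = var_ab / var_total)
}

# Notebook data: sex (a = 2) by film (b = 2), n = 10 per cell
omega_factorial(n = 10, a = 2, b = 2,
                ms_a = 297.03, ms_b = 5784.03,
                ms_ab = 189.23, ms_r = 40.77)
```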
\[ \begin{aligned} \hat{\sigma}_\alpha^2 &= \frac{(a-1)(\text{MS}_A-\text{MS}_\text{R})}{nab} = \frac{(2-1)(297.03-40.77)}{10×2×2} = 6.41 \\ \hat{\sigma}_\beta^2 &= \frac{(b-1)(\text{MS}_B-\text{MS}_\text{R})}{nab} = \frac{(2-1)(5784.03-40.77)}{10×2×2} = 143.58 \\ \hat{\sigma}_{\alpha\beta}^2 &= \frac{(a-1)(b-1)(\text{MS}_{A \times B}-\text{MS}_\text{R})}{nab} = \frac{(2-1)(2-1)(189.23-40.77)}{10×2×2} = 3.71 \\ \end{aligned} \] \[ \begin{aligned} \hat{\sigma}_\text{total}^2 &= \hat{\sigma}_\alpha^2 + \hat{\sigma}_\beta^2 + \hat{\sigma}_{\alpha\beta}^2 + \text{MS}_\text{R} \\ &= 6.41+143.58+3.71+40.77 \\ &= 194.47 \\ \end{aligned} \] For the main effect of sex we get: \[ \omega_\text{sex}^2 = \frac{\hat{\sigma}_\text{sex}^2}{\hat{\sigma}_\text{total}^2} = \frac{6.41}{194.47} = 0.03 \] For the main effect of film we get: \[ \omega_\text{film}^2 = \frac{\hat{\sigma}_\text{film}^2}{\hat{\sigma}_\text{total}^2} = \frac{143.58}{194.47} = 0.74 \] For the interaction of sex and film we get: \[ \omega_{\text{sex} \times \text{film}}^2 = \frac{\hat{\sigma}_{\text{sex} \times \text{film}}^2}{\hat{\sigma}_\text{total}^2} = \frac{3.71}{194.47} = 0.02 \] The results show that the psychological arousal during the films was significantly higher for males than females, F(1, 36) = 7.292, p = 0.011, \(\omega^2 = 0.03\). Psychological arousal was also significantly higher during the notebook than during a documentary about notebooks, F(1, 36) = 141.87, p < 0.001. The interaction was also significant, F(1, 36) = 4.64, p = 0.038, and seemed to reflect the fact that psychological arousal was very similar for men and women during the documentary about notebooks (it was low for both sexes), but for the notebook men experienced greater psychological arousal than women. In Chapter 4 we used some data that related to learning in men and women when either reinforcement or punishment was used in teaching (Method Of Teaching.sav). Analyse these data to see whether men and women's learning differs according to the teaching method used. Drag the outcome variable (Mark) to the box labelled Dependent Variable. Drag the predictor variables (Sex and Method) to the box labelled Fixed Factor(s). We can see that there was no significant main effect of method of teaching, indicating that when we ignore the sex of the participant both methods of teaching had similar effects on the results of the SPSS exam, F(1, 16) = 2.25, p = 0.153. This result is not surprising when we look at the graphed means because being nice (M = 9.0) and electric shock (M = 10.5) had similar means. There was a significant main effect of the sex of the participant, indicating that if we ignore the method of teaching, men and women scored differently on the SPSS exam, F(1, 16) = 12.50, p = 0.003. If we look at the graphed means, we can see that on average men (M = 11.5) scored higher than women (M = 8.0). However, this effect is qualified by a significant interaction between sex and the method of teaching, F(1, 16) = 30.25, p < 0.001. The graphed means suggest that for men, using an electric shock resulted in higher exam scores than being nice, whereas for women, the being nice teaching method resulted in significantly higher exam scores than when an electric shock was used. At the start of this Chapter I described a way of empirically researching whether I wrote better songs than my old bandmate Malcolm, and whether this depended on the type of song (a symphony or song about flies). 
The outcome variable was the number of screams elicited by audience members during the songs. Draw an error bar graph (lines) and analyse these data (Escape From Inside.sav). To produce the graph, access the chart builder and select a multiple line graph from the gallery. Then: Drag the outcome variable (Screams) to the y-axis drop zone. Drag one predictor variable (Song_Type) to the x-axis drop zone. Drag the other predictor variable (Songwriter) to the colour (cluster) drop zone. In the Element Properties dialog box remember to select Display error bars to add error bars: The resulting graph will look like this: Drag the outcome variable (Screams) to the box labelled Dependent Variable. Drag the predictor variables (Song_Type and Songwriter) to the box labelled Fixed Factor(s). We can see that there was a significant main effect of songwriter, indicating that when we ignore the type of song Andy's songs elicited significantly more screams than those written by Malcolm, F(1, 64) = 9.94, p = 0.002. There was a significant main effect of the type of song, indicating that, when we ignore the songwriter, symphonies elicited significantly more screams of agony than songs about flies, F(1, 64) = 20.87, p < 0.001. The interaction was also significant, F(1, 64) = 5.07, p = 0.028. The graphed means suggest that although reactions to Malcolm's and Andy's songs were similar for the fly songs, they differed quite a bit for the symphonies (Andy's symphony elicited more screams of torment than Malcolm's). Therefore, although the main effect of songwriter suggests that Malcolm was a better songwriter than Andy, the interaction tells us that this effect is driven by Andy being poor at writing symphonies. \[ \begin{aligned} \hat{\sigma}_\alpha^2 &= \frac{(a-1)(\text{MS}_A-\text{MS}_\text{R})}{nab} = \frac{(2-1)(74.13-3.55)}{17×2×2} = 1.04 \\ \hat{\sigma}_\beta^2 &= \frac{(b-1)(\text{MS}_B-\text{MS}_\text{R})}{nab} = \frac{(2-1)(35.31-3.55)}{17×2×2} = 0.47 \\ \hat{\sigma}_{\alpha\beta}^2 &= \frac{(a-1)(b-1)(\text{MS}_{A \times B}-\text{MS}_\text{R})}{nab} = \frac{(2-1)(2-1)(18.02-3.77)}{17×2×2} = 0.21 \\ \end{aligned} \] \[ \begin{aligned} \hat{\sigma}_\text{total}^2 &= \hat{\sigma}_\alpha^2 + \hat{\sigma}_\beta^2 + \hat{\sigma}_{\alpha\beta}^2 + \text{MS}_\text{R} \\ &= 1.04+0.47+0.21+3.77 \\ &= 5.49 \\ \end{aligned} \] For the main effect of type of song we get: \[ \omega_\text{type of song}^2 = \frac{\hat{\sigma}_\text{type of song}^2}{\hat{\sigma}_\text{total}^2} = \frac{1.04}{5.49} = 0.19 \] For the main effect of songwriter we get: \[ \omega_\text{songwriter}^2 = \frac{\hat{\sigma}_\text{songwriter}^2}{\hat{\sigma}_\text{total}^2} = \frac{0.47}{5.49} = 0.09 \] For the interaction of songwriter and type of song we get: \[ \omega_{\text{songwriter} \times \text{type of song}}^2 = \frac{\hat{\sigma}_{\text{songwriter} \times \text{type of song}}^2}{\hat{\sigma}_\text{total}^2} = \frac{0.21}{5.49} = 0.04 \] The main effect of the type of song significantly affected screams elicited during that song, F(1, 64) = 20.87, p < 0.001, \(\omega^2 = 0.19\); the two symphonies elicited significantly more screams of agony than the two songs about flies. The main effect of the songwriter significantly affected screams elicited during that song, F(1, 64) = 9.94, p = 0.002, \(\omega^2 = 0.09\); Andy's songs elicited significantly more screams of torment from the audience than Malcolm's songs. The song type\(\times\)songwriter interaction was significant, F(1, 64) = 5.07, p = 0.028, \(\omega^2 = 0.04\).
Although reactions to Malcolm's and Andy's songs were similar for songs about a fly, Andy's symphony elicited more screams of torment than Malcolm's. Using SPSS Tip 14.1, change the syntax in GogglesSimpleEffects.sps to look at the effect of alcohol at different levels of type of face. The correct syntax to use is:

```
glm Attractiveness by FaceType Alcohol
  /emmeans = tables(FaceType*Alcohol) compare(Alcohol).
```

Note that all we change is compare(FaceType) to compare(Alcohol). The pertinent part of the output is: This output shows a significant effect of alcohol for unattractive faces, F(2, 42) = 14.34, p < 0.001, but not attractive ones, F(2, 42) = 0.29, p = 0.809. Think back to the chapter. These tests reflect the fact that ratings of unattractive faces go up as more alcohol is consumed, but for attractive faces ratings are quite stable across doses of alcohol. There are reports of increases in injuries related to playing Nintendo Wii (http://ow.ly/ceWPj). These injuries were attributed mainly to muscle and tendon strains. A researcher hypothesized that a stretching warm-up before playing Wii would help lower injuries, and that athletes would be less susceptible to injuries because their regular activity makes them more flexible. She took 60 athletes and 60 non-athletes (athlete); half of them played Wii and half watched others playing as a control (wii), and within these groups half did a 5-minute stretch routine before playing/watching whereas the other half did not (stretch). The outcome was a pain score out of 10 (where 0 is no pain, and 10 is severe pain) after playing for 4 hours (injury). Fit a model to test whether athletes are less prone to injury, and whether the prevention programme worked (Wii.sav). This design is a 2(Athlete: athlete vs. non-athlete) by 2(Wii: playing Wii vs. watching Wii) by 2(Stretch: stretching vs. no stretching) three-way independent design. To fit the model, access the main dialog box and: Drag the outcome variable (injury) to the box labelled Dependent Variable. Drag the predictor variables (athlete, wii and stretch) to the box labelled Fixed Factor(s). The main summary table is as follows and we will look at each effect in turn: There was a significant main effect of athlete, F(1, 112) = 64.82, p < .001. The graph shows that, on average, athletes had significantly lower injury scores than non-athletes. There was a significant main effect of stretching, F(1, 112) = 11.05, p = 0.001. The graph shows that stretching significantly decreased injury score compared to not stretching. However, the two-way interaction with Wii will show us that this is true only for athletes and non-athletes who played on the Wii, not for those in the control group (you can also see this pattern in the three-way interaction graph). This is an example of how main effects can sometimes be misleading. There was also a significant main effect of Wii, F(1, 112) = 55.66, p < .001. The graph shows (not surprisingly) that playing on the Wii resulted in a significantly higher injury score compared to watching other people playing on the Wii (control). There was not a significant athlete by stretch interaction, F(1, 112) = 1.23, p = 0.270. The graph of the interaction effect shows that (not taking into account playing vs. watching the Wii) while non-athletes had higher injury scores than athletes overall, stretching decreased the number of injuries in both athletes and non-athletes by roughly the same amount.
Parallel lines usually indicate a non-significant interaction effect, and so it is not surprising that the interaction between stretch and athlete was non-significant. There was a significant athlete by Wii interaction, F(1, 112) = 45.18, p < .001. The interaction graph shows that (not taking stretching into account) non-athletes had low injury scores when watching but high injury scores when playing, whereas athletes had low injury scores when both playing and watching. There was a significant stretch by Wii interaction, F(1, 112) = 14.19, p < .001. The interaction graph shows that (not taking athlete into account) stretching before playing on the Wii significantly decreased injury scores, but stretching before watching other people playing on the Wii did not significantly reduce injury scores. This is not surprising as watching other people playing on the Wii is unlikely to result in sports injury! There was a significant athlete by stretch by Wii interaction, F(1, 112) = 5.94, p < .05. What this actually means is that the effect of stretching and playing on the Wii on injury score was different for athletes than it was for non-athletes. In the presence of this significant interaction it makes no sense to interpret the main effects. The interaction graph for this three-way effect shows that for athletes, stretching and playing on the Wii has very little effect: their mean injury score is quite stable across conditions (whether they played on the Wii or watched other people playing on the Wii, stretched or did no stretching). However, for the non-athletes, the mean injury score drops sharply when they watch rather than play on the Wii, and when they stretch before playing. The interaction tells us that stretching, and watching rather than playing on the Wii, both result in a lower injury score, and that this is true only for non-athletes. In short, the results show that athletes are able to minimize their injury level regardless of whether they stretch before exercise or not, whereas non-athletes only have to bend slightly and they get injured! Access the main dialog box for repeated-measures designs by selecting Analyze > General Linear Model > Repeated Measures … It is common that lecturers obtain reputations for being 'hard' or 'light' markers (or, to use the students' terminology, 'evil manifestations from Beelzebub's bowels' and 'nice people'), but there is often little to substantiate these reputations. A group of students investigated the consistency of marking by submitting the same essays to four different lecturers. The outcome was the percentage mark given by each lecturer and the predictor was the lecturer who marked the report (TutorMarks.sav). Compute the F-statistic for the effect of marker by hand. There were eight essays, each marked by four different lecturers. The data look like this (three of the eight essays are shown):

| tutor1 | tutor2 | tutor3 | tutor4 | Mean | Variance |
|---|---|---|---|---|---|
| 62 | 58 | 63 | 64 | 61.75 | 6.92 |
| 63 | 60 | 68 | 65 | 64.00 | 11.33 |
| … | … | … | … | … | … |
| 78 | 66 | 67 | 50 | 65.25 | 132.92 |
| … | … | … | … | … | … |

The mean mark that each essay received and the variance of marks for a particular essay are shown too. Now, the total variance within essay marks will in part be due to different lecturers marking (some are more critical and some more lenient), and in part to the fact that the essays themselves differ in quality (individual differences). Our job is to tease apart these sources.
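The hand calculation that follows can be mirrored in a few lines of R. Below is a minimal sketch, assuming the marks have been arranged in a matrix called `marks` with one row per essay and one column per tutor (the object name is my own):

```r
# Partition the variance for a one-way repeated-measures design by hand
# 'marks' is an 8 x 4 matrix: rows = essays, columns = tutors
n_essays <- nrow(marks)   # 8 essays
k        <- ncol(marks)   # 4 tutors

ss_total  <- sum((marks - mean(marks))^2)                       # SS_T, df = 31
ss_within <- sum(apply(marks, 1, var) * (k - 1))                # SS_W, df = 24
ss_model  <- sum(n_essays * (colMeans(marks) - mean(marks))^2)  # SS_M, df = 3
ss_resid  <- ss_within - ss_model                               # SS_R, df = 21

ms_model <- ss_model / (k - 1)
ms_resid <- ss_resid / ((n_essays - 1) * (k - 1))
ms_model / ms_resid   # the F-statistic, approximately 3.70
```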
### The total sum of squares

The \(\text{SS}_\text{T}\) is calculated as: \[ \text{SS}_\text{T} = \sum_{i=1}^{N} (x_i-\bar{X})^2 \] Let's get some descriptive statistics for all of the scores when they are lumped together (median = 65, mean = 63.94, SE = 1.31, variance = 55.03, SD = 7.42). This tells us, for example, that the grand mean (the mean of all scores) is 63.94. We take each score, subtract from it the mean of all scores (63.94) and square this difference to get the squared error for each score. We then add these squared differences to get the sum of squared errors: \[ \begin{aligned} \text{SS}_\text{T} &= 3.76 + 0.88 + 1.12 + 16.48 + 25.60 + 49.84 + 197.68 + 122.32 + 35.28 + 15.52 + 8.64 + 0.00 + 1.12 + 9.36 + 4.24 + 82.08 + 0.88 + 16.48 + 64.96 + 35.28 + 98.80 + 1.12 + 9.36 + 122.32 + 0.00 + 1.12 + 1.12 + 8.64 + 24.40 + 194.32 + 194.32 + 358.72 \\ &= 1705.76 \end{aligned} \] The degrees of freedom for this sum of squares is \(N–1\), or 31.

### The within-participant sum of squares

The within-participant sum of squares, \(\text{SS}_\text{W}\), is calculated using: \[ \text{SS}_\text{W} = s_\text{entity 1}^2(n_1-1)+s_\text{entity 2}^2(n_2-1) + s_\text{entity 3}^2(n_3-1) +\ldots+ s_\text{entity n}^2(n_n-1) \] Our 'entities' in this example are 8 essays so we could write the equation as: \[ \text{SS}_\text{W} = s_\text{essay 1}^2(n_1-1)+s_\text{essay 2}^2(n_2-1) + s_\text{essay 3}^2(n_3-1) +\ldots+ s_\text{essay 8}^2(n_8-1) \] The ns are the number of scores on which the variances are based (i.e. in this case the number of marks each essay received, which was 4). The variances in marks for the essays were computed in the table above, so we use these values to calculate \(\text{SS}_\text{W}\) as: \[ \begin{aligned} \text{SS}_\text{W} &= s_\text{essay 1}^2(n_1-1)+s_\text{essay 2}^2(n_2-1) + s_\text{essay 3}^2(n_3-1) +\ldots+ s_\text{essay 8}^2(n_8-1) \\ &= 6.92(4-1) + 11.33(4-1) + 20.92(4-1) + 18.25(4-1) + 43.58(4-1) + 84.25(4-1) + 132.92(4-1) + 216.00(4-1)\\ &= 1602.51 \end{aligned} \] The degrees of freedom for each essay are \(n–1\) (i.e. the number of marks per essay minus 1). To get the total degrees of freedom we add the df for each essay: \[ \begin{aligned} \text{df}_\text{W} &= df_\text{essay 1}+df_\text{essay 2} + df_\text{essay 3} +\ldots+ df_\text{essay 8} \\ &= (4-1) + (4-1) + (4-1) + (4-1) + (4-1) + (4-1) + (4-1) + (4-1)\\ &= 24 \end{aligned} \] A shortcut would be to multiply the degrees of freedom per essay (3) by the number of essays (8): \(3 \times 8 = 24\)

### The model sum of squares

We calculate the model sum of squares \(\text{SS}_\text{M}\) as: \[ \sum_{g = 1}^{k}n_g(\bar{x}_g-\bar{x}_\text{grand})^2 \] Therefore, we need to subtract the mean of all marks from the mean mark awarded by each tutor, then square these differences, multiply each by the number of essays marked, and sum the results. The mean marks awarded by the four tutors were 68.88, 64.25, 65.25 and 57.38, respectively. We can calculate \(\text{SS}_\text{M}\) as: \[ \begin{aligned} \text{SS}_\text{M} &= 8(68.88 - 63.94)^2 +8(64.25 - 63.94)^2 + 8(65.25 - 63.94)^2 + 8(57.38-63.94)^2\\ &= 554 \end{aligned} \] The degrees of freedom are the number of conditions (in this case the number of markers) minus 1, \(df_M = k-1 = 3\).

### The residual sum of squares

We now know that there are 1706 units of variation to be explained in our data, and that the variation within essays (across the four markers) accounts for 1602 of those units.
Of these 1602 units, our experimental manipulation can explain 554 units. The final sum of squares is the residual sum of squares (\(\text{SS}_\text{R}\)), which tells us how much of the variation cannot be explained by the model. Knowing \(\text{SS}_\text{W}\) and \(\text{SS}_\text{M}\) already, the simplest way to calculate \(\text{SS}_\text{R}\) is through subtraction: \[ \begin{aligned} \text{SS}_\text{R} &= \text{SS}_\text{W}-\text{SS}_\text{M}\\ &=1602.51-554\\ &=1048.51 \end{aligned} \] The degrees of freedom are calculated in a similar way: \[ \begin{aligned} df_\text{R} &= df_\text{W}-df_\text{M}\\ &=24-3\\ &=21 \end{aligned} \]

### The mean squares

Next, convert the sums of squares to mean squares by dividing by their degrees of freedom: \[ \begin{aligned} \text{MS}_\text{M} &= \frac{\text{SS}_\text{M}}{df_\text{M}} = \frac{554}{3} = 184.67 \\ \text{MS}_\text{R} &= \frac{\text{SS}_\text{R}}{df_\text{R}} = \frac{1048.51}{21} = 49.93 \\ \end{aligned} \]

### The F-statistic

The F-statistic is calculated by dividing the model mean squares by the residual mean squares: \[ F = \frac{\text{MS}_\text{M}}{\text{MS}_\text{R}} = \frac{184.67}{49.93} = 3.70 \] This value of F can be compared against a critical value based on its degrees of freedom (which are 3 and 21 in this case). Repeat the analysis for Task 1 using SPSS Statistics and interpret the results. To fit the model: Type a name (I typed Marker) for the repeated measures variable in the box labelled Within-Subject Factor Name. Enter the number of levels of the repeated measures variable (4) in the box labelled Number of Levels. Click to register the variable. The dialog box should look like this: Click to define the variable. Move the variables representing the levels of your repeated measures variable to the box labelled Within-Subjects Variables. Click to request post hoc tests. Move the variable representing the repeated measures predictor to the box labelled Display Means for:, select Compare main effects, and select Bonferroni from the drop-down list. The first part of the output tells us about sphericity. Mauchly's test indicates a significant violation of sphericity, but I have argued in the book that you should ignore this test and routinely correct for sphericity. The second part of the output tells us about the main effect of marker. If we look at the Greenhouse-Geisser corrected values, we would conclude that tutors did not significantly differ in the marks they award, F(1.67, 11.71) = 3.70, p = 0.063. If, however, we look at the Huynh-Feldt corrected values, we would conclude that tutors did significantly differ in the marks they award, F(2.14, 14.98) = 3.70, p = 0.047. Which to believe then? Well, this example illustrates just how silly it is to have a categorical threshold like p < 0.05 that leads to completely opposite conclusions. The best course of action here would be to report both results openly, compute some effect sizes and focus more on the size of the effect than its p-value. The final part of the output shows the post hoc tests. Assuming we want to interpret these (and we should be cautious about doing so unless the effect size for the main effect seems meaningful), the only significant difference between group means is between Prof Field and Prof Smith. Looking at the means of these markers, we can see that I give significantly higher marks than Prof Smith.
However, there is a rather anomalous result in that there is no significant difference between the marks given by Prof Death and myself, even though the mean difference between our marks is higher (11.5) than the mean difference between myself and Prof Smith (4.6). The reason lies in the lack of sphericity in the data. The interested reader might like to run some correlations between the four tutors' grades. You will find that there is a very high positive correlation between the marks given by Prof Smith and myself (indicating a low level of variability in the differences between our marks). However, there is a very low correlation between the marks given by Prof Death and myself (indicating a high level of variability in the differences between our marks). It is this large variability between Prof Death and myself that has produced the non-significant result despite the average marks being very different (this observation is also evident from the standard errors).

## Task 15.3

Calculate the effect sizes for the analysis in Task 1. In repeated-measures ANOVA, the equation for \(\omega^2\) is: \[ \omega^2 = \frac{[\frac{k-1}{nk}(\text{MS}_\text{M}-\text{MS}_\text{R})]}{\text{MS}_\text{R}+\frac{\text{MS}_\text{B}-\text{MS}_\text{R}}{k}+[\frac{k-1}{nk}(\text{MS}_\text{M}-\text{MS}_\text{R})]} \] To get \(\text{MS}_\text{B}\) we need \(\text{SS}_\text{B}\), which is not in the output. However, we can obtain it as follows: \[ \begin{aligned} \text{SS}_\text{T} &= \text{SS}_\text{B} + \text{SS}_\text{M} + \text{SS}_\text{R} \\ \text{SS}_\text{B} &= \text{SS}_\text{T} - \text{SS}_\text{M} - \text{SS}_\text{R} \\ \end{aligned} \] The next problem is that the output also doesn't include \(\text{SS}_\text{T}\), but we have the value from Task 1. You should get: \[ \begin{aligned} \text{SS}_\text{B} &= 1705.868-554.125-1048.375 \\ &=103.37 \end{aligned} \] The next step is to convert this to a mean square by dividing by the degrees of freedom, which in this case are the number of essays minus 1: \[ \begin{aligned} \text{MS}_\text{B} &= \frac{\text{SS}_\text{B}}{df_\text{B}} = \frac{\text{SS}_\text{B}}{N-1} \\ &=\frac{103.37}{8-1} \\ &= 14.77 \end{aligned} \] The resulting effect size is: \[ \begin{aligned} \omega^2 &= \frac{[\frac{4-1}{8 \times 4}(184.71-49.92)]}{49.92+\frac{14.77-49.92}{4}+[\frac{4-1}{8 \times4}(184.71-49.92)]} \\ &= \frac{12.64}{53.77} \\ &= 0.24 \end{aligned} \] I mention in the book that it's typically more useful to have effect size measures for focused comparisons (rather than the omnibus test), and so another approach to calculating effect sizes is to calculate them for the contrasts by converting the F-statistics (because they all have 1 degree of freedom for the model) to r: \[ r = \sqrt{\frac{F(1, df_\text{R})}{F(1, df_\text{R}) + df_\text{R}}} \] For the three comparisons we did, we would get: \[ \begin{aligned} r_\text{Field vs. Smith} &= \sqrt{\frac{18.18}{18.18 + 7}} = 0.85\\ r_\text{Smith vs. Scrote} &= \sqrt{\frac{0.15}{0.15 + 7}} = 0.14\\ r_\text{Scrote vs. Death} &= \sqrt{\frac{3.44}{3.44 + 7}} = 0.57 \end{aligned} \] We could report the main finding as follows (remember if you're using APA format to drop the leading zeros before p-values and \(\omega^2\), for example report p = .063 instead of p = 0.063): Degrees of freedom were corrected using Greenhouse–Geisser estimates of sphericity (ε = .56). The mark of an essay was not significantly affected by the lecturer who marked it, F(1.67, 11.71) = 3.70, p = 0.063, \(\omega^2\) = 0.24.
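The \(\omega^2\) arithmetic above can be checked with a few lines of R by plugging in the mean squares; a minimal sketch (the object names are my own):

```r
# Omega-squared for a one-way repeated-measures design (k = 4 tutors, n = 8 essays)
k <- 4; n <- 8
ms_m <- 184.71; ms_r <- 49.92; ms_b <- 14.77

effect_var <- ((k - 1) / (n * k)) * (ms_m - ms_r)
effect_var / (ms_r + (ms_b - ms_r) / k + effect_var)   # approximately 0.24
```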
Remember that because the main F-statistic was not significant we should not report further analysis. The 'roving eye' effect is the propensity of people in relationships to 'eye up' members of the opposite sex. I fitted 20 people with incredibly sophisticated glasses that tracked their eye movements (yes, I am making this up …). Over four nights I plied them with either 1, 2, 3 or 4 pints of strong lager in a nightclub and recorded how many different people they eyed up (i.e., scanned their bodies). Is there an effect of alcohol on the tendency to eye people up? (RovingEye.sav). Type a name (I typed alcohol) for the repeated measures variable in the box labelled Within-Subject Factor Name: The second part of the output tells us about the main effect of alcohol. If we look at the Greenhouse-Geisser corrected values, we would conclude that the dose of alcohol significantly affected how many people were 'eyed up', F(2.24, 42.47) = 4.73, p = 0.011. The final part of the output shows the post hoc tests. These show that the only significant difference was between 2 and 3 pints of alcohol. Looking at the graph of means, this suggests that the number of people 'eyed up' by participants significantly increases from 2 to 3 pints. Degrees of freedom were corrected using Greenhouse–Geisser estimates of sphericity (ε = 0.75). The number of people eyed up was significantly affected by the amount of alcohol drunk, F(2.24, 42.47) = 4.73, p = 0.011. Bonferroni post hoc tests revealed a significant increase in the number of people eyed up from when 2 pints were drunk to when 3 pints were, 95% CI (–6.85, –0.15), p = .038, but not between 1 and 2 pints, 95% CI (–2.13, 2.23), p = 1.00, 1 and 3 pints, 95% CI (–7.54, 0.64), p = .136, 1 and 4 pints, 95% CI (–7.48, 1.08), p = .242, 2 and 4 pints, 95% CI (–7.43, 0.93), p = .202, or 3 and 4 pints, 95% CI (–3.49, 3.99), p = 1.00. In the previous chapter we came across the beer-goggles effect. In that chapter, we saw that the beer-goggles effect was stronger for unattractive faces. We took a follow-up sample of 26 people and gave them doses of alcohol (0 pints, 2 pints, 4 pints and 6 pints of lager) over four different weeks. We asked them to rate a bunch of photos of unattractive faces in either dim or bright lighting. The outcome measure was the mean attractiveness rating (out of 100) of the faces, and the predictors were the dose of alcohol and the lighting conditions (BeerGogglesLighting.sav). Do alcohol dose and lighting interact to magnify the beer goggles effect? Type a name (I typed lighting) for the first repeated measures variable in the box labelled Within-Subject Factor Name: Type a name (I typed alcohol) for the second repeated measures variable in the box labelled Within-Subject Factor Name: Click to define the variables Move the variables representing the levels of your repeated measures variables to the box labelled Within-Subjects Variables in the appropriate order Click to request repeated contrasts as in the dialog box below The first part of the output tells us about sphericity. Mauchly's test does not indicate a significant violation of sphericity for either variable, but I have argued in the book that you should ignore this test and routinely correct for sphericity, so that's what we'll do. The second part of the output tells us about the main effects of alcohol and lighting, and also their interaction. All effects are significant at p < 0.001.
We'll look at each effect in turn. The final part of the output shows the contrasts. We will refer to this table as we interpret each effect.

The main effect of lighting shows that the attractiveness ratings of photos were significantly lower when the lighting was dim compared to when it was bright, F(1, 25) = 23.42, p < 0.001.

The main effect of alcohol shows that the attractiveness ratings of photos of faces were significantly affected by how much alcohol was consumed, F(2.62, 65.47) = 104.39, p < 0.001. Looking at the contrasts, ratings were not significantly different when two pints were consumed compared to no pints, F(1, 25) = 0.01, p = 0.909. However, ratings were significantly lower after four pints compared to two, F(1, 25) = 84.32, p < .001, and after six pints compared to four, F(1, 25) = 27.98, p < .001.

The lighting by alcohol interaction was significant, F(2.81, 70.23) = 22.22, p < 0.001, indicating that the effect of alcohol on the ratings of the attractiveness of faces differed when lighting was dim compared to when it was bright. Contrasts on this interaction term revealed that when the difference in attractiveness ratings in dim and bright conditions was compared after no alcohol to after two pints there was no significant difference, F(1, 25) = 0.14, p = 0.708. However, when comparing the difference of ratings in dim and bright conditions after two pints compared to four, a significant difference emerged, F(1, 25) = 24.75, p < 0.001. The graph shows that the decline in attractiveness ratings between two and four pints was more pronounced in the dim lighting condition. A final contrast revealed that the difference in ratings in dim conditions compared to bright after consuming four pints compared to six was not significant, F(1, 25) = 2.16, p = 0.154. To sum up, there was a significant interaction between the amount of alcohol consumed and whether ratings were made in bright or dim lighting conditions: the decline in the attractiveness ratings seen after two pints (compared to after four) was significantly more pronounced when the lighting was dim.

Using SPSS Tip 15.3, change the syntax in SimpleEffectsAttitude.sps to look at the effect of drink at different levels of imagery.

GLM beerpos beerneg beerneut winepos wineneg wineneut waterpos waterneg waterneut
/WSFACTOR=Drink 3 Imagery 3
/EMMEANS = TABLES(Drink*Imagery) COMPARE(Drink).

The output shows a significant effect of drink at level 1 of imagery. So, the ratings of the three drinks significantly differed when positive imagery was used. Because there are three levels of drink, though, this isn't that helpful in untangling what's going on. There is also a significant effect of drink at level 2 of imagery. So, the ratings of the three drinks significantly differed when negative imagery was used. Finally, there is also a significant effect of drink at level 3 of imagery. So, the ratings of the three drinks significantly differed when neutral imagery was used.

Early in my career I looked at the effect of giving children information about animals. In one study (Field, 2006), I used three novel animals (the quoll, quokka and cuscus), and children were told negative things about one of the animals, positive things about another, and given no information about the third (our control). After the information I asked the children to place their hands in three wooden boxes each of which they believed contained one of the aforementioned animals (Field(2006).sav).
Draw an error bar graph of the means and do some normality tests on the data.

To produce the graph, access the chart builder and select a bar graph from the gallery. Then:

* Select the three variables representing the levels of the repeated measures variable (bhvneg, bhvpos, and bhvnone) and drag them (simultaneously) to the y-axis drop zone.
* In the Element Properties dialog box remember to select the option to add error bars.

The resulting graph will look like this:

To get the normality tests I used the Kolmogorov–Smirnov test from the Nonparametric > One Sample… menu. I did this because I had a fairly large sample, and back when I did this research the Kolmogorov–Smirnov test executed through this menu differed from that obtained through the Explore menu because it did not use the Lilliefors correction (see Oliver Twisted for Chapter 6). This appears to have changed, so you'll likely get the same results using the Explore menu. To get this test complete the dialog boxes as described.

* First, ask for a custom analysis.
* Next, select the Fields tab and drag the three variables representing the levels of the repeated measures variable (bhvneg, bhvpos, and bhvnone) to the box labelled Test Fields:
* In the Settings tab select Test observed distribution against hypothesized (Kolmogorov-Smirnov test). You can leave the defaults as they are because we want to test our sample data against a normal distribution:

The resulting tests for each variable show that they are all very heavily non-normal. This will be, in part, because if a child didn't put their hand in the box after 15 seconds we gave them a score of 15 and asked them to move on to the next box (this was for ethical reasons: if a child hadn't put their hand in the box after 15 s we assumed that they did not want to do the task). These days I'd use a robust test on these data, but back when I conducted this research I decided to log-transform to reduce the skew. Hence Task 8!

Log-transform the scores in Task 7 and repeat the normality tests.

The easiest way to conduct these transformations is by executing the following syntax:

COMPUTE LogNegative=ln(bhvneg).
COMPUTE LogPositive=ln(bhvpos).
COMPUTE LogNoInformation=ln(bhvnone).

When you re-run the Kolmogorov-Smirnov tests, you will see that the state of affairs hasn't changed much (except for the negative information animal). As an interesting aside, older versions of SPSS did not apply the Lilliefors correction, and the results suggested that the log-transformed variables could be considered normally distributed. However, doing this many years later, SPSS applies the Lilliefors correction and the results are different!

Analyse the data in Task 7 with a robust model. Do children take longer to put their hands in a box that they believe contains an animal about which they have been told nasty things?

You would adapt the syntax file as follows:

library(reshape2)   # for melt()
library(WRS2)       # for rmanova() and rmmcp()

# Pull the data from SPSS into R (this works only when run through SPSS's R integration)
mySPSSdata = spssdata.GetDataFromSPSS(factorMode = "labels")
ID<-"code"
rmFactor<-c("bhvneg", "bhvpos", "bhvnone")
# Restructure from wide (one column per box) to long format
df<-melt(mySPSSdata, id.vars = ID, measure.vars = rmFactor)
names(df)[names(df) == ID] <- "id"
# Robust repeated-measures ANOVA and post hoc tests based on 20% trimmed means
rmanova(df$value, df$variable, df$id, tr = 0.2)
rmmcp(df$value, df$variable, df$id, tr = 0.2)

The results from the robust model mirror the analysis that I conducted on the log-transformed values in the paper itself (in case you want to check). The main effect of the type of information was significant, F(1.24, 94.32) = 78.15, p < 0.001.
The post hoc tests show a significantly longer time to approach the box containing the negative information animal compared to the positive information animal, \(\hat{\psi} = 2.42, p_{\text{observed}} < 0.001, p_{\text{crit}} =0.017\), and compared to the no information box, \(\hat{\psi} = 2.07, p_{\text{observed}} < 0.001, p_{\text{crit}} =0.025\). Children also approached the box containing the positive information animal significantly faster than the no information animal, \(\hat{\psi} = -0.21, p_{\text{observed}} = 0.014, p_{\text{crit}} = 0.050\).

## Call:
## rmanova(y = fieldLong$latency, groups = fieldLong$info, blocks = fieldLong$code,
##     tr = 0.2)
## Test statistic: F = 78.1521
## Degrees of freedom 1: 1.24
## Degrees of freedom 2: 94.32
## p-value: 0
## rmmcp(y = fieldLong$latency, groups = fieldLong$info, blocks = fieldLong$code,
##                      psihat ci.lower ci.upper p.value p.crit  sig
## bhvneg vs. bhvpos   2.41558  1.71695  3.11421 0.00000 0.0169 TRUE
## bhvneg vs. bhvnone  2.07013  1.35313  2.78713 0.00000 0.0250 TRUE
## bhvpos vs. bhvnone -0.20597 -0.40537 -0.00658 0.01351 0.0500 TRUE

In the previous chapter we looked at an example in which participants viewed videos of different drink products in the context of positive, negative or neutral imagery. Men and women might respond differently to the products so reanalyse the data taking sex (a between-group variable) into account. The data are in the file MixedAttitude.sav.

To fit the model, follow the same instructions that are in the book. There is a video that runs through the process here. In addition to what's in the video/book you must specify sex as a between-group variable by dragging it from the variable list to the box labelled Between-Subjects Factors.

The initial output is the same as in the two-way ANOVA example in the book (previous chapter) so look there for an explanation. The results of Mauchly's sphericity test (Output 1) show that the main effect of drink significantly violates the sphericity assumption (W = 0.572, p = .009) but the main effect of imagery and the imagery by drink interaction do not. However, as suggested in the book, it's a good idea to correct for sphericity regardless of Mauchly's test, so that's what we'll do. The summary table of the repeated-measures effects (Output 2) has been edited to show only Greenhouse-Geisser corrected degrees of freedom (the book explains how to change how the layers of the table are displayed).

We would expect the main effects that were previously significant to still be so (in a balanced design, the inclusion of an extra predictor variable should not affect these effects). By looking at the significance values it is clear that this prediction is true: there are still significant effects of the type of drink being rated, the type of imagery used, and the interaction of these two variables. I won't re-explain these effects as you can look at the book. I will focus only on the effects involving sex. The output shows that sex interacts significantly with both the type of drink being rated, and imagery. The combined interaction between sex, imagery and drink is also significant, indicating that the way in which imagery affects responses to different types of drinks depends on whether the participant is male or female.

**The main effect of sex**

There was a significant main effect of sex, F(1, 18) = 6.75, p = 0.018. This effect tells us that if we ignore all other variables, male participants' ratings were significantly different from females'.
The table of means for the main effect of sex makes clear that men's ratings were significantly more positive than women's (in general).

**The interaction between sex and drink**

There was a significant interaction between the type of drink being rated and the sex of the participant, F(1.40, 25.22) = 25.57, p < .001 (Output 2). This effect tells us that the different types of drinks were rated differently by men and women. We can use the estimated marginal means (Output 5) to determine the nature of this interaction (I have graphed these means too). The graph shows that male (orange) and female (blue) ratings are very similar for wine and water, but men rate beer more highly than women — regardless of the type of imagery used.

This interaction can be clarified using the contrasts specified before the analysis (Output 6).

Drink × sex interaction 1: beer vs. water, male vs. female. The first interaction term looks at level 1 of drink (beer) compared to level 3 (water), comparing male and female scores. This contrast is highly significant, F(1, 18) = 28.97, p < .001. This result tells us that the increased ratings of beer compared to water found for men are not found for women. So, in the graph male and female ratings of water are quite similar (the points are close) but for beer they are very different (the male point is much higher than the female one).

Drink × sex interaction 2: wine vs. water, male vs. female. The second interaction term compares level 2 of drink (wine) to level 3 (water), contrasting male and female scores. There is no significant difference for this contrast, F(1, 18) = 2.34, p = 0.14, which tells us that the difference between ratings of wine compared to water in males is roughly the same as in females. Therefore, overall, the drink × sex interaction has shown up a difference between males and females in how they rate beer relative to water (regardless of the type of imagery used).

**The interaction between sex and imagery**

There was a significant interaction between the type of imagery used and the sex of the participant, F(1.93, 34.77) = 26.55, p < .001. This effect tells us that the type of imagery used in the advert had a different effect on men and women. We can use the estimated marginal means to determine the nature of this interaction (Output 7), which I have graphed also. The graph shows the average male (orange) and female (blue) ratings in each imagery condition ignoring the type of drink that was rated. Male and female ratings are very similar for positive and neutral imagery, but men seem to be less affected by negative imagery than women — regardless of the drink in the advert.

Imagery × sex interaction 1: positive vs. neutral, male vs. female. The first interaction term looks at level 1 of imagery (positive) compared to level 3 (neutral), comparing male and female scores. This contrast is not significant, F(1, 18) = 0.02, p = 0.886. This result tells us that ratings of drinks presented with positive imagery (relative to those presented with neutral imagery) were equivalent for males and females. This finding represents the fact that in the graph of this interaction the orange and blue points for both the positive and neutral conditions overlap (therefore male and female responses were the same).

Imagery × sex interaction 2: negative vs. neutral, male vs. female. The second interaction term looks at level 2 of imagery (negative) compared to level 3 (neutral), comparing male and female scores.
This contrast is highly significant, F(1, 18) = 34.13, p < .001. This result tells us that the difference between ratings of drinks paired with negative imagery compared to neutral was different for men and women. Looking at the interaction graph, this finding represents the fact that for men, ratings of drinks paired with negative imagery were relatively similar to ratings of drinks paired with neutral imagery (the orange dots have a fairly similar vertical position). However, if you look at the female ratings, then drinks were rated much less favourably when presented with negative imagery than when presented with neutral imagery (the blue dot for negative imagery is much lower than the one for neutral imagery). Overall, the imagery × sex interaction has shown up a difference between males and females in terms of their ratings of drinks presented with negative imagery compared to neutral; specifically, men seem less affected by negative imagery.

**The interaction between drink and imagery**

The interpretation of this interaction is the same as for the two-way design that we analysed in the chapter in the book on repeated measures designs. You may remember that the interaction reflected the fact that negative imagery has a different effect than both positive and neutral imagery. The graph shows that the pattern of response across drinks was similar when positive and neutral imagery were used (blue and grey lines). That is, ratings were positive for beer, they were slightly higher for wine and they were lower for water. The fact that the (blue) line representing positive imagery is higher than the neutral (grey) line indicates that positive imagery produced higher ratings than neutral imagery across all drinks. The red line (representing negative imagery) shows a different pattern: ratings were lowest for wine and water but quite high for beer.

**The interaction between sex, drink and imagery**

The three-way interaction tells us whether the drink × imagery interaction is the same for men and women (i.e., whether the combined effect of the type of drink and the imagery used is the same for male participants as for female ones). There is a significant three-way drink × imagery × sex interaction, F(3.25, 58.52) = 3.70, p = .014. The nature of this interaction is shown up in the means (Output 8), which are also plotted below. The male graph shows that when positive imagery is used (blue line), men generally rated all three drinks positively (the blue line is higher than the other lines for all drinks). This pattern is true of women also (the line representing positive imagery is above the other two lines). When neutral imagery is used (grey line), men rate beer very highly, but rate wine and water fairly neutrally. Women, on the other hand, rate beer and water neutrally, but rate wine more positively (in fact, the pattern of the positive and neutral imagery lines shows that women generally rate wine slightly more positively than water and beer). So, for neutral imagery men still rate beer positively, and women still rate wine positively. For the negative imagery (red line), the men still rate beer very highly, but give low ratings to the other two types of drink. So, regardless of the type of imagery used, men rate beer very positively (if you look at the graph you'll note that ratings for beer are virtually identical for the three types of imagery). Women, however, rate all three drinks very negatively when negative imagery is used.
The three-way interaction is, therefore, likely to reflect that men seem fairly immune to the effects of imagery when beer is being used as a stimulus, whereas women are not. The contrasts will show up exactly what this interaction represents.

Drink × imagery × sex interaction 1: beer vs. water, positive vs. neutral imagery, male vs. female. The first interaction term compares level 1 of drink (beer) to level 3 (water), when positive imagery (level 1) is used compared to neutral (level 3) in males compared to females, F(1, 18) = 2.33, p = .144. The non-significance of this contrast tells us that the difference in ratings when positive imagery is used compared to neutral imagery is roughly equal when beer is used as a stimulus and when water is used, and these differences are equivalent in male and female participants. In terms of the interaction graph it means that the distance between the blue and grey points in the beer condition is the same as the distance between the blue and grey points in the water condition and that these distances are equivalent in men and women.

Drink × imagery × sex interaction 2: beer vs. water, negative vs. neutral imagery, male vs. female. The second interaction term looks at level 1 of drink (beer) compared to level 3 (water), when negative imagery (level 2) is used compared to neutral (level 3). This contrast is significant, F(1, 18) = 5.59, p = 0.029. This result tells us that the difference in ratings between beer and water when negative imagery is used (compared to neutral imagery) is different between men and women. In terms of the interaction graph it means that the distance between the red and grey points in the beer condition relative to the same distance for water was different in men and women.

Drink × imagery × sex interaction 3: wine vs. water, positive vs. neutral imagery, male vs. female. The third interaction term looks at level 2 of drink (wine) compared to level 3 (water), when positive imagery (level 1) is used compared to neutral (level 3) in males compared to females. This contrast is non-significant, F(1, 18) = 0.03, p = 0.877. This result tells us that the difference in ratings when positive imagery is used compared to neutral imagery is roughly equal when wine is used as a stimulus and when water is used, and these differences are equivalent in male and female participants. In terms of the interaction graph it means that the distance between the blue and grey points in the wine condition is the same as the corresponding distance in the water condition and that these distances are equivalent in men and women.

Drink × imagery × sex interaction 4: wine vs. water, negative vs. neutral imagery, male vs. female. The final interaction term looks at level 2 of drink (wine) compared to level 3 (water), when negative imagery (level 2) is used compared to neutral (level 3). This contrast is very close to significance, F(1, 18) = 4.38, p = .051. This result tells us that the difference in ratings between wine and water when negative imagery is used (compared to neutral imagery) is different between men and women (although this difference has not quite reached significance). In terms of the interaction graph it means that the distance between the red and grey points in the wine condition relative to the same distance for water was different (depending on how you interpret a p of 0.051) in men and women. It is noteworthy that this contrast was close to the 0.05 threshold. At best, this result is suggestive and not definitive.
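If you want effect sizes for these interaction contrasts, you can reuse the F-to-r conversion used earlier in these answers (each contrast has 1 and 18 degrees of freedom). A minimal base-R sketch using the F-values reported above; the labels are mine:

```r
# Convert the four three-way interaction contrasts to r (residual df = 18)
f_to_r <- function(f, df_r) sqrt(f / (f + df_r))
f_vals <- c(beer_pos_vs_neutral = 2.33,   # beer vs. water, positive vs. neutral
            beer_neg_vs_neutral = 5.59,   # beer vs. water, negative vs. neutral
            wine_pos_vs_neutral = 0.03,   # wine vs. water, positive vs. neutral
            wine_neg_vs_neutral = 4.38)   # wine vs. water, negative vs. neutral
round(f_to_r(f_vals, 18), 2)              # roughly .34, .49, .04, .44
```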
Text messaging and Twitter encourage communication using abbreviated forms of words (if u no wat I mean). A researcher wanted to see the effect this had on children's understanding of grammar. One group of 25 children was encouraged to send text messages on their mobile phones over a six-month period. A second group of 25 was forbidden from sending text messages for the same period (to ensure adherence, this group were given armbands that administered painful shocks in the presence of a phone signal). The outcome was a score on a grammatical test (as a percentage) that was measured both before and after the experiment. The data are in the file TextMessages.sav. Does using text messages affect grammar?

The line chart shows the mean grammar score (and 95% confidence interval) before and after the experiment for the text message group and the controls. It's clear that in the text message group grammar scores went down over the six-month period whereas they remained fairly static for the controls.

The basic analysis is achieved by following the general instructions and setting up the initial dialog boxes as follows (for more detailed instructions see the book):

Completed dialog box

The output shows the table of descriptive statistics; the table has means at baseline split according to whether the people were in the text messaging group or the control group, and then the means for the two groups at follow-up. These means correspond to those plotted in the graph above.

For a mixed design we should check the assumptions of sphericity and homogeneity of variance. In this case, we have only two levels of the repeated measure so the assumption of sphericity does not apply. Levene's test produces a different test for each level of the repeated-measures variable (see Output). The homogeneity assumption has to hold for every level of the repeated-measures variable. At both levels of time, Levene's test is non-significant (p = 0.77 before the experiment and p = .069 after the experiment). To the extent that Levene's is useful in testing this assumption we might conclude that the assumption has not been broken (although we might want to take a closer look for the follow-up scores).

The main effect of time is significant, so we can conclude that grammar scores were significantly affected by the time at which they were measured. The exact nature of this effect is easily determined because there were only two points in time (and so this main effect is comparing only two means). The means show that grammar scores were higher before the experiment than at follow-up: before the experimental manipulation scores were higher than after, meaning that the manipulation had the net effect of significantly reducing grammar scores. This main effect seems interesting until you consider that these means include both text messagers and controls. There are three possible reasons for the drop in grammar scores: (1) the text messagers got worse and are dragging down the mean after the experiment; (2) the controls somehow got worse; or (3) the whole group just got worse and it had nothing to do with whether the children text-messaged or not. Until we examine the interaction, we won't see which of these is true.

The main effect of group has a p-value of .09, which is just above the critical value of .05. We should conclude that there was no significant main effect on grammar scores of whether children text-messaged or not.
Again, this effect seems interesting enough, and mobile phone companies might certainly choose to cite it as evidence that text messaging does not affect your grammatical ability. However, remember that this main effect ignores the time at which grammatical ability is measured. It just means that if we took the average grammar score for text messagers (that's including their score both before and after they started using their phone), and compared this to the mean of the controls (again including scores before and after) then these means would not be significantly different. The graph shows that when you ignore the time at which grammar was measured, the controls have slightly better grammar than the text messagers, but not significantly so. Main effects are not always that interesting and should certainly be viewed in the context of any interaction effects.

The interaction effect in this example is shown by the F-statistic in the row labelled **Time*Group**, and because the p-value is .047, which is just less than the criterion of .05, we might conclude that there is a significant interaction between the time at which grammar was measured and whether or not children were allowed to text-message within that time. The mean ratings in all conditions help us to interpret this effect. Looking at the earlier interaction graph, we can see that although grammar scores fell in controls, the drop was much more marked in the text messagers; so, text messaging does seem to ruin your ability at grammar compared to controls.

We can report the three effects from this analysis as follows:

* The results show that the grammar ratings at the end of the experiment were significantly lower than those at the beginning of the experiment, F(1, 48) = 15.46, p < .001, r = .61.
* The main effect of group on the grammar scores was non-significant, F(1, 48) = 2.99, p = .09, r = .27. This indicated that when the time at which grammar was measured is ignored, the grammar ability in the text message group was not significantly different from the controls.
* The time × group interaction was significant, F(1, 48) = 4.17, p = .047, r = .34, indicating that the change in grammar ability in the text message group was significantly different from the change in the control group.

These findings indicate that although there was a natural decay of grammatical ability over time (as shown by the controls) there was a much stronger effect when participants were encouraged to use text messages. This shows that using text messages accelerates the inevitable decline in grammatical ability.

A researcher hypothesized that reality TV show contestants start off with personality disorders that are exacerbated by being forced to spend time with people as attention-seeking as them (see Chapter 1). To test this hypothesis, she gave eight contestants a questionnaire measuring personality disorders before and after they entered the show. A second group of eight people were given the questionnaires at the same time; these people were short-listed to go on the show, but never did. The data are in RealityTV.sav. Does entering a reality TV competition give you a personality disorder?

The plot shows that in the contestant group the mean personality disorder score increased from time 1 (before entering the house) to time 2 (after leaving the house). However, in the no contestant group the mean personality disorder score decreased over time.
The descriptive statistics show the mean personality disorder symptom (PDS) scores before going on reality TV split according to whether the people were a contestant or not, and then the means for the two groups after leaving the house. These means correspond to those plotted above.

For sphericity to be an issue we need at least three conditions. We have only two conditions here so sphericity does not need to be tested. We do need to check the homogeneity of variance assumption. Levene's test produces a different test for each level of the repeated-measures variable. In mixed designs, the homogeneity assumption has to hold for every level of the repeated-measures variable. At both levels of time, Levene's test is non-significant (p = 0.061 before entering the show and p = .088 after leaving). This means the assumption has not been significantly broken (but it was quite close to being a problem).

The main effect of time is not significant, so we can conclude that PDS scores were not significantly affected by the time at which they were measured. The means show that symptom levels were comparable before entering the show (M = 64.06) and after (M = 65.13).

The main effect of contestant has a p-value of .43, which is above the critical value of .05. Therefore, most people would conclude that there was no significant main effect on PDS scores of whether the person was a contestant or not. The means show that when you ignore the time at which PDS was measured, the contestants and shortlist are not significantly different.

The interaction effect in this example is shown by the F-statistic in the row labelled **time*contestant** (see earlier), and because the p-value is .018, which is less than the criterion of .05, most people would conclude that there is a significant interaction between the time at which PDS was measured and whether or not the person was a contestant. The mean ratings in all conditions (and on the interaction graph) help us to interpret this effect. The significant interaction seems to indicate that for controls PDS scores went down (slightly) from before entering the show to after leaving it, but for contestants the opposite is true: PDS scores increased over time.

We can report the three effects from this analysis as follows:

* The main effect of group was not significant, F(1, 14) = 0.67, p = .43, indicating that across both time points personality disorder symptoms were similar in reality TV contestants and shortlist controls.
* The main effect of time was not significant, F(1, 14) = 0.09, p = .77, indicating that across all participants personality disorder symptoms were similar before the show and after it.
* The time × group interaction was significant, F(1, 14) = 7.15, p = .018, indicating that although personality disorder symptoms decreased for shortlist controls from before the show to after, scores increased for the contestants.

Angry Birds is a video game in which you fire birds at pigs. Some daft people think this sort of thing makes people more violent. A (fabricated) study was set up in which people played Angry Birds and a control game (Tetris) over a two-year period (one year per game). They were put in a pen of pigs for a day before the study, and after 1 month, 6 months and 12 months. Their violent acts towards the pigs were counted. Does playing Angry Birds make people more violent to pigs compared to a control game? (Angry Pigs.sav)

To answer this question we need to conduct a 2 (BaselineGame: Angry Birds vs.
Tetris) × 4 (Time: Baseline, 1 month, 6 months and 12 months) two-way mixed ANOVA with repeated measures on the time variable. Follow the general instructions for this chapter. Your completed dialog boxes should look like this:

The plot of the angry pigs data shows that when participants played Tetris in general their aggressive behaviour towards pigs decreased over time, but when participants played Angry Birds, their aggressive behaviour towards pigs increased over time. The output shows the means for the interaction between Game and Time. These values correspond with those plotted above.

When we use a mixed design we have to check both the assumptions of sphericity and homogeneity of variance. Mauchly's test for our repeated-measures variable Time has a value in the column labelled Sig of .170, which is larger than the cut-off of .05, therefore it is non-significant. Levene's test produces a different test for each level of the repeated-measures variable. In mixed designs, the homogeneity assumption has to hold for every level of the repeated-measures variable. At each level of the variable Time, Levene's test is significant (p < .05 in every case). This means the assumption has been broken.

The main effect of Game was significant, indicating that (ignoring the time at which the aggression scores were measured), the type of game being played significantly affected participants' aggression towards pigs. The main effect of Time was also significant, so we can conclude that (ignoring the type of game being played), aggression was significantly different at different points in time. However, the effect that we are most interested in is the Time × Game interaction, which was also significant. This effect tells us that changes in aggression scores over time were different when participants played Tetris compared to when they played Angry Birds. Looking at the graph, we can see that for Angry Birds, aggression scores increase over time, whereas for Tetris, aggression scores decreased over time.

To investigate the exact nature of this interaction effect we can look at some contrasts. I chose to use the repeated contrast, which compares aggression scores for the two games at each time point against the previous time point. We are most interested in the Time × Game interaction. We can see that the first contrast (Level 1 vs. Level 2) was significant, p = .034, indicating that the change in aggression scores from the baseline to 1 month was significantly different for Tetris and Angry Birds. If we look at the plot, we can see that on average, aggression scores decreased from baseline to 1 month when participants played Tetris. However, aggression scores increased from baseline to 1 month when participants played Angry Birds. The second contrast (Level 2 vs. Level 3) was non-significant (p = .073), indicating that the change in aggression scores from 1 month to 6 months was similar when participants played Tetris compared to when they played Angry Birds. Looking at the plot, we can see that aggression scores increased for Angry Birds but decreased for Tetris – according to the contrast, not significantly so. The final contrast (Level 3 vs. Level 4) was significant, p = .002. Again looking at the plot, we can see that for Angry Birds aggression scores increased dramatically from 6 to 12 months, whereas for Tetris they stayed fairly stable.
We can report the three effects from this analysis as follows:

* The results show that the aggression scores were significantly higher when participants played Angry Birds compared to when they played Tetris, F(1, 82) = 12.87, p = .001.
* The main effect of Time on the aggression scores was significant, F(3, 246) = 8.92, p < .001. This indicated that when the game which participants played is ignored, aggressive behaviour was significantly different across the four time points.
* The Time × Game interaction was significant, F(3, 246) = 17.57, p < .001, indicating that the change in aggression scores when participants played Tetris was significantly different from the change in aggression scores when they played Angry Birds. Looking at the line graph, we can see that these findings indicate that when participants played Tetris, their aggressive behaviour towards pigs significantly decreased over time, whereas when they played Angry Birds their aggressive behaviour towards pigs significantly increased over time.

A different study was conducted with the same design as in Task 4. The only difference was that the participants' violent acts in real life were monitored before the study, and after 1 month, 6 months and 12 months. Does playing Angry Birds make people more violent in general compared to a control game? (Angry Real.sav)

The plot below shows the mean aggressive acts after playing the two games. Compare this plot with the one in the previous task and you can see that aggressive behaviour in the real world was more erratic for the two video games than aggressive behaviour towards pigs. For Tetris, aggressive behaviour in the real world increased from time 1 (baseline) to time 3 (6 months) and then decreased from time 3 (6 months) to time 4 (12 months). For Angry Birds, aggressive behaviour in the real world initially increased from baseline to 1 month, it then decreased from 1 month to 6 months and then dramatically increased from 6 months to 12 months. The plot also shows that the means are very similar for the two games at each time point.

To fit the model follow the instructions for the previous task. Not that I particularly recommend basing your life decisions on Mauchly's and Levene's tests, but Mauchly's test is not significant (p = 0.808) and Levene's is similarly non-significant for all but the final time point. More importantly (for sphericity), the estimates themselves are effectively 1, indicating no deviation from sphericity.

The remaining two outputs show the effects in the model. The main effect of Game is non-significant, indicating that (ignoring the time at which the aggression scores were measured), the type of game being played did not significantly affect participants' aggression in the real world. The main effect of Time is also non-significant, so we can conclude that (ignoring the type of game being played), aggression was not significantly different at different points in time. The effect that we are most interested in is the Time × Game interaction, which like the main effects is non-significant. This effect tells us that changes in aggression scores over time were not significantly different when participants played Tetris compared to when they played Angry Birds. Because none of the effects were significant it doesn't make sense to conduct any contrasts. Therefore, we can conclude that playing Angry Birds does not make people more violent in general, just towards pigs.
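If you like to sanity-check mixed designs like the two Angry Birds analyses outside SPSS, one option is the afex package in R. The sketch below is mine, not part of the official answers: it fabricates a small long-format data set purely so the code runs, and assumes hypothetical column names (id, game, time, aggression); with real data you would simply export the .sav file to long format with those columns.

```r
library(afex)   # aov_ez(); within-subject effects get a Greenhouse-Geisser correction by default

# Fabricated long-format data so the sketch is runnable:
# 20 participants (10 per game), each measured at 4 time points
set.seed(1)
angry_long <- expand.grid(id = factor(1:20),
                          time = c("baseline", "1m", "6m", "12m"))
angry_long$game <- ifelse(as.numeric(angry_long$id) <= 10, "Tetris", "Angry Birds")
angry_long$aggression <- rnorm(nrow(angry_long), mean = 10, sd = 2)

# Mixed design: 'game' between subjects, 'time' within subjects
fit <- aov_ez(id = "id", dv = "aggression", data = angry_long,
              between = "game", within = "time")
fit   # ANOVA table for game, time and the game x time interaction
```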
My wife believes that she has received fewer friend requests from random men on Facebook since she changed her profile picture to a photo of us both. Imagine we took 40 women who had profiles on a social networking website; 17 of them had a relationship status of 'single' and the remaining 23 had their status as 'in a relationship' (relationship_status). We asked these women to set their profile picture to a photo of them on their own (alone) and to count how many friend requests they got from men over 3 weeks, then to switch it to a photo of them with a man (couple) and record their friend requests from random men over 3 weeks. Fit a model to see if friend requests are affected by relationship status and type of profile picture (ProfilePicture.sav).

We need to run a 2 (relationship_status: single vs. in a relationship) × 2 (photo: couple vs. alone) mixed ANOVA with repeated measures on the second variable. Follow the general instructions for this chapter. Your completed dialog boxes should look like this:

The plot below shows the two-way interaction between relationship status and profile picture. It shows that in both photo conditions, single women received more friend requests than women who were in a relationship. The number of friend requests increased in both single women and those who were in a relationship when they displayed a profile picture of themselves alone compared to with a partner. However, for single women this increase was greater than for women who were in a relationship.

We have only two repeated-measures conditions here so sphericity is not an issue (see the book). Levene's test shows no heterogeneity of variance (although in such a small sample it will be hideously underpowered to detect a problem).

The main effect of relationship_status is significant, so we can conclude that, ignoring the type of profile picture, the number of friend requests was significantly affected by the relationship status of the woman. The exact nature of this effect is easily determined because there were only two levels of relationship status (and so this main effect is comparing only two means). Looking at the estimated marginal means we can see that the number of friend requests was significantly higher for single women (M = 5.94) compared to women who were in a relationship (M = 4.47).

The main effect of Profile_picture is also significant. Therefore, we can conclude that when ignoring relationship status, there was a significant main effect of whether the person was alone in their profile picture or with a partner on the number of friend requests. Looking at the estimated marginal means for the profile picture variable, we can see that the number of friend requests was significantly higher when women were alone in their profile picture (M = 6.78) than when they were with a partner (M = 3.63). Note: we know that 1 = 'in a couple' and 2 = 'alone' because this is how we coded the levels of the profile picture variable in the define dialog box (see above).

The interaction effect is the effect that we are most interested in and it is also significant (p = .010 in one of the outputs above). We would conclude that there is a significant interaction between the relationship status of women and whether they had a photo of themselves alone or with a partner. The interaction graph (see earlier) helps us to interpret this effect.
The significant interaction seems to indicate that when displaying a photo of themselves alone rather than with a partner, the number of friend requests increases in both women in a relationship and single women. However, for single women this increase is greater than for women who are in a relationship.

We can report the three effects from this analysis as follows:

* The main effect of relationship status was significant, F(1, 38) = 16.29, p < .001, indicating that single women received more friend requests than women who were in a relationship, regardless of their type of profile picture.
* The main effect of profile picture was significant, F(1, 38) = 114.77, p < .001, indicating that across all women, the number of friend requests was greater when displaying a photo alone rather than with a partner.
* The relationship status × profile picture interaction was significant, F(1, 38) = 7.41, p = .010, indicating that although the number of friend requests increased in all women when they displayed a photo of themselves alone compared to when they displayed a photo of themselves with a partner, this increase was significantly greater for single women than for women who were in a relationship.

Labcoat Leni described a study by Johns, Hargrave, and Newton-Fisher (2012) in which they reasoned that if red was a proxy signal to indicate sexual proceptivity then men should find red female genitalia more attractive than other colours. They also recorded the men's sexual experience (Partners) as 'some' or 'very little'. Fit a model to test whether attractiveness was affected by genitalia colour (PalePink, LightPink, DarkPink, Red) and sexual experience (Johns et al. (2012).sav). Look at page 3 of Johns et al. to see how to report the results.

We need to run a 2 (sexual experience: very little vs. some) × 4 (genital colour: pale pink, light pink, dark pink, red) mixed ANOVA with repeated measures on the second variable. Follow the general instructions for this chapter. Your completed dialog boxes should look like this:

Because the theory predicted that red should be the most attractive colour I also asked for a simple contrast comparing each colour to red:

The plot below shows the two-way interaction between sexual experience and colour. It shows that overall attractiveness ratings were higher for pink colours than red and this appears relatively unaffected by sexual experience.

Mauchly's test is significant (and the estimates of sphericity are less than 1), suggesting that we should use Greenhouse-Geisser corrected values. The authors actually report the multivariate tests, which is another appropriate way to deal with a lack of sphericity (because multivariate tests do not assume it). Levene's test shows no heterogeneity of variance (although in such a small sample it will be hideously underpowered to detect a problem).

The main effect of colour is significant, so we can conclude that, ignoring sexual experience, attractiveness ratings were significantly affected by the genital colour. We'll explore this below. The colour × Partners interaction is not significant, suggesting that the effect of colour is not significantly moderated by sexual experience (p = .121).
The authors actually report the multivariate tests for the main effect of colour, which are reproduced here:

The contrasts for the main effect of colour show that attractiveness ratings were significantly lower when the colour was red compared to dark pink, F(1, 38) = 15.47, p < .001, light pink, F(1, 38) = 22.82, p < .001, and pale pink, F(1, 38) = 17.44, p < .001. This is contrary to the theory, which suggested that red would be rated as more attractive than other colours.

The main effect of sexual experience was not significant, F(1, 38) = 0.48, p = .492. Therefore, we can conclude that when ignoring genital colour, attractiveness ratings were not significantly different for those with 'some' compared to 'very little' sexual experience.

A clinical psychologist decided to compare his patients against a normal sample. He observed 10 of his patients as they went through a normal day. He also observed 10 lecturers at the University of Sussex. He measured all participants using two outcome variables: how many chicken impersonations they did, and how good their impersonations were (as scored out of 10 by an independent farmyard noise expert). Use MANOVA and discriminant function analysis to find out whether these variables could be used to distinguish manic psychotic patients from those without the disorder (Chicken.sav).

It seems that manic psychotics and Sussex lecturers do pretty similar numbers of chicken impersonations (lecturers do slightly fewer actually, but they are of a higher quality). Box's test of the assumption of equality of covariance matrices tests the null hypothesis that the variance-covariance matrices are the same in both groups. For these data p is .000 (which is less than .05), hence, the covariance matrices are significantly different (the assumption is broken). However, because group sizes are equal we can ignore this test because Pillai's trace should be robust to this violation (fingers crossed!).

All test statistics for the effect of group are significant with p = .032 (which is less than .05). From this result we should probably conclude that the groups differ significantly in the quality and quantity of their chicken impersonations; however, this effect needs to be broken down to find out exactly what's going on.

Levene's test should be non-significant for all dependent variables if the assumption of homogeneity of variance has been met. The results for these data clearly show that the assumption has been met for the quantity of chicken impersonations but has been broken for the quality of impersonations. This might dent our confidence in the reliability of the univariate tests to follow (especially given the small sample size, because this test will have low power to detect a difference, so the fact that it has detected one suggests that the variances are very dissimilar).

The univariate test of the main effect of group contains separate F-statistics for quality and quantity of chicken impersonations, respectively. The values of p indicate that there was a non-significant difference between groups in terms of both (p is greater than .05 in both cases). The multivariate test statistics led us to conclude that the groups did differ in terms of the quality and quantity of their chicken impersonations yet the univariate results contradict this! We don't need to look at contrasts because the univariate tests were non-significant (and in any case there were only two groups and so no further comparisons would be necessary).
Instead, to see how the dependent variables interact, we need to carry out a discriminant function analysis (DFA). The initial statistics from the DFA tell us that there was only one variate (because there are only two groups) and this variate is significant. Therefore, the group differences shown by the MANOVA can be explained in terms of one underlying dimension.

The standardized discriminant function coefficients tell us the relative contribution of each variable to the variates. Both quality and quantity of impersonations have similar-sized coefficients, indicating that they have equally strong influence in discriminating the groups. However, they have the opposite sign, which suggests that group differences are explained by the difference between the quality and quantity of impersonations.

The variate centroids for each group (Output 8) confirm that variate 1 discriminates the two groups because the manic psychotics have a negative coefficient and the Sussex lecturers have a positive one. There won't be a combined-groups plot because there is only one variate.

Overall we could conclude that manic psychotics are distinguished from Sussex lecturers in terms of the difference between the pattern of results for quantity of impersonations compared to quality. If we look at the means we can see that manic psychotics produce slightly more impersonations than Sussex lecturers (but remember from the non-significant univariate tests that this isn't sufficient, alone, to differentiate the groups), but the lecturers produce impersonations of a higher quality (but again remember that quality alone is not enough to differentiate the groups). Therefore, although the manic psychotics and Sussex lecturers produce similar numbers of impersonations of similar quality (see univariate tests), if we combine the quality and quantity we can differentiate the groups.

A news story claimed that children who lie would become successful citizens. I was intrigued because although the article cited a lot of well-conducted work by Dr. Kang Lee that shows that children lie, I couldn't find anything in that research that supported the journalist's claim that children who lie become successful citizens. Imagine a Huxleyesque parallel universe in which the government was daft enough to believe the contents of this newspaper story and decided to implement a systematic programme of infant conditioning. Some infants were trained not to lie, others were brought up as normal, and a final group was trained in the art of lying. Thirty years later, they collected data on how successful these children were as adults. They measured their salary, and two indices out of 10 (10 = as successful as it could possibly be, 0 = better luck in your next life) of how successful their family and work life was. Use MANOVA and discriminant function analysis to find out whether lying really does make you a better citizen (Lying.sav).

The means show that children encouraged to lie landed the best and highest-paid jobs, but had the worst family success compared to the other two groups. Children who were trained not to lie had great family lives but not so great jobs compared to children who were brought up to lie and children who experienced normal parenting. Finally, children who were in the normal parenting group (if that exists!) were pretty middle of the road compared to the other two groups.

Box's test is non-significant, p = .345 (which is greater than .05), hence the covariance matrices are roughly equal as assumed.
In the main table of results the column of real interest is the one containing the significance values of the F-statistics. For these data, Pillai's trace (p = .002), Wilks's lambda (p = .001), Hotelling's trace (p < .001) and Roy's largest root (p < .001) all reach the criterion for significance at the .05 level. Therefore, we can conclude that the type of lying intervention had a significant effect on success later on in life. The nature of this effect is not clear from the multivariate test statistic: it tells us nothing about which groups differed from which, or about whether the effect of lying intervention was on work life, family life, salary, or a combination of all three. To determine the nature of the effect, a discriminant analysis would be helpful, but for some reason SPSS provides us with univariate tests instead.

Levene's test should be non-significant for all dependent variables if the assumption of homogeneity of variance has been met. We can see here that the assumption has been met (p > .05 in all cases), which strengthens the case for assuming that the multivariate test statistics are robust.

The F-statistics for each univariate ANOVA and their significance values are listed in the columns labelled F and Sig. These values are identical to those obtained if one-way ANOVA was conducted on each dependent variable independently. As such, MANOVA offers only hypothetical protection against inflated Type I error rates: there is no real-life adjustment made to the values obtained. The values of p indicate that there was a significant difference between intervention groups in terms of salary (p = .049), family life (p = .004), and work life (p = .036). We should conclude that the type of intervention had a significant effect on the later success of children. However, this effect needs to be broken down to find out exactly what's going on.

The contrasts show that there were significant differences in salary (p = .016), family success (p = .002) and work success (p = .016) when comparing children who were prevented from lying (level 1) with those who were encouraged to lie (level 3). Looking back at the means, we can see that children who were trained to lie had significantly higher salaries, significantly better work lives, but significantly less successful family lives when compared to children who were prevented from lying. When we compare children who experienced normal parenting (level 2) with those who were encouraged to lie (level 3), there were no significant differences in any of the three life success outcome variables (p > .05 in all cases).

In my opinion discriminant analysis is the best method for following up a significant MANOVA (see the book chapter) and we will do this next. The covariance matrices are made up of the variances of each dependent variable for each group. The values in this output are useful because they give us some idea of how the relationship between dependent variables changes from group to group. For example, in the lying prevented group, all the dependent variables are positively related, so as one of the variables increases (e.g., success at work), the other two variables (family life and salary) increase also. In the normal parenting group, success at work is positively related to both family success and salary. However, salary and family success are negatively related, so as salary increases family success decreases and vice versa.
Finally, in the lying encouraged group, salary has a positive relationship with both work success and family success, but success at work is negatively related to family success. It is important to note that these matrices don't tell us about the substantive importance of the relationships because they are unstandardized - they merely give a basic indication.

The eigenvalues for each variate are converted into percentage of variance accounted for, and the first variate accounts for 96.1% of variance compared to the second variate, which accounts for only 3.9%. This table also shows the canonical correlation, which we can square to use as an effect size (just like \(R^2\), which we have encountered in the linear model).

The next output shows the significance tests of both variates ('1 through 2' in the table), and the significance after the first variate has been removed ('2' in the table). So, effectively we test the model as a whole, and then peel away variates one at a time to see whether what's left is significant. In this case with two variates we get only two steps: the whole model, and then the model after the first variate is removed (which leaves only the second variate). When both variates are tested in combination Wilks's lambda has the same value (.536), degrees of freedom (6) and significance value (.001) as in the MANOVA. The important point to note from this table is that the two variates significantly discriminate the groups in combination (p = .001), but the second variate alone is non-significant, p = .543. Therefore, the group differences shown by the MANOVA can be explained in terms of two underlying dimensions in combination.

The next two outputs are the most important for interpretation. The coefficients in these tables tell us the relative contribution of each variable to the variates. If we look at variate 1 first, family life has the opposite effect to work life and salary (work life and salary have positive relationships with this variate, whereas family life has a negative relationship). Given that these values (in both tables) can vary between −1 and 1, we can also see that family life has the strongest relationship, work life also has a strong relationship, whereas salary has a relatively weaker relationship to the first variate. The first variate, then, could be seen as one that differentiates family life from work life and salary (it affects family life in the opposite way to salary and work life). Salary has a very strong positive relationship to the second variate, family life has only a weak positive relationship and work life has a medium negative relationship to the second variate. This tells us that this variate represents something that affects salary and to a lesser degree family life in a different way than work life. Remembering that ultimately these variates are used to differentiate groups, we could say that the first variate differentiates groups by some factor that affects family differently than work and salary, whereas the second variate differentiates groups on some dimension that affects salary (and to a small degree family life) and work in different ways.

We can also use a combined-groups plot. This graph plots the variate scores for each person, grouped according to the experimental condition to which that person belonged. The graph (Figure 7) tells us that (look at the big squares) variate 1 discriminates the lying prevented group from the lying encouraged group (look at the horizontal distance between these centroids).
Back to the combined-groups plot: the second variate differentiates the normal parenting group from the lying prevented and lying encouraged groups (look at the vertical distances), but this difference is not as dramatic as for the first variate. Remember that the variates significantly discriminate the groups in combination (i.e., when both are considered).

Using Pillai's trace, there was a significant effect of lying on future success, V = 0.48, F(6, 76) = 3.98, p = .002. Separate univariate ANOVAs on the outcome variables revealed significant effects of lying on salary, F(2, 39) = 3.27, p = .049, family, F(2, 39) = 6.37, p = .004, and work, F(2, 39) = 3.62, p = .036. The MANOVA was followed up with discriminant analysis, which revealed two discriminant functions. The first explained 96.1% of the variance, canonical \(R^2\) = .45, whereas the second explained only 3.9%, canonical \(R^2\) = .03. In combination these discriminant functions significantly differentiated the lying intervention groups, Λ = .536, \(\chi^2\)(6) = 23.70, p = .001, but removing the first function indicated that the second function did not significantly differentiate the intervention groups, Λ = .968, \(\chi^2\)(2) = 1.22, p = .543. The correlations between outcomes and the discriminant functions revealed that salary loaded more highly onto the second function (r = .94) than the first (r = .40); family life loaded more highly on the first function (r = .84) than the second function (r = .23); work life loaded fairly evenly onto both functions but in opposite directions (r = .62 for the first function and r = -.53 for the second). The discriminant function plot showed that the first function discriminated the lying encouraged group from the lying prevented group, and the second function differentiated the normal parenting group from the two interventions.

I was interested in whether students' knowledge of different aspects of psychology improved throughout their degree (Psychology.sav). I took a sample of first-years, second-years and third-years and gave them five tests (scored out of 15) representing different aspects of psychology: Exper (experimental psychology such as cognitive and neuropsychology); Stats (statistics); Social (social psychology); Develop (developmental psychology); Person (personality). (1) Determine whether there are overall group differences along these five measures. (2) Interpret the scale-by-scale analyses of group differences. (3) Select contrasts that test the hypothesis that second and third years will score higher than first years on all scales. (4) Select post hoc tests and compare these results to the contrasts. (5) Carry out a discriminant function analysis including only those scales that revealed group differences for the contrasts. Interpret the results.

The first output contains the overall and group means and standard deviations for each dependent variable in turn. Box's test has a p = .06 (which is greater than .05); hence, the covariance matrices are roughly equal and the assumption is tenable. (I mean, it's probably not, because it is close to significance in a relatively small sample.) The group effect tells us whether the scores from different areas of psychology differ across the three years of the degree programme. For these data, Pillai's trace (p = .02), Wilks's lambda (p = .012), Hotelling's trace (p = .007) and Roy's largest root (p = .01) all reach the criterion for significance at the .05 level.
From this result we should probably conclude that the profile of knowledge across different areas of psychology does indeed change across the three years of the degree. The nature of this effect is not clear from the multivariate test statistic.

Levene's test should be non-significant for all dependent variables if the assumption of homogeneity of variance has been met. The results for these data clearly show that the assumption has been met. This finding not only gives us confidence in the reliability of the univariate tests to follow, but also strengthens the case for assuming that the multivariate test statistics are robust. The univariate F-statistics for each of the areas of psychology indicate that there was a non-significant difference between student groups in all areas (p > .05 in each case). The multivariate test statistics led us to conclude that the student groups did differ significantly across the types of psychology, yet the univariate results contradict this (I really should stop making up data sets that do this!). We don't need to look at contrasts because the univariate tests were non-significant, and instead, to see how the dependent variables interact, we will carry out a DFA.

The initial statistics from the DFA tell us that only one of the variates is significant (the second variate is non-significant, p = .608). Therefore, the group differences shown by the MANOVA can be explained in terms of one underlying dimension. The standardized discriminant function coefficients tell us the relative contribution of each variable to the variates. Looking at the first variate, it's clear that statistics has the greatest contribution to it. Most interesting is that on the first variate, statistics and experimental psychology have positive weights, whereas social, developmental and personality have negative weights. This suggests that the group differences are explained by the difference between experimental psychology and statistics compared to other areas of psychology. The variate centroids for each group tell us that variate 1 discriminates the first years from second and third years because the first years have a negative value whereas the second and third years have positive values on the first variate.

The relationship between the variates and the groups is best illuminated using a combined-groups plot, which plots the variate scores for each person, grouped according to the year of their degree. In addition, the group centroids are indicated, which are the average variate scores for each group. The plot for these data confirms that variate 1 discriminates the first years from subsequent years (look at the horizontal distance between these centroids).

Overall we could conclude that different years are discriminated by different areas of psychology. In particular, it seems as though statistics and aspects of experimentation (compared to other areas of psychology) discriminate between first-year undergraduates and subsequent years. From the means, we could interpret this as first years struggling with statistics and experimental psychology (compared to other areas of psychology) but with their ability improving across the three years. However, for other areas of psychology, first years are relatively good but their abilities decline over the three years. Put another way, psychology degrees improve only your knowledge of statistics and experimentation.

For these tasks, access the factor analysis dialog boxes by selecting Analyze > Dimension Reduction > Factor ….
Simply select the variables you want to include in the analysis and drag them to the box labelled Variables. Use the book chapter to learn how to use the other options/dialog boxes. Rerun the analysis in this chapter using principal component analysis and compare the results to those in the chapter. (Set the iterations to convergence to 30.) Follow the instructions in the chapter, except that in the Extraction dialog box select Principal components in the drop-down menu labelled Method, as shown below: The question also suggests increasing the iterations to convergence to 30, and we do this in the Rotation dialog box as follows: Note that I have selected an oblique rotation (Direct Oblimin) because (as explained in the book) it is unrealistic to assume that components measuring different aspects of a psychological construct will be independent. Complete all of the other dialog boxes as in the book. All of the descriptives, correlation matrices, KMO tests and so on should be exactly the same as in the book (these will be unaffected by our choice of principal components as the method of dimension reduction). Follow the book to interpret these. Things start to get different at the point of extraction. The first part of the factor extraction process is to determine the linear components (note, linear components not factors) within the data (the eigenvectors) by calculating the eigenvalues of the R-matrix. There are as many components (eigenvectors) in the R-matrix as there are variables, but most will be unimportant. The eigenvalue tells us the importance of a particular vector. We can then apply criteria to determine which components to retain and which to discard. By default IBM SPSS Statistics uses Kaiser's criterion of retaining components with eigenvalues greater than 1 (see the book for details). The output lists the eigenvalues associated with each linear component before extraction, after extraction and after rotation. Before extraction, 23 linear components are identified within the data (i.e., the number of original variables). The eigenvalues represent the variance explained by a particular linear component, and this value is also displayed as the percentage of variance explained (so component 1 explains 31.696% of total variance). The first few components explain relatively large amounts of variance (especially component 1), whereas subsequent components explain only small amounts of variance. The four components with eigenvalues greater than 1 are then extracted. The eigenvalues associated with these components are again displayed (and the percentage of variance explained) in the columns labelled Extraction Sums of Squared Loadings. The values in this part of the table are the same as the values before extraction, except that the values for the discarded components are ignored (i.e., the table is blank after the fourth component). The final part of the table (labelled Rotation Sums of Squared Loadings) shows the eigenvalues of the components after rotation. Rotation has the effect of optimizing the component structure, and for these data it has equalized the relative importance of the four components. Before rotation, component 1 accounted for considerably more variance than the remaining three (31.696% compared to 7.560, 5.725 and 5.336%), but after rotation it accounts for only 16.219% of variance (compared to 14.523, 11.099 and 8.475%, respectively). The next output shows the communalities before and after extraction. 
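As a quick check on where these percentages come from (a worked example using the values above): in a principal component analysis of 23 standardized variables the eigenvalues sum to 23, because each variable contributes exactly one unit of variance. So component 1's eigenvalue must be roughly

\[
\lambda_1 \approx 0.31696 \times 23 \approx 7.3,
\]

and the four retained components together account for about \(31.696 + 7.560 + 5.725 + 5.336 \approx 50.3\%\) of the total variance - which, incidentally, matches the sum of the extraction communalities discussed next (11.573 out of 23, or 0.503 on average).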
Remember that the communality is the proportion of common variance within a variable. Principal component analysis works on the initial assumption that all variance is common; therefore, before extraction the communalities are all 1 (see the column labelled Initial). In effect, all of the variance associated with a variable is assumed to be common variance. Once components have been extracted, we have a better idea of how much variance is, in reality, common. The communalities in the column labelled Extraction reflect this common variance. So, for example, we can say that 43.5% of the variance associated with question 1 is common, or shared, variance. Another way to look at these communalities is in terms of the proportion of variance explained by the underlying components. Before extraction, there are as many components as there are variables, so all variance is explained by the components and communalities are all 1. However, after extraction some of the components are discarded and so some information is lost. The retained components cannot explain all of the variance present in the data, but they can explain some. The amount of variance in each variable that can be explained by the retained components is represented by the communalities after extraction.

The next output shows the component matrix before rotation. This matrix contains the loadings of each variable onto each component. By default IBM SPSS Statistics displays all loadings; however, if you followed the book you'd have requested that all loadings less than 0.3 be suppressed and so there are blank spaces. This doesn't mean that the loadings don't exist, merely that they are smaller than 0.3. This matrix is not particularly important for interpretation, but it is interesting to note that before rotation most variables load highly onto the first component.

At this stage IBM SPSS Statistics has extracted four components based on Kaiser's criterion. This criterion is accurate when there are fewer than 30 variables and communalities after extraction are greater than 0.7, or when the sample size exceeds 250 and the average communality is greater than 0.6. The communalities are shown in one of the outputs above and only one exceeds 0.7. The average of the communalities is 11.573/23 = 0.503. Therefore, on both grounds Kaiser's rule might not be accurate. However, you should consider the huge sample that we have, because the research into Kaiser's criterion gives recommendations for much smaller samples. The scree plot (below) looks very similar to the one in the book (where we used principal axis factoring). The book gives more explanation, but essentially we could probably justify retaining either two or four components. As in the chapter we'll stick with four.

The next outputs show the pattern and structure matrices, which contain the component loadings for each variable onto each component (see the chapter for an explanation of the differences between these matrices). Let's interpret the pattern matrix, because it's a bit more straightforward. Remember that we suppressed loadings less than 0.3, so the blank spaces represent loadings lower than this threshold. Also, the variables are listed in the order of the size of their component loadings (because we selected this option; by default they would be listed in the order you list them in the main dialog box). Compare this matrix to the unrotated solution from earlier.
Before rotation, most variables loaded highly onto the first component and the remaining components didn't really get a look in. The rotation of the component structure has clarified things considerably: there are four components and variables generally load highly onto only one component and less so on the others. As in the chapter, we can look for themes among questions that load onto the same component. Like the factor analysis in the chapter, the principal components analysis reveals that the initial questionnaire is composed of four subscales:

Fear of statistics: the questions that load highly on component 1 relate to statistics
Peer evaluation: the questions that load highly on component 2 relate to aspects of peer evaluation
Fear of computers: the questions that load highly on component 3 relate to using computers or IBM SPSS Statistics
Fear of mathematics: the questions that load highly on component 4 relate to mathematics

The final output is the component correlation matrix (comparable to the factor correlation matrix in the book). This matrix contains the correlation coefficients between components. Component 2 has fairly small relationships with all other components (the correlation coefficients are low), but all other components are interrelated to some degree (notably components 1 and 3 and components 3 and 4). The constructs measured appear to be correlated. This dependence between components suggests that oblique rotation was a good decision (that is, the components are not orthogonal/independent). At a theoretical level the dependence between components makes sense: we might expect a fairly strong relationship between fear of maths, fear of statistics and fear of computers. Generally, the less mathematically and technically minded people struggle with statistics. However, we would not, necessarily, expect these constructs to correlate with fear of peer evaluation (because this construct is more socially based) and this component correlates weakly with the others.

The University of Sussex constantly seeks to employ the best people possible as lecturers. They wanted to revise the 'Teaching of Statistics for Scientific Experiments' (TOSSE) questionnaire, which is based on Bland's theory that says that good research methods lecturers should have: (1) a profound love of statistics; (2) an enthusiasm for experimental design; (3) a love of teaching; and (4) a complete absence of normal interpersonal skills. These characteristics should be related (i.e., correlated). The University revised this questionnaire to become the 'Teaching of Statistics for Scientific Experiments - Revised' (TOSSE-R). They gave this questionnaire to 239 research methods lecturers to see if it supported Bland's theory. Conduct a factor analysis (with appropriate rotation) and interpret the factor structure (TOSSE-R.sav).

Like in the chapter, I ran the analysis with principal axis factoring and oblique rotation. The syntax for my analysis is as follows:

FACTOR
  /VARIABLES q1 q2 q3 q4 q5 q6 q7 q8 q9 q10 q11 q12 q13 q14 q15 q16 q17 q18 q19 q20 q21 q22 q23 q24 q25 q26 q27 q28
  /MISSING LISTWISE
  /ANALYSIS q1 q2 q3 q4 q5 q6 q7 q8 q9 q10 q11 q12 q13 q14 q15 q16 q17 q18 q19 q20 q21 q22 q23 q24 q25 q26 q27 q28
  /PRINT UNIVARIATE INITIAL CORRELATION SIG DET KMO INV REPR AIC EXTRACTION ROTATION
  /FORMAT SORT BLANK(.30)
  /PLOT EIGEN
  /CRITERIA MINEIGEN(1) ITERATE(25)
  /EXTRACTION PAF
  /CRITERIA ITERATE(25) DELTA(0)
  /ROTATION OBLIMIN
  /METHOD=CORRELATION.
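One small check that is worth doing before wading through that output (a worked example based on the design of this questionnaire): Bartlett's test of sphericity is evaluated against a chi-square distribution with \(p(p-1)/2\) degrees of freedom, where \(p\) is the number of items. With the 28 TOSSE-R items,

\[
df = \frac{28 \times 27}{2} = 378,
\]

which is why the test reported below has 378 degrees of freedom.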
Multicollinearity: The determinant of the correlation matrix was 1.240E-6 (i.e., 0.00000124), which is smaller than 0.00001 and, therefore, indicates that multicollinearity could be a problem in these data.

Sample size: MacCallum et al. (1999) have demonstrated that when communalities after extraction are above 0.5 a sample size between 100 and 200 can be adequate, and even when communalities are below 0.5 a sample size of 500 should be sufficient. We have a sample size of 239 with some communalities below 0.5, and so the sample size may not be adequate. However, the KMO measure of sampling adequacy is .894, which is above Kaiser's (1974) recommendation of 0.5. This value is also 'meritorious' (and almost 'marvellous'). As such, the evidence suggests that the sample size is adequate to yield distinct and reliable factors.

Bartlett's test: This tests whether the correlations between questions are sufficiently large for factor analysis to be appropriate (it actually tests whether the correlation matrix is sufficiently different from an identity matrix). In this case it is significant, \(\chi^2\)(378) = 2989.77, p < .001, indicating that the correlations within the R-matrix are sufficiently different from zero to warrant factor analysis.

Extraction: By default five factors are extracted based on Kaiser's criterion of retaining factors with eigenvalues greater than 1. Is this warranted? Kaiser's criterion is accurate when there are fewer than 30 variables and the communalities after extraction are greater than 0.7, or when the sample size exceeds 250 and the average communality is greater than 0.6. For these data the sample size is 239, there are 28 variables, and the mean communality is 0.488, so extracting five factors is not really warranted. The scree plot shows clear inflexions at 3 and 5 factors and so using the scree plot you could justify extracting 2 or 4 factors.

Rotation: You should choose an oblique rotation because the question says that the constructs we're measuring are related.
Looking at the pattern matrix (and using loadings greater than 0.3 as recommended by Stevens) we see the following:

Factor 1:
Q16: Thinking about whether to use repeated or independent measures thrills me
Q14: I'd rather think about appropriate dependent variables than go to the pub
Q22: I quiver with excitement when thinking about designing my next experiment
Q17: I enjoy sitting in the park contemplating whether to use participant observation in my next experiment
Q13: Designing experiments is fun
Q8: I like control conditions

Factor 2:
Q10: I could spend all day explaining statistics to people
Q19: I like to help students
Q20: Passing on knowledge is the greatest gift you can bestow an individual
Q25: I love teaching
Q27: I love teaching because students have to pretend to like me or they'll get bad marks
Q7: Helping others to understand sums of squares is a great feeling
Q26: I spend lots of time helping students

Factor 3:
Q23: I often spend my spare time talking to the pigeons … and even they die of boredom
Q28: My cat is my only friend
Q5: I still live with my mother and have little personal hygiene
Q12: People fall asleep as soon as I open my mouth to speak

Factor 4:
Q24: I tried to build myself a time machine so that I could go back to the 1930s and follow Fisher around on my hands and knees licking the floor on which he'd just trodden
Q3: I memorize probability values for the F-distribution
Q4: I worship at the shrine of Pearson
Q15: I soil my pants with excitement at the mere mention of factor analysis
Q21: Thinking about Bonferroni corrections gives me a tingly feeling in my groin
Q1: I once woke up in the middle of a vegetable patch hugging a turnip that I'd mistakenly dug up thinking it was Roy's largest root

Factor 5:
Q6: Teaching others makes me want to swallow a large bottle of bleach because the pain of my burning oesophagus would be light relief in comparison
Q2: If I had a big gun I'd shoot all the students I have to teach
Q18: Standing in front of 300 people in no way makes me lose control of my bowels

No factor:
Q9: I calculate three ANOVAs in my head before getting out of bed every morning
Q11: I like it when people tell me I've helped them to understand factor rotation

Factor 1 seems to relate to research methods, factor 2 to teaching, factor 3 to general social skills, factor 4 to statistics and factor 5 to, well, err, teaching again. All in all, this isn't particularly satisfying and doesn't really support the theoretical four-factor model. We saw earlier that the extraction of five factors probably wasn't justified. In fact the scree plot seems to indicate four. Let's rerun the analysis but asking for four factors in the extraction dialog box, and see how this changes the pattern matrix. We now get the following:

Q2: If I had a big gun I'd shoot all the students I have to teach (note negative weight)
Q18: Standing in front of 300 people in no way makes me lose control of my bowels (note negative weight)
Q6: Teaching others makes me want to swallow a large bottle of bleach because the pain of my burning oesophagus would be light relief in comparison (note negative weight)

This factor structure is a bit clearer-cut: factor 1 largely relates to an enthusiasm for experimental design, factor 2 to a love of teaching (although a lot of reversed items there!), factor 3 to a complete absence of normal interpersonal skills, and factor 4 is a bit of a mishmash of a profound love of statistics and a love of teaching.
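If you are working from syntax rather than the dialog boxes, this re-run needs only one change to the syntax given earlier: replace Kaiser's criterion, MINEIGEN(1), with a fixed number of factors, FACTORS(4). A sketch, reusing the same item list and options as above:

* Re-run of the TOSSE-R analysis forcing a four-factor solution.
FACTOR
  /VARIABLES q1 q2 q3 q4 q5 q6 q7 q8 q9 q10 q11 q12 q13 q14 q15 q16 q17 q18 q19 q20 q21 q22 q23 q24 q25 q26 q27 q28
  /MISSING LISTWISE
  /PRINT INITIAL EXTRACTION ROTATION
  /FORMAT SORT BLANK(.30)
  /PLOT EIGEN
  /CRITERIA FACTORS(4) ITERATE(25)
  /EXTRACTION PAF
  /CRITERIA ITERATE(25) DELTA(0)
  /ROTATION OBLIMIN
  /METHOD=CORRELATION.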
Again, this factor structure doesn't fully support the original four-factor model suggested because the data indicate that love of statistics can't be separated out from a love of methods or a love of teaching. It's almost as though the example is completely made up.

Dr Sian Williams (University of Brighton) devised a questionnaire to measure organizational ability. She predicted five factors to do with organizational ability: (1) preference for organization; (2) goal achievement; (3) planning approach; (4) acceptance of delays; and (5) preference for routine. These dimensions are theoretically independent. Williams's questionnaire contains 28 items using a seven-point Likert scale (1 = strongly disagree, 4 = neither, 7 = strongly agree). She gave it to 239 people. Run a principal component analysis on the data in Williams.sav. The questionnaire items are as follows:

1. I like to have a plan to work to in everyday life
2. I feel frustrated when things don't go to plan
3. I get most things done in a day that I want to
4. I stick to a plan once I have made it
5. I enjoy spontaneity and uncertainty
6. I feel frustrated if I can't find something I need
7. I find it difficult to follow a plan through
8. I am an organized person
9. I like to know what I have to do in a day
10. Disorganized people annoy me
11. I leave things to the last minute
12. I have many different plans relating to the same goal
13. I like to have my documents filed and in order
14. I find it easy to work in a disorganized environment
15. I make 'to do' lists and achieve most of the things on it
16. My workspace is messy and disorganized
17. I like to be organized
18. Interruptions to my daily routine annoy me
19. I feel that I am wasting my time
20. I forget the plans I have made
21. I prioritize the things I have to do
22. I like to work in an organized environment
23. I feel relaxed when I don't have a routine
24. I set deadlines for myself and achieve them
25. I change rather aimlessly from one activity to another during the day
26. I have trouble organizing the things I have to do
27. I put tasks off to another day
28. I feel restricted by schedules and plans

I ran the analysis with principal components and oblique rotation. The syntax for my analysis is as follows:

FACTOR
  /VARIABLES org1 org2 org3 org4 org6 org7 org9 org10 org11 org12 org13 org14 org16 org17 org18 org19 org20 org21 org22 org23 org24 org25 org26 org27 org28 org29 org30 org31
  /ANALYSIS org1 org2 org3 org4 org6 org7 org9 org10 org11 org12 org13 org14 org16 org17 org18
  /EXTRACTION PC

By default, five components have been extracted based on Kaiser's criterion. The scree plot shows clear inflexions at 3 and 5 factors, and so using the scree plot you could justify extracting 3 or 5 factors. Looking at the rotated component matrix (and using loadings greater than 0.4) we see the following pattern:

Component 1: preference for organization (Note: It's odd that none of these have reverse loadings.)
Q8: I am an organized person
Q13: I like to have my documents filed and in order
Q14: I find it easy to work in a disorganized environment
Q16: My workspace is messy and disorganized
Q17: I like to be organized
Q22: I like to work in an organized environment

Component 2: goal achievement
Q7: I find it difficult to follow a plan through
Q11: I leave things to the last minute
Q19: I feel that I am wasting my time
Q20: I forget the plans I have made
Q25: I change rather aimlessly from one activity to another during the day
Q26: I have trouble organizing the things I have to do
Q27: I put tasks off to another day

Component 3: preference for routine
Q5: I enjoy spontaneity and uncertainty
Q12: I have many different plans relating to the same goal
Q23: I feel relaxed when I don't have a routine
Q28: I feel restricted by schedules and plans

Component 4: planning approach
Q1: I like to have a plan to work to in everyday life
Q3: I get most things done in a day that I want to
Q4: I stick to a plan once I have made it
Q9: I like to know what I have to do in a day
Q15: I make 'to do' lists and achieve most of the things on it
Q21: I prioritize the things I have to do
Q24: I set deadlines for myself and achieve them

Component 5: acceptance of delays
Q2: I feel frustrated when things don't go to plan
Q6: I feel frustrated if I can't find something I need
Q10: Disorganized people annoy me
Q18: Interruptions to my daily routine annoy me

It seems as though there is some factorial validity to the structure.

Zibarras, Port, and Woods (2008) looked at the relationship between personality and creativity. They used the Hogan Development Survey (HDS), which measures 11 dysfunctional dispositions of employed adults: being volatile, mistrustful, cautious, detached, passive-aggressive, arrogant, manipulative, dramatic, eccentric, perfectionist, and dependent. Zibarras et al. wanted to reduce these 11 traits down and, based on parallel analysis, found that they could be reduced to three components. They ran a principal component analysis with varimax rotation. Repeat this analysis (Zibarras et al. (2008).sav) to see which personality dimensions clustered together (see page 210 of the original paper).

As indicated in the question, I ran the analysis with principal components and varimax rotation. I specified to extract three factors to match Zibarras et al. (2008). The syntax for my analysis is as follows:

FACTOR
  /VARIABLES volatile mistrustful cautious detached passive_aggressive arrogant manipulative dramatic eccentric perfectist dependent
  /ANALYSIS volatile mistrustful cautious detached passive_aggressive arrogant manipulative
  /CRITERIA FACTORS(3) ITERATE(25)
  /CRITERIA ITERATE(25)
  /ROTATION VARIMAX

The output shows the rotated component matrix, from which we see this pattern:

Component 1:
Manipulative
Cautious (negative weight)
Perfectionist (negative weight)
Mistrustful
Dependent (negative weight)

Compare these results to those of Zibarras et al. (Table 4 from the original paper reproduced below), and note that they are the same.

To open the dialog box to weight cases select Data > Weight Cases …. Next drag the variable containing the number of cases (i.e. the frequency) to the box labelled Frequency Variable: (or select the variable and click the arrow button). To open the dialog box for a chi-square test select Analyze > Descriptive Statistics > Crosstabs ….
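If you prefer syntax to the dialog boxes, the same two steps look roughly like this for the weighted chi-square tasks that follow (a sketch; Frequency, Time_Pressure and Detachment are the variable names used in the first task below, so substitute the relevant variables for the other tasks):

* Weight the data by the cell frequencies, then crosstabulate with a chi-square test.
WEIGHT BY Frequency.
CROSSTABS
  /TABLES=Time_Pressure BY Detachment
  /STATISTICS=CHISQ
  /CELLS=COUNT EXPECTED SRESID.
* Turn the weighting off again when you have finished.
WEIGHT OFF.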
Research suggests that people who can switch off from work (Detachment) during off-hours are more satisfied with life and have fewer symptoms of psychological strain (Sonnentag, 2012). Factors at work, such as time pressure, affect your ability to detach when away from work. A study of 1709 employees measured their time pressure (Time_Pressure) at work (no time pressure, low, medium, high and very high time pressure). Data generated to approximate Figure 1 in Sonnentag (2012) are in the file Sonnentag (2012).sav. Carry out a chi-square test to see if time pressure is associated with the ability to detach from work.

Follow the general instructions for this chapter to weight cases by the variable Frequency (see the completed dialog box below). To conduct the chi-square test, use the crosstabs command by selecting Analyze > Descriptive Statistics > Crosstabs …. We have two variables in our crosstabulation table: Detachment and Time pressure. Drag one of these variables into the box labelled Row(s) (I selected Time Pressure in the figure). Next, drag the other variable of interest (Detachment) to the box labelled Column(s). Use the book chapter to select other appropriate options.

The chi-square test is highly significant, \(\chi^2\)(4) = 15.55, p = .004, indicating that the profile of low-detachment and very low-detachment responses differed across different time pressures. Looking at the standardized residuals, the only time pressure for which these are significant is very high time pressure, which showed the greatest split of whether the employees experienced low detachment (36%) or very low detachment (64%). Within the other time pressure groups all of the standardized residuals are lower than 1.96. It's interesting to look at the direction of the residuals (i.e., whether they are positive or negative). For all time pressure groups except very high time pressure, the residual for 'low detachment' was positive but for 'very low detachment' was negative; these are, therefore, people who responded more than we would expect that they experienced low detachment from work and less than expected that they experienced very low detachment from work. It was only under very high time pressure that the opposite pattern occurred: the residual for 'low detachment' was negative but for 'very low detachment' was positive; these are, therefore, people who responded less than we would expect that they experienced low detachment from work and more than expected that they experienced very low detachment from work. In short, there are similar numbers of people who experience low detachment and very low detachment from work when there is no time pressure, low time pressure, medium time pressure and high time pressure. However, when time pressure was very high, significantly more people experienced very low detachment than low detachment.

Labcoat Leni's Real Research describes a study (Daniels, 2012) that looked at the impact of sexualized images of athletes compared to performance pictures on women's perceptions of the athletes and of themselves. Women looked at different types of pictures (Picture) and then did a writing task. Daniels identified whether certain themes were present or absent in each written piece (Theme_Present).
We looked at the self-evaluation theme, but Daniels identified others: commenting on the athlete's body/appearance (Athletes_Body), indicating admiration or jealousy for the athlete (Admiration), indicating that the athlete was a role model or motivating (Role_Model), and their own physical activity (Self_Physical_Activity). Test whether the type of picture viewed was associated with commenting on the athlete's body/appearance (Daniels (2012).sav).

Follow the general instructions for this chapter to weight cases by the variable Athletes_Body (see the completed dialog box below). To conduct the chi-square test, use the crosstabs command by selecting Analyze > Descriptive Statistics > Crosstabs …. We have two variables in our crosstabulation table: Picture and Theme_Present. Drag one of these variables into the box labelled Row(s) (I selected Picture in the figure). Next, drag the other variable of interest (Theme_Present) to the box labelled Column(s). Use the book chapter to select other appropriate options.

The chi-square test is highly significant, \(\chi^2\)(1) = 104.92, p < .001. This indicates that the profile of theme present vs. theme absent differed across different pictures. Looking at the standardized residuals, they are significant for both pictures of performance athletes and sexualized pictures of athletes. If we look at the direction of these residuals (i.e., whether they are positive or negative), we can see that for pictures of performance athletes, the residual for 'theme absent' was positive but for 'theme present' was negative; this indicates that in this condition, more people than we would expect did not include the theme 'her appearance and attractiveness' and fewer people than we would expect did include this theme in what they wrote. In the sexualized picture condition, on the other hand, the opposite was true: the residual for 'theme absent' was negative and for 'theme present' was positive. This indicates that in the sexualized picture condition, more people than we would expect included the theme 'her appearance and attractiveness' in what they wrote and fewer people than we would expect did not include this theme in what they wrote. Daniels reports: Extract from article

Using the data in Task 2, see whether the type of picture viewed was associated with indicating admiration or jealousy for the athlete. Follow the general instructions for this chapter to weight cases by the variable Admiration (see the completed dialog box below). The chi-square test is highly significant, \(\chi^2\)(1) = 28.98, p < .001. This indicates that the profile of theme present vs. theme absent differed across different pictures. Looking at the standardized residuals, they are significant for both pictures of performance athletes and sexualized pictures of athletes. If we look at the direction of these residuals (i.e., whether they are positive or negative), we can see that for pictures of performance athletes, the residual for 'theme absent' was positive but for 'theme present' was negative; this indicates that in this condition, more people than we would expect did not include the theme 'My admiration or jealousy for the athlete' and fewer people than we would expect did include this theme in what they wrote. In the sexualized picture condition, on the other hand, the opposite was true: the residual for 'theme absent' was negative and for 'theme present' was positive.
This indicates that in the sexualized picture condition, more people than we would expect included the theme 'My admiration or jealousy for the athlete' in what they wrote and fewer people than we would expect did not include this theme in what they wrote.

Using the data in Task 2, see whether the type of picture viewed was associated with indicating that the athlete was a role model or motivating. Follow the general instructions for this chapter to weight cases by the variable Role_Model (see the completed dialog box below). The chi-square test is highly significant, \(\chi^2\)(1) = 47.50, p < .001. This indicates that the profile of theme present vs. theme absent differed across different pictures. Looking at the standardized residuals, they are significant for both types of pictures. If we look at the direction of these residuals (i.e., whether they are positive or negative), we can see that for pictures of performance athletes, the residual for 'theme absent' was negative but was positive for 'theme present'. This indicates that when looking at pictures of performance athletes, more people than we would expect included the theme 'Athlete is a good role model' and fewer people than we would expect did not include this theme in what they wrote. In the sexualized picture condition, on the other hand, the opposite was true: the residual for 'theme absent' was positive and for 'theme present' it was negative. This indicates that in the sexualized picture condition, more people than we would expect did not include the theme 'Athlete is a good role model' in what they wrote and fewer people than we would expect did include this theme in what they wrote.

Using the data in Task 2, see whether the type of picture viewed was associated with the participant commenting on their own physical activity. Follow the general instructions for this chapter to weight cases by the variable Self_Evaluation (see the completed dialog box below). The chi-square test is significant, \(\chi^2\)(1) = 5.91, p = .02. This indicates that the profile of theme present vs. theme absent differed across different pictures. Looking at the standardized residuals, they are not significant for either type of picture (i.e., they are less than 1.96). If we look at the direction of these residuals (i.e., whether they are positive or negative), we can see that for pictures of performance athletes, the residual for 'theme absent' was negative and for 'theme present' was positive. This indicates that when looking at pictures of performance athletes, more people than we would expect included the theme 'My own physical activity' and fewer people than we would expect did not include this theme in what they wrote. In the sexualized picture condition, on the other hand, the opposite was true: the residual for 'theme absent' was positive and for 'theme present' it was negative. This indicates that in the sexualized picture condition, more people than we would expect did not include the theme 'My own physical activity' in what they wrote and fewer people than we would expect did include this theme in what they wrote.

I wrote much of the third edition of this book in the Netherlands (I have a soft spot for it). The Dutch travel by bike much more than the English. I noticed that many more Dutch people cycle while steering with only one hand. I pointed this out to one of my friends, Birgit Mayer, and she said that I was a crazy English fool and that Dutch people did not cycle one-handed.
Several weeks of me pointing at one-handed cyclists and her pointing at two-handed cyclists ensued. To put it to the test I counted the number of Dutch and English cyclists who ride with one or two hands on the handlebars (Handlebars.sav). Can you work out which one of us is correct? To conduct the chi-square test, use the crosstabs command by selecting Analyze > Descriptive Statistics > Crosstabs …. We have two variables in our crosstabulation table: Nationality and Hands. Drag one of these variables into the box labelled Row(s) (I selected Nationality in the figure). Next, drag the other variable of interest (Hands) to the box labelled Column(s). Use the book chapter to select other appropriate options. The crosstabulation table produced by SPSS contains the number of cases that fall into each combination of categories. We can see that in total 137 people rode their bike one-handed, of which 120 (87.6%) were Dutch and only 17 (12.4%) were English; 732 people rode their bike two-handed, of which 578 (79%) were Dutch and only 154 (21%) were English. Before moving on to look at the test statistic itself, we can check that the assumption for chi-square has been met. The assumption is that in 2 × 2 tables (which is what we have here), all expected frequencies should be greater than 5. The smallest expected count is 27 (for English people who ride their bike one-handed). This value exceeds 5 and so the assumption has been met. The value of the chi-square statistic is 5.44. This value has a two-tailed significance of .020, which is smaller than .05 (hence significant), which suggests that the pattern of bike riding (i.e., relative numbers of one- and two-handed riders) significantly differs in English and Dutch people. The significant result indicates that there is an association between whether someone is Dutch or English and whether they ride their bike one- or two-handed. Looking at the frequencies, this finding seems to show that the ratio of one- to two-handed riders differs in Dutch and English people. In Dutch people 17.2% ride their bike one-handed compared to 82.8% who ride two-handed. In England, though, only 9.9% ride their bike one-handed (almost half as many as in Holland), and 90.1% ride two-handed. If we look at the standardized residuals (in the contingency table) we can see that the only cell with a residual approaching significance (a value that lies outside of ±1.96) is the cell for English people riding one-handed (z = -1.9). The fact that this value is negative tells us that fewer people than expected fell into this cell. Compute and interpret the odds ratio for Task 6. The odds of someone riding one-handed if they are Dutch are: \[ \text{odds}_{one-handed, Dutch} = \frac{120}{578} = 0.21 \] The odds of someone riding one-handed if they are English are: \[ \text{odds}_{one-handed, English} = \frac{17}{154} = 0.11 \] Therefore, the odds ratio is: \[ \text{odds ratio} = \frac{\text{odds}_{one-handed, Dutch}}{\text{odds}_{one-handed, English}} = \frac{0.21}{0.11} = 1.90 \] In other words, the odds of riding one-handed if you are Dutch are 1.9 times higher than if you are English (or, conversely, the odds of riding one-handed if you are English are about half that of a Dutch person). We could report this effect as: There was a significant association between nationality and whether the Dutch or English rode their bike one- or two-handed, \(\chi^2\)(1) = 5.44, p < .05. 
This represents the fact that, based on the odds ratio, the odds of riding a bike one-handed were 1.9 times higher for Dutch people than for English people. This supports Field's argument that there are more one-handed bike riders in the Netherlands than in England and utterly refutes Mayer's competing theory. These data are in no way made up.

Certain editors at Sage like to think they're great at football (soccer). To see whether they are better than Sussex lecturers and postgraduates we invited employees of Sage to join in our football matches. Every person played in one match. Over many matches, we counted the number of players that scored goals. Is there a significant relationship between scoring goals and whether you work for Sage or Sussex? (Sage Editors Can't Play Football.sav)

To conduct the chi-square test, use the crosstabs command by selecting Analyze > Descriptive Statistics > Crosstabs …. We have two variables in our crosstabulation table: Employer and Score. Drag one of these variables into the box labelled Row(s) (I selected Employer in the figure). Next, drag the other variable of interest (Score) to the box labelled Column(s). Use the book chapter to select other appropriate options.

The crosstabulation table produced by SPSS Statistics contains the number of cases that fall into each combination of categories. We can see that in total 28 people scored goals and of these 5 were from Sage Publications and 23 were from Sussex; 49 people didn't score at all (63.6% of the total) and, of those, 19 worked for Sage (38.8% of the total that didn't score) and 30 were from Sussex (61.2% of the total that didn't score). Before moving on to look at the test statistic itself we check that the assumption for chi-square has been met. The assumption is that in 2 × 2 tables (which is what we have here), all expected frequencies should be greater than 5. The smallest expected count is 8.7 (for Sage editors who scored). This value exceeds 5 and so the assumption has been met.

Pearson's chi-square test examines whether there is an association between two categorical variables (in this case the job and whether the person scored or not). The value of the chi-square statistic is 3.63. This value has a two-tailed significance of .057, which is bigger than .05 (hence, non-significant). Because we made a specific prediction (that Sussex people would score more than Sage people), there is a case to be made that we can halve this p-value, which would give us a significant association (because p = .0285, which is less than .05). However, as explained in the book, I'm not a big fan of one-tailed tests. In any case, we'd be well-advised to look for other information such as an effect size. Which brings us neatly onto the next task …

The odds of someone scoring given that they were employed by Sage are:

\[ \text{odds}_{scored, Sage} = \frac{5}{19}= 0.26 \]

The odds of someone scoring given that they were employed by Sussex are:

\[ \text{odds}_{scored, Sussex} = \frac{23}{30} = 0.77 \]

\[ \text{odds ratio} = \frac{\text{odds}_{scored, Sage}}{\text{odds}_{scored, Sussex}} = \frac{0.26}{0.77} = 0.34 \]

The odds of scoring if you work for Sage are 0.34 times as high as if you work for Sussex; another way to express this is that if you work for Sussex, the odds of scoring are 1/0.34 = 2.95 times higher than if you work for Sage. We could report this as follows: There was a non-significant association between the type of job and whether or not a person scored a goal, \(\chi^2\)(1) = 3.63, p = .057, OR = 2.95.
Despite the non-significant result, the odds of Sussex employees scoring were 2.95 times higher than that for Sage employees.

I was interested in whether horoscopes are tosh. I recruited 2201 people, made a note of their star sign (this variable, obviously, has 12 categories: Capricorn, Aquarius, Pisces, Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio and Sagittarius) and whether they believed in horoscopes (this variable has two categories: believer or unbeliever). I sent them an identical horoscope about events in the next month, which read 'August is an exciting month for you. You will make friends with a tramp in the first week and cook him a cheese omelette. Curiosity is your greatest virtue, and in the second week, you'll discover knowledge of a subject that you previously thought was boring. Statistics perhaps. You might purchase a book around this time that guides you towards this knowledge. Your new wisdom leads to a change in career around the third week, when you ditch your current job and become an accountant. By the final week you find yourself free from the constraints of having friends, your boy/girlfriend has left you for a Russian ballet dancer with a glass eye, and you now spend your weekends doing loglinear analysis by hand with a pigeon called Hephzibah for company.' At the end of August I interviewed these people and I classified the horoscope as having come true, or not, based on how closely their lives had matched the fictitious horoscope. Conduct a loglinear analysis to see whether there is a relationship between the person's star sign, whether they believe in horoscopes and whether the horoscope came true (Horoscope.sav).

To get a crosstabulation table, select Analyze > Descriptive Statistics > Crosstabs …. We have three variables in our crosstabulation table: Star_Sign, Believe and True. Drag one of these variables into the box labelled Row(s) (I selected Believe in the figure). Drag a second variable of interest (I chose True) to the box labelled Column(s), and drag the final variable (Star_Sign) to the box labelled Layer 1 of 1. The table is quite large so I've set a minimal set of cell values (observed values, expected values and standardized residuals). The crosstabulation table produced by SPSS Statistics contains the number of cases that fall into each combination of categories. Although this table is quite complicated, you should be able to see that there are roughly the same number of believers and non-believers and similar numbers of those whose horoscopes came true or didn't. These proportions are fairly consistent also across the different star signs. There are no expected counts less than 5, so the assumption of the test is met.

To run a loglinear analysis that is consistent with my section on the theory, select Analyze > Loglinear > Model Selection … to access the dialog box in the figure. Drag the variables that you want to include in the analysis to the box labelled Factor(s). Select each variable in the Factor(s) box and click Define Range… to activate a dialog box in which you specify the value of the minimum and maximum code that you've used for that variable (the figure shows these values for the variables in this dataset). When you've done this, click Continue to return to the main dialog box, and OK to fit the model. To begin with, SPSS Statistics fits the saturated model (all terms are in the model, including the highest-order interaction, in this case the star sign × believe × true interaction).
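If you would rather run this from syntax, the dialog boxes above correspond roughly to the command below. This is a sketch: the value ranges in parentheses are assumptions (I have assumed Star_Sign is coded 1 to 12 and Believe and True are coded 0 and 1), so check the actual codes in Variable View before running it.

* Hierarchical loglinear analysis with backward elimination, starting from the saturated model.
HILOGLINEAR Star_Sign(1,12) Believe(0,1) True(0,1)
  /METHOD=BACKWARD
  /PRINT=FREQ ASSOCIATION
  /DESIGN.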
The two goodness-of-fit statistics (Pearson's chi-square and the likelihood-ratio statistic) test the hypothesis that the frequencies predicted by the model (the expected frequencies) are significantly different from the actual frequencies in our data (the observed frequencies). At this stage the model fits the data perfectly, so both statistics are 0 and yield a p-value of '.' (i.e., SPSS Statistics can't compute the probability).

The next part of the output tells us something about which components of the model can be removed. The first bit of the output is labelled K-Way and Higher-Order Effects, and underneath there is a table showing likelihood-ratio and chi-square statistics when K = 1, 2 and 3 (as we go down the rows of the table). The first row (K = 1) tells us whether removing the one-way effects (i.e., the main effects of star sign, believe and true) and any higher-order effects will significantly affect the fit of the model. There are lots of higher-order effects here - there are the two-way interactions and the three-way interaction - and so this is basically testing whether, if we remove everything from the model, there will be a significant effect on the fit of the model. This is highly significant because the p-value is given as .000, which is less than .05. The next row of the table (K = 2) tells us whether removing the two-way interactions (i.e., the star sign × believe, star sign × true, and believe × true interactions) and any higher-order effects will affect the model. In this case there is a higher-order effect (the three-way interaction) so this is testing whether removing the two-way interactions and the three-way interaction would affect the fit of the model. This is significant (p = .03, which is less than .05), indicating that if we removed the two-way interactions and the three-way interaction then this would have a significant detrimental effect on the model. The final row (K = 3) is testing whether removing the three-way effect and higher-order effects will significantly affect the fit of the model. The three-way interaction is of course the highest-order effect that we have, so this is simply testing whether removal of the three-way interaction (star sign × believe × true) will significantly affect the fit of the model. If you look at the two columns labelled Sig. then you can see that both chi-square and likelihood ratio tests agree that removing this interaction will not significantly affect the fit of the model (because p > .05).

The next part of the table expresses the same thing but without including the higher-order effects. It's labelled K-Way Effects and lists tests for when K = 1, 2 and 3. The first row (K = 1), therefore, tests whether removing the main effects (the one-way effects) has a significant detrimental effect on the model. The p-values are less than .05, indicating that if we removed the main effects of star sign, believe and true from our model it would significantly affect the fit of the model (in other words, one or more of these effects is a significant predictor of the data). The second row (K = 2) tests whether removing the two-way interactions has a significant detrimental effect on the model. The p-values are less than .05, indicating that if we removed the star sign × believe, star sign × true and believe × true interactions then this would significantly reduce how well the model fits the data. In other words, one or more of these two-way interactions is a significant predictor of the data.
The final row (K = 3) tests whether removing the three-way interaction has a significant detrimental effect on the model. The p-values are greater than .05, indicating that if we removed the star sign × believe × true interaction then this would not significantly reduce how well the model fits the data. In other words, this three-way interaction is not a significant predictor of the data. This row should be identical to the final row of the upper part of the table (the K-Way and Higher-Order Effects) because it is the highest-order effect and so in the previous table there were no higher-order effects to include in the test (look at the output and you'll see the results are identical). In a nutshell, this tells us that the three-way interaction is not significant: removing it from the model does not have a significant effect on how well the model fits the data. We also know that removing all two-way interactions does have a significant effect on the model, as does removing the main effects, but you have to remember that loglinear analysis should be done hierarchically and so these two-way interactions are more important than the main effects.

The Partial Associations table simply breaks down the table that we've just looked at into its component parts. So, for example, although we know from the previous output that removing all of the two-way interactions significantly affects the model, we don't know which of the two-way interactions is having the effect. This table tells us. We get a Pearson chi-square test for each of the two-way interactions and the main effects, and the column labelled Sig. tells us which of these effects is significant (values less than .05 are significant). We can tell from this that the star sign × believe and believe × true interactions are significant, but the star sign × true interaction is not. Likewise, we saw in the previous output that removing the one-way effects also significantly affects the fit of the model, and these findings are confirmed here because the main effect of star sign is highly significant (although this just means that we collected different amounts of data for each of the star signs!).

The final bit of output deals with the backward elimination. SPSS Statistics begins with the highest-order effect (in this case, the star sign × believe × true interaction), removes it from the model, sees what effect this has, and, if this effect is not significant, moves on to the next highest effects (in this case the two-way interactions). As we've already seen, removing the three-way interaction does not have a significant effect, and the table labelled Step Summary confirms that removing the three-way interaction has a non-significant effect on the model. At step 1, the three two-way interactions are then assessed in the bit of the table labelled Deleted Effect. From the values of Sig. it's clear that the star sign × believe (p = .037) and believe × true (p = .000) interactions are significant but the star sign × true interaction (p = .465) is not. Therefore, at step 2 the non-significant star sign × true interaction is deleted, leaving the remaining two-way interactions in the model. These two interactions are then re-evaluated and both the star sign × believe (p = .049) and believe × true (p = .001) interactions are still significant and so are still retained. The final model is the one that retains all main effects and these two interactions.
As neither of these interactions can be removed without affecting the model, and these interactions involve all three of the main effects (the variables star sign, true and believe are all involved in at least one of the remaining interactions), the main effects are not examined (because their effect is confounded with the interactions that have been retained). Finally, SPSS Statistics evaluates this final model with the likelihood ratio statistic and we're looking for a non-significant test statistic, which indicates that the expected values generated by the model are not significantly different from the observed data (put another way, the model is a good fit of the data). In this case the result is very non-significant, indicating that the model is a good fit of the data.

On my statistics module students have weekly SPSS classes in a computer laboratory. I've noticed that many students are studying Facebook more than the very interesting statistics assignments that I have set them. I wanted to see the impact that this behaviour had on their exam performance. I collected data from all 260 students on my module. I classified their Attendance as being either more or less than 50% of their lab classes, and I classified them as someone who looked at Facebook during their lab class, or someone who never did. After the exam, I noted whether they passed or failed (Exam). Do a loglinear analysis to see if there is an association between studying Facebook and failing your exam (Facebook.sav).

To get a crosstabulation table, select Analyze > Descriptive Statistics > Crosstabs …. We have three variables in our crosstabulation table: Attendance, Facebook and Exam. Drag one of these variables into the box labelled Row(s) (I selected Facebook in the figure). Drag a second variable of interest (I chose Exam) to the box labelled Column(s), and drag the final variable (Attendance) to the box labelled Layer 1 of 1. The crosstabulation table produced by SPSS Statistics contains the number of cases that fall into each combination of categories. There are no expected counts less than 5, so the assumption of the test is met.

The first bit of the output labelled K-Way and Higher-Order Effects shows likelihood ratio and chi-square statistics when K = 1, 2 and 3 (as we go down the rows of the table). The first row (K = 1) tells us whether removing the one-way effects (i.e., the main effects of attendance, Facebook and exam) and any higher-order effects will significantly affect the fit of the model. There are lots of higher-order effects here - there are the two-way interactions and the three-way interaction - and so this is basically testing whether, if we remove everything from the model, there will be a significant effect on the fit of the model. This is highly significant because the p-value is given as .000, which is less than .05. The next row of the table (K = 2) tells us whether removing the two-way interactions (i.e., Attendance × Exam, Facebook × Exam and Attendance × Facebook) and any higher-order effects will affect the model. In this case there is a higher-order effect (the three-way interaction) so this is testing whether removing the two-way interactions and the three-way interaction would affect the fit of the model. This is significant (the p-value is given as .000, which is less than .05), indicating that if we removed the two-way interactions and the three-way interaction then this would have a significant detrimental effect on the model.
The final row (K = 3) is testing whether removing the three-way effect and higher-order effects will significantly affect the fit of the model. The three-way interaction is of course the highest-order effect that we have, so this is simply testing whether removal of the three-way interaction (Attendance × Facebook × Exam) will significantly affect the fit of the model. If you look at the two columns labelled Sig. then you can see that both chi-square and likelihood ratio tests agree that removing this interaction will not significantly affect the fit of the model (because p > .05). The next part of the table expresses the same thing but without including the higher-order effects. It's labelled K-Way Effects and lists tests for when K = 1, 2 and 3. The first row (K = 1), therefore, tests whether removing the main effects (the one-way effects) has a significant detrimental effect on the model. The p-values are less than .05, indicating that if we removed the main effects of Attendance, Facebook and Exam from our model it would significantly affect the fit of the model (in other words, one or more of these effects is a significant predictor of the data). The second row (K = 2) tests whether removing the two-way interactions has a significant detrimental effect on the model. The p-values are less than .05, indicating that if we removed the two-way interactions then this would significantly reduce how well the model fits the data. In other words, one or more of these two-way interactions is a significant predictor of the data. The final row (K = 3) tests whether removing the three-way interaction has a significant detrimental effect on the model. The p-values are greater than .05, indicating that if we removed the three-way interaction then this would not significantly reduce how well the model fits the data. In other words, this three-way interaction is not a significant predictor of the data. This row should be identical to the final row of the upper part of the table (the K-Way and Higher-Order Effects) because it is the highest-order effect and so in the previous table there were no higher-order effects to include in the test (look at the output and you'll see the results are identical). The main effect of Attendance was significant, \(\chi^2\)(1) = 27.63, p < .001, indicating (based on the contingency table) that significantly more students attended over 50% of their classes (N = 39 + 30 + 98 + 5 = 172) than attended less than 50% (N = 5 + 30 + 26 + 27 = 88). The main effect of Facebook was significant, \(\chi^2\)(1) = 10.47, p < .01, indicating (based on the contingency table) that significantly fewer students looked at Facebook during their classes (N = 39 + 30 + 5 + 30 = 104) than did not look at Facebook (N = 98 + 5 + 26 + 27 = 156). The main effect of Exam was significant, \(\chi^2\)(1) = 22.54, p < .001, indicating (based on the contingency table) that significantly more students passed the exam (N = 39 + 98 + 5 + 26 = 168) than failed (N = 30 + 5 + 30 + 27 = 92). The Attendance × Exam interaction was significant, \(\chi^2\)(1) = 61.80, p < .001, indicating that whether you attended more or less than 50% of classes affected exam performance. The contingency table shows that those who attended more than half of their classes had a much better chance of passing their exam (nearly 80% passed) than those attending less than half of their classes (only 35% passed). All of the standardized residuals are significant, indicating that all cells contribute to this overall association. 
The Facebook × Exam interaction was significant, \(\chi^2\)(1) = 49.77, p < .001, indicating that whether you looked at Facebook or not affected exam performance. The contingency table shows that those who looked at Facebook had a much lower chance of passing their exam (58% failed) than those who didn't look at Facebook during their lab classes (around 80% passed). Finally, the Facebook × Attendance × Exam interaction was not significant, \(\chi^2\)(1) = 1.57, p = .20. This result indicates that the effect of Facebook (described above) was the same (roughly) in those who attended more than 50% of classes and those that attended less than 50% of classes. In other words, although those attending less than 50% of classes did worse than those attending more than 50%, within both attendance groups those looking at Facebook did relatively worse than those not looking at Facebook. To access the dialog boxes for logistic regression select Analyze > Regression > Binary Logistic …. The main dialog box is shown in the figure (taken from Task 1). Drag the outcome variable to the Dependent box, then specify the covariates (i.e., predictor variables) by dragging them to the box labelled Covariates:. If you have several predictors, specify the main effects by selecting one predictor and then holding down Ctrl (⌘ on a mac) while you select others and transfer them by clicking . To input an interaction, again select two or more predictors while holding down Ctrl (⌘ on a mac) but click to transfer them. Use the drop-down list labelled Method: to select the method for entering predictors into the model (in the figure Forward: LR has been selected). In the Categorical … dialog box drag any categorical variables you have to the Categorical Covariates: box and select a coding scheme to apply to them (by default SPSS Statistics uses indicator coding). Click to return to the main dialog box. In the Save … dialog box select the options shown in the figure below. Click to return to the main dialog box. In the Options … dialog box select the options shown in the figure below. Click to return to the main dialog box, and once there click to fit the model. A 'display rule' refers to displaying an appropriate emotion in a situation. For example, if you receive a present that you don't like, you should smile politely and say 'Thank you Auntie Kate, I've always wanted a rotting cabbage'; you do not start crying and scream 'Why did you buy me a rotting cabbage, you selfish old turd?!' A psychologist measured children's understanding of display rules (with a task that they could pass or fail), their age (months), and their ability to understand others' mental states ('theory of mind', measured with a false belief task that they could pass or fail). Can display rule understanding (did the child pass the test: yes/no?) be predicted from theory of mind (did the child pass the false belief task: yes/no?), age and their interaction? (Display.sav). Open the file Display.sav. Notice that both of the categorical variables have been entered as coding variables: the outcome variable is coded such that 1 represents having display rule understanding and 0 represents an absence of display rule understanding. For the false-belief task a similar coding has been used (1 = passed the false-belief task, 0 = failed the false-belief task). Select Analyze > Regression > Binary Logistic … to access the main dialog box in the figure. Drag display to the Dependent box, then specify the covariates (i.e., predictor variables). 
To specify the main effects, select one predictor (e.g. age) and then hold down Ctrl (⌘ on a mac) and select the other (fb). Transfer them to the box labelled Covariates: by clicking . To input the interaction, again select age and fb while holding down Ctrl (⌘ on a mac) but then click . For this analysis select the Forward:LR method of regression. In the Categorical … dialog box the covariates we specified in the main dialog box are listed on the left-hand side. Drag any categorical variables you have (in this example fb) to the Categorical Covariates:. By default SPSS Statistics uses indicator coding (i.e., the standard dummy variable coding explained in the book). This is fine for us because fb has only two categories, but to ease interpretation change the Reference Category to first and click . Click to return to the main dialog box. The first part of the output tells us the parameter codings given to the categorical predictor variable. We requested a forward stepwise method so the initial model is derived using only the constant in the model. The initial output tells us about the model when only the constant is included (i.e. all predictor variables are omitted). The log-likelihood of this baseline model is 96.124. This represents the fit of the model when including only the constant. Initially every child is predicted to belong to the category in which most observed cases fell. In this example there were 39 children who had display rule understanding and only 31 who did not. Therefore, of the two available options it is better to predict that all children had display rule understanding because this results in a greater number of correct predictions. Overall, the model correctly classifies 55.7% of children. The next part of the output summarizes the model, and at this stage this entails quoting the value of the constant (\(b_0\)), which is equal to 0.23. In the first step, false-belief understanding (fb) is added to the model as a predictor. As such, a child is now classified as having display rule understanding based on whether they passed or failed the false-belief task. The next output shows summary statistics about the new model. The overall fit of the new model is assessed using the log-likelihood statistic (multiplied by -2 to give it a chi-square distribution, -2LL). Remember that large values of the log-likelihood statistic indicate poorly fitting statistical models. If fb has improved the fit of the model then the value of −2LL should be less than the value when only the constant was included (because lower values of −2LL indicate better fit). When only the constant was included, −2LL = 96.124, but now fb has been included this value has been reduced to 70.042. This reduction tells us that the model is better at predicting display rule understanding than it was before fb was added. We can assess the significance of the change in a model by subtracting the −2LL of the new model from the −2LL of the baseline model. The value of the model chi-square statistic works on this principle and is, therefore, equal to −2LL when only the constant was in the model minus the value of −2LL with fb included (96.124 − 70.042 = 26.083). This value has a chi-square distribution. In this example, the value is significant at the .05 level and so we can say that overall the model predicts display rule understanding significantly better with fb included than with only the constant included. 
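As an aside, the model chi-square just described is nothing more than the difference between the two −2LL values, evaluated against a chi-square distribution with degrees of freedom equal to the number of parameters added (one here, because only fb entered the model). A minimal sketch of that arithmetic in Python, purely for illustration (SPSS reports this for you in the Omnibus Tests of Model Coefficients table):

```python
from scipy.stats import chi2

# -2LL values quoted above for the constant-only and constant + fb models
neg2ll_baseline = 96.124
neg2ll_with_fb = 70.042

# Model chi-square = drop in -2LL; df = number of parameters added (1, for fb)
model_chi_square = neg2ll_baseline - neg2ll_with_fb   # ~26.08
p_value = chi2.sf(model_chi_square, 1)                 # far smaller than .05

print(f"chi-square = {model_chi_square:.2f}, p = {p_value:.2g}")
```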
The output also shows various \(R^2\) statistics, which we'll return to in due course. The classification table indicates how well the model predicts group membership. The current model correctly classifies 23 children who don't have display rule understanding but misclassifies 8 others (i.e. it correctly classifies 74.2% of cases). For children who do have display rule understanding, the model correctly classifies 33 and misclassifies 6 cases (i.e. correctly classifies 84.6% of cases). The overall accuracy of classification is, therefore, the weighted average of these two values (80%). So, when only the constant was included, the model correctly classified 56% of children, but now, with the inclusion of fb as a predictor, this has risen to 80%. The next part of the output tells us the estimates for the coefficients for the predictors included in the model (namely, fb and the constant). The coefficient represents the change in the logit of the outcome variable associated with a one-unit change in the predictor variable. The logit of the outcome is the natural logarithm of the odds of Y occurring. The Wald statistic has a chi-square distribution and tells us whether the b coefficient for that predictor is significantly different from zero. If the coefficient is significantly different from zero then we can assume that the predictor is making a significant contribution to the prediction of the outcome (Y). For these data it seems to indicate that false-belief understanding is a significant predictor of display rule understanding (note the significance of the Wald statistic is less than .05). We can calculate an analogue of R using the equation in the chapter (for these data, the Wald statistic and its df are 20.856 and 1, respectively, and the original -2LL was 96.124). Therefore, R can be calculated as: \[ R = \sqrt{\frac{20.856-(2 \times 1)}{96.124}} = 0.4429 \] Hosmer and Lemeshow's measure (\(R^2_{L}\)) is calculated by dividing the model chi-square by the original −2LL. In this example the model chi-square after all variables have been entered into the model is 26.083, and the original -2LL (before any variables were entered) was 96.124. So \(R^2_{L}\) = 26.083/96.124 = .271, which is different from the value we would get by squaring the value of R given above (\(R^2 = 0.4429^2 = .196\)). Cox and Snell's \(R^2\) is 0.311 (see earlier output), which is calculated from this equation: \[ R_{\text{CS}}^{2} = 1 - exp\bigg(\frac{-2\text{LL}_\text{new} - (-2\text{LL}_\text{baseline})}{n}\bigg) \] The −2LL(new) is 70.04 and −2LL(baseline) is 96.124. The sample size, n, is 70, which gives us: \[ \begin{align} R_{\text{CS}}^{2} &= 1 - exp\bigg(\frac{70.04 - 96.124}{70}\bigg) \\ &= 1 - \exp( -0.3726) \\ &= 1 - e^{- 0.3726} \\ &= 0.311 \end{align} \] Nagelkerke's adjustment (see earlier output) is calculated from: \[ \begin{align} R_{N}^{2} &= \frac{R_\text{CS}^2}{1 - \exp\bigg( -\frac{-2\text{LL}_\text{baseline}}{n} \bigg)} \\ &= \frac{0.311}{1 - \exp\big( - \frac{96.124}{70} \big)} \\ &= \frac{0.311}{1 - e^{-1.3732}} \\ &= \frac{0.311}{1 - 0.2533} \\ &= 0.416 \end{align} \] As you can see, there's a fairly substantial difference between the two values! The odds ratio, exp(b) (Exp(B) in the output) is the change in odds. If the value is greater than 1 then it indicates that as the predictor increases, the odds of the outcome occurring increase. Conversely, a value less than 1 indicates that as the predictor increases, the odds of the outcome occurring decrease. 
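As a quick aside before we interpret the odds ratio for these data: the pseudo-R² arithmetic above translates directly into a few lines of code, should you ever want to check SPSS's values from the −2LL statistics alone. This is an illustrative sketch, not part of the SPSS output:

```python
import math

def pseudo_r_squared(neg2ll_baseline, neg2ll_new, n):
    """Hosmer-Lemeshow, Cox-Snell and Nagelkerke R-squared from -2LL values."""
    r2_hl = (neg2ll_baseline - neg2ll_new) / neg2ll_baseline
    r2_cs = 1 - math.exp((neg2ll_new - neg2ll_baseline) / n)
    r2_n = r2_cs / (1 - math.exp(-neg2ll_baseline / n))
    return r2_hl, r2_cs, r2_n

# Values from the display rule example: approximately (0.271, 0.311, 0.416)
print(pseudo_r_squared(96.124, 70.042, 70))
```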
In this example, we can say that the odds of a child who has false-belief understanding also having display rule understanding are 15 times higher than those of a child who does not have false-belief understanding. In the options, we requested a confidence interval for exp(b) and it can also be found in Output 4. Remember that if we ran 100 experiments and calculated confidence intervals for the value of exp(b), then these intervals would encompass the actual value of exp(b) in the population (rather than the sample) on 95 occasions. So, assuming that this experiment was one of the 95% where the confidence interval contains the population value then the population value of exp(b) lies between 4.84 and 51.71. However, this experiment might be one of the 5% that 'misses' the true value. The next output shows the test statistic for fb if it were removed from the model. Removing fb would result in a change in the -2LL that is highly significant (p < .001), which means that removing fb from the model would have a significant detrimental effect on the fit of the model - in other words, fb significantly predicts display rule understanding. We are also told about the variables currently not in the model. First of all, the residual chi-square (labelled Overall Statistics in the output), which is non-significant, tells us that none of the remaining variables have coefficients significantly different from zero. Furthermore, each variable is listed with its score statistic and significance value, and for both variables their coefficients are not significantly different from zero (as can be seen from the significance values of .128 for age and .261 for the interaction of age and false-belief understanding). Therefore, no further variables will be added to the model. The classification plot shows the predicted probabilities of a child passing the display rule task. If the model perfectly fits the data, then this histogram should show all of the cases for which the event has occurred on the right-hand side, and all the cases for which the event hasn't occurred on the left-hand side. In this example, the only significant predictor is dichotomous and so there are only two columns of cases on the plot. As a rule of thumb, the more that the cases cluster at each end of the graph, the better (see the book chapter for more details). In this example there are two Ns on the right of the plot and one Y on the left of the plot. These are misclassified cases, and the fact there are relatively few of them suggests the model is making correct predictions for most children. The predicted probabilities and predicted group memberships will have been saved as variables in the data editor (PRE_1 and PGR_1). These probabilities can be listed using the Analyze > Reports > Case Summaries … dialog box (see the book chapter). The output shows a selection of the predicted probabilities. Because the only significant predictor was a dichotomous variable, there are only two different probability values. The only significant predictor of display rule understanding was false-belief understanding, which could have a value of either 1 (pass the false-belief task) or 0 (fail the false-belief task). These values tell us that when a child doesn't possess second-order false-belief understanding (fb = 0, No), there is a probability of .2069 that they will pass the display rule task, approximately a 21% chance (1 out of 5 children). 
However, if the child does pass the false-belief task (fb = 1, yes), there is a probability of .8049 that they will pass the display rule task, an 80.5% chance (4 out of 5 children). Consider that a probability of 0 indicates no chance of the child passing the display rule task, and a probability of 1 indicates that the child will definitely pass the display rule task. Therefore, the values obtained suggest a role for false-belief understanding as a prerequisite for display rule understanding. Assuming we are content that the model is accurate and that false-belief understanding has some substantive significance, then we could conclude that false-belief understanding is the single best predictor of display rule understanding. Furthermore, age and the interaction of age and false-belief understanding did not significantly predict display rule understanding. This conclusion is fine in itself, but to be sure that the model is a good one, it is important to examine the residuals, which brings us nicely onto the next task. Are there any influential cases or outliers in the model for Task 1? To answer this question we need to look at the model residuals. These residuals are slightly unusual because they are based on a single predictor that is categorical. This is why there isn't a lot of variability in their values. The basic residual statistics for this example (Cook's distance, leverage, standardized residuals and DFBeta values) show little cause for concern. Note that all cases have DFBetas less than 1 and leverage statistics (LEV_1) close to the calculated expected value of 0.03. There are also no unusually high values of Cook's distance (COO_1) which, all in all, means that there are no influential cases having an effect on the model. For Cook's distance you should look for values which are particularly high compared to the other cases in the sample, and values greater than 1 are usually problematic. About half of the leverage values are a little high but given that the other statistics are fine, this is probably no cause for concern. The standardized residuals all have values within ±2.5 and predominantly have values within ±2, and so there seems to be very little here to concern us. Piff, Stancato, Côté, Mendoza-Denton, and Keltner (2012) used the behaviour of drivers to claim that people of a higher social class are more unpleasant. They classified social class by the type of car (Vehicle) on a five-point scale and observed whether the drivers cut in front of other cars at a busy intersection (Vehicle_Cut). Do a logistic regression to see whether social class predicts whether a driver cut in front of other vehicles (Piff et al. (2012) Vehicle.sav). Follow the general instructions for logistic regression to fit the model. The main dialog box should look like the figure below. The first block of output tells us about the model when only the constant is included. In this example there were 34 participants who did cut off other vehicles at intersections and 240 who did not. Therefore, of the two available options it is better to predict that all participants did not cut off other vehicles because this results in a greater number of correct predictions. The contingency table for the model in this basic state shows that predicting that all participants did not cut off other vehicles results in 0% accuracy for those who did cut off other vehicles, and 100% accuracy for those who did not. Overall, the model correctly classifies 87.6% of participants. 
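Incidentally, with no predictors in the model the constant is just the natural log of the observed odds of the outcome, so with 34 drivers who cut off other vehicles and 240 who did not we would expect (a quick hand check, not something SPSS prints as such):

\[
b_0 = \ln\left(\frac{34}{240}\right) = -1.95
\]

which matches the value reported in the next part of the output.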
The table labelled Variables in the Equation at this stage contains only the constant, which has a value of \(b_0 = −1.95\). The bottom line of the table labelled Variables not in the Equation reports the residual chi-square statistic (labelled Overall Statistics) as 4.01, which is only just significant at p = .045. This statistic tells us that the coefficient for the variable not in the model is significantly different from zero - in other words, that the addition of this variable to the model will significantly improve its fit. The next part of the output deals with the model after the predictor variable (Vehicle) has been added to the model. As such, a person is now classified as either cutting off other vehicles at an intersection or not, based on the type of vehicle they were driving (as a measure of social status). The output shows summary statistics about the new model. The overall fit of the new model is significant because the Model chi-square in the table labelled Omnibus Tests of Model Coefficients is significant, \(\chi^2\)(1) = 4.16, p = .041. Therefore, the model that includes the variable Vehicle predicted whether or not participants cut off other vehicles at intersections better than the model that includes only the constant. The classification table indicates how well the model predicts group membership. In step 1, the model correctly classifies 240 participants who did not cut off other vehicles and does not misclassify any (i.e. it correctly classifies 100% of cases). For participants who did cut off other vehicles, the model correctly classifies 0 and misclassifies 34 cases (i.e. correctly classifies 0% of cases). The overall accuracy of classification is, therefore, the weighted average of these two values (87.6%). Therefore, the accuracy is no different from when only the constant was included in the model. The significance of the Wald statistic is .047, which is less than .05. Therefore, we can conclude that the status of the vehicle the participant was driving significantly predicted whether or not they cut off another vehicle at an intersection. However, I'd interpret this significance in the context of the classification table, which showed us that adding the predictor of vehicle did not result in any more cases being more accurately classified. The odds ratio exp(b) (Exp(B) in the output) is the change in odds of the outcome resulting from a unit change in the predictor. In this example, exp(b) for vehicle in step 1 is 1.441, which is greater than 1, indicating that as the predictor (vehicle status) increases, the odds of the outcome increase - that is, the odds of the categorical variable taking the value 1 (cut off vehicle) rather than 0 (did not cut off vehicle) increase. In other words, drivers of vehicles of a higher status were more likely to cut off other vehicles at intersections. In a second study, Piff et al. (2012) observed the behaviour of drivers and classified social class by the type of car (Vehicle), but the outcome was whether the drivers cut off a pedestrian at a crossing (Pedestrian_Cut). Do a logistic regression to see whether social class predicts whether or not a driver prevents a pedestrian from crossing (Piff et al. (2012) Pedestrian.sav). The first block of output tells us about the model when only the constant is included. In this example there were 54 participants who did cut off pedestrians at intersections and 98 who did not. 
Therefore, of the two available options it is better to predict that all participants did not cut off pedestrians because this results in a greater number of correct predictions. The contingency table for the model in this basic state shows that predicting that all participants did not cut off pedestrians results in 0% accuracy for those who did cut off pedestrians, and 100% accuracy for those who did not. Overall, the model correctly classifies 64.5% of participants. The table labelled Variables in the Equation at this stage contains only the constant, which has a value of \(b_0 = −0.596\). The bottom line of the table labelled Variables not in the Equation reports the residual chi-square statistic (labelled Overall Statistics) as 4.77, which is only just significant at p = .029. This statistic tells us that the coefficient for the variable not in the model is significantly different from zero - in other words, that the addition of this variable to the model will significantly improve its fit. The next part of the output deals with the model after the predictor variable (Vehicle) has been added to the model. As such, a person is now classified as either cutting off pedestrians at an intersection or not, based on the type of vehicle they were driving (as a measure of social status). The output shows summary statistics about the new model. The overall fit of the new model is significant because the Model chi-square in the table labelled Omnibus Tests of Model Coefficients is significant, \(\chi^2\)(1) = 4.86, p = .028. Therefore, the model that includes the variable Vehicle predicted whether or not participants cut off pedestrians at intersections better than the model that includes only the constant. The classification table indicates how well the model predicts group membership. In step 1, the model correctly classifies 91 of the 98 participants who did not cut off pedestrians and misclassifies 7 (i.e. it correctly classifies 92.9% of cases). For participants who did cut off pedestrians, the model correctly classifies 6 and misclassifies 48 cases (i.e. correctly classifies 11.1% of cases). The overall accuracy of classification is the weighted average of these two values (63.8%). Therefore, the overall accuracy has decreased slightly (from 64.5% to 63.8%). The significance of the Wald statistic is .031, which is less than .05. Therefore, we can conclude that the status of the vehicle the participant was driving significantly predicted whether or not they cut off pedestrians at an intersection. The odds ratio exp(b) (Exp(B) in the output) is the change in odds of the outcome resulting from a unit change in the predictor. In this example, exp(b) for vehicle in step 1 is 1.495, which is greater than 1, indicating that as the predictor (vehicle status) increases, the odds of the outcome increase - that is, the odds of the categorical variable taking the value 1 (cut off pedestrian) rather than 0 (did not cut off pedestrian) increase. In other words, drivers of vehicles of a higher status were more likely to cut off pedestrians at intersections. 
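For readers who like to cross-check results outside SPSS, here is a hedged sketch of the same pedestrian model in Python using statsmodels. The file and variable names are assumptions based on the task description, and pyreadstat is simply one way of reading a .sav file; the point is only that a one-predictor logistic regression reproduces the b and Exp(B) discussed above:

```python
import numpy as np
import pyreadstat
import statsmodels.formula.api as smf

# Assumed file/variable names from the task description (Piff et al. (2012) Pedestrian.sav)
df, meta = pyreadstat.read_sav("Piff et al. (2012) Pedestrian.sav")

# Outcome: Pedestrian_Cut (1 = cut off pedestrian, 0 = did not); predictor: Vehicle (1-5 status scale)
model = smf.logit("Pedestrian_Cut ~ Vehicle", data=df).fit()

print(model.summary())
print(np.exp(model.params))   # odds ratios; Exp(B) for Vehicle should be close to 1.495
```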
Four hundred and sixty-seven lecturers completed questionnaire measures of Burnout (burnt out or not), Perceived Control (high score = low perceived control), Coping Style (high score = high ability to cope with stress), Stress from Teaching (high score = teaching creates a lot of stress for the person), Stress from Research (high score = research creates a lot of stress for the person) and Stress from Providing Pastoral Care (high score = providing pastoral care creates a lot of stress for the person). Cooper, Sloan, and Williams's (1988) model of stress indicates that perceived control and coping style are important predictors of burnout. The remaining predictors were measured to see the unique contribution of different aspects of a lecturer's work to their burnout. Conduct a logistic regression to see which factors predict burnout (Burnout.sav). Follow the general instructions for logistic regression to fit the model. The model should be fitted hierarchically because Cooper's model indicates that perceived control and coping style are important predictors of burnout. So, these variables should be entered in the first block: The second block should contain all other variables, and because we don't know anything much about their predictive ability, we might enter them in a stepwise fashion (I chose Forward: LR): At step 1, the overall fit of the model is significant, \(\chi^2\)(2) = 165.93, p < .001. The model accounts for 29.9% or 44.1% of the variance in burnout (depending on which measure of \(R^2\) you use). At step 2, the overall fit of the model is significant after both the first new variable (teaching), \(\chi^2\)(3) = 193.34, p < .001, and second new variable (pastoral) have been entered, \(\chi^2\)(4) = 205.40, p < .001. The final model accounts for 35.6% or 52.4% of the variance in burnout (depending on which measure of \(R^2\) you use). In terms of the individual predictors we could report the following (b with its standard error where given, and Exp(B) with its 95% confidence interval where given): Constant b = –4.48**; Perceived control b = 0.06** (SE = 0.01), Exp(B) = 1.06, 95% CI [1.04, 1.09]; Coping style b = 0.08**; Teaching stress b = –0.11**; Pastoral stress b = 0.04**. Note: \(R^2\) = .36 (Cox and Snell), .52 (Nagelkerke). Model \(\chi^2\)(4) = 205.40, p < .001. *p < .01, **p < .001. Burnout is significantly predicted by perceived control, coping style (as predicted by Cooper), stress from teaching and stress from giving pastoral care. The Exp(B) and direction of the beta values tell us that, for perceived control, coping ability and pastoral care, the relationships are positive. That is (and look back to the question to see the direction of these scales, i.e., what a high score represents), poor perceived control, poor ability to cope with stress and stress from giving pastoral care all predict burnout. However, for teaching, the relationship is the opposite way around: stress from teaching appears to be a positive thing as it predicts not becoming burnt out. An HIV researcher explored the factors that influenced condom use with a new partner (relationship less than 1 month old). The outcome measure was whether a condom was used (use: condom used = 1, not used = 0). 
The predictor variables were mainly scales from the Condom Attitude Scale (CAS) by Sacco, Levine, Reed, and Thompson (1991): gender; the degree to which the person views their relationship as 'safe' from sexually transmitted disease (safety); the degree to which previous experience influences attitudes towards condom use (sexexp); whether or not the couple used a condom in their previous encounter (Previous: 1 = condom used, 0 = not used, 2 = no previous encounter with this partner); the degree of self-control that a person has when it comes to condom use (selfcon); the degree to which the person perceives a risk from unprotected sex (perceive). Previous research (Sacco, Rickman, Thompson, Levine, & Reed, 1993) has shown that gender, relationship safety and perceived risk predict condom use. Verify these previous findings and test whether self-control, previous usage and sexual experience predict condom use (Condom.sav). Follow the general instructions for logistic regression to fit the model. We run a hierarchical logistic regression entering perceive, safety and gender in the first block: In the second block we add previous, selfcon and sexexp. I used forced entry on both blocks: For the variable previous I used an indicator contrast with 'No condom' (the first category) as the base category. I left gender with the default indicator coding: In this analysis we forced perceive, safety and gender into the model first. The first output tells us that 100 cases have been accepted, that the dependent variable has been coded 0 and 1 (because this variable was coded as 0 and 1 in the data editor, these codings correspond exactly to the data itself). The output for block 1 provides information about the model after the variables perceive, safety and gender have been added. The −2LL has dropped to 105.77, which is a change of 30.89 (the model chi-square). This value tells us about the model as a whole, whereas the block tells us how the model has improved since the last block. The change in the amount of information explained by the model is significant (\(\chi^2\)(3) = 30.89, p < .001) and so using perceived risk, relationship safety and gender as predictors significantly improves our ability to predict condom use. Finally, the classification table shows us that 74% of cases can be correctly classified using these three predictors. Hosmer and Lemeshow's goodness-of-fit test statistic tests the hypothesis that the observed data are significantly different from the predicted values from the model. So, in effect, we want a non-significant value for this test (because this would indicate that the model does not differ significantly from the observed data). In this case (\(\chi^2\)(8) = 9.70, p = .287) it is non-significant, which is indicative of a model that is predicting the real-world data fairly well. The table labelled Variables in the Equation tells us the parameters of the model for the first block. The significance values of the Wald statistics for each predictor indicate that both perceived risk (Wald = 17.78, p < .001) and relationship safety (Wald = 4.54, p = .033) significantly predict condom use. Gender, however, does not (Wald = 0.41, p = .523). The odds ratio for perceived risk (Exp(B) = 2.56 [1.65, 3.96] indicates that if the value of perceived risk goes up by 1, then the odds of using a condom also increase (because Exp(B) is greater than 1). 
The confidence interval for this value ranges from 1.65 to 3.96, so if this is one of the 95% of samples for which the confidence interval contains the population value the value of Exp(B) in the population lies somewhere between these two values. In short, as perceived risk increases by 1, people are just over twice as likely to use a condom. The odds ratio for relationship safety (Exp(B) = 0.63 [0.41, 0.96]) indicates that if relationship safety increases by one point, then the odds of using a condom decrease (because Exp(B) is less than 1). The confidence interval for this value ranges from 0.41 to 0.96, so if this is one of the 95% of samples for which the confidence interval contains the population value the value of Exp(B) in the population lies somewhere between these two values. In short, as relationship safety increases by one unit, subjects are about 1.6 times less likely to use a condom. The odds ratio for gender (Exp(B) = 0.729 [0.28, 1.93]) indicates that as gender changes from 1 (female) to 0 (male), then the odds of using a condom decrease (because Exp(B) is less than 1). The confidence interval for this value crosses 1. Assuming that this is one of the 95% of samples for which the confidence interval contains the population value this means that the direction of the effect in the population could indicate either a positive (Exp(B) > 1) or negative (Exp(B) < 1) relationship between gender and condom use. A glance at the classification plot brings not such good news because a lot of cases are clustered around the middle. This pattern indicates that the model could be making better predictions (there are currently a lot of cases that have a probability of condom use at around 0.5). The output for block 2 shows what happens to the model when our new predictors are added (previous use, self-control and sexual experience). So, we begin with the model that we had in block 1 and we then add previous, selfcon and sexexp to it. The effect of adding these predictors to the model is to reduce the –2 log-likelihood to 87.97, a reduction of 48.69 from the original model (the model chi-square) and an additional reduction of 17.80 from block 1 (the block statistics). This additional improvement of block 2 is significant (\(\chi^2\)(4) = 17.80, p = .001), which tells us that including these three new predictors in the model has significantly improved our ability to predict condom use. The classification table tells us that the model is now correctly classifying 78% of cases. Remember that in block 1 there were 74% correctly classified and so an extra 4% of cases are now correctly classified (not a great deal more – in fact, examining the table shows us that only four extra cases have now been correctly classified). The table labelled Variables in the Equation contains details of the final model. The significance values of the Wald statistics for each predictor indicate that both perceived risk (Wald = 16.04, p < .001) and relationship safety (Wald = 4.17, p = .041) still significantly predict condom use and, as in block 1, gender does not (Wald = 0.00, p = .996). Previous use has been split into two components (according to whatever contrasts were specified for this variable). Looking at the very first output, we are told the parameter codings for previous(1) and previous(2). From the output we can see that previous(1) compares the condom used group against the no condom used group, and previous(2) compares the first time with partner against the no condom used group. 
Therefore we can tell that (1) using a condom on the previous occasion does predict use on the current occasion (Wald = 3.88, p = .049); and (2) there is no significant difference between not using a condom on the previous occasion and this being the first time (Wald = 0.00, p = .991). Of the other new predictors we find that self-control predicts condom use (Wald = 7.51, p = .006) but sexual experience does not (Wald = 2.61, p = .106). The odds ratio for perceived risk (Exp(B) = 2.58 [1.62, 4.11]) indicates that if the value of perceived risk goes up by 1, then the odds of using a condom also increase (because Exp(B) is greater than 1). The confidence interval for this value ranges from 1.62 to 4.11, so if this is one of the 95% of samples for which the confidence interval contains the population value the value of Exp(B) in the population lies somewhere between these two values. In short, as perceived risk increases by 1, people are just over twice as likely to use a condom. The odds ratio for previous(1) (Exp(B) = 2.97 [1.01, 8.75]) indicates that if the value of previous usage goes up by 1 (i.e., changes from not having used one to having used one), then the odds of using a condom also increase. If this is one of the 95% of samples for which the confidence interval contains the population value then the value of Exp(B) in the population lies somewhere between 1.01 and 8.75. In other words it is a positive relationship: previous use predicts future use. For previous(2) the odds ratio (Exp(B) = 0.98 [0.06, 15.29]) indicates that if the value of previous usage changes from not having used one to this being the first time with this partner, then the odds of using a condom do not change (because the value is very nearly equal to 1). If this is one of the 95% of samples for which the confidence interval contains the population value then the value of Exp(B) in the population lies somewhere between 0.06 and 15.29 and because this interval contains 1 it means that the population relationship could be either positive or negative (and very wide ranging). The odds ratio for self-control (Exp(B) = 1.42 [1.10, 1.82]) indicates that if self-control increases by one point, then the odds of using a condom increase also. As self-control increases by one unit, people are about 1.4 times more likely to use a condom. If this is one of the 95% of samples for which the confidence interval contains the population value then the value of Exp(B) in the population lies somewhere between 1.10 and 1.82. In other words it is a positive relationship. Finally, the odds ratio for sexual experience (Exp(B) = 1.20 [0.95, 1.49]) indicates that as sexual experience increases by one unit, people are about 1.2 times more likely to use a condom. If this is one of the 95% of samples for which the confidence interval contains the population value then the value of Exp(B) in the population lies somewhere between 0.95 and 1.49 and because this interval contains 1 it means that the population relationship could be either positive or negative. A glance at the classification plot brings better news because a lot of cases that were clustered in the middle are now spread towards the edges. Therefore, overall this new model is more accurately classifying cases compared to block 1. How reliable is the model in Task 6? First, we'll check for multicollinearity (see the book for how to do this). The tolerance values for all variables are close to 1 and VIF values are much less than 10, which suggests no collinearity issues. 
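The book describes obtaining these collinearity statistics from the linear regression menu; purely as a hedged illustration, the same tolerance and VIF figures could be computed directly from the continuous predictors with a short script (the file and column names below are assumptions taken from the task description):

```python
import pyreadstat
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Assumed file/column names from the condom use example (Condom.sav)
df, meta = pyreadstat.read_sav("Condom.sav")
predictors = ["perceive", "safety", "sexexp", "selfcon"]   # continuous predictors only

X = sm.add_constant(df[predictors])
for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")
```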
The table labelled Collinearity Diagnostics shows the eigenvalues of the scaled, uncentred cross-products matrix, the condition index and the variance proportions for each predictor. If the eigenvalues are fairly similar then the derived model is likely to be unchanged by small changes in the measured variables. The condition indexes represent the square root of the ratio of the largest eigenvalue to the eigenvalue of interest (so, for the dimension with the largest eigenvalue, the condition index will always be 1). The variance proportions shows the proportion of the variance of each predictor's b that is attributed to each eigenvalue. In terms of collinearity, we are looking for predictors that have high proportions on the same small eigenvalue, because this would indicate that the variances of their b coefficients are dependent (see the main textbook for more detail). No variables have similarly high variance proportions for the same dimensions. The result of this output suggests that there is no problem of collinearity in these data. Residuals should be checked for influential cases and outliers. The output lists cases with standardized residuals greater than 2. In a sample of 100, we would expect around 5–10% of cases to have standardized residuals with absolute values greater than this value. For these data we have only four cases (out of 100) and only one of these has an absolute value greater than 3. Therefore, we can be fairly sure that there are no outliers (the number of cases with large standardized residuals is consistent with what we would expect). Using the final model from Task 6, what are the probabilities that participants 12, 53 and 75 will use a condom? The values predicted for these cases will depend on exactly how you ran the analysis (and the parameter coding used on the variable previous). Therefore, your answers might differ slightly from mine. A female who used a condom in her previous encounter scores 2 on all variables except perceived risk (for which she scores 6). Use the model in Task 6 to estimate the probability that she will use a condom in her next encounter. Use the logistic regression equation: \[ p(Y_i) = \frac{1}{1 + e^{-Z}} \\ \] where \[ Z = b_0 + b_1X_{1i} + b_2X_{2i} + ... + b_nX_{ni} \] We need to use the values of b from the output (final model) and the values of X for each variable (from the question). The values of b we can get from an earlier output: For the values of X, remember that we need to check how the categorical variables were coded. Again, refer back to an earlier output: For example, a female is coded as 0, so that will be the value of X for this person. Similarly, she used a condom with her previous partner so this will be coded as 1 for previous(1) and 0 for previous(2). The table below shows the values of b and X and then multiplies them. 
Perceived risk: b = 0.949, X = 6, bX = 5.694
Relationship safety: b = −0.482, X = 2, bX = −0.964
Biological sex: b = −0.003, X = 0, bX = 0.000
Previous use (1): b = 1.087, X = 1, bX = 1.087
Previous use (2): b = −0.017, X = 0, bX = 0.000
Self-control: b = 0.348, X = 2, bX = 0.696
Sexual experience: b = 0.180, X = 2, bX = 0.360
Constant: b = −4.957, X = 1, bX = −4.957
We now sum the values in the last column to get the number in the brackets in the equation above: \[ \begin{align} Z &= 5.694 -0.964 + 0.000 + 1.087 + 0.000 + 0.696 + 0.360 -4.957 \\ &= 1.916 \end{align} \] Substituting this value of Z into the logistic regression equation: \[ \begin{align} p(Y_i) &= \frac{1}{1 + e^{-Z}} \\ &= \frac{1}{1 + e^{-1.916}} \\ &= \frac{1}{1 + 0.147} \\ &= \frac{1}{1.147} \\ &= 0.872 \end{align} \] Therefore, there is a 0.872 probability (87.2% if you prefer) that she will use a condom on her next encounter. At the start of the chapter we looked at whether the type of instrument a person plays is connected to their personality. A musicologist measured Extroversion and Agreeableness in 200 singers and guitarists (Instrument). Use logistic regression to see which personality variables (ignore their interaction) predict which instrument a person plays (Sing or Guitar.sav). The first part of the output tells us about the model when only the constant is included (i.e., all predictor variables are omitted). The log-likelihood of this baseline model is 271.957, which represents the fit of the model when including only the constant. At this point, the model predicts that every participant is a singer, because this results in more correct classifications than if the model predicted that everyone was a guitarist. Self-evidently, this model has 0% accuracy for the participants who played the guitar, and 100% accuracy for singers. Overall, the model correctly classifies 53.8% of participants. The next part of the output summarizes the model, which at this stage tells us the value of the constant (\(b_0\)), which is −0.153. The table labelled Variables not in the Equation reports the residual chi-square statistic (labelled Overall Statistics) as 115.231, which is significant at p < .001. This statistic tells us that the coefficients for the variables not in the model are significantly different from zero – in other words, the addition of one or more of these variables to the model will significantly improve predictions from the model. This table also lists both predictors with the corresponding value of Rao's efficient score statistic (labelled Score). Both excluded variables have significant score statistics at p < .001 and so both could potentially make a contribution to the model. The next part of the output deals with the model after these predictors have been added to the model. The overall fit of the new model is assessed using the −2 log-likelihood statistic (−2LL). Remember that large values of the log-likelihood statistic indicate poorly fitting statistical models. The value of −2 log-likelihood for a new model should, therefore, be smaller than the value for the previous model if the fit is improving. When only the constant was included, −2LL = 271.96, but with the two predictors added it has reduced to 225.18 (a change of 46.78), which tells us that the model is better at predicting which instrument participants played when both predictors are included. The classification table indicates how well the model predicts group membership. Before the predictors were entered into the model, the model correctly classified the 106 participants who are singers and misclassified all of the guitarists. So, overall it classified 53.8% of cases (see above). 
After the predictors are added it correctly classifies 103 of the 106 singers and 87 of the 91 guitarists. Overall then, it correctly classifies 96.4% of cases. That is a huge improvement (which you might want to think about for the following task!). The table labelled Variables in the Equation tells us the estimates for the coefficients for the predictors included in the model. These coefficients represent the change in the logit (log odds) of the outcome variable associated with a one-unit change in the predictor variable. The Wald statistics suggest that both Extroversion, Wald(1) = 22.90, p < .001, and Agreeableness, Wald(1) = 15.30, p < .001, significantly predict the instrument played. The corresponding odds ratio (labelled Exp(B)) tells us the change in odds associated with a unit change in the predictor. The odds ratio for Extroversion is 0.238, which is less than 1, meaning that as the predictor (extroversion) increases, the odds of the outcome decrease, that is, the odds of being a guitarist (compared to a singer) decrease. In other words, more extroverted participants are more likely to be singers. The odds ratio for Agreeableness is 1.429, which is greater than 1, meaning that as agreeableness increases, the odds of the outcome increase, that is, the odds of being a guitarist (compared to a singer) increase. In other words, more agreeable people are more likely to be guitarists. Note that the odds ratio for the constant is insanely large, which brings us neatly onto the next task … Which problem associated with logistic regression might we have in the analysis in Task 10? Looking at the classification plot, it looks as though we might have complete separation. The model almost perfectly predicts group membership. In a new study, the musicologist in Task 10 extended her previous one by collecting data from 430 musicians who played their voice (singers), guitar, bass, or drums (Instrument). She measured the same personality variables but also their Conscientiousness (Band Personality.sav). Use multinomial logistic regression to see which of these three variables (ignore interactions) predict which instrument a person plays (use drums as the reference category). To fit the model select Analyze > Regression > Multinomial Logistic …. The main dialog box should look like the figure below. Drag the outcome Instrument to the box labelled Dependent. We can specify which category to compare other categories against by clicking the Reference Category … button, but the default is to use the last category, and this default is perfect for us because drums is the last category and is also the category that we want to use as our reference category. Next, specify the predictor variables by dragging them (Agreeableness, Extroversion and Conscientiousness) to the box labelled Covariate(s). For a basic analysis in which all of these predictors are forced into the model, this is all we really need to do, but consult the book chapter to select other options. The first output shows the log-likelihood. The change in log-likelihood indicates how much new variance has been explained by the model. The chi-square test assesses the decrease in unexplained variance from the baseline model (1122.82) to the final model (450.91), a difference of 671.91. This change is significant, which means that our final model explains a significant amount of the original variability in the instrument played (in other words, it's a better fit than the original model). 
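If you wanted to sanity-check the multinomial model outside SPSS, a rough equivalent can be sketched in Python. This is illustrative only: the file and variable names are assumptions, and note that statsmodels treats the first outcome category as the reference rather than drums-as-last, so the coefficient blocks will be organised differently from the SPSS output:

```python
import pandas as pd
import pyreadstat
import statsmodels.api as sm

# Assumed file/column names from the task description (Band Personality.sav)
df, meta = pyreadstat.read_sav("Band Personality.sav")

y = pd.Categorical(df["Instrument"]).codes                   # integer-coded outcome categories
X = sm.add_constant(df[["Extroversion", "Agreeableness", "Conscientiousness"]])

model = sm.MNLogit(y, X).fit()
print(model.summary())   # one block of coefficients per non-reference category
```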
The next part of the output relates to the fit of the model. We know that the model with predictors is significantly better than the one without predictors, but is it a good fit of the data? The Pearson and deviance statistics both test whether the predicted values from the model differ significantly from the observed values. If these statistics are not significant then the model is a good fit. Here we have contrasting results: the deviance statistic says that the model is a good fit of the data (p = 1.00, which is much higher than .05), but the Pearson test indicates the opposite, namely that predicted values are significantly different from the observed values (p < .001). Oh dear. Differences between these statistics can be caused by overdispersion. We can compute the dispersion parameters from both statistics: \[ \begin{align} \phi_\text{Pearson} &= \frac{\chi_{\text{Pearson}}^2}{\text{df}} = \frac{1042672.72}{1140} = 914.63 \\ \phi_\text{Deviance} &= \frac{\chi_{\text{Deviance}}^2}{\text{df}} = \frac{448.032}{1140} = 0.39 \end{align} \] The dispersion parameter based on the Pearson statistic is 914.63, which is ridiculously high compared to the value of 2, which I cited in the chapter as being a threshold for 'problematic'. Conversely, the value based on the deviance statistic is below 1, which we saw in the chapter indicated underdispersion. Again, these values contradict, so all we can really be sure of is that there's something pretty weird going on. Large dispersion parameters can occur for reasons other than overdispersion, for example omitted variables or interactions and predictors that violate the linearity of the logit assumption. In this example there were several interaction terms that we could have entered but chose not to, which might go some way to explaining these strange results. The output also shows us the two other measures of \(R^2\). The first is Cox and Snell's measure (.81) and the second is Nagelkerke's adjusted value (.86). They are reasonably similar values and represent very large effects. The likelihood ratio tests can be used to ascertain the significance of predictors to the model. This table tells us that extroversion had a significant main effect on type of instrument played, \(\chi^2\)(3) = 339.73, p < .001, as did agreeableness, \(\chi^2\)(3) = 100.16, p < .001, and conscientiousness, \(\chi^2\)(3) = 84.26, p < .001. These likelihood statistics can be seen as overall statistics that tell us which predictors significantly enable us to predict the outcome category, but they don't really tell us specifically what the effect is. To see this we have to look at the individual parameter estimates. We specified the last category (drums) as our reference category; therefore, each section of the table compares one of the instrument categories against the drums category. Let's look at the effects one by one; because we are just comparing two categories the interpretation is the same as for binary logistic regression (so if you don't understand my conclusions reread the book chapter): Extroversion. Whether a person was a drummer or a singer was significantly predicted by how extroverted they were, b = 1.70, Wald \(\chi^2\)(1) = 54.34, p < .001. The odds ratio tells us that as extroversion increases by one unit, the change in the odds of being a singer (rather than being a drummer) is 5.47. 
The odds ratio (5.47) is greater than 1, therefore we can say that as participants move up the extroversion scale, they were more likely to be a singer (coded 1) than they were to be a drummer (coded 0). Similarly, whether a person was a drummer or a bassist was significantly predicted by how extroverted they were, b = 0.25, Wald \(\chi^2\)(1) = 18.28, p < .001. The odds ratio tells us that as extroversion increases by one unit, the change in the odds of being a bass player (rather than being a drummer) is 1.29, so the more extroverted the participant was, the more likely they were to be a bass player than they were to be a drummer. However, whether a person was a drummer or a guitarist was not significantly predicted by how extroverted they were, b = .06, Wald \(\chi^2\)(1) = 3.58, p = .06. Agreeableness. Whether a person was a drummer or a singer was significantly predicted by how agreeable they were, b = −0.40, Wald \(\chi^2\)(1) = 35.49, p < .001. The odds ratio tells us that as agreeableness increases by one unit, the change in the odds of being a singer (rather than being a drummer) is 0.67, so the more agreeable the participant was, the more likely they were to be a drummer than they were to be a singer. Similarly, whether a person was a drummer or a bassist was significantly predicted by how agreeable they were, b = −0.40, Wald \(\chi^2\)(1) = 41.55, p < .001. The odds ratio tells us that as agreeableness increases by one unit, the change in the odds of being a bass player (rather than being a drummer) is 0.67, so, the more agreeable the participant was, the more likely they were to be a drummer than they were to be a bass player. However, whether a person was a drummer or a guitarist was not significantly predicted by how agreeable they were, b = .02, Wald \(\chi^2\)(1) = 0.51, p = .48. Conscientiousness. Whether a person was a drummer or a singer was significantly predicted by how conscientious they were, b = −0.35, Wald \(\chi^2\)(1) = 21.27, p < .001. The odds ratio tells us that as conscientiousness increases by one unit, the change in the odds of being a singer (rather than being a drummer) is 0.71, so the more conscientious the participant was, the more likely they were to be a drummer than they were to be a singer. Similarly, whether a person was a drummer or a bassist was significantly predicted by how conscientious they were, b = −0.36, Wald \(\chi^2\)(1) = 40.93, p < .001. The odds ratio tells us that as conscientiousness increases by one unit, the change in the odds of being a bass player (rather than being a drummer) is 0.70, so the more conscientious the participant was, the more likely they were to be a drummer than they were to be a bass player. However, whether a person was a drummer or a guitarist was not significantly predicted by how conscientious they were, b = 0.00, Wald \(\chi^2\)(1) = 0.00, p = 1.00. Using the cosmetic surgery example, run the analysis described in Section 21.6.5 but also including BDI, age and sex as fixed effect predictors. What differences does including these predictors make? To fit the model follow the instructions in section 21.6.6 of the book except that the main dialog box (Figure 21.20 in the book) should include Surgery, Base_QoL, Age, Sex, Reason and BDI in the list of covariates: hold down Ctrl (⌘ on MacOS) to select all of these simultaneously and drag them to the box labelled Covariate(s):. 
We'd set up the Fixed Effects dialog box (Figure 21.21 in the book) as follows: We'd set up the Random Effects dialog box (Figure 21.18 in the book) as follows: Set all of the other options as described for the example in the book. We can test the fit of this new model using the log-likelihood statistics. The model in the book had a -2LL of 1789.05 with 9 degrees of freedom; by adding the new predictors, this value has changed to 1725.39 with 12 degrees of freedom. This is a change of 63.66 with a difference of 3 degrees of freedom: \[ \begin{aligned} \chi_\text{Change}^2 &= 1789.05 - 1725.39 = 63.66 \\ \text{df}_\text{Change} &= 12 - 9 = 3 \end{aligned} \] The critical value for the chi-square statistic with df = 3 (see the book Appendix) is 7.81 at p < .05; because 63.66 exceeds this value, the change is significant. Including these three predictors has improved the fit of the model. Age, F(1, 150.83) = 37.32, p < .001, and BDI, F(1, 260.83) = 16.74, p < .001, significantly predicted quality of life after surgery but Sex did not, F(1, 264.48) = 0.90, p = .34. The main difference that including these factors has made is that the main effect of Reason has become non-significant, and the Reason × Surgery interaction has become more significant (its b has changed from 4.22, p = .013, to 5.02, p = .001). We could break down this interaction as we did in the chapter by splitting the file and running a simpler analysis (without the interaction and the main effect of Reason, but including Base_QoL, Surgery, BDI, Age, and Sex). If you do these analyses you will get the parameter tables shown below. These tables show a similar pattern to the example in the book. For those operated on only to change their appearance, surgery significantly predicted quality of life after surgery, b = -3.16, t(5.25) = -2.63, p = .04. Unlike when age, sex and BDI were not included, this effect is now significant. The negative gradient shows that in these people quality of life was lower after surgery compared to the control group. However, for those who had surgery to solve a physical problem, surgery did not significantly predict quality of life, b = 0.67, t(10.59) = 0.58, p = .57. In essence, the inclusion of age, sex and BDI has made very little difference in this latter group. However, the slope was positive, indicating that people who had surgery scored higher on quality of life than those on the waiting list (although not significantly so!). The interaction effect, therefore, as in the chapter, reflects the difference in slopes for surgery as a predictor of quality of life in those who had surgery for physical problems (slight positive slope) and those who had surgery purely for vanity (a negative slope). Using our growth model example in this chapter, analyse the data but include Sex as an additional covariate. Does this change your conclusions? To fit the model follow the instructions in the book except that the main dialog box (Figure 21.27 in the book) should look like this: The Fixed Effects dialog box should look like this: All other dialog boxes should be completed as in the book (section 21.7.4). The output is the same as the final one in the chapter, except that it now includes the effect of Sex. To see whether Sex has improved the model we compare the value of -2LL for this new model to the value in the previous model.
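This kind of -2LL comparison can also be scripted. The sketch below is mine rather than part of the book's SPSS instructions (SPSS reports the -2LL values in its output); it wraps the calculation in a small R function and applies it to the cosmetic surgery comparison above. The Sex comparison that follows works in exactly the same way.

    # Likelihood ratio (chi-square change) test from two nested models' -2LL values
    ll_change_test <- function(ll_old, df_old, ll_new, df_new) {
      chi_change <- ll_old - ll_new      # improvement in -2LL
      df_change  <- df_new - df_old      # extra parameters estimated
      p <- pchisq(chi_change, df = df_change, lower.tail = FALSE)
      c(chi_change = chi_change, df = df_change, p = p)
    }
    # Cosmetic surgery example above: adding Age, Sex and BDI
    ll_change_test(ll_old = 1789.05, df_old = 9, ll_new = 1725.39, df_new = 12)
    # chi_change = 63.66, df = 3, p < .001 -> a significant improvement in fit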
We have added only one term to the model, so the new degrees of freedom will have risen by 1, from 8 to 9 (you can find the value of 9 in the row labelled Total in the column labelled Number of Parameters, in the table called Model Dimension). We can compute the change in -2LL as a result of Sex by subtracting the -2LL for this model from the -2LL for the last model in the chapter: \[ \begin{aligned} \chi_\text{Change}^2 &= 1798.86 - 1798.74 = 0.12 \\ \text{df}_\text{Change} &= 9 - 8 = 1 \end{aligned} \] The critical values for the chi-square statistic for df = 1 in the Appendix are 3.84 (p < .05) and 6.63 (p < .01); therefore, this change is not significant because 0.12 is less than the critical value of 3.84. The table of fixed effects and the parameter estimates tell us that the linear, F(1, 221.41) = 10.01, p = .002, and quadratic, F(1, 212.51) = 9.41, p = .002, trends both significantly described the pattern of the data over time; however, the cubic trend, F(1, 214.39) = 3.19, p = .076, does not. These results are basically the same as in the chapter. Sex itself is also not significant in this table, F(1, 113.02) = 0.11, p = .736. The output also tells us about the random parameters in the model. First, the variance of the random intercepts was Var(\(u_{0j}\)) = 3.90. This suggests that we were correct to assume that life satisfaction at baseline varied significantly across people. Also, the variance of the people's slopes varied significantly Var(\(u_{1j}\)) = 0.24. This suggests also that the change in life satisfaction over time varied significantly across people too. Finally, the covariance between the slopes and intercepts (−0.39) suggests that as intercepts increased, the slope decreased. These results confirm what we already know from the chapter. The trend in the data is best described by a second-order polynomial, or a quadratic trend. This reflects the initial increase in life satisfaction 6 months after finding a new partner but a subsequent reduction in life satisfaction at 12 and 18 months after the start of the relationship. The parameter estimates tell us much the same thing. As such, our conclusions have been unaffected by adjusting for biological sex. Hill, Abraham, and Wright (2007) examined whether providing children with a leaflet based on the 'theory of planned behaviour' increased their exercise. There were four different interventions (Intervention): a control group, a leaflet, a leaflet and quiz, and a leaflet and a plan. A total of 503 children from 22 different classrooms were sampled (Classroom). The 22 classrooms were randomly assigned to the four different conditions. Children were asked 'On average over the last three weeks, I have exercised energetically for at least 30 minutes ______ times per week' after the intervention (Post_Exercise). Run a multilevel model analysis on these data (Hill et al. (2007).sav) to see whether the intervention affected the children's exercise levels (the hierarchy is children within classrooms within interventions). To fit the model use the Analyze > Mixed Models > Linear … menu to access the first dialog box, which should be completed as follows: Click to access the main dialog box and complete it as follows: The Random Effects dialog box should look like this: The Estimation dialog box should look like this: The Statistics dialog box should look like this: The first part of the output tells you details about the model that are being entered into the SPSS machinery. 
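Before reading the output, it may help to see the structure of this model written out. A rough R equivalent using the lme4 package is sketched below; this is my own illustration, not the book's method (the book fits the model in SPSS), the variable names are taken from the task description, and the data import is assumed.

    # Children (level 1) nested in classrooms (level 2); intervention as the fixed effect
    library(lme4)
    hill <- read.csv("Hill_et_al_2007.csv")   # assumed export of Hill et al. (2007).sav
    m_task3 <- lmer(Post_Exercise ~ Intervention + (1 | Classroom), data = hill)
    summary(m_task3)   # fixed effect of intervention; random intercept variance for classrooms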
The Information Criteria table gives some of the popular methods for assessing the fit of models. AIC and BIC are two of the most popular. The Fixed Effects box gives the information in which most of you will be most interested. It says the effect of intervention is non-significant, F(3, 22.10) = 2.08, p = .131. A few words of warning: calculating a p-value requires assuming that the null hypothesis is true. In most of the statistical procedures covered in this book you would construct a probability distribution based on this null hypothesis, and often it is fairly simple, like the z- or t-distributions. For multilevel models the probability distribution of the null is often not known. Most packages that estimate p-values for multilevel models estimate this probability in a complex way. This is why the denominator degrees of freedom are not whole numbers. For more complex models there is concern about the accuracy of some of these approximations. Many methodologists urge caution in rejecting hypotheses even when the observed p-value is less than .05. The random effects show how much of the variability in responses is associated with which class a person is in: 0.017178/(0.017178 + 0.290745) = 5.58%. This is fairly small. The corresponding Wald z just fails to reach the traditional level for statistical significance, p = .057. The result from these data could be that the intervention failed to affect exercise. However, there is a lot of individual variability in the amount of exercise people get. A better approach would be to take into account the amount of self-reported exercise prior to the study as a covariate, which leads us to the next task. Repeat the analysis in Task 3 but include the pre-intervention exercise scores (Pre_Exercise) as a covariate. What difference does this make to the results? To fit the model follow the instructions for the previous task except that the main dialog box should be completed as follows: and the Fixed Effects dialog box should look like this: Otherwise complete the dialog boxes in the same way as the previous example. The first part of the output tells you details about the model that is being entered into the SPSS machinery. The Information Criteria box gives some of the popular methods for assessing the fit of models. AIC and BIC are two of the most popular. The Fixed Effects box gives the information in which most of you will be most interested. It says the effect of pre-intervention exercise level is a significant predictor of post-intervention exercise level, F(1, 478.54) = 719.775, p < .001, and, most interestingly, the effect of intervention is now significant, F(1, 22.83) = 8.02, p = .001. These results show that when we adjust for the amount of self-reported exercise prior to the study, the intervention becomes a significant predictor of post-intervention exercise levels. The random effects show how much of the variability in responses is associated with which class a person is in: 0.001739/(0.001739 + 0.122045) = 1.40%. This is pretty small. The corresponding Wald z is not significant, p = .410. Copyright © 2000-2019, Professor Andy Field.
==[[SMHS| Scientific Methods for Health Sciences]] - Limit Theory: Central Limit Theorem and Law of Large Numbers ==
===Overview:===
The two most commonly used theorems in the field of probability – the Law of Large Numbers (LLN) and the Central Limit Theorem (CLT) – are commonly referred to as the first and second fundamental laws of probability. The CLT suggests that the arithmetic mean of a sufficiently large number of iterates of independent random variables, given certain conditions, will be approximately normally distributed. The LLN states that when the same experiment is performed a large number of times, the average of the results obtained should be close to the expected value and tends to get closer to the expected value as the number of trials increases. In this section, we are going to introduce these two probability theorems and illustrate their applications with examples. Finally, we will show some common misconceptions of the CLT and LLN.
===Motivation:===
Suppose we independently conduct one experiment repeatedly. Assume that we are interested in the relative frequency of the occurrence of one event whose probability of being observed at each experiment is p. The ratio of the observed frequency of that event to the total number of repetitions converges towards p as the number of (identical and independent) experiments increases. This is an informal statement of the Law of Large Numbers (LLN). Another important property that comes with a large sample size is the CLT. What would be the situation when the experiment is repeated with a sufficiently large number of iterations? Does it matter what original distribution each individual outcome follows in this case? What would the CLT tell us in situations like this, and how can we apply this theorem to help us solve more complicated problems in research?
===Theory===
====Law of Large Numbers (LLN)====
When performing the same experiment a large number of times, the average of the results obtained should be close to the expected value and tends to get closer to the expected value with an increasing number of trials. *It is generally necessary to draw the parallels between the formal LLN statements (in terms of sample averages) and the frequent interpretations of the LLN (in terms of probabilities of various events). Suppose we observe the same process independently multiple times. Assume a binarized (dichotomous) function of the outcome of each trial is of interest (e.g., failure may denote the event that the continuous voltage measure $< 0.5V$, and the complement, success, that voltage $≥ 0.5V$; this is the situation in electronic chips, which binarize electric currents to 0 or 1). Researchers are often interested in the event of observing a success at a given trial or the number of successes in an experiment consisting of multiple trials. Let's denote $p=P(success)$ at each trial. Then, the ratio of the total number of successes to the number of trials $(n)$ is the average: $\bar X_{n}=\frac{1}{n}\sum_{i=1}^{n}X_{i}$, where $X_{i}=0$ if failure and 1 if success. Hence, $\bar X_{n}=\hat{p}$, the ratio of the observed frequency of that event to the total number of repetitions, estimates the true $p=P(success)$. Therefore, $\hat{p}$ converges towards $p$ as the number of (identical and independent) trials increases. *LLN Application: One demonstration of the law of large numbers provides practical algorithms for estimation of transcendental numbers. The two most popular transcendental numbers are $\pi$ and $e$.
*[http://socr.ucla.edu/htmls/SOCR_Experiments.html The SOCR Uniform e-Estimate Experiment] provides the complete details of this simulation. In a nutshell, we can estimate the value of the natural number e using random sampling from the Uniform distribution. Suppose $X_{1},X_{2},…,X_{n}$ are drawn from the Uniform distribution on $(0,1)$ and define $U= \arg\min_{n}( X_{1}+X_{2}+⋯+X_{n}>1)$; note that all $X_{i}≥0.$ Now, the expected value $E(U)=e\approx 2.7182.$ Therefore, by the [[SOCR_EduMaterials_Activities_LawOfLargeNumbers|LLN]], taking averages of ${U_{1},U_{2},…,U_{k}}$ values, each computed from random samples $X_{1},X_{2},…,X_{n}\sim U(0,1)$ as described above, will provide a more accurate estimate (as $k\rightarrow\infty$) of the natural number $e$. The Uniform e-Estimate Experiment, part of the SOCR Experiments, provides a hands-on demonstration of how the LLN facilitates stochastic simulation-based estimation of $e$. *Common misconceptions: (1) If we observe a streak of 10 consecutive heads (when p=0.5, say), the probability of the 11th trial being a head is > p! This is, of course, incorrect, as the coin tosses are independent trials (an example of a memoryless process); (2) If we run a large number of coin tosses, the numbers of heads and tails become more and more equal. This is incorrect, as the LLN only guarantees that the sample proportion of heads will converge to the true population proportion (the p parameter that we selected). In fact, the difference |Heads - Tails| diverges.
====Central Limit Theorem (CLT)====
The arithmetic mean of a sufficiently large number of iterates of independent random variables, given certain conditions, will be approximately normally distributed. The CLT states that the sum of many independent and identically distributed (i.i.d.) random variables will tend to be distributed according to one of a small set of attractor distributions. There are various statements of the central limit theorem, but all of them represent weak-convergence results regarding (mostly) the sums of independent identically-distributed (random) variables. *Definition of CLT: let ${X_{1},X_{2},…,X_{n}}$ be an i.i.d. random sample of size n drawn from a distribution with expected value $\mu$ and finite variance $\sigma^{2}$. The sample average $\bar{x}_{n}=\frac{X_{1}+X_{2}+⋯+X_{n}}{n}$. By the LLN, the sample averages converge in probability and almost surely to the expected value $\mu$ as $n\rightarrow \infty$. As n gets larger, the distribution of the difference between the sample average $\bar{x}_{n}$ and its limit $\mu$, when multiplied by the factor $\sqrt n$, that is $\sqrt n(\bar{x}_{n}-\mu)$, approximates the normal distribution with mean 0 and variance $\sigma^{2}$: $\sqrt n(\bar{x}_{n}-\mu)\rightarrow N(0,\sigma^{2})$ when n is large enough. Thus, for large enough n, the distribution of $\bar{x}_{n}$ is close to the normal distribution with mean $\mu$ and variance $\frac{\sigma^{2}}{n}$: $\bar{x}_{n}\rightarrow N(\mu,\frac{\sigma^{2}}{n})$. * See the [[SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem| Generalized CLT Activity and Applications]]. *Multidimensional CLT: this extends the central limit theorem to the case where $X_1,X_2,…,X_n$ for each individual is an i.i.d. random vector in $R^k$ with mean $μ=E(X_i)$ and covariance matrix $Σ$. For the multidimensional CLT, let ${X_i}=\begin{bmatrix} X_{i(1)} \\ \vdots \\ X_{i(k)} \end{bmatrix}$ be the $i$-vector. The bold in '''X'''<sub>''i''</sub> means that it is a random vector, not a random (univariate) variable.
Then the sum of the random vectors will be $\begin{bmatrix} X_{1(1)} \\ \vdots \\ X_{1(k)} \end{bmatrix}+\begin{bmatrix} X_{2(1)} \\ \vdots \\ X_{2(k)} \end{bmatrix}+\cdots+\begin{bmatrix} X_{n(1)} \\ \vdots \\ X_{n(k)} \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} \left [ X_{i(1)} \right ] \\ \vdots \\ \sum_{i=1}^{n} \left [ X_{i(k)} \right ] \end{bmatrix} = \sum_{i=1}^{n} \left [ \mathbf{X_i} \right ].$ Also, the average will be $\left (\frac{1}{n}\right)\sum_{i=1}^{n} \left [ \mathbf{X_i} \right ]= \frac{1}{n}\begin{bmatrix} \sum_{i=1}^{n} \left [ X_{i(1)} \right ] \\ \vdots \\ \sum_{i=1}^{n} \left [ X_{i(k)} \right ] \end{bmatrix} = \begin{bmatrix} \bar X_{i(1)} \\ \vdots \\ \bar X_{i(k)} \end{bmatrix}=\mathbf{\bar X_n}$. Thus, $\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left [\mathbf{X_i} - E\left ( X_i\right ) \right ]=\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \left [ \mathbf{X_i} - \mu \right ]=\sqrt{n}\left(\mathbf{\overline{X}}_n - \mu\right) .$ The multivariate central limit theorem implies that $$\sqrt{n}\left(\mathbf{\overline{X}}_n - \mu\right)\ \stackrel{D}{\rightarrow}\ \mathcal{N}_k(0,\Sigma),$$ where the covariance matrix $Σ$ is equal to $$\Sigma=\begin{bmatrix} {Var \left (X_{1(1)} \right)} & {Cov \left (X_{1(1)},X_{1(2)} \right)} & Cov \left (X_{1(1)},X_{1(3)} \right) & \cdots & Cov \left (X_{1(1)},X_{1(k)} \right) \\ {Cov \left (X_{1(2)},X_{1(1)} \right)} & {Var \left (X_{1(2)} \right)} & {Cov \left(X_{1(2)},X_{1(3)} \right)} & \cdots & Cov \left(X_{1(2)},X_{1(k)} \right) \\ Cov \left (X_{1(3)},X_{1(1)} \right) & {Cov \left (X_{1(3)},X_{1(2)} \right)} & Var \left (X_{1(3)} \right) & \cdots & Cov \left (X_{1(3)},X_{1(k)} \right) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ Cov \left (X_{1(k)},X_{1(1)} \right) & Cov \left (X_{1(k)},X_{1(2)} \right) & Cov \left (X_{1(k)},X_{1(3)} \right) & \cdots & Var \left (X_{1(k)} \right) \\ \end{bmatrix}.$$ *The chart below demonstrates the CLT: The sample means are generated using a random number generator, which draws numbers between 1 and 100 from a uniform probability distribution. It illustrates that increasing sample sizes result in the 500 measured sample means being more closely distributed about the population mean (50 in this case). It also compares the observed distributions with the distributions that would be expected for a normalized Gaussian distribution, and shows the chi-squared values that quantify the goodness of the fit (the fit is good if the reduced chi-squared value is less than or approximately equal to one). The input into the normalized Gaussian function is the mean of sample means (~50) and the mean sample standard deviation divided by the square root of the sample size (~28.87/√n), which is called the standard deviation of the mean (since it refers to the spread of sample means). 
Use the following R-script to generate the graph below:
 par(mfrow=c(4,3))
 k = 5 # Sample-size
 m <- 200 # Number of Samples
 xbarn.5 <- apply(matrix(rnorm(m*k,50,15),nrow=m),1,mean)
 hist(xbarn.5,col="blue",xlim=c(0,100),prob=T,xlab="",ylab="",main="Normal(50,15)")
 mtext(expression(bar(x)[5]),side=4,line=1)
 xbaru.5 <- apply(matrix(runif(m*k,0,1),nrow=m),1,mean)
 hist(xbaru.5,col="blue",xlim=c(0,1),prob=T,xlab="",ylab="",main="Uniform(0,1)")
 mtext(expression(bar(x)[5]),side=4,line=1)
 xbare.5 <- apply(matrix(rlnorm(m*k,1,1),nrow=m),1,mean)
 hist(xbare.5,col="blue",xlim=c(0,15),prob=T,xlab="",ylab="",main="Log-Normal(1,1)")
 mtext(expression(bar(x)[5]),side=4,line=1)
 xbarn.10 <- apply(matrix(rnorm(m*k*2,50,15),nrow=m),1,mean)
 hist(xbarn.10,col="blue",xlim=c(0,100),prob=T,xlab="",ylab="",main="")
 mtext(expression(bar(x)[10]),side=4,line=1)
 xbaru.10 <- apply(matrix(runif(m*k*2,0,1),nrow=m),1,mean)
 hist(xbaru.10,col="blue",xlim=c(0,1),prob=T,xlab="",ylab="",main="")
 mtext(expression(bar(x)[10]),side=4,line=1)
 xbare.10 <- apply(matrix(rlnorm(m*k*2,1,1),nrow=m),1,mean)
 hist(xbare.10,col="blue",xlim=c(0,15),prob=T,xlab="",ylab="",main="")
 mtext(expression(bar(x)[10]),side=4,line=1)
 xbarn.30 <- apply(matrix(rnorm(m*k*3,50,15),nrow=m),1,mean)
 hist(xbarn.30,col="blue",xlim=c(0,100),prob=T,xlab="",ylab="",main="")
 mtext(expression(bar(x)[30]),side=4,line=1)
 xbaru.30 <- apply(matrix(runif(m*k*3,0,1),nrow=m),1,mean)
 hist(xbaru.30,col="blue",xlim=c(0,1),prob=T,xlab="",ylab="",main="")
 mtext(expression(bar(x)[30]),side=4,line=1)
 xbare.30 <- apply(matrix(rlnorm(m*k*3,1,1),nrow=m),1,mean)
 hist(xbare.30,col="blue",xlim=c(0,15),prob=T,xlab="",ylab="",main="")
 mtext(expression(bar(x)[30]),side=4,line=1)
[[Image:SMHS CCT LLN Fig 1.png|600px]]
 # Alternative Plots
 m <- 2000 # Number of samples
 n <- 16 # size of each sample
 mu <- 50
 sigma <- 15
 sigma.xbar <- sigma/sqrt(n)
 rnv <- rnorm(m*n,mu,sigma) # m samples of size n
 rnvm <- matrix(rnv,nrow=m) # m*n matrix
 samplemeans <- apply(rnvm,1,mean) # compute mean across rows of matrix
 hist(samplemeans) # plain histogram
 hist(samplemeans,prob=T) # density histogram
 xs <- seq((mu-4*sigma.xbar),(mu+4*sigma.xbar),length=800)
 ys <- dnorm(xs,mu,sigma.xbar)
 lines(xs,ys,type="l") # superimpose normal
 par(mfrow=c(1,1))
 par(col.main="blue",pty="s")
 hist(samplemeans,prob=T,col="blue",breaks="scott", xlab=expression(bar(X)[16]), main=expression(paste("(X~N(50,15^2): Simulated Sampling Distribution of ", bar(X))))
 lines(xs,ys,type="l",lwd=2,col="red") # superimpose normal
 Alpha <- round(mean(samplemeans),5)
 Beta <- round(sd(samplemeans),5)
 text(37,.08,bquote(hat(mu)[bar(X)]==.(Alpha)),pos=4,col="blue")
 text(37,.07,bquote(hat(sigma)[bar(X)]==.(Beta)),pos=4,col="blue")
 text(55, .08,bquote(mu[bar(X)]==.(mu)),pos=4,col="red")
 text(55,.07,bquote(sigma[bar(X)]==.(sigma.xbar)),pos=4,col="red")
[[Image:SMHS_CCT_LLN_Fig_2.png|600px]]
*CLT instructional challenges: We have extensive CLT pedagogical experience based on graduate and undergraduate teaching, interacting with students (and teaching assistants) and evaluating students' performance in various probability and statistics classes. In our endeavors, we have used a variety of classical (e.g., mathematical formulations), hands-on activities (e.g., beads, sums, Quincunx) and technological approaches (e.g., applets, demonstrations). Our prior efforts have identified the following instructional challenges in teaching the concept of the CLT using purely classical and physical hands-on activities.
**Some of these challenges may be addressed by employing modern IT-based technologies, like interactive applets and computer activities: What is a native process (native distribution), a sample, a sample distribution, a parameter estimator, a sample-driven numerical parameter (point) estimate or a sampling distribution? What is the relationship between the inference of the CLT and its applications in the real world? How does one improve CLT knowledge retention, which seems to decay over time? Are there paramount characteristics we can demonstrate in the classroom, which may later serve as a foundation for reconstructing the detailed statement of the CLT and improving communication of CLT meaning and results? How does one directly involve and challenge students in thinking about CLT (in and out of classroom)? **Traditional CLT teaching techniques (symbolic mathematics and physical demonstrations) are typically restricted in terms of time and space (e.g., shown once in class) and may have the limitations of involving one native process, studying one population parameter and restricting the scope of the inference (e.g., sample-size constraints). **Modern IT-based blended instruction approaches address many of these CLT teaching challenges by utilizing the Internet and the available computational power. For example, a Java CLT applet may be evoked repeatedly under different initial conditions (choosing sample-sizes and number of experiments, native process distributions, parameters of interest, etc.). Such tests may be performed from remote locations (e.g., classroom, library, home), and may provide enhanced interactive features (e.g., fitting Normal model to sampling distribution) demonstrated in different experimental modes (e.g., intense computational vs. visual animated sampling). Such features are especially useful for active, visual and deductive learners. Furthermore, interactive demonstrations are thought to significantly enhance the learning process for some student populations. **Students in probability and statistics classes are generally expected to master difficult concepts that ultimately lead to understanding the basis of data variation, modeling and analysis. For many students relying on procedural manipulations and recipes is natural, perhaps because of their prior experiences with (deterministic) Newtonian sciences. Various statistics-education researchers have experimented with technology to explore novel exploratory data-analysis techniques that emphasize making sense of data via data manipulation, visualization and simulation. Such investigators refer to statistical literacy as the process of acquiring and utilizing intuition for discovering and interpreting trends, proposing solutions and counterexamples to basic problems in probability, as well as understanding statistical data modeling and analysis. Because the concepts of distribution, variation, probability, randomness, modeling and estimation are so ubiquitously used and entangled, instructors frequently forget that these notions should be defined, explained and demonstrated in (most) undergraduate probability and statistics classes. Various sampling and simulation applets and demonstrations are quite useful for this purpose. ===Applications=== [http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LawOfLargeNumbers This article]demonstrated the theory and application of LLN in SOCR tools. 
It illustrated the theoretical meaning and practical implications of the LLN and presented the LLN in a variety of situations. It also provided empirical evidence in support of LLN convergence and dispelled the common LLN misconceptions. [http://www.amstat.org/publications/jse/v16n2/dinov.html This article] presents the CLT in a new SOCR applet and demonstration activity. Abstract: Modern approaches for information technology based blended education utilize a variety of novel instructional, computational and network resources. Such attempts employ technology to deliver integrated, dynamically linked, interactive content and multi-faceted learning environments, which may facilitate student comprehension and information retention. In this manuscript, we describe one such innovative effort of using technological tools for improving student motivation and learning of the theory, practice and usability of the Central Limit Theorem (CLT) in probability and statistics courses. Our approach is based on harnessing the computational libraries developed by the Statistics Online Computational Resource (SOCR) to design a new interactive Java applet and a corresponding demonstration activity that illustrate the meaning and the power of the CLT. The CLT applet and activity have clear common goals; to provide graphical representation of the CLT, to improve student intuition, and to empirically validate and establish the limits of the CLT. The SOCR CLT activity consists of four experiments that demonstrate the assumptions, meaning and implications of the CLT and ties these to specific hands-on simulations. We include a number of examples illustrating the theory and applications of the CLT. Both the SOCR CLT applet and activity are freely available online to the community to test, validate and extend. [http://www.socr.ucla.edu/htmls/SOCR_Experiments.html Applet:] [http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem Activity:] [http://link.springer.com/article/10.1007/BF01207515 This article] presents the CLT for quadratic forms in strongly dependent linear variables and its application to the asymptotic normality of Whittle's estimate. A central limit theorem for quadratic forms in strongly dependent linear (or moving average) variables is proved, generalizing the results of Avram, Fox and Taqqu for Gaussian variables. The theorem is applied to prove asymptotic normality of Whittle's estimate of the parameter of strongly dependent linear sequences. [http://www.sciencedirect.com/science/article/pii/0022053185900596 This article] studied the LLN with continuous i.i.d. random variables. There are two problems with the common argument that a continuum of independent and identically distributed random variables sum to a nonrandom quantity in "large economies". First, it may be unintelligible in that it may call for the measure of a non-measurable set. However, there is a probability measure, consistent with the finite-dimensional distributions, which assigns zero measure to the set of realizations having that difficulty. A second difficulty is that the "law of large numbers" may not hold even when there is no measurability problem.
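To complement these references with something executable, here is a small R sketch (my own illustration, not part of the SOCR applets) of the Uniform e-Estimate idea described in the Theory section above: it repeatedly counts how many Uniform(0,1) draws are needed for the running sum to exceed 1 and averages those counts, which by the LLN converges to $e$.
 # LLN illustration: E(U) = e, where U = smallest n with X_1 + ... + X_n > 1, X_i ~ Uniform(0,1)
 set.seed(2024)
 count_until_sum_exceeds_one <- function() {
   s <- 0; n <- 0
   while (s <= 1) {
     s <- s + runif(1)   # add one more Uniform(0,1) draw
     n <- n + 1
   }
   n
 }
 k <- 100000                                   # number of replicates
 U <- replicate(k, count_until_sum_exceeds_one())
 mean(U)                                       # should be close to e = 2.71828...
Increasing k shows the LLN at work: the running average of the U values settles ever closer to $e$.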
===Software===
* [http://socr.ucla.edu/htmls/exp/LLN_Simple_Experiment.html SOCR LLN Experiment (Java Applet)]
* [http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html SOCR Coin Toss LLN Experiment (Java Applet)]
* [[SOCR_EduMaterials_Activities_LawOfLargeNumbers#Estimating_.CF.80_using_SOCR_simulation |SOCR LLN Activity]]
* [[SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem| SOCR CLT Activity]]
===Problems===
6.1) Your friend is in Vegas playing Keno, and he has noticed that some numbers have been coming up more frequently than others. He declares that the other numbers were "due" to come up, and puts all of his money on those numbers. Is this a correct assessment? (a) Yes, the Law of Averages says that the numbers that haven't shown up will now come up more often, because the probabilities will even out in the end. (b) No, this is a misconception, because random phenomena do not "compensate" for what happened in the past. (c) No, the game is probably broken, and the other numbers won't be coming up more frequently. (d) Yes, the more often a certain number doesn't come up, its probability of coming up next turn increases. 6.2) You are flipping a coin, and it has already landed heads seven times in a row. For the next flip, the probability of getting tails will be greater than the probability of getting heads. (a) TRUE (b) FALSE The following hands-on practice activities help students experiment with the SOCR LLN activity and understand the meaning, ramifications and limitations of the LLN. 6.3) Run the SOCR Coin Toss LLN Experiment twice with stop=100 and p=0.5. This corresponds to flipping a fair coin 100 times and observing the behavior of the proportion of heads across (discrete) time. What will be different in the outcomes of the 2 experiments? What properties of the 2 outcomes will be very similar? If we did this 10 times, what is expected to vary and what may be predicted accurately? 6.4) Use the SOCR Uniform e-Estimate Experiment to obtain stochastic estimates of the natural number $e≈2.7182$. Try to explain in words, and support your argument with data/results from this simulation, why the expected value of the variable U (defined above) is equal to e, $E(U) = e$. How does the LLN come into play in this experiment? How would you go about it in practice if you had to estimate $e^2≈7.389056$? Similarly, try to estimate $π≈3.141592$ and $π^2≈9.8696044$ using the [[SOCR_EduMaterials_Activities_BuffonNeedleExperiment|SOCR Buffon's Needle Experiment]]. 6.5) Run the [[SOCR_EduMaterials_Activities_RouletteExperiment|SOCR Roulette Experiment]] and bet on 1-18 (out of the 38 possible numbers/outcomes). What is the probability of success (p)? What does the LLN imply about $p$ and repeated runs of this experiment? Run this experiment 3 times. What is the sample estimate of p ($\hat{p}$)? What is the difference $p-\hat{p}$? Would this difference change if we ran the experiment 10 or 100 times? How? In 100 Roulette experiments, what can you say about the difference between the number of successes (outcome in 1-18) and the number of failures? How about the proportion of successes?
6.6) Work through the experiments given in this article to (1) empirically validate that the sample averages of random observations (from most processes) follow a normal distribution; (2) demonstrate that the sample average is special and that other sample statistics, like the median or variance, generally don't have normal distributions; (3) illustrate that the expectation of the sample average equals the population mean; and (4) show that the variation of the sampling distribution of the mean rapidly decreases as the sample size increases. '''Answer the following questions:''' What effects will asymmetry, gaps and continuity of the native distribution have on the applicability of the CLT, or on the asymptotic distribution of various sample statistics? When can we reasonably expect statistics, other than the sample mean, to have CLT properties? If a native process has $σ_X = 10$ and we take a sample of size 10, what will be ($σ_{\bar{X}}$)? Does it depend on the shape of the original process? How large should the sample-size be so that $σ_{\bar{X}}=\frac{2}{3} σ_X$?
===References===
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3095954/ Law of Large Numbers: the Theory, Applications and Technology-based Education]
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3152447/ Central Limit Theorem: New SOCR Applet and Demonstration Activity]
* [http://mirlyn.lib.umich.edu/Record/004199238 Statistical inference]
* [http://mirlyn.lib.umich.edu/Record/004232056 Sampling]
* [http://en.wikipedia.org/wiki/Central_limit_theorem Wikipedia CLT]
* [http://en.wikipedia.org/wiki/Law_of_large_numbers Wikipedia LLN]
<hr>
* SOCR Home page: http://www.socr.umich.edu {{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_CLT_LLN}}
Thinning decreased soil respiration differently in two dryland Mediterranean forests with contrasted soil temperature and humidity regimes
Inmaculada Bautista1, Antonio Lidón1, Cristina Lull1, María González-Sanchis2 & Antonio D. del Campo2
European Journal of Forest Research volume 140, pages 1469–1485 (2021)
The effects of a thinning treatment on soil respiration (Rs) were analysed in two dryland forest types with a Mediterranean climate in east Spain: a dry subhumid holm oak forest (Quercus ilex subsp. ballota) in La Hunde (HU); a semiarid postfire regenerated Aleppo pine (Pinus halepensis) forest in Sierra Calderona (CA). Two twin plots were established at each site: one was thinned and the other was the control. Rs, soil humidity and temperature were measured regularly in the field at nine points per plot distributed into three blocks along the slope for 3 years at HU and for 2 years at CA after forest treatment. Soil heterotrophic activity was measured in the laboratory on soil samples obtained bimonthly from December 2012 to June 2013 at the HU site. The seasonal Rs distribution gave low values in winter and began to increase in spring before lowering as soil dried in summer. This scenario indicates that with a semiarid climate, soil respiration is controlled by both soil humidity and soil temperature. Throughout the study period, the mean Rs value in the HU C plot was 13% higher than at HU T, and was 26% higher at CA C than the corresponding CA T plot value, with Rs being significantly higher in the control plots during active growing periods. Soil microclimatic variables explain the biggest proportion of variability for Rs: soil temperature explained 24.1% of total variability for Rs in the dry subhumid forest; soil humidity accounted for 24.6% of total variability for Rs in the semiarid forest. As Mediterranean climates are characterised by wide interannual variability, Rs showed considerable variability over the years, which can mask the effect caused by thinning treatment. The world's forests store more than 80% of all terrestrial aboveground carbon, and more than 70% of all soil organic carbon (Dixon et al. 1994; Jandl et al. 2007). The carbon balance in an ecosystem is the net result of CO2 fixation by photosynthesis occurring aboveground and the release of carbon as CO2 from both above and belowground compartments. In forest ecosystems, the CO2 flux from the soil surface, or soil respiration (Rs), accounts for 30–80% of total ecosystem respiration (Davidson et al. 2006). The belowground carbon outputs are balanced by organic carbon inputs to soil via litterfall and annual root turnover, and by root exudates (Ågren and Knecht 2001; Ukonmaanaho et al. 2008). Rs is the sum of both soil autotrophic and heterotrophic respirations. Autotrophic respiration agents are roots and their associated mycorrhizae, whereas heterotrophic respiration is performed by saprophytic fungi and bacteria, and also by soil meso- and macrofauna. The former is maintained by the current assimilates produced by photosynthesis and root starch reserves at the beginning of the growing season (Högberg et al. 2001). The latter is maintained by the decomposition of organic debris on forest floors and stabilised organic matter on the ground. Estimates of root respiration contribution to total Rs range from 10–90% (Hanson et al.
2000), and can seasonally vary between growing and non-growing periods (Högberg et al. 2001; Rey et al. 2003; Tang et al. 2005a). Soil biological activity is controlled by environmental conditions, mainly temperature and water and substrate availability. In climate zones where biological activity is controlled by water availability, such as Mediterranean ecosystems, it is necessary to understand the effect of water on Rs. The effect of soil humidity is complex because Rs is restricted by lack of water, but also by excess water as the latter induces oxygen deficiency, especially during high oxygen demand periods (Curtin et al. 2012). Current climate evolution models estimate a temperature rise of around 3–4ºC in the Mediterranean Region in Europe, along with an expected drop in current annual precipitation of up to 20% (up to 50% less in summer) (Lindner et al. 2010). The Mediterranean Region is especially sensitive to climate change impacts given the high sensitivity of ecosystem productivity to water availability. Cabon et al. (2018) warns about the rising vulnerability of oak coppices in Southern Europe to drought as a result of ongoing climate change. In this area, one research priority is to study how human intervention can mediate the adaptation of forests to environmental changes (Scarascia-Mugnozza et al. 2000; Lindner et al. 2010; del Campo et al. 2017). To detect and predict future changes in the forest ecosystem function, the patterns, controls and variability of forest productivity within and among stands must be better understood (Newman et al. 2006). Awareness of the impact of forest management on forest resilience against environmental changes has grown (Jandl et al. 2019). Forest thinning, which is the most frequent forest management practice, increases forest productivity, but also helps to enhance ecosystem resilience (Millar et al. 2007; del Campo et al. 2019), increase water availability (Molina and del Campo 2012) or improve tree growth and vigour (Olivar et al. 2014). In semiarid areas, the idea of using thinning as an adaptation strategy to climate change is emerging as a means to improve forests' resistance to climate change by decreasing water competition between the remaining stems. Several studies have analysed the effect of thinning on forest tolerance to drought (D'Amato et al. 2013; Cabon et al. 2018; del Campo et al. 2019). When implementing hydrology-oriented forest management, a shift in the paradigm of full canopy cover to medium covers occurs (40–50%). A previous work (del Campo et al. 2018) has shown that a reduced forest biomass in a Mediterranean climate significantly increases net rainfall by reducing canopy interception. Net water gain accumulates mainly on the forest floor and is lost from the soil surface by evaporation, which leads to more dynamic water content changes on the soil surface layer (del Campo et al. 2019). Studies that examine soil CO2 flux patterns and mechanisms in response to management practices are important as a means to enhance forest C sequestration (Peng et al. 2008). The carbon sequestration potential in forest land through forest management practices is estimated at 0.94–2.45 Pg C year−1 (Lal et al. 2018). Thinning practices can potentially influence soil respiration (autotrophic and heterotrophic respiration) by modifying root activity, labile organic C inputs, substrate availability, soil temperature and soil water content (Thibodeau et al. 2000; López et al. 2003; Xu et al. 2011; López-Serrano et al. 
2016; Martínez-García et al. 2017). Thinning strongly affects the forest carbon pools, since it reduces the canopy structure and increases the amount of woody debris, which is normally scattered on the ground. The spatial pattern of litter has been largely influenced by the amount of downed woody debris and downed trees at sites (Martin and Timmer 2006). Nevertheless, the effects of thinning on carbon fluxes are not so evident. Previous studies have reported contradictory effects of thinning on Rs. Toland and Zak (1994) found that daily rates of soil respiration did not significantly differ between intact and clear-cut plots; Striegl and Wickland (1998) reported a great reduction in soil respiration in the growing season after clear-cutting, whereas Ohashi et al. (1999) reported significantly higher soil respiration rates in the thinned section. In recent years, the number of studies into forest thinning impacts on Rs in the Mediterranean Region has increased, also with contrasting conclusions. Some authors have reported significantly higher Rs with thinning for the first year after treatment (Akburak and Makineci 2016; López-Serrano et al. 2016). In contrast, Tang et al. (2005b) observed a 13% reduction in total Rs after thinning. These authors suggested that this decrease could be associated with reduced root density. Other authors observed no effect of thinning on soil respiration (Campbell et al. 2009). Chang et al. (2016) indicated that Rs response to thinning was species-dependent, with Rs responding less to thinning in deep-rooted species. The general review by Zhang et al. (2018) reported an initial increase in Rs after light or moderate thinning intensity in broadleaved or mixed forests. Afterwards, Rs gradually lowered to the original pre-thinning level. Soil respiration is very responsive to climate conditions. Although a quasi-general agreement has been reached for modelling the soil temperature effect on microbial activity using exponential functions or Q10 functions (Lloyd and Taylor 1994; Pang et al. 2013), the effect of water availability on Rs does not show a similar pattern. Several types of relations have been proposed to model soil humidity effects on Rs (López-Serrano et al. 2016; Martínez-García et al. 2017). It has also been shown that the relation between Rs and soil humidity is affected by soil properties (Moyano et al. 2012). Apart from this complexity, wetting pulses after dry periods trigger a burst of microbial activity as a result of increased substrate availability after rewetting (Almagro et al. 2009; Lado-Monserrat et al. 2014). A special characteristic of the Mediterranean area is that forests are concentrated mainly in mountain areas with steep slopes (Yaalon 1997). Slope and aspect angles induce great heterogeneity in soil properties along slopes, and increase Rs spatial variability (Xu and Qi 2001; Ohashi and Gyokusen 2007). Topography strongly influences soil humidity by creating spatial patterns that influence Rs (Martin and Bolstad 2009). The analysis of the interaction between spatial variability and topography on soil CO2 emissions is essential for scaling up processes measured at small (plot) scales to the ecosystem scale. This study is framed within an overall project that aims to evaluate the effects of adaptive forest management on water fluxes, growth dynamics, field CO2 flux and soil properties.
Selective thinning was studied in two dryland forest types with a Mediterranean climate in east Spain as an adaptive measure to compensate for rainfall reduction (del Campo et al. 2019). Both forests differ in aridity and present two distinct dominant species: Quercus ilex spp. ballota at La Hunde and Pinus halepensis at Sierra Calderona. At both study sites, two twin plots were established: one was thinned and the other acted as a control. The present study offers experimental data showing the effects of management intensity on soil respiration to better understand carbon–water coupling under forest treatments. Our hypothesis is that climatic conditions in the Mediterranean, especially water availability, control soil respiration. The high interannual variability in water resources can control canopy cover evolution after thinning and the relative contributions of autotrophic and heterotrophic respiration, and can thus mask a clear response of soil carbon fluxes to thinning. The aims of this study were to: (1) determine the influence of thinning on soil respiration and microclimate; (2) compare thinning effects on soil respiration among forests; (3) determine which factors affected soil respiration after thinning.
Study sites
This study was carried out in two different forested areas in east Spain with high tree density and fierce competition (del Campo et al. 2018). La Hunde (HU) site (39°04′50'' N, 1°14′47′′ W, at 1090 m.a.s.l) is located near Ayora, SW Valencia (Spain) at the headwaters of the Rambla Espadilla catchment. The Calderona site (CA) (39°42′29″ N, 0°27′25″ W, at 790 m.a.s.l.) is located in the Sierra Calderona Natural Park, a mountain range that separates the basins of the Palancia and Turia rivers. Both sites differ not only in vegetation, but also in climate, rainfall annual distribution and soil characteristics (del Campo et al. 2018). At HU, 150 km inland, climate is continental with average annual precipitation of 466 mm and mean annual temperature of 12.8 ºC, whereas the CA site has marked influence from the Mediterranean Sea, which is 25 km away, and has annual values of 342 mm and 14 ºC for rainfall and mean temperature. Following the drylands classification according to the Aridity Index (UNEP, 1992), CA is defined as semiarid and presents drier conditions than HU, which has a dry subhumid climate. HU presents a coppice oak and shrubland forest, developed as a result of traditional fuelwood harvesting that fell into disuse in the 1970s. The dominant species there is Quercus ilex ssp. ballota, with some Q. faginea and Pinus halepensis trees. The understory shrubs are mainly Juniperus phoenicea and J. oxycedrus. Orientation is NW with a mean slope of 30% (Fig. 1a). In May 2012, two contiguous experimental plots (1,800 m2 each) were established: one without treatment (HU C) and one subjected to thinning with shrub clearing treatment (HU T), where tree density reduction was 73% and the removed basal area was 41%. Trees were chosen in an attempt to achieve homogeneous spatial distribution in the whole plot. Timber was exported after treatment, but branches, twigs and leaves were cut down and ground into mulch in the thinned area. In summer 2013, some pines were ringed and later cut down in September 2013.
Fig. 1 Slope angle and aspect of (a) La Hunde C (control) and T (treated) plots and (b) Calderona C (control) and T (treated) plots
The CA plots were established in an Aleppo pine (Pinus halepensis Mill.) forest, which was regenerated after a wildfire in 1992.
Spontaneous regeneration, together with lack of management, created highly dense pine saplings, with over 15,000 trees/ha (del Campo et al. 2018). A thinning treatment was performed over the 8.3 ha of burned forest. Between January and October 2012, the overall area was thinned by cutting 94% of trees, except in a plot (40 × 40 m) that was used as a control (CA C). Contiguous to this control plot, a treated plot (CA T) of similar size was delimited (Fig. 1b). The removed basal area was 74%. Trees were cut down and ground into detritus to be scattered on the forest floor. Orientation is NW. The slope is not uniform (ranging from 19.6% to 40%), and is lower on the upper and lower hill parts, and higher in the central part. At both sites, three replications per treatment were established.
Soil characterisation
At the HU site, before thinning treatment (2 May, 2012), four points in the control plot and four points in the treated plot distributed along the slope were randomly selected. At each point, a metal frame (30 × 30 cm) was used to separately collect the litter layer, the humified organic layer underneath and the top mineral soil layer from 0–15 cm. Then, a soil probe was used to take samples from 15–30 cm and below 30 cm whenever possible. Soil depth was around 40 cm at the bottom of the slope but, as soil is very shallow in the upper slope part, organic matter was found to directly overlie rocks at some points. Samples were weighed, air-dried and weighed again before the different fractions were separated by sieving through a 2-mm mesh. The larger fraction was hand-separated into stones, roots, leaf debris, woody debris and miscellaneous organic fractions. Air-dry soil humidity in the fine fraction was determined in a subsample by drying at 105 ºC until constant weight. In the fine fraction, soil pH was determined in a 1:2.5 water suspension, inorganic carbonate content was established by the Bernard calcimeter method (MAPA, 1994), and total organic carbon (TOC) by the Walkley–Black method (Nelson and Sommers 1982). At the CA site, a similar scheme was performed after treatment in March 2013, with four points per plot distributed along the slope. All the material inside the frame was taken by separating the litter layer, humified organic layer and mineral soil from 0–5 cm. Deeper samples were taken with a 5-cm-diameter helicoidal probe. Samples were separated and analysed similarly to those from the HU site. Soil characteristics are given in Tables 1 and 2. At HU, soil presents high stoniness (> 50%), a high calcium carbonate content (18–36.4%) and a basic pH (7.88–8.29). The CA soil presents less stoniness, a higher calcium carbonate content (27.9–50.3%), lower total soil organic carbon levels and slightly higher pH values (8.24–8.51).
Table 1 Soil characterisation at both experimental sites (La Hunde, HU; Calderona, CA). The forest floor is divided into the litter layer and partially decomposed organic residues (L/F layer, litter and fermentation) and the humified layer (H layer). Under the organic layer, mineral soil was sampled at regular depths.
Table 2 Weight of organic matter > 2 mm (g/m2) on the organic horizon at both experimental sites (HU, La Hunde before treatment; Calderona after treatment, in the control, CA C, and the treated, CA T, plots).
The two forests present organic horizons with different morphologies: whereas La Hunde shows a well-developed humified layer over the A horizon, in the Calderona forest, where the organic horizon developed under an arid climate with P.
halepensis as the main species and was possibly affected by the 1992 fire, the organic horizon lacks a humified layer, showing only an L layer composed of pine needles and a thick F layer of partially decomposed plant material. In the litter layer, oak leaves prevailed at HU vs. pine needles at CA (Table 2). At CA, where all the residue was left on soil, treatment increased the woody debris weight 18-fold compared to the control plot.
Soil respiration
The experimental design was a complete block design following a stratified random sampling scheme (Pennock et al. 2008) with two treatments (thinning and untreated control). Both the thinned and control areas were split into three similar-sized blocks from upslope to downslope. In each of the three previously defined blocks, three PVC collars (10 cm in diameter, 5 cm deep) were installed so that the Rs efflux was measured at the same places throughout the study. Collars were placed 3 cm into soil at each position. An EGM-4 environmental gas monitor (PP System Company, Amesbury, MA, USA) was used to measure the CO2 efflux periodically after the treatment until November 2014, and from November 2015 to the end of 2016. At the HU site, treatment was performed in May 2012 and respiration measurements began in September 2012. At CA, Rs measurements began in July 2013. Nearly all our measurements were taken at midday, between 11:00 and 13:00 h GMT + 1, at 1–2-month intervals. Soil humidity and soil temperature were recorded simultaneously with Rs near the collars from a depth of 0–6 cm with an HH2 humidity meter and a WET-2 sensor (Delta-T Devices Ltd, UK). Air temperature measurements were taken above the canopy with T, RH sensors (Decagon Devices, Pullman, USA), connected to a data-logging system (CR1000, Campbell Sci., UT, USA). Air data were recorded at 10-min time intervals.
Heterotrophic activity
Laboratory heterotrophic activity measurements were performed on the soil samples taken from the surface mineral soil in the HU plots. Soil sampling was conducted bimonthly from December 2012 and ended in June 2013 at the HU site. In each plot (HU C and HU T), nine samples were taken from 0–10 cm after removing the organic horizon. After being sieved through a 2-mm mesh, soil samples were incubated at 25 ºC and field humidity to analyse laboratory respiration. A subsample of each sieved soil was dried at 105 ºC to determine gravimetric water content. The remainder was stored at 4 °C until respiration was measured. CO2 evolution was measured in sealed flasks with rubber septa by a PBI Dansensor infrared sensor, and was used as an indicator of the heterotrophic activity (Ha) of surface mineral soil. The incubation flasks were aerated daily to prevent restriction of respiration by lack of oxygen. Heterotrophic activity was measured on disturbed soil samples and the results are expressed on a mass basis. In these samples, water-soluble organic carbon (WSOC) was determined in the (1:2.5) aqueous soil extract obtained after 30 min of mechanical shaking, centrifugation at 2,500 rpm for 5 min and filtration through a Whatman 42 paper filter. The WSOC in the extracts was estimated by K2Cr2O7 oxidation in concentrated H2SO4 (Yakovchenko and Sikora 1998).
Spatial distribution of forest floor depth
The spatial distribution of forest floor depth was characterised in January and February 2017 at the HU site and the CA site, respectively. At nine points per plot, close to each respiration collar, all the forest floor material was collected from a 12 cm × 12 cm area.
Forest floor thickness was recorded at each point. After air drying, each sample was separated into different fractions: stones, roots, plant material over 4 mm and the fine fraction under 4 mm. These fractions were dried at 65 ºC for at least 48 h before being weighed. At each point, a mineral soil sample underneath was collected from 0–15 cm and sieved at 2 mm for routine analysis purposes. In these samples, roots were hand-separated and root dry weight was recorded to gravimetrically obtain the proportion of roots.
Data analysis and statistics
Datasets were tested for normality (Kolmogorov–Smirnov test) and homogeneity of variance (Levene's test) before analysis, and were log-transformed whenever necessary to improve normality and homoscedasticity. Thinning effects were assessed by comparing the T and C plots for soil respiration, soil humidity and temperature on each sampling date. When the data were not normally distributed, a one-way Kruskal–Wallis test was performed using treatment as the main factor. Differences at P < 0.05 were considered to be statistically significant. Slope effects were assessed by comparing the mean values among the three positions along the slope (1 ridgetop, 2 mid-slope, 3 bottom). The effects of thinning treatment (T and C) and position along the slope (1 ridgetop, 2 mid-slope, 3 bottom) on heterotrophic activity, soluble organic carbon (WSOC) and forest floor characteristics were analysed by a multifactor ANOVA. The effects of humidity, spatial position and substrate availability on the Ha values were tested by an ANCOVA analysis using WSOC and humidity as covariables. These statistical analyses were performed with Statgraphics Centurion XVII. When a significant interaction was found between explanatory variables, a separate one-way ANOVA analysis with Fisher's LSD post hoc test was performed. The relations between field Rs and soil climate variables were investigated with the general linear model (GLM) proposed by Tang et al. (2005b), which has also been employed in other forest ecosystems (Olajuyigbe et al. 2012). A multivariate analysis with two independent variables (soil temperature and water content), and a categorical variable representing the interaction of treatment and position along the slope, was performed. An exponential function was applied to model the regulatory effect of surface (0–6 cm depth) soil temperature (Ts, ºC) and humidity (θ, %) on Rs (µmol C m−2 s−1):
$$Rs = \beta_{0} e^{\beta_{1} Ts} e^{\beta_{2} \theta + \beta_{3} \theta^{2}}$$
where $\beta_{0}$, $\beta_{1}$, $\beta_{2}$ and $\beta_{3}$ are fitting parameters. The equation can be log-transformed to a linear model:
$$\ln \left( Rs \right) = \ln \left( \beta_{0} \right) + \beta_{1} Ts + \beta_{2} \theta + \beta_{3} \theta^{2}$$
where $\beta_{0}$ is basal respiration, which is related to the decomposable substrate size. It is the value of Rs when $\beta_{1} Ts = 0$ (i.e., Ts = 0 ºC) and $\beta_{2}\theta + \beta_{3}\theta^{2} = 0$. $\beta_{1}$ is related to the temperature sensitivity factor Q10 as:
$$\beta_{1} = \frac{\ln Q_{10}}{10}$$
Q10 is defined as the factor by which the Rs rate increases when soil temperature increases by 10 ºC.
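As an illustration of how such a model can be fitted, the log-transformed form lends itself to ordinary least squares. The R sketch below is my own addition with hypothetical data frame and column names (the paper reports using a GLM with additional categorical factors in R 3.3.3, but does not give code), showing the core temperature–humidity fit and the derived Q10:

    # Fit ln(Rs) = ln(b0) + b1*Ts + b2*theta + b3*theta^2 on hypothetical field data
    # 'resp' is an assumed data frame with columns Rs (umol C m-2 s-1), Ts (deg C), theta (vol. %)
    fit <- lm(log(Rs) ~ Ts + theta + I(theta^2), data = resp)
    b   <- coef(fit)
    b0  <- exp(b["(Intercept)"])   # basal respiration (value of Rs when the exponents are zero)
    Q10 <- exp(10 * b["Ts"])       # temperature sensitivity, from beta1 = ln(Q10)/10
    c(basal_respiration = b0, Q10 = Q10)

Categorical terms for treatment and slope position could be added to the model formula in the same way, which is conceptually how the treatment–position interaction described below enters the analysis.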
To obtain more information about the spatial pattern of Rs distribution along the slope and its interrelation with the thinning treatment, a combined variable was defined with six levels (T1, T2, T3, C1, C2, C3), where 1, 2 and 3 represented the three identified slope strata (ridgetop, mid-slope, bottom) in each plot. Differences in Rs induced by changes in microclimate and their interaction with the time elapsed after treatment were tested using the GLM analysis by taking the post-treatment period (short term, 1–2 years; mid-term, 3–4 years) and the block and treatment combination as factors, and soil temperature, humidity and squared humidity as covariables. This analysis was carried out with R 3.3.3 (R Development Core Team 2017). Soil respiration, soil temperature and soil humidity in the thinned and control plots The Rs rate displayed significant seasonal variability. In the HU C plot (Fig. 2a), the maximum respiration rate of 5.06 ± 1.89 µmol C m−2 s−1 was recorded in September 2014 and the minimum value of 0.65 ± 0.26 µmol C m−2 s−1 in January 2016. In the HU T plot, the maximum respiration value (5.58 ± 6.07 µmol C m−2 s−1) was recorded in the first post-treatment year, in September 2012, and the minimum one (0.61 ± 0.31 µmol C m−2 s−1) in January 2015. The HU T plot showed significantly higher standard deviation values than the control in the first post-thinning year. Seasonal variation in a soil respiration, Rs; b soil temperature; c soil humidity in La Hunde oak forest. Points represent the mean values and error bars standard deviation (n = 9). *, ** and *** significant differences in the C (control) or T (thinning treatment) plots (p < 0.05, p < 0.01 and p < 0.001, respectively) (Rs data were log-transformed before comparing the mean values) Soil temperature showed marked seasonal variation, with maximum values in July and minimum values in January (Fig. 2b). Soil humidity also had a strong seasonal component (Fig. 2c) associated with rainfall, with maximum values in winter and minimum ones in summer. The rainfall patterns of the studied years differed widely, with 2013 and 2014 being very dry years (del Campo et al. 2018). In 2014, soil began to dry out in April and the dry period lasted until September. Conversely, as 2012 and 2016 were wetter, surface horizon humidity remained high until the end of spring. The maximum respiration rates at site CA were 5.45 ± 1.63 µmol C m−2 s−1 in September 2014 in the CA T plot, and 6.26 ± 3.94 µmol C m−2 s−1 in November 2015 in the CA C plot (Fig. 3a). The minimum respiration values were 0.56 ± 0.42 µmol C m−2 s−1 for the CA T plot and 0.67 ± 0.47 µmol C m−2 s−1 for the CA C plot, both recorded in November 2013. As a general trend, Rs at both study sites was low in winter due to low soil temperature, and began to increase in spring before falling when the soil dried in summer. Soil temperature and humidity (Fig. 3b,c) presented similar temporal variations to those of the HU plots. Seasonal variation in: a soil respiration, Rs; b soil temperature; c soil humidity in the Calderona pine forest. Points represent the mean values and error bars standard deviation (n = 9).
*, ** and *** significant differences in the C (control) or T (thinning treatment) plots (p < 0.05, p < 0.01 and p < 0.001, respectively) (Rs data were log-transformed before comparing the mean values) The highest Rs values were obtained in September, which coincided with the first rains after the summer drought, and lasted while soil temperature was over 10 ºC. Thinning effect on soil respiration and microclimate. Comparison of forests The mean Rs value in the HU C plot (2.17 ± 1.65 µmol C m−2 s−1) was 13% higher than that recorded in the HU T plot. In La Hunde forest, after treatment, soil respiration remained slightly higher in the treated plot until February 2013, although differences were not statistically significant. Soil respiration was significantly higher in the control plot on four dates (17/07/2013, 27/11/2015, 26/06/16, 19/07/16), and three of these dates correspond to the beginning of summer, this being the active tree growth season. This effect was not observed in the driest year (2014), when respiration values were quite low. Surface soil moisture was significantly higher in the control plot on five dates. Soil temperature was significantly higher in the control plot on eight dates, but differences between plots were usually very small. In the CA forest, the mean Rs value was 2.71 ± 2.19 µmol C m−2 s−1 in CA C, which was 26% higher than the corresponding CA T plot value. Soil respiration was generally higher in the C plot, although these differences were significant only on four dates recorded in autumn after the summer drought. Soil temperature was generally higher in the T plot, and this difference was significant on almost all the dates, even though measurements in this forest also usually started in the T plot. In CA, the biggest differences in temperature between the treated and control plots appeared in spring following the thinning treatment. Topographical effects Topography differentially affected soil respiration in relation to treatment (Fig. 4). Soil respiration presented a thinning–position interaction along the slope. In the thinned plot, soil respiration was higher in the mid-slope position at both sites. In the control plot of La Hunde forest, Rs was higher at the ridgetop and, in the control plot of the Calderona forest, Rs was significantly higher in the bottom position. Mean values and standard deviation for soil respiration (Rs) in the control (C) and thinned (T) plots in different positions along the slope (1 ridgetop, 2 mid-slope, 3 bottom) in the (a) HU, La Hunde forest and (b) CA, Calderona forest. Values presented are mean and standard deviation. Lower case letters indicate significant differences in the mean values (p < 0.05) among blocks according to LSD Fisher, independently for each plot Regarding the topographical effect, a humidity gradient along the slope was found, with higher soil humidity values in the bottom position (11.36% in CA T and 13.97% in CA C in the lower part vs. 7.11% and 7.50% in the upper part of CA T and CA C, respectively). At the HU site, this gradient was significant only in the treated plot (Table 3). Topography did not induce any significant effect on soil temperature. Table 3 Mean soil surface temperature, Ts, and soil surface humidity, Hs, in the thinned and control plots in different positions along the slope (1 ridgetop, 2 mid-slope, 3 bottom) in treated, T, and control, C, plots.
Thinning and heterotrophic activity In the samples used for the heterotrophic activity measurements, mineral soil humidity ranged from 7.45 to 72.51 g water/100 g of dry soil, and was considerably higher than those values found in the bulk surface soil layer. With the ANCOVA analysis (Table 4), soil humidity explained 46.5% of the variability in the Ha rates, and substrate (WSOC) availability explained a further 13.7% of the variance in Ha. Table 4 The ANCOVA of heterotrophic activity, Ha (expressed as µmol kg−1 day−1), obtained by lab incubation with La Hunde samples at 25ºC vs. treatment, block, soil humidity and water-soluble organic carbon (WSOC) The effect of thinning on Ha all along the topographical position (Fig. 5) showed distinctive behaviour between the control and treated plots, with a significant increase in heterotrophic activity, Ha, and water-soluble organic carbon, WSOC noted for the mid-slope block in the treated plot. Effect of position on the slope and thinning treatment in (a) heterotrophic activity, Ha and (b) water-soluble organic carbon, WSOC in La Hunde control, HU C and treated, HU T plots: control top, C1, control middle, C2, control bottom, C3, treated top, T1, treated middle, T2 and treated bottom, T3. Bars represent the mean and standard deviation. Lower case letters indicate significant differences in the mean values among blocks according to LSD Fisher, independently for each plot Effect of thinning on the soil organic horizon The mean organic horizon thickness and water depth values (Fig. 6a,b) were higher in HU than in CA, and wide variability appeared between blocks. Thinning increased the organic layer depth due to the incorporation of chopped woody debris into the soil surface, and this effect was stronger in CA. The organic layer depth in the treated plot was deeper in the middle position at both sites, which indicates irregular debris distribution from the thinning treatment. The quantity of organic matter strongly impacted the water retained by the organic horizon (Fig. 6b). As the organic layer had a higher degree of humification in HU, it created favourable root growth conditions because it enhanced the root percentages in this zone (Fig. 7). At HU, fine roots were also found in the organic horizon. a Organic layer depth (cm) and b the water content (mm) that remained in the organic layer on the sampling date (beginning of 2017) in La Hunde (HU) and Calderona (CA): control top, C1, control middle, C2, control bottom, C3, treated top, T1, treated middle, T2 and treated bottom, T3. Bars indicate the mean values and standard deviation (n = 3) Percentage of the root dry weight in the organic layer (O layer) and surface mineral soil (0–10 cm) M layer in La Hunde and Calderona plots (sampling date in 2017). Bars indicate the mean values and standard deviation (n = 9) Influencing factors for variation in Rs A general linear analysis was performed using the model proposed by Tang et al. (2005b) (Table 5). The variables that most significantly explained variation in Rs were soil temperature in the dry subhumid forest (the F-value was 310 for HU) and soil humidity in the semiarid forest (F-value was 119 and 38 in CA for humidity and square humidity, respectively). The soil climatic variables accounted for the greatest percentage of total Rs variability: soil temperature explained 24.1% of total Rs variability in the dry subhumid forest and soil humidity explained 24.6% of total Rs variability in the semiarid forest. 
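To show how the percentages of explained variability quoted here can be extracted from such a linear model, the sketch below partitions the sums of squares of a fitted model containing the soil climate covariables and the combined treatment–position factor. The simulated data frame, column names and coefficients are assumptions for illustration only, not the study data or the authors' exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in data (assumed for illustration): soil temperature Ts (deg C),
# humidity theta (%), a combined treatment-position factor, and log soil respiration.
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "Ts":    rng.uniform(2, 28, n),
    "theta": rng.uniform(5, 35, n),
    "plot":  rng.choice(["T1", "T2", "T3", "C1", "C2", "C3"], n),
})
df["lnRs"] = (0.1 + 0.08 * df["Ts"] + 0.04 * df["theta"] - 0.0006 * df["theta"]**2
              + df["plot"].map({"T1": 0.0, "T2": 0.2, "T3": 0.1,
                                "C1": 0.15, "C2": 0.05, "C3": 0.1})
              + rng.normal(0, 0.3, n))

model = smf.ols("lnRs ~ Ts + theta + I(theta**2) + C(plot)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)

# Share of the total sum of squares attributable to each term (a simple eta-squared),
# analogous to the 'percentage of total Rs variability' reported in the text.
anova["explained_%"] = 100 * anova["sum_sq"] / anova["sum_sq"].sum()
print(anova[["sum_sq", "F", "PR(>F)", "explained_%"]])
```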
The topographical effect was also very significant at both sites (F-value of 15.0 for HU and 5.1 for CA). Differences between sites were that the period after treatment had a significant effect on Rs at CA, and that a significant interaction between temperature sensitivity and treatment was observed at HU. Otherwise, no other significant interactions were found at either site. Table 5 Statistical significance (F ratio and P value) of the factors and variables implicated in Rs (log-transformed) in Calderona, CA and La Hunde, HU according to the GLM analysis. Factors are the block and treatment combination (T1, T2, T3, C1, C2, C3) and period after treatment. The position along the slope had significant effects on Rs, but gave contrasting results in plots T and C (Table 6). At both sites, the highest basal respiration (\({\beta }_{0})\) was found in the treated plot in the mid-slope position. The regression slopes in relation to soil humidity and the square of soil humidity were similar in both areas, and were also similar to the values reported by Tang et al. (2005a). In HU, significant differences in temperature sensitivity (obtained from \({\beta }_{1}\) according to Eq. 3) were found between the HU C (Q10 = 2.67) and HU T (Q10 = 2.08) plots. The Q10 value in CA (1.87) indicates a lower temperature sensitivity at this site. Overall, the percentage of variance explained by the spatial and microclimate variables was higher than 41% in both areas. Table 6 Mean basal respiration (β0) values for each slope position (control top, C1, control middle, C2, control bottom, C3, treated top, T1, treated middle, T2 and treated bottom, T3) at La Hunde (HU) and Calderona (CA) forests for the first period after treatment and the regression coefficients for temperature (β1), humidity (β2) and square humidity (β3) In 2012, thinning treatment was performed in two different Mediterranean forest stands to analyse thinning effects on forest resilience. Both stands are located in mountain areas with similar slopes and aspects, although the two forests' climate characteristics and main species differed. Rs was measured from September 2012 to November 2014 and from November 2015 to December 2016 at HU, whereas at CA measurements were also taken in two periods, with the first beginning in July 2013. The seasonal Rs distribution at both study sites gave low values in winter due to low soil temperatures; values began to rise in spring but fell as the soil dried in summer. These findings indicate that, under a semiarid climate, soil respiration is controlled by both soil humidity and soil temperature. Our results revealed a general decrease in Rs in the thinned plots at both sites, although the effect of treatment on Rs was stronger in CA in the first year after thinning. In this same year (2012), Rs was higher in the HU T plot than in HU C, but Rs was considerably lower in CA T than in CA C in 2013. Soil respiration was significantly higher in the control plot of the HU forest on four dates, three of which corresponded to the beginning of summer. At the CA site, Rs in the control plot was significantly higher in autumn after the summer drought, which indicates that root activity and autotrophic respiration brought about higher respiration rates in the control plots during active plant-growing periods. Several reasons explain the differences in response to thinning between the two forests. In HU, the silvicultural treatment was moderate (41% of the basal area removed) compared to CA (74% of the basal area removed). As Zhao et al.
(2019) reported, thinning intensity is an important factor: after thinning, Rs only increases under light and moderate thinning treatments, probably because the induced increase in root activity cannot compensate for the decreased root density after intense thinning. Thinning treatment was performed earlier at HU (May 2012) and was done in November 2012 at CA. For this reason, tree responses to thinning during the growing season began in 2012 for HU and in 2013 for CA. The two years differed greatly from a hydrological point of view. The species' responses to thinning also differed. Holm oak is a resprouting species with considerable belowground biomass, particularly after many years of coppicing in the past (Ducrey and Turrel 1992). After moderate thinning at HU and under good hydrological conditions, the canopy cover was highly dynamic; a more open canopy and less competition for water trigger the growth not only of the remaining trees, whose resprouting was notably enhanced by treatment, but also of the understory brush (del Campo et al. 2018; 2019). In the CA forest, where Aleppo pine is the main species, the increased root growth of the remaining trees during the growing season following thinning was delayed because the year after treatment was extremely dry. Thinning also changes the soil microclimate. Soil moisture was generally higher in the C plot at both sites, but significant differences only appeared when the soil was dry; this could be explained by the higher tree density in the C plot, which enhances shading and limits evaporation losses under high evaporative demand. As thinning represents a shift from the closed canopy of the control to a more open, ventilated forest structure, direct evaporation from the soil is promoted, lowering surface water contents. Regarding soil temperature at HU, differences between plots were usually very small, although soil temperature was significantly higher in the control plot in the autumn measurements, when the soil cools; this could be due to an experimental artifact, as measurements were normally taken first in the treated plot and then in the C plot. In CA, soil temperature was generally higher in the T plot, and this difference was significant on almost all the dates, with the biggest differences in temperature between the treated and control plots appearing in the spring following the thinning treatment. The contrast between the two forests, which have similar slopes and orientations, was due mainly to thinning intensity, which was higher in CA. The two forests also differ in the tree canopy structure of the oak and pine species, which induces changes in ground shading. Many studies of thinning effects on Rs have been carried out on gentle slopes (Tang et al. 2005b; López-Serrano et al. 2016) or the slopes are not reported (Cheng et al. 2015). The wide diversity of soil properties on steep slopes leads to heterogeneity in chemical, structural and functional traits, which contributes to the spatial heterogeneity of Rs (Mallik and Hu 1997; Ma et al. 2004; Martin and Bolstad 2009). We observed a marked interaction between position along the slope and silvicultural treatment, not only in the total Rs measured under field conditions, but also in Ha under controlled laboratory conditions. When comparing positions along the slope, the mid-slope block in the treated plots showed significantly higher Rs and Ha values. Other authors (Arias-Navarro et al.
2017) have found that position along the slope has significant effects on soil CO2 or N2O fluxes, regardless of the effect of topography on soil microclimate variables. The study by Ohashi and Gyokusen (2007), who analysed the relation between Rs variability and slopes, showed a spatial pattern with higher values located in the mid- or upper slope part. Our experiment indicated that soil respiration significantly increased in the mid-slope position in the treated plots, probably in relation to either the different species or residue covers along the slope, or to the more soluble organic matter or nutrients being transported along the slope in the treated plots (Fang et al. 2009). An additional factor that increases variability is the influence of slope on microclimate conditions: under our climate conditions, a humidity gradient appeared with drier conditions in the upper slope part. We found that heterotrophic respiration measured in HU mineral soil depends on soil humidity and soluble organic carbon. These findings suggest that this soil organic matter fraction was an easily available substrate for soil microorganisms and is, thus, an important component of heterotrophic Rs. Heterotrophic respiration is controlled mainly by climate and substrate availability, the latter of which depends strongly on organic carbon accumulation in soil. One early effect of thinning is an increase in the amount of woody debris scattered on the ground. In the HU area, the litter layer presented a high degree of humification and helped the organic horizon retain water. After thinning, the organic layer depth had a similar mean value in both HU C and HU T, but heterogeneity was greater in the treated plot, with higher values in the mid-slope part. The organic residue left on the soil after the thinning treatment as coarse woody debris (CWD) brought about a temporary increase in dissolved organic carbon shortly after treatment in the HU plots (Bautista et al. 2015; Lull et al. 2020). This scenario indicates that residue may generate areas of high biological activity through the release of carbon-rich dissolved organic matter (DOM) (Spears and Lajtha 2004; Goldin and Hutchinson 2013). Conversely, in the CA plots the organic matter that had accumulated on the soil surface before the treatment was very low compared to HU, probably because this area burned in 1992. Wildfires have been reported to reduce the litter layer, leaving only one third to one half of the pre-fire content (Boerner 1983). As a consequence of the strong thinning treatment, the litter layer in CA T was double that of the CA C plot after treatment. It is well known that soil temperature and moisture are two very important soil microclimate variables that influence Rs (Lloyd and Taylor 1994; Tang et al. 2005b; Inclán et al. 2010), and soil heterotrophic respiration in particular responds strongly to soil climate conditions (Zhao et al. 2019). In our experiment, although both sites have a similar slope and aspect, the increase in soil temperature reflected thinning intensity: thinning did not affect the soil temperature at the HU site, but increased the soil temperature in the CA T plot by about 1.5 ºC compared to the CA C plot. Similar positive effects of thinning on soil temperature have been reported by other authors (Bolat 2014; Akburak and Makineci 2016). In relation to surface soil humidity, thinning treatment increases net rainfall by reducing interception losses by tree canopies (del Campo et al.
2018), but it also increases direct evaporation from the exposed soil surface. The overall thinning effect at both sites favoured a reduction in surface soil humidity in summer under high evaporative demand. Recently, Zhao et al. (2019) reported how autotrophic and heterotrophic respiration respond to thinning intensity independently of one another, and how heterotrophic respiration was more responsive to soil humidity. Soil heterotrophic activity is concentrated in a shallow layer, where water content changes quickly, which explains the response of heterotrophic respiration to water content. Autotrophic soil respiration depends on root water uptake from deeper layers (Chang et al. 2016; Puertes et al. 2019), which is why it is less sensitive to variations in surface soil water content. The effect of humidity on the Rs rate may therefore be attributed to heterotrophic microbial respiration, which is very sensitive to surface soil humidity. In water-limited ecosystems, drought controls Rs. The effect of forest thinning on soil respiration is the combined result of reduced plant density, increased soil organic matter and changes in soil temperature and water content due to both thinning and local climate conditions (Tang et al. 2005a). In the model proposed by Tang et al. (2005b), surface soil temperature and humidity explained a high percentage of Rs variability. In our study, the optimal model with all the statistically significant terms at p < 0.05 accounted for more than 40% of Rs variability in an Aleppo pine forest and a holm oak forest, with climate variables explaining a high percentage of Rs variability. Soil temperature showed an exponential relation to Rs, with adjusted Q10 values ranging from 1.87 to 2.64 and greater temperature sensitivity in the HU plots, where Q10 was affected by the silvicultural treatment. The calculated Q10 values agree with the values often reported in the literature (Ohashi and Gyokusen 2007; Inclán et al. 2010). Pang et al. (2013) also found that thinning altered Q10. As Q10 reflects the increase in the velocity of a reaction with a 10 ºC increase in temperature, its value is determined not only by the speed of biochemical reactions, but also by increases in microbial biomass. High Q10 values indicate full substrate availability when biomass increases (Davidson and Janssens 2006; Zhou et al. 2013). The effect of surface soil humidity on Rs was well represented by a second-order polynomial model, with microbial activity limited at low soil humidity but also at high humidity due to oxygen deficiency. Several studies have revealed that litter heterotrophic respiration can be a primary source of C loss from forest ecosystems (Xiao et al. 2014; McElligott et al. 2017). Several others have compared the heterotrophic and autotrophic components of Rs. In a mixed oak ecosystem, Rey et al. (2003) reported that, on an annual basis, 54.9% of Rs was produced by soil organic matter decomposition, 21.9% by aboveground litter decomposition and 23.3% by rhizosphere respiration. We observed that the mean Rs values in the control plots were higher in the pine forest than in the oak forest. Fernández-Alonso et al. (2018), who worked with two similar forests, also obtained higher Rs values in a Scots pine forest than in a Pyrenean oak forest because of higher heterotrophic respiration values in pine forests. They also indicated that soil heterotrophic respiration was the most important component, accounting for 76% of Rs for Pyrenean oak and 88% for Scots pine.
Their study measured heterotrophic Rs in soil with a soil organic matter content that was approximately threefold higher in pine forests than in oak forests. At the CA site in our experiment, the organic horizon was composed of a shallow litter layer, whose main component was partially decomposed needles with a high C/N ratio, while mineral soil had a lower soil organic carbon content than the soil from HU. Thinning reduces soil autotrophic respiration, but temporary increase in litter respiration after thinning has been reported (Lagomarsino et al. 2020). As soil heterotrophic respiration relies on substrate availability, its value depends not only on the actual forest composition, but on the entire cumulative carbon balance throughout the soil's history. Understanding the mechanisms that drive carbon loss from the soil surface is essential for ascertaining the impact of forest management on soil carbon sequestration. This work presents the complexity of the mechanisms that affect Rs spatial variability in Mediterranean mountain forest ecosystems. The impact of forest thinning on the soil carbon balance depends on direct effects, such as reduced plant biomass and root respiration, increased plant residue and, hence, heterotrophic substrate respiration. Forest thinning also affects Rs, mainly due to indirect effects, e.g. changes in soil microclimate conditions, which leads to a higher soil organic matter decomposition rate. As Mediterranean climates are characterised by wide interannual variability, Rs shows considerable variability among years, which can mask the effect caused by thinning treatment. The analysis of spatio-temporal variability helped us to understand the effect of adaptive forest management on Rs. Further work needs to consider the relation between Rs and the many factors that influence substrate availability, species distribution and microclimate variables on abrupt slopes. Ǻgren GI, Knecht M (2001) Simulation of soil carbon and nutrient development under Pinus Sylvestris and Pinus contorta. Forest Ecol Manag 141:117–129. https://doi.org/10.1016/S0378-1127(00)00495-3 Akburak S, Makineci E. (2016). Thinning effects on soil and microbial respiration in a coppice-originated Carpinus betulus L. stand in Turkey. Iforest. 9 783. https://doi.org/10.3832/ifor1810-009 Almagro M, López J, Querejeta JI, Martínez-Mena M (2009) Temperature dependence of soil CO2 efflux is strongly modulated by seasonal patterns of moisture availability in a Mediterranean ecosystem. Soil Biol Biochem 41:594–605. https://doi.org/10.1016/j.soilbio.2008.12.021 Arias-Navarro C, Díaz-Pinés E, Klatt S, Brandt P, Rufino MC, Butterbach-Bahl K, Verchot LV (2017) Spatial variability of soil N2O and CO2 fluxes in different topographic positions in a tropical montane forest in Kenia. J Geophys Res Biogeosci 122:514–527. https://doi.org/10.1002/2016JG003667 Bautista I, Pabón CA, Lull C, González-Sanchis MC, Lidón A, del Campo AD (2015) Efectos de la gestión forestal en los flujos de nutrientes asociados al ciclo hidrológico en un bosque mediterráneo de Quercus ilex. Cuad Soc Esp Cienc for 41:343–354. https://doi.org/10.31167/csef.v0i41.17400 Boerner REJ (1983) Nutrient dynamics of vegetation and detritus following two intesities of fire in the New Jersey pine barrens. Oecologia 59:129–134 Bolat Ĭ (2014) The effect of thinning on microbial biomass C, N and basal respiration in black pine forest soils in Mudurnu, Turkey. Eur J for Res 133:131–139. 
https://doi.org/10.1007/s10342-013-0752-8 Cabon A, Mouillot F, Lempereur M, Ourcival JM, Simioni G, Limousin JM (2018) Thinning increases tree growth by delaying drought-induced growth cessation in a Mediterranean evergreen oak coppice. Forest Ecol Manag 409:333–342. https://doi.org/10.1016/j.foreco.2017.11.030 Campbell J, Alberti G, Martin J, Law BE (2009) Carbon dynamics of a ponderosa pine plantation following a thinning treatment in the northern Sierra Nevada. Forest Ecol Manag 257:453–463. https://doi.org/10.1016/j.foreco.2008.09.021 Chang CT, Sperlich D, Sabaté S, Sánchez-Costa E, Cotillas M, Espelta JM, Gracia C (2016) Mitigating the stress of drought on soil respiration by selective thinning: Contrasting effects of drought on soil respiration of two oak species in a Mediterranean forest. Forests 7:263. https://doi.org/10.3390/f7110263 Cheng X, Kang F, Han H, Liu H, Zhang Y (2015) Effect of thinning on partitioned soil respiration in a young Pinus tabulaeformis plantation during growing season. Agric for Meteorol 214–215:473–482. https://doi.org/10.1016/j.agrformet.2015.09.016 Curtin D, Beare MH, Hernandez-Ramirez G (2012) Temperature and humidity effects on microbial biomass and soil organic matter mineralization. Soil Sci Soc Am J 76:2055–2067. https://doi.org/10.2136/sssaj2012.0011 D'Amato AW, Bradford JB, Fraver S, Palik BJ (2013) Effects of thinning on drought vulnerability and climate response in north temperate forest ecosystems. Ecol Appl 23:1735–1742. https://doi.org/10.1890/13-0677.1 Davidson EA, Janssens IA (2006) Temperature sensitivity of soil carbon decomposition and feedbacks to climate change. Nature 440:165–173. https://doi.org/10.1038/nature04514 Davidson EA, Richardson AD, Savage KE, Hollinger DY (2006) A distinct seasonal pattern of the ratio of soil respiration to total ecosystem respiration in a spruce–dominated forest. Global Change Biol 12:230–239. https://doi.org/10.1111/j.1365-2486.2005.01062.x del Campo AD, González–Sanchis M, Lidón A, García–Prats A, Lull C, Bautista I, Ruíz-Pérez G, Francés F (2017) Ecohydrological-based forest management in semi-arid climate. In: Krecek J, Haigh M, Hofer T, Kubin E, Promper C (eds) Ecosystem Services of Headwater Catchments. Springer. 45–57. https://doi.org/10.1007/978-3-319-57946-7_6 del Campo AD, González-Sanchis M, Lidón A, Ceacero C, García-Prats A (2018) Rainfall partitioning in two low-biomass semiarid forest: impact of climate and forest structure on the effectiveness of water oriented treatments. J Hydrol 565:74–86. https://doi.org/10.1016/j.jhydrol.2018.08.013 del Campo AD, González-Sanchis M, Garcia-Prats A, Ceacero CJ, Lull C (2019) The impact of adaptive forest management on water fluxes and growth dynamics in a water-limited low-biomass oak coppice. Agric for Meteorol 264:266–282. https://doi.org/10.1016/j.agrformet.2018.10.016 Dixon RK, Solomon AM, Brown S, Houghton RA, Trexier MC, Wisniewski J (1994) Carbon pools and flux of global forest ecosystems. Science 263:185–190. https://doi.org/10.1126/science.263.5144.185 Ducrey M, Turrel M (1992) Influence of cutting methods and dates on stump sprouting in Holm oak (Quercus ilex L) coppice. Ann for Sci 49:449–464. https://doi.org/10.1051/forest:19920502 Fang Y, Gundersen P, Zhang W, Zhou G, Christiansen JR, Mo J, Dong S, Zhang T (2009) Soil–atmosphere exchange of N2O, CO2 and CH4 along a slope of an evergreen broad-leaved forest in southern China. Plant Soil 319:37–48. 
https://doi.org/10.1007/s11104-008-9847-2 Fernández-Alonso MJ, Diaz-Pines E, Ortiz C, Rubio A (2018) Disentangling the effects of tree species and microclimate on heterotrophic and autotrophic soil respiration in a Mediterranean ecotone forest. Forest Ecol Manag 430:533–544. https://doi.org/10.1016/j.foreco.2018.08.046 Goldin SR, Hutchinson MF (2013) Coarse woody debris modifies surface soils of degraded temperate eucalypt woodlands. Plant Soil 370:461–469. https://doi.org/10.1007/s11104-013-1642-z Hanson PJ, Edwards NT, Gaeten CT, Andrews JA (2000) Separating root and soil microbial contributions to soil respiration: and review of methods and observations. Biogeochemistry 48:115–146. https://doi.org/10.1023/A:1006244819642 Högberg P, Nordgren A, Buchmann N, Taylor AFS, Ekblad A, Högberg MN, Nyberg G, Ottosson-Löfvenius M, Read DJ (2001) Large-scale forest girdling shows that current photosynthesis drives soil respiration. Nature 411:789–792. https://doi.org/10.1038/35081058 Inclán R, Uribe C, De La Torre D, Sánchez DM, Clavero MA, Fernández AM, Morante R, Cardeña A, Fernández M, Rubio A (2010) Carbon dioxide fluxes across the Sierra de Guadarrama, Spain. Eur J for Res 129:93–100. https://doi.org/10.1007/s10342-008-0247-1 Jandl R, Lindner M, Vesterdal L, Bauwens B, Baritz R, Hagedorn F, Johnson DW, Minkkinen K, Byrne KA (2007) How strongly can forest management influence soil carbon sequestration? Geoderma 137:253–268. https://doi.org/10.1016/j.geoderma.2006.09.003 Jandl R, Spathelf P, Bolte A, Prescott CE (2019) Forest adaptation to climate change—is non-management an option? Ann for Sci 76:48. https://doi.org/10.1007/s13595-019-0827-x Lado-Monserrat L, Lull C, Bautista I, Lidón A, Herrera R (2014) Soil humidity increment as a controlling variable of the "Birch effect". Interactions with the pre-wetting soil humidity and litter addition. Plant Soil 379:21–34. https://doi.org/10.1007/s11104-014-2037-5 Lagomarsino A, Mazza G, Agnelli AE, Lorenzetti R, Bartoli C, Viti C, Colombo C, Pastorelli R (2020) Litter fractions and dynamics in a degraded pine forest after thinning treatments. Eur J Forest Res 139:295–310. https://doi.org/10.1007/s10342-019-01245-8 Lal R, Smith P, Jungkunst HF, Mitsch WJ, Lehmann J, Nair KPR, McBratney AB, de Moraes Sá JC, Schneider J, Zinn YL, Skorupa ALA, Zhang H, Minasny B, Srinivasrao C, Ravindranath NH (2018) The carbon sequestration potential of terrestrial ecosystems. J Soil Water Conserv 73:145A-152A. https://doi.org/10.2489/jswc.73.6.145A Lindner M, Maroschek M, Netherer S, Kremer A, Barbati A, Garcia-Gonzalo J, Seidl R, Delzon S, Corona P, Kolström M, Lexer MJ, Marchetti M (2010) Climate change impacts, adaptive capacity, and vulnerability of European forest ecosystems Forest Ecol Manag 259: 698–709. https://doi.org/10.1016/j.foreco.2009.09.023 Lloyd J, Taylor JA (1994) On the temperature dependence of soil respiration. Funct Ecol 8:315–323. https://doi.org/10.2307/2389824 López BC, Sabaté S, Gracia CA (2003) Thinning effects on carbon allocation to fine roots in a Quercus ilex forest. Tree Physiol 23:1217–1224. https://doi.org/10.1093/treephys/23.17.1217 López-Serrano FR, Rubio E, Dadi T, Moya D, Andrés-Abellán M, García-Morote FA, Martínez-García E (2016) Influences of recovery from wildfire and thinning on soil respiration of a Mediterranean mixed forest. Sci Total Environ 573:1217–1231. 
https://doi.org/10.1016/j.scitotenv.2016.03.242 Lull C, Bautista I, Lidón A, del Campo AD, González-Sanchis M, García-Prats A (2020) Temporal effects of thinning on soil organic carbon pools, basal respiration and enzyme activities in a Mediterranean Holm oak forest. Forest Ecol Manage 464:118088. https://doi.org/10.1016/j.foreco.2020.118088 Ma S, Chen J, North M, Erickson HE, Bresee M, Le Moine J (2004) Short-term effects of experimental burning and thinning on soil respiration in an old-growth, mixed-conifer forest. Environ Manag 33:148–159. https://doi.org/10.1007/s00267-003-9125-2 Mallik AU, Hu D (1997) Soil respiration following site preparation treatments in boreal mixedwood forest. For Ecol Manag 97:265–275. https://doi.org/10.1016/S0378-1127(97)00067-4 MAPA (1994) Métodos oficiales de análisis de suelos y aguas. Ministerio de Agricultura, Pesca y Alimentación, Madrid Martin JG, Bolstad PV (2009) Variation of soil respiration at three spatial scales: Components within measurements, intra-site variation and patterns on the lanscape. Soil Biol Biochem 41:530–543. https://doi.org/10.1016/j.soilbio.2008.12.012 Martin WKE, Timmer VR (2006) Capturing spatial variability of soil and litter properties in a forest stand by landform segmentation procedures. Geoderma 132:169–181. https://doi.org/10.1016/j.geoderma.2005.05.004 Martínez-García E, López-Serrano FR, Dadi T, García-Morote FA, Andrés-Abellán M, Pumpanen J, Rubio E (2017) Medium-term dynamics of soil respiration in a Mediterranean mountain ecosystem: The effects of burn severity, post-fire burnt-wood management, and slope-aspect. Agric for Meteorol 233:195–208. https://doi.org/10.1016/j.agrformet.2016.11.192 McElligott KM, Seiler JR, Strahm BD (2017) The impact of water content on sources of heterotrophic soil respiration. Forests 8:299. https://doi.org/10.3390/f8080299 Millar CI, Stephenson NL, Stephens SL (2007) Climate change and forests of the future: managing in the face of uncertainty. Ecol Appl 17:2145–2151. https://doi.org/10.1890/06-1715.1 Molina A, del Campo A (2012) The effects of experimental thinning on throughfall and stemflow: a contribution towards hydrology-oriented silviculture in Aleppo pine plantations. For Ecol Manag 269:206–213. https://doi.org/10.1016/j.foreco.2011.12.037 Moyano FE, Vasilyeva NA, Bouckaert L, Cook F, Craine JM, Don A, Epron D, Formanek P, Franzluebbers A, Ilstedt U, Katterer T, Orchard V, Reichstein M, Rey A, Ruamps LS, Subke J, Thomsen IK, Chenu C (2012) The moisture response of soil heterotrophic respiration: Interaction with soil properties. Biogeosciences 9:1173–1182. https://doi.org/10.5194/bg-9-1173-2012 Nelson, DW, Sommers LE (1982) Total carbon, organic carbon and organic matter. In: Page AL, Miller RH, Jeeney DR (eds) Methods of soil analysis. Part 2. Chemical and mineralogical properties. American Society of Agronomy, Madison, 9. 539–579 Newman GS, Arthur MA, Muller RN (2006) Above–and belowground net primary production in a temperate mixed deciduous forest. Ecosystems 9:317–329. https://doi.org/10.1007/s10021-006-0015-3 Ohashi M, Gyokusen K (2007) Temporal change in spatial variability of soil respiration on a slope of Japanese cedar (Cryptomeria japonica D. Don) forest. Soil Biol Biochem 39:1130–1138. https://doi.org/10.1016/j.soilbio.2006.12.021 Ohashi M, Gyokusen K, Saito A (1999) Measurement of carbon dioxide evolution from a Japanese cedar (Cryptomeria japonica D. Don) forest floor using an open-flow chamber method. For Ecol Manag 123:105–114. 
https://doi.org/10.1016/S0378-1127(99)00020-1 Olajuyigbe S, Tobin B, Saunders M, Nieuwenhuis M (2012) Forest thinning and soil respiration in a Sitka spruce forest in Ireland. Agric for Meteorol 157:86–95. https://doi.org/10.1016/j.agrformet.2012.01.016 Olivar J, Bogino S, Rathgeber C, Bonnesoeur V, Bravo F (2014) Thinning has a positive effect on growth dynamics and growth–climate relationships in Aleppo pine (Pinus halepensis) trees of different crown classes. Ann for Sci 71:395–404. https://doi.org/10.1007/s13595-013-0348-y Pang X, Bao W, Zhu B, Cheng W (2013) Responses of soil respiration and its temperature sensitivity to thinning in a pine plantation. Agric for Meteorol 171:57–64. https://doi.org/10.1016/j.agrformet.2012.12.001 Peng Y, Thomas SC, Tian D (2008) Forest management and soil respiration: implications for carbon sequestration: Environ. Rev 16:93–111. https://doi.org/10.1139/A08-003 Pennock D, Yates T, Braidek J (2008) Soil sampling designs. In: Carter MR, Gregorich EG (eds) Soil sampling and methods of analysis, 2on edn. CRC Press, pp 1–14 Puertes C, Lidón A, Echeverría C, Bautista I, González-Sanchis M, del Campo AD, Francés F (2019) Explaining the hydrological behaviour of facultative phreatophytes using a multi-variable and multi-objective modelling approach. J Hydrol 575:395–407. https://doi.org/10.1016/j.jhydrol.2019.05.041 R Development Core Team (2017) R version 3.3.3. R Foundation for Statistical Computing. Rey A, Pegoraro E, Tedeschi V, de Parri I, Jarvis PG, Valentini R (2003) Annual variation in soil respiration and its components in a coppice oak forest in Central Italy. Glob Change Biol 8:851–866. https://doi.org/10.1046/j.1365-2486.2002.00521.x Scarascia-Mugnozza G, Oswald H, Piussi P, Radoglou K (2000) Forests of the Mediterranean region: gaps in knowledge and research needs. For Ecol Manag 132:97–109. https://doi.org/10.1016/S0378-1127(00)00383-2 Spears JDH, Lajtha K (2004) The imprint of coarse woody debris on soil chemistry in the western Oregon Cascades. Biogeochemistry 71:163–175. https://doi.org/10.1007/s10533-004-6395-6 Striegl RJ, Wickland KP (1998) Effects of a clear-cut harvest on soil respiration in a jack pine–lichen woodland. Can J for Res 28:534–539. https://doi.org/10.1139/x98-023 Tang J, Misson L, Gerhenson A, Cheng W, Goldstein AH (2005a) Continuous measurements of soil respiration with and without roots in a ponderosa pine plantation in the Sierra Nevada Mountains. Agric for Meteorol 132:212–227. https://doi.org/10.1016/j.agrformet.2005.07.011 Tang J, Qi Y, Xu M, Misson L, Goldstein AH (2005b) Forest thinning and soil respiration in a ponderosa pine plantation in the Sierra Nevada. Tree Physiol 25:57–66. https://doi.org/10.1093/treephys/25.1.57 Thibodeau L, Raymond P, Camiré C, Munson AD (2000) Impact of precommercial thinning in balsam fir stands on soil nitrogen dynamics, microbial biomass, decomposition and foliar nutrition. Can J for Res 30:229–238. https://doi.org/10.1139/x99-202 Toland DE, Zak DR (1994) Seasonal patterns of soil respiration in intact and clear-cut northern hardwood forests. Can J for Res 24:1711–1716. https://doi.org/10.1139/x94-221 Ukonmaanaho L, Merilä P, Nöjd P, Nieminen TM (2008) Litterfall production and nutrient return to the forest floor in Scots pine and Norway spruce stands in Finland. Boreal Env Res 13: 67−91. http://urn.fi/URN:NBN:fi-fe2016091323703 UNEP (United Nations Environment Programme) (1992) World atlas of desertification. Edited by Middleton N, Thomas DSG, Arnold E, London. 
https://doi.org/10.1002/ldr.3400030407 Xiao W, Ge X, Zeng L, Huang Z, Lei J, Zhou B, Li M (2014) Rates of litter decomposition and soil respiration in relation to soil temperature and water in different−aged Pinus massoniana forests in the three Gorges Reservoir area. China Plos ONE 9:e101890. https://doi.org/10.1371/journal.pone.0101890 Xu M, Qi Y (2001) Soil-surface CO2 efflux and its spatial and temporal variations in a young ponderosa pine plantation in northern California. Glob Change Biol 7:667–677. https://doi.org/10.1046/j.1354-1013.2001.00435.x Xu J, Chen J, Brosofske K, Li Q, Weintraub M, Henderson R, Wilske B, John R, Jensen R, Li H, Shao C (2011) Influence of timber harvesting alternatives on forest soil respiration and its biophysical regulatory factors over a 5-year period in the Missouri Ozarks. Ecosystems 14:1310–1327. https://doi.org/10.1007/s10021-011-9482-2 Yaalon DH (1997) Soils in the Mediterranean region: what makes them different? CATENA 28:157–169. https://doi.org/10.1016/S0341-8162(96)00035-5 Yakovchenko VP, Sikora LJ (1998) Modified dichromate method for determining low concentrations of extractable organic carbon in soil. Commun Soil Sci Plant Anal 29:421–433. https://doi.org/10.1080/00103629809369955 Zhang X, Guan D, Li W, Suna D, Jin C, Yuan F, Wang A, Wu J (2018) The effects of forest thinning on soil carbon stocks and dynamics: A meta-analysis. For Ecol Manag 429:36–43. https://doi.org/10.1016/j.foreco.2018.06.027 Zhao B, Gao J, Geng Y, Zhao X, von Gadow K (2019) Inconsistent responses of soil respiration and its components to thinning intensity in a Pinus tabuliformis plantation in northern China. Agric for Meteorol 265:370–380. https://doi.org/10.1016/j.agrformet.2018.11.034 Zhou Z, Guo C, Meng H (2013) Temperature sensitivity and basal rate of soil respiration and their determinants in temperate forests of north China. PLoS ONE 8:e81793. https://doi.org/10.1371/journal.pone.0081793 This study was supported by research projects Hydrological characterisation of forest structures on a plot scale for adaptive management (CGL2011-28776-C02-02) and SILWAMED (CGL2014-58127-C3-2) funded by the Spanish Ministry of Science and Innovation and FEDER funds. CEHYRFO-MED (CGL2017-86839-C3-2-R), RESILIENT-FORESTS (LIFE17 CCA/ES/000063) and SilvAdapt.net (RED2018-102719-T) are also acknowledged. The authors are grateful to the Valencia Regional Government, VAERSA, ACCIONA and the "Sierra Calderona" Natural Park for their support in allowing us to use experimental forests and for their assistance in fieldwork. We also thank Rafael Herrera from the Centro de Ecología, Instituto Venezolano de Investigaciones Científicas, Caracas, Venezuela, and the anonymous reviewers for critically reviewing the manuscript. Open Access funding provided thanks to the CRUE-Universitat Politècnica de València agreement with Springer Nature. Research Group in Forest Science and Technology (Re-ForeST), Research Institute of Water and Environmental Engineering (IIAMA), Universitat Politècnica de València, Camí de Vera s/n, 46022, Valencia, Spain Inmaculada Bautista, Antonio Lidón & Cristina Lull Hydraulic Engineering and Environment Department, Research Group in Forest Science and Technology (Re-ForeST), Universitat Politècnica de València, Camí de Vera s/n, 46022, Valencia, Spain María González-Sanchis & Antonio D. del Campo Inmaculada Bautista Antonio Lidón Cristina Lull María González-Sanchis Antonio D. del Campo Correspondence to Inmaculada Bautista. Communicated by Agustin Merino. 
Bautista, I., Lidón, A., Lull, C. et al. Thinning decreased soil respiration differently in two dryland Mediterranean forests with contrasted soil temperature and humidity regimes. Eur J Forest Res 140, 1469–1485 (2021). https://doi.org/10.1007/s10342-021-01413-9 Revised: 07 September 2021 Issue Date: December 2021 Soil climate Soil hydrology
Test of Mathematics Solution Subjective 67 - Four Real Roots This is a Test of Mathematics Solution Subjective 67 (from ISI Entrance). The book, Test of Mathematics at 10+2 Level, is published by East West Press. This problem book is indispensable for the preparation of I.S.I. B.Stat and B.Math Entrance. Also visit: I.S.I. & C.M.I. Entrance Course of Cheenta. Problem: Describe the set of all real numbers x which satisfy $2\log_{2x+3} x < 1$. Solution: Since $\log x$ is defined only for $x > 0$, we must have $x > 0$; then the base satisfies $2x+3 > 3 > 1$. For a base greater than 1, $(2x+3)^{a} > (2x+3)^{b}$ whenever $a > b$. Hence $2\log_{2x+3} x < 1$ is equivalent to $\log_{2x+3} x < \frac{1}{2}$, that is $(2x+3)^{\frac{1}{2}} > x$, so $2x+3 > x^{2}$, i.e. $x^{2} - 2x - 3 < 0$, which gives $-1 < x < 3$. Since $x > 0$, the set of all real numbers x satisfying the inequality is $0 < x < 3$.
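As a quick numerical sanity check of this answer (the snippet below is an illustration added here, not part of the original solution), one can evaluate the inequality directly at a few sample points:

```python
import math

def holds(x: float) -> bool:
    """Check whether 2*log_{2x+3}(x) < 1 for x > 0 (the base 2x+3 is then > 1)."""
    return 2 * math.log(x) / math.log(2 * x + 3) < 1

# Inside (0, 3) the inequality should hold; beyond 3 it should fail.
for x in (0.5, 1.0, 2.0, 2.9, 3.1, 4.0):
    print(x, holds(x))
# Expected output: True, True, True, True, False, False
```

Points inside (0, 3) satisfy the inequality while points beyond 3 do not, consistent with the solution set 0 < x < 3.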
General decay for viscoelastic plate equation with p-Laplacian and time-varying delay Jum-Ran Kang1 In this paper we study the viscoelastic plate equation with p-Laplacian and time-varying delay. We establish a general decay rate result under some restrictions on the coefficients of strong damping and strong time-varying delay and weakening the usual assumptions on the relaxation function, using the energy perturbation method. In this paper, we are concerned with the following problem: $$\begin{aligned} & u_{tt} + \alpha\Delta^{2} u -\Delta_{p} u - \int_{-\infty}^{t} g(t-s) \Delta^{2} u (s) \,d s - \mu_{1} \Delta u_{t} \\ &\quad{} -\mu_{2} \Delta u_{t} \bigl(t-\tau(t)\bigr) +f(u) = h,\quad \Omega\times {\mathbb {R}}^{+} , \end{aligned}$$ $$\begin{aligned} & u = \Delta u= 0\quad \text{on } \partial\Omega \times {\mathbb {R}}^{+} , \end{aligned}$$ $$\begin{aligned} & u(x,0) = u_{0} (x) , \qquad u_{t} (x,0) = u_{1} (x) ,\quad x\in \Omega, \end{aligned}$$ $$\begin{aligned} & u_{t} (x, t)=f_{0} (x, t),\quad (x, t) \in\Omega\times\bigl[- \tau(0), 0\bigr) , \end{aligned}$$ where Ω is a bounded domain of \({\mathbb {R}}^{n}\) with a sufficiently smooth boundary ∂Ω, and $$\Delta_{p} u=\operatorname{div}\bigl( \Vert \nabla u \Vert ^{p-2} \nabla u \bigr) $$ is the p-Laplacian operator. The unknown function \(u(x,t)\) denotes the transverse displacement of a plate filament with prescribed history \(u_{0} (x,t), t\leq0 \). The constants α and \(\mu_{1}\) are positive and \(\mu_{2}\) is a real number. The function \(\tau(t)>0\) represents the time-varying delay, \(g>0\) is the memory kernel and f is forcing term. The plate equation with lower order perturbation of p-Laplacian type, $$u_{tt} +\Delta^{2} u -\Delta_{p} u -\Delta u_{t} = h(x), $$ has been extensively studied (see [1–3]) and results concerning existence, nonexistence and long-time behavior of solutions have been considered. This model can be regarded as describing elastoplastic microstructure flows. On the other hand, the elliptic problem for p-Laplace operator can be found in [4]. Recently, Torres [5] showed the existence of a solution for the fractional p-Laplacian Dirichlet problem with mixed derivatives. When \(\mu_{2}=0\) in Eq. (1.1), that is, in the absence of delay, problem (1.1) with strong damping was investigated by Jorge Silva and Ma [6]. They established exponential stability of solutions under the condition $$\begin{aligned} g'(t)\leq-cg(t), \quad\forall t\geq0, \end{aligned}$$ for some \(c>0\). Andrade et al. [7] proved exponential stability of solutions for the plate equation with finite memory and p-Laplacian. The viscosity term \(-\Delta u_{t}\) is often called a Kelvin–Voigt type dissipation or strong dissipation; it appears in phenomena of wave propagation in a viscoelastic material. Nakao [8] obtained the existence and uniqueness of a global decaying solution for the quasilinear wave equation with Kelvin–Voigt dissipation and a derivative nonlinearity. Pukach et al. [9] studied sufficient conditions of nonexistence of global in time solution for a nonlinear evolution equation with memory generalizing the Voigt–Kelvin model. Recently, Cavalcanti et al. [10] considered intrinsic decay rates for the energy of a nonlinear viscoelastic equation modeling the vibrations of thin rods with variable density. 
Time delays so often arise in many physical, chemical, biological, thermal, and economical phenomena because these phenomena depend not only on the present state but also on the past history of the system in a more complicated way. In recent years, there has been published much work concerning the wave equation with constant delay or time-varying delay effects. Nicaise and Pignotti [11] investigated some stability results for the following wave equation with a linear damping and delay term in the domain: $$\begin{aligned} u_{tt } -\Delta u +\mu_{1} u_{t} + \mu_{2} u_{t}(t-\tau)=0 \end{aligned}$$ in the case \(0<\mu_{2} <\mu_{1} \). Moreover, the same results were obtained when both the damping and the delay act on the boundary. Nicaise and Pignotti [12] studied exponential stability results for the following wave equation with time-dependent delay: $$\begin{aligned} u_{tt } -\Delta u +\mu_{1} u_{t} + \mu_{2} u_{t}\bigl(t-\tau(t)\bigr)=0 \end{aligned}$$ under the condition \(\vert \mu_{2} \vert < \sqrt{1-d} \mu_{1} \). Kirane and Said-Houari [13] considered the following viscoelastic wave equation with a linear damping and a delay term: $$\begin{aligned} u_{tt } -\Delta u + \int_{0}^{t} g(t-s) \Delta u(s) \,ds +\mu_{1} u_{t} + \mu_{2} u_{t} (t-\tau) =0, \end{aligned}$$ where \(\mu_{1}\) and \(\mu_{2}\) are positive constants. When \(\mu_{2} \leq\mu_{1}\), they proved general decay of the energy under the condition $$\begin{aligned} g'(t) \leq-\xi(t) g(t), \quad\forall t\geq0, \end{aligned}$$ where \(\xi:R^{+} \rightarrow R^{+}\) is a nonincreasing differentiable function. Dai and Yang [14] improved the results of [13]. They also obtained an exponentially decay results for the energy of the problem (1.8) in the case \(\mu_{1}=0\). Liu [15] studied a general decay result for the following viscoelastic wave equation with time-dependent delay: $$\begin{aligned} u_{tt } -\Delta u + \int_{0}^{t} g(t-s) \Delta u(s) \,ds +\mu_{1} u_{t} + \mu_{2} u_{t} \bigl(t-\tau(t)\bigr) =0 \end{aligned}$$ under the conditions (1.9) and \(\vert \mu_{2} \vert < \sqrt{1-d} \mu_{1}\). For the plate equation with time delay term, Yang [16] considered the stability for an Euler–Bernoulli viscoelastic equation with constant delay $$\begin{aligned} u_{tt } +\Delta^{2} u + \int_{0}^{t} g(t-s) \Delta^{2} u(s) \,ds + \mu_{1} u_{t} + \mu_{2} u_{t} (t-\tau) =0 \end{aligned}$$ under the conditions (1.5) and \(0< \vert \mu_{2} \vert < \mu_{1}\). Moreover, he proved the exponential decay results of the energy in the case \(\mu_{1}=0\). Recently, Feng [17] investigated an exponential stability results for the following plate equation with time-varying delay and past history: $$\begin{aligned} u_{tt } + \alpha\Delta^{2} u - \int_{-\infty}^{t} g(t-s) \Delta^{2} u(s) \,ds + \mu_{1} u_{t} + \mu_{2} u_{t} \bigl(t- \tau(t)\bigr) +f(u)=0 \end{aligned}$$ under the conditions (1.5) and \(0< \vert \mu_{2} \vert < \sqrt{1-d} \mu _{1}\). Mustafa and Kafini [18] showed the decay rates for memory type plate system (1.12) with \(\tau(t)=\tau\) and \(f(u)=-u \vert u \vert ^{\gamma}\). Park [19] obtained the general decay estimates for a viscoelastic plate equation with time-varying delay under the condition (1.9). The stability of the solutions to a viscoelastic system under the condition (1.9) was studied in [20–23] and the references therein. With respect to wave equation with strong time delay, there is just little published work. Messaoudi et al. 
[24] considered the following wave equation with strong time delay: $$\begin{aligned} u_{tt } -\Delta u -\mu_{1} \Delta u_{t} - \mu_{2} \Delta u_{t} (t-\tau) =0 \end{aligned}$$ and proved the well-posedness under the condition \(\vert \mu _{2} \vert \leq \mu_{1}\) and obtained exponential decay of energy under the condition \(\vert \mu_{2} \vert < \mu_{1} \). Recently, Feng [25] established the general decay result for the following viscoelastic wave equation with strong time-dependent delay: $$\begin{aligned} u_{tt } -\Delta u + \int_{0}^{t} g(t-s) \Delta u(s) \,ds -\mu_{1} \Delta u_{t} - \mu_{2} \Delta u_{t} \bigl(t-\tau(t) \bigr) =0 \end{aligned}$$ under the conditions (1.9) and \(\vert \mu_{2} \vert < \sqrt {1-d} \mu_{1}\). However, to the best of my knowledge, there is no stability result for the viscoelastic plate equation with strong time-varying delay. Motivated by [24, 25], we study a general decay result for viscoelastic plate equation with p-Laplacian and time-varying delay (1.1)–(1.4) for relaxation function g satisfying the condition (1.9). This result improves on earlier ones in the literature because it allows for certain relaxation functions which are not necessarily of exponential or polynomial decay. We end this section by establishing the usual history setting of problem (1.1)–(1.4). Following a method devised in [26–29], we shall use a new variable \(\eta^{t}\) to the system with past history. Let us define $$\begin{aligned} \eta= \eta^{t} (x, s) = u(x, t) -u(x, t-s), \quad (x, s)\in\Omega\times { \mathbb {R}}^{+} , t\geq0 . \end{aligned}$$ Differentiating in (1.15) we have $$\begin{aligned} \eta_{t}^{t} (x, s) + \eta_{s}^{t} (x, s) = u_{t}(x, t) , \quad (x, s)\in\Omega\times{\mathbb {R}}^{+} , t\geq0 . \end{aligned}$$ Taking \({\alpha= 1+ \int_{0}^{\infty}g (s) \,d s}\), the original problem (1.1)–(1.4) can be transformed into the new system $$\begin{aligned} \textstyle\begin{cases} u_{tt}+ \Delta^{2} u -\Delta_{p} u + \int_{0}^{\infty} g(s) \Delta^{2} \eta^{t} (s) \,ds\\ \quad{} -\mu_{1}\Delta u_{t} - \mu_{2} \Delta u_{t} ( t-\tau(t)) + f(u) =h ,\quad\text{in } \Omega\times{\mathbb {R}}^{+}, \\ \eta^{t}_{t} = -\eta^{t}_{s} +u_{t} , \quad (x, t,s) \in\Omega\times{\mathbb {R}}^{+} \times{\mathbb {R}}^{+}, \\ u_{t}(x, t)=f_{0} (x, t) ,\quad (x, t) \in\Omega\times[- \tau(0), 0), \end{cases}\displaystyle \end{aligned}$$ with boundary conditions $$\begin{aligned} u(x, t) =0 \quad\text{on }\partial\Omega\times {\mathbb {R}}^{+},\qquad \eta=0 \quad\text{on }\partial \Omega\times{\mathbb {R}}^{+} \times{\mathbb {R}}^{+}, \end{aligned}$$ and initial conditions $$\begin{aligned} u(x,0) = u_{0} (x),\qquad u_{t} (x,0) = u_{1}(x),\qquad \eta^{t} (x, 0)=0, \qquad\eta^{0} (x, s) =\eta_{0} (x, s) , \end{aligned}$$ $$\begin{aligned} \textstyle\begin{cases} u_{0} (x) = u_{0} (x, 0),\quad x\in\Omega, \\ u_{1} (x) = \partial_{t} u_{0} (x, t)|_{t=0},\quad x\in\Omega, \\ \eta_{0} (x, s) = u_{0} (x, 0 ) - u_{0} ( x, -s) , \quad(x, s) \in\Omega \times{\mathbb {R}}^{+}. \end{cases}\displaystyle \end{aligned}$$ The paper is organized as follows. In Sect. 2, we state the notation and main result. In Sect. 3, we prove the general decay of the solutions to the viscoelastic plate equation with p-Laplacian and time-varying delay by using the energy perturbation method. In this section, we present some material needed in the proof of our result and state the main result. For a Banach space \(X, \Vert \cdot \Vert _{X}\) denotes the norm of X. 
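As a side remark before fixing further notation: the choice of α in the transformation above can be justified in one line. The computation below is a sketch added for clarity, using only the definitions already introduced. From (1.15) and \(\alpha = 1+ \int_{0}^{\infty}g (s) \,d s\),

$$\begin{aligned} \int_{-\infty}^{t} g(t-s) \Delta^{2} u (s) \,d s = \int_{0}^{\infty} g(s) \Delta^{2} u (t-s) \,d s = \int_{0}^{\infty} g(s) \Delta^{2} \bigl( u(t) - \eta^{t}(s) \bigr) \,d s = (\alpha-1) \Delta^{2} u(t) - \int_{0}^{\infty} g(s) \Delta^{2} \eta^{t} (s) \,d s , \end{aligned}$$

so that

$$\begin{aligned} \alpha\Delta^{2} u(t) - \int_{-\infty}^{t} g(t-s) \Delta^{2} u (s) \,d s = \Delta^{2} u(t) + \int_{0}^{\infty} g(s) \Delta^{2} \eta^{t} (s) \,d s , \end{aligned}$$

which is exactly the elastic and memory part appearing in the first equation of (1.16). With this noted, we return to the functional setting.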
For simplicity, we denote \(\Vert \cdot \Vert _{L^{2}(\Omega)}\) by \(\Vert \cdot \Vert \). We use the standard Lebesgue and Sobolev spaces, with their usual scalar products and norms, and the Sobolev–Poincaré inequality $$\begin{aligned} \Vert u \Vert ^{2}_{q} \leq C_{*} \Vert \Delta u \Vert ^{2} ,\quad u\in H^{2}_{0} (\Omega), \end{aligned}$$ for \(q\geq2\) if \(1\leq n\leq4\) or \(2\leq q \leq\frac{2n}{n-4}\) if \(n\geq5\). In the following, we fix some notations on the function spaces that will be used. Let $$V_{0} =L^{2}(\Omega),\qquad V_{1}=H^{1}_{0}( \Omega),\qquad V_{2}= H^{2}(\Omega)\cap H^{1}_{0}( \Omega) $$ $$V_{3}=\bigl\{ u\in H^{3}(\Omega) |u=\Delta u =0 \text{ on } \partial \Omega\bigr\} . $$ In order to consider the new variable η, we introduce the weighted \(L^{2}\)-spaces $${\mathcal {M}}_{i}=L^{2}_{g} \bigl({\mathbb {R}}^{+} ;V_{i}\bigr)= \biggl\{ \eta: {\mathbb {R}}^{+} \rightarrow V_{i}\Big| \int_{0}^{\infty}g(s) \bigl\Vert \eta(s) \bigr\Vert _{V_{i}}^{2} \,ds < \infty \biggr\} ,\quad i=0, 1,2,3, $$ which are Hilbert spaces endowed with the inner products and norms $$(\eta, \xi)_{{\mathcal {M}}_{i}}= \int_{0}^{\infty}g(s) \bigl( \eta(s), \xi(s) \bigr)_{V_{i}} \,ds\quad \text{and}\quad \Vert \eta \Vert ^{2}_{{\mathcal {M}}_{i}}= \int_{0}^{\infty}g(s) \bigl\Vert \eta(s) \bigr\Vert ^{2}_{V_{i}} \,ds ,\quad i=0, 1,2,3, $$ respectively. To simplify the notation, we define the Hilbert spaces $${\mathcal {H}}=V_{2} \times V_{0} \times{\mathcal {M}}_{2} \quad\text{and}\quad {\mathcal {H}}_{1}=V_{3} \times V_{1} \times{\mathcal {M}}_{3}. $$ Let us begin with the precise hypotheses on the constant p and the functions f and g. For \(n\in{\mathbb {N}}\), we assume that $$\begin{aligned} 2\leq p\leq\frac{2n-2}{n-2} \quad\text{if }n\geq3\quad \text{and}\quad p\geq2 \quad\text{if }n=1, 2. \end{aligned}$$ $$H^{2}(\Omega)\cap H^{1}_{0} (\Omega) \hookrightarrow W_{0}^{1, 2(p-1)} (\Omega) \hookrightarrow H^{1}_{0} (\Omega) \hookrightarrow L^{2} (\Omega) . $$ The nonlinear function \(f: {\mathbb {R}} \rightarrow{\mathbb {R}}\) satisfying \(f(0)=0\) and the growth condition, $$\begin{aligned} \bigl\vert f(w)-f(z) \bigr\vert \leq k_{0} \bigl(1+ \vert w \vert ^{\rho}+ \vert z \vert ^{\rho}\bigr) \vert w-z \vert ,\quad \forall w, z \in{\mathbb {R}}, \end{aligned}$$ where \(k_{0} >0\) and $$\begin{aligned} 0< \rho\leq\frac{4}{n-4} \quad\text{if }n\geq5\quad\text{and}\quad\rho>0\quad\text{if }1\leq n\leq4 , \end{aligned}$$ which implies that \(H^{2} (\Omega) \hookrightarrow L^{2(\rho+1)} (\Omega)\). In addition, we assume that $$\begin{aligned} 0 \leq F(u) \leq f(u) u,\quad \forall u \in{\mathbb {R}}, \end{aligned}$$ where \(F(z)=\int_{0}^{z} f(s) \,d s \). For the relaxation function g, we assume that \(g:{\mathbb {R}}^{+} \rightarrow{\mathbb {R}}^{+}\) is a nonincreasing \(C^{1}\) function satisfying $$\begin{aligned} g(0)>0 ,\quad l:=1- \int_{0}^{\infty}g(s) \,ds >0 , \end{aligned}$$ and there exists a nonincreasing differentiable function \(\xi: {\mathbb {R}}^{+} \rightarrow{\mathbb {R}}^{+}\) satisfying $$\begin{aligned} \xi(t) >0, g'(t) \leq-\xi(t) g(t), \quad\forall t\geq0 \end{aligned}$$ $$\int_{0}^{+\infty} \xi(t) \,dt =\infty. 
$$ As in [12], for the time-varying delay, we assume that \(\tau\in W^{2, \infty}([0, T])\) for \(T>0\), and there exist positive constants \(\tau_{0}, \tau_{1}\) and d satisfying $$\begin{aligned} 0< \tau_{0} \leq\tau(t) \leq\tau_{1} \quad\text{and}\quad \tau'(t) \leq d < 1\quad \forall t >0, \end{aligned}$$ and that \(\mu_{1}\) and \(\mu_{2}\) satisfy $$\begin{aligned} \vert \mu_{2} \vert < \sqrt{1-d} \mu_{1} . \end{aligned}$$ We can prove the existence of weak solution by making use of the classical Faedo–Galerkin method. Then using elliptic regularity and second order estimates we can show the regularity of the solution. We state a well-posedness result without a proof here (see [6, 7, 13, 17]). Suppose that hypotheses (2.2)–(2.9) hold. If the initial data \((u_{0}, u_{1}, \eta_{0}) \in{\mathcal {H}}, f_{0}(x,t)\in H^{1}(\Omega\times(-\tau(0),0))\) and \(h(x)\in L^{2}(\Omega)\), then problem (1.16)–(1.18) has a unique weak solution $$\bigl(u, u_{t}, \eta^{t}\bigr) \in C(0, T; {\mathcal {H}} ),\quad \forall T>0, $$ $$u\in L^{\infty}(0,T;V_{2}),\qquad u_{t} \in L^{\infty}(0, T;V_{0})\cap L^{2}(0,T;V_{1} ),\qquad \eta^{t} \in L^{\infty}(0,T;{\mathcal {M}}_{2} ) . $$ If the initial data \((u_{0}, u_{1}, \eta_{0}) \in{\mathcal {H}}_{1}, f_{0}(x,t)\in H^{2}(\Omega\times(-\tau(0),0))\) and \(h(x) \in H^{1} (\Omega)\), then the above weak solution has higher regularity $$\begin{aligned} &u\in L^{\infty}(0,T;V_{3}),\qquad u_{t} \in L^{\infty}(0, T;V_{1})\cap L^{2}(0,T;V_{2} ),\\ &\eta^{t} \in L^{\infty}(0,T;{\mathcal {M}}_{3} ) . \end{aligned}$$ In order to state our main result, we define the energy of problem (1.16)–(1.18) by $$\begin{aligned} E(t) ={}& \frac{1}{2} \bigl\Vert u_{t}(t) \bigr\Vert ^{2}+\frac{1}{2} \bigl\Vert \Delta u(t) \bigr\Vert ^{2} +\frac{1}{p} \bigl\Vert \nabla u(t) \bigr\Vert ^{p}_{p} + \int_{\Omega}F(u) \,d x \\ &{}+ \frac{1}{2} \bigl\Vert \eta^{t} \bigr\Vert ^{2}_{{\mathcal {M}}_{2}} +\frac{\xi}{2} \int_{t-\tau(t)}^{t}e^{\lambda(s-t)} \bigl\Vert \nabla u_{t} (s) \bigr\Vert ^{2} \,ds , \end{aligned}$$ where ξ and λ are positive constants satisfying $$\begin{aligned} \frac{ \vert \mu_{2} \vert }{\sqrt{1-d}} < \xi< 2\mu_{1} -\frac{ \vert \mu_{2} \vert }{\sqrt{1-d}} \quad\text{and}\quad \lambda< \frac{1}{\tau_{1}}\log \biggl\vert \frac{\xi\sqrt{ 1-d}}{ \vert \mu _{2} \vert } \biggr\vert . \end{aligned}$$ Note that this choice of ξ is possible from assumption (2.9). Suppose that assumptions (2.2)–(2.9) hold. Let \(h=0\) and the initial data \((u_{0}, u_{1}, \eta_{0}) \in{\mathcal {H}}, f_{0}(x,t)\in H^{1}(\Omega\times (-\tau(0),0))\). Then there exist two positive constants \(k_{1}\) and \(k_{2}\) such that the energy \(E(t)\) satisfies $$\begin{aligned} E(t) \leq k_{1} e^{-k_{2} \int_{0}^{t} \xi(s) \,ds} ,\quad \forall t\geq0 . \end{aligned}$$ General decay of the energy In this section we shall establish the decay rates in Theorem 2.2. To demonstrate the stability of the system (1.16)–(1.18) the lemmas below are essential. 
Under the assumptions of Theorem 2.2, the energy functional \(E(t)\) satisfies, for any \(t\geq0\), $$\begin{aligned} E'(t) \leq {}&\biggl(\frac{ \vert \mu_{2} \vert }{2\sqrt{1-d}}- \mu_{1}+\frac{\xi }{2} \biggr) \Vert \nabla u_{t} \Vert ^{2} + \biggl(\frac{ \vert \mu_{2} \vert \sqrt {1-d}}{2} - \frac{\xi(1-d)}{2e^{\lambda\tau_{1}}} \biggr) \bigl\Vert \nabla u_{t} \bigl(t-\tau(t)\bigr) \bigr\Vert ^{2} \\ &{} +\frac{1}{2} \int_{0}^{\infty}g'(s) \bigl\Vert \Delta \eta^{t}(s) \bigr\Vert ^{2} \,ds -\frac{\lambda\xi}{2} \int_{t-\tau(t)}^{t} e^{\lambda(s-t)} \bigl\Vert \nabla u_{t}(s) \bigr\Vert ^{2} \,ds . \end{aligned}$$ Multiplying the first equation of (1.16) by \(u_{t} (t) \), we get the identity $$\begin{aligned} &\frac{d}{dt} \biggl( \frac{1}{2} \Vert u_{t} \Vert ^{2} +\frac{1}{2} \Vert \Delta u \Vert ^{2} +\frac{1}{p} \Vert \nabla u \Vert ^{p}_{p} + \int_{\Omega}F(u)\,dx \biggr) \\ &\quad = -\mu_{1} \Vert \nabla u_{t} \Vert ^{2} - \mu_{2} \int_{\Omega}\nabla u_{t}\bigl(t-\tau(t)\bigr) \nabla u_{t} \,dx - \int_{\Omega}\int_{0}^{\infty} g(s)\Delta\eta^{t}(s) \Delta u_{t}(t) \,d s \,dx . \end{aligned}$$ Applying the second equation of (1.16) to (3.2), we have $$\begin{aligned} { E}'(t) ={}& {-}\mu_{1} \Vert \nabla u_{t} \Vert ^{2}-\mu _{2} \int_{\Omega}\nabla u_{t}\bigl(t-\tau(t)\bigr) \nabla u_{t} \,dx -\bigl(\eta^{t}_{s} ,\eta^{t} \bigr)_{{\mathcal {M}}_{2}} +\frac{\xi}{2} \Vert \nabla u_{t} \Vert ^{2} \\ &{} -\frac{\xi}{2} e^{-\lambda\tau(t) }\bigl(1-\tau'(t)\bigr) \bigl\Vert \nabla u_{t}\bigl(t-\tau(t)\bigr) \bigr\Vert ^{2} - \frac{\lambda\xi}{2} \int_{t-\tau(t)}^{t} e^{\lambda(s-t)} \bigl\Vert \nabla u_{t}(s) \bigr\Vert ^{2} \,ds . \end{aligned}$$ Using Young's inequality, we obtain $$\begin{aligned} -\mu_{2} \int_{\Omega}\nabla u_{t}\bigl(t-\tau(t)\bigr) \nabla u_{t} \,dx \leq \frac{ \vert \mu_{2} \vert }{2\sqrt{1-d}} \Vert \nabla u_{t} \Vert ^{2} + \frac{ \vert \mu_{2} \vert \sqrt{1-d}}{2} \bigl\Vert \nabla u_{t}\bigl(t- \tau(t)\bigr) \bigr\Vert ^{2} . \end{aligned}$$ By (2.8) we get $$\begin{aligned} -\frac{\xi}{2} e^{-\lambda\tau(t) }\bigl(1-\tau'(t)\bigr) \bigl\Vert \nabla u_{t}\bigl(t-\tau(t)\bigr) \bigr\Vert ^{2} \leq-\frac{\xi}{2} e^{-\lambda\tau_{1} }(1-d) \bigl\Vert \nabla u_{t} \bigl(t-\tau(t)\bigr) \bigr\Vert ^{2} . \end{aligned}$$ Since \(\eta^{t}(0)=0\), we have $$\begin{aligned} -\bigl(\eta^{t}_{s} ,\eta^{t} \bigr)_{{\mathcal {M}}_{2}} =\frac{1}{2} \int_{0}^{\infty}g'(s) \bigl\Vert \Delta \eta^{t} (s) \bigr\Vert ^{2} \,ds . \end{aligned}$$ Combining with the above estimates, we obtain the desired inequality (3.1). The proof is now complete. □ Now, let us define the perturbed modified energy by $$\begin{aligned} L(t) = { E}(t) +\epsilon\Phi(t) , \end{aligned}$$ $$\Phi(t) = \int_{\Omega}u_{t} u \,dx . $$ Then it is easily shown that there exists \(C_{1}>0\) such that $$\begin{aligned} \bigl\vert L(t) - { E}(t) \bigr\vert \leq\epsilon C_{1} { E}(t),\quad \forall t\geq0, \forall\epsilon>0, \end{aligned}$$ where \(C_{1} = \max\{ 1, C_{*} \}\). There exist positive constants \(C_{2}\) and \(C_{3}\) such that $$\begin{aligned} L'(t) \leq- C_{2}{ E}(t) +C_{3} \int_{0}^{\infty}g(s) \bigl\Vert \Delta\eta ^{t} (s) \bigr\Vert ^{2} \,d s ,\quad \forall t \geq0 . 
\end{aligned}$$ By using (1.16), we get $$\begin{aligned} \Phi'(t)={}& \Vert u_{t} \Vert ^{2} - \Vert \Delta u \Vert ^{2} - \Vert \nabla u \Vert ^{p}_{p} - \int_{\Omega}\int_{0}^{\infty}g(s) \Delta\eta^{t}(s) \Delta u(t) \,d s \,d x \\ &{} - \mu_{1} \int_{\Omega}\nabla u_{t} \nabla u \,d x - \mu_{2} \int_{\Omega}\nabla u_{t}\bigl(t-\tau(t)\bigr) \nabla u \,d x - \int_{\Omega}f(u) u \,d x . \end{aligned}$$ Adding and subtracting \(E(t)\), we see that $$\begin{aligned} \Phi'(t)={}&{-}E(t) +\frac{3}{2} \Vert u_{t} \Vert ^{2} -\frac {1}{2} \Vert \Delta u \Vert ^{2} - \biggl(1-\frac{1}{p} \biggr) \Vert \nabla u \Vert ^{p}_{p} +\frac{1}{2} \bigl\Vert \eta^{t} \bigr\Vert ^{2}_{{\mathcal {M}}_{2}}\\ &{} + \int _{\Omega}\bigl(F(u)- f(u) u \bigr)\,d x \\ &{} - \int_{\Omega}\int_{0}^{\infty}g(s) \Delta\eta^{t}(s) \Delta u(t) \,d s \,d x - \mu_{1} \int_{\Omega}\nabla u_{t} \nabla u \,d x \\ &{}- \mu_{2} \int_{\Omega}\nabla u_{t}\bigl(t-\tau(t)\bigr) \nabla u \,d x \\ &{} +\frac{\xi}{2} \int_{t-\tau(t)}^{t}e^{\lambda(s-t)} \bigl\Vert \nabla u_{t} (s) \bigr\Vert ^{2} \,ds. \end{aligned}$$ Applying Young's inequality and (2.6), we have $$\begin{aligned} & \int_{\Omega}\int_{0}^{\infty}g(s)\Delta\eta^{t}(s) \Delta u(t) \,ds\,d x \leq \frac{1}{8} \Vert \Delta u \Vert ^{2}+ 2(1-l) \bigl\Vert \eta ^{t} \bigr\Vert ^{2}_{{\mathcal {M}}_{2}}, \\ &{-} \mu_{1} \int_{\Omega}\nabla u_{t} \nabla u \,dx \leq \frac{1}{16} \Vert \Delta u \Vert ^{2} + {4d_{1} \mu_{1}^{2}} \Vert \nabla u_{t} \Vert ^{2} \end{aligned}$$ $$\begin{aligned} -\mu_{2} \int_{\Omega}\nabla u_{t}\bigl(t-\tau(t)\bigr)\nabla u \,d x \leq\frac{1}{16} \Vert \Delta u \Vert ^{2} + {4d_{1} \mu_{2}^{2}} \bigl\Vert \nabla u_{t}\bigl(t-\tau(t)\bigr) \bigr\Vert ^{2} , \end{aligned}$$ where \(d_{1}>0\) is an embedding constant for \(H^{2}(\Omega)\cap H^{1}_{0}(\Omega) \hookrightarrow H^{1}_{0}(\Omega)\). Combining all above estimates and using (2.5), we obtain $$\begin{aligned} \Phi'(t) \leq{}&{-}E(t) + \biggl( 4d_{1} \mu_{1}^{2} +\frac{3}{2}d_{2} \biggr) \Vert \nabla u_{t} \Vert ^{2} -\frac {1}{4} \Vert \Delta u \Vert ^{2}\\ &{} - \biggl(1-\frac{1}{p} \biggr) \Vert \nabla u \Vert ^{p}_{p} + \biggl( 2(1-l)+\frac{1}{2} \biggr) \bigl\Vert \eta^{t} \bigr\Vert ^{2}_{{\mathcal {M}}_{2}} \\ &{} + {4d_{1} \mu_{2}^{2}} \bigl\Vert \nabla u_{t}\bigl(t-\tau(t)\bigr) \bigr\Vert ^{2} + \frac{\xi}{2} \int_{t-\tau(t)}^{t}e^{\lambda(s-t)} \bigl\Vert \nabla u_{t} (s) \bigr\Vert ^{2} \,ds , \end{aligned}$$ where \(d_{2}>0\) is an embedding constant for \(H^{1}_{0}(\Omega) \hookrightarrow L^{2}(\Omega)\). Thus, taking two positive constants $$c_{1} =\max\biggl\{ 4d_{1} \mu_{1}^{2} + \frac{3}{2}d_{2} , 4d_{1} \mu_{2}^{2} \biggr\} \quad\text{and}\quad c_{2} =2(1-l)+\frac{1}{2} , $$ we find that $$\begin{aligned} \Phi'(t) \leq{}&{-}E(t) + c_{1} \Vert \nabla u_{t} \Vert ^{2} +c_{1} \bigl\Vert \nabla u_{t}\bigl(t-\tau(t)\bigr) \bigr\Vert ^{2} \\ &{}+ c_{2} \bigl\Vert \eta^{t} \bigr\Vert ^{2}_{{\mathcal {M}}_{2}} +\frac{\xi}{2} \int_{t-\tau(t)}^{t}e^{\lambda(s-t)} \bigl\Vert \nabla u_{t} (s) \bigr\Vert ^{2} \,ds . 
\end{aligned}$$ Using (3.1), (3.3) and (3.5), we deduce that $$\begin{aligned} L' (t) \leq{}&{-}\epsilon E(t)+ \biggl(\frac{ \vert \mu_{2} \vert }{2\sqrt{1-d}}- \mu_{1}+\frac {\xi}{2} +\epsilon c_{1} \biggr) \Vert \nabla u_{t} \Vert ^{2} \\ &{}+ \biggl(\frac{ \vert \mu_{2} \vert \sqrt{1-d}}{2} - \frac{\xi(1-d)}{2e^{\lambda\tau_{1}}} +\epsilon c_{1} \biggr) \bigl\Vert \nabla u_{t} \bigl(t-\tau(t)\bigr) \bigr\Vert ^{2} \\ &{} +\frac{1}{2} \int_{0}^{\infty}g'(s) \bigl\Vert \Delta \eta^{t} (s) \bigr\Vert ^{2} \,ds +\epsilon c_{2} \bigl\Vert \eta^{t} \bigr\Vert ^{2}_{{\mathcal {M}}_{2}} - \frac{ \xi}{2} (\lambda-\epsilon) \int_{t-\tau(t)}^{t} e^{\lambda(s-t)} \bigl\Vert \nabla u_{t}(s) \bigr\Vert ^{2} \,ds . \end{aligned}$$ From (2.10), we see that $$\frac{ \vert \mu_{2} \vert }{2\sqrt{1-d}}-\mu_{1}+\frac{\xi}{2} < 0, \qquad\frac{ \vert \mu_{2} \vert \sqrt{1-d}}{2} - \frac{\xi (1-d)}{2e^{\lambda\tau_{1}}} < 0. $$ We choose \(\epsilon>0\) sufficiently small for $$\frac{ \vert \mu_{2} \vert }{2\sqrt{1-d}}-\mu_{1}+\frac{\xi}{2} +\epsilon c_{1}< 0 ,\qquad \frac{ \vert \mu_{2} \vert \sqrt{1-d}}{2} - \frac{\xi (1-d)}{2e^{\lambda\tau_{1}}} +\epsilon c_{1} < 0\quad\text{and}\quad\lambda-\epsilon>0 . $$ Hence, we conclude that (3.4) holds for some constants \(C_{2}, C_{3}>0\). □ From the ideas presented in [19–23], we get the following results. Multiplying (3.4) by \(\xi(t)\) and using (2.7) and (3.1), we get $$\begin{aligned} \xi(t) L'(t) &\leq -C_{2} \xi(t){ E}(t) +C_{3} \lim_{t\rightarrow\infty} \int_{0}^{t}\xi(s)g(s) \bigl\Vert \Delta \eta^{t}(s) \bigr\Vert ^{2}\,d s \\ & \leq -C_{2} \xi(t){ E}(t) -C_{3} \int_{0}^{\infty}g'(s) \bigl\Vert \Delta \eta^{t}(s) \bigr\Vert ^{2}\,d s \\ & \leq -C_{2} \xi(t){ E}(t) -2C_{3} E'(t). \end{aligned}$$ Using the fact that \(\xi'(t)\leq0\) and letting $$\begin{aligned} {\mathcal {L}}(t)=\xi(t)L(t)+{2C_{3}} { E}(t)\sim E(t) \end{aligned}$$ $$\begin{aligned} {\mathcal {L}}'(t) \leq- C_{2} \xi(t){ E}(t) \leq-C_{4} \xi(t) {\mathcal {L}} (t),\quad \forall t \geq0, \end{aligned}$$ where \(C_{4}\) is a positive constant. A simple integrating of (3.7) over \((0, t)\) leads to $$\begin{aligned} {\mathcal {L}}(t) \leq{\mathcal {L}}(0) e^{-C_{4} \int_{0}^{t} \xi(s) \,ds }, \quad\forall t\geq0 . \end{aligned}$$ Consequently, from (3.6) and (3.8), we obtain the desired inequality (2.11). □ In this paper, a viscoelastic plate equation with p-Laplacian and time-varying delay has been investigated. In recent years, there has been published much work concerning the wave equation with constant delay or time-varying delay. However, to the best of my knowledge, there was no stability result for the viscoelastic plate equation with strong time delay. We have been proved that the general decay rate result under some restrictions on the coefficients of strong damping and strong time-varying delay. Furthermore, we have been obtained the general decay rate result under weakening the usual assumptions on the relaxation function, using the energy perturbation method. This result improves on earlier ones in the literature because it allows for certain relaxation functions which are not necessarily of exponential or polynomial decay. Yang, Z.: Longtime behavior for a nonlinear wave equation arising in elasto-plastic flow. Math. Methods Appl. Sci. 32, 1082–1104 (2009) Yang, Z.: Global attractors and their Hausdorff dimensions for a class of Kirchhoff models. J. Math. Phys. 51, 032701 (2010) Yang, Z., Jin, B.: Global attractor for a class of Kirchhoff models. J. Math. Phys. 
50, 032701 (2009) Ghergu, M., Radulescu, V.: Nonlinear PDEs. Mathematical Models in Biology, Chemistry and Population Genetics. Springer Monographs in Mathematics. Springer, Heidelberg (2012) Torres, C.: Boundary value problem with fractional p-Laplacian operator. Adv. Nonlinear Anal. 5(2), 133–146 (2016) Jorge Silva, M.A., Ma, T.F.: On a viscoelastic plate equation with history setting and perturbation of p-Laplacian type. IMA J. Appl. Math. 78, 1130–1146 (2013) Andrade, D., Jorge Silva, M.A., Ma, T.F.,: Exponential stability for a plate equation with p-Laplacian and memory terms. Math. Methods Appl. Sci. 35, 417–426 (2012) Nakao, M.: Global solutions to the initial-boundary value problem for the quasilinear viscoelastic equation with a derivative nonlinearity. Opusc. Math. 34(3), 569–590 (2014) Pukach, P., Il'kiv, V., Nytrebych, Z., Vovk, M.: On nonexistence of global in time solution for a mixed problem for a nonlinear evolution equation with memory generalizing the Voigt–Kelvin rheological model. Opusc. Math. 37(5), 735–753 (2017) Cavalcanti, M.M., Domingos Cavalcanti, V.N., Lasiecka, I., Webler, C.M.: Intrinsic decay rates for the energy of a nonlinear viscoelastic equation modeling the vibrations of thin rods with variable density. Adv. Nonlinear Anal. 6(2), 121–145 (2017) Nicaise, S., Pignotti, C.: Stability and instability results of the wave equation with a delay term in the boundary or internal feedbacks. SIAM J. Control Optim. 45(5), 1561–1585 (2006) Nicaise, S., Pignotti, C.: Interior feedback stabilization of wave equations with time dependence delay. Electron. J. Differ. Equ. 2011, 41 (2011) Kirane, M., Said-Houari, B.: Existence and asymptotic stability of a viscoelastic wave equation with a delay. Z. Angew. Math. Phys. 62, 1065–1082 (2011) Dai, Q., Yang, Z.F.: Global existence and exponential decay of the solution for a viscoelastic wave equation with a delay. Z. Angew. Math. Phys. 65, 885–903 (2014) Liu, W.J.: General decay of the solution for a viscoelastic wave equation with a time-varying delay term in the internal feedback. J. Math. Phys. 54, 043504 (2013) Yang, Z.F.: Existence and energy decay of solutions for the Euler–Bernoulli viscoelastic equation with a delay. Z. Angew. Math. Phys. 66, 727–745 (2015) Feng, B.: Well-posedness and exponential stability for a plate equation with time-varying delay and past history. Z. Angew. Math. Phys. 68(6), 1–24 (2017) Mustafa, M.I., Kafini, M.: Decay rates for memory-type plate system with delay and source term. Math. Methods Appl. Sci. 40, 883–895 (2017) Park, S.H.: Decay rate estimates for a weak viscoelastic beam equation with time-varying delay. Appl. Math. Lett. 31, 46–51 (2014) Ferreira, J., Messaoudi, S.A.: On the general decay of a nonlinear viscoelastic plate equation with a strong damping and \(\overrightarrow{p}(x, t)\)-Laplacian. Nonlinear Anal. 104, 40–49 (2014) Messaoudi, S.A.: General decay of the solution energy in a viscoelastic equation with a nonlinear source. Nonlinear Anal. 69, 2589–2598 (2008) Messaoudi, S.A., Mustafa, M.I.: On convexity for energy decay rates of a viscoelastic equation with boundary feedback. Nonlinear Anal. 72, 3602–3611 (2010) Park, J.Y., Park, S.H.: Decay rate estimates for wave equations of memory type with acoustic boundary conditions. Nonlinear Anal. 74, 993–998 (2011) Messaoudi, S.A., Fareh, A., Doudi, N.: Well posedness and exponential stability in a wave equation with a strong damping and a strong delay. J. Math. Phys. 
57, 111501 (2016) Feng, B.: General decay for a viscoelastic wave equation with strong time-dependent delay. Bound. Value Probl. 2017, 57 (2017) Dafermos, C.M.: Asymptotic stability in viscoelasticity. Arch. Ration. Mech. Anal. 37, 297–308 (1970) Fabrizio, M., Giorgi, C., Pata, V.: A new approach to equations with memory. Arch. Ration. Mech. Anal. 198, 189–232 (2010) Giorgi, C., Rivera, J.E.M., Pata, V.: Global attractors for a semilinear hyperbolic equation in viscoelasticity. J. Math. Anal. Appl. 260, 83–99 (2001) Pata, V., Zucchi, A.: Attractors for a damped hyperbolic equation with linear memory. Adv. Math. Sci. Appl. 11, 505–529 (2001) This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017R1D1A1B03028291). Department of Mathematics, Dong-A University, Busan, Korea Jum-Ran Kang The work was realized by the author. The author read and approved the final manuscript. Correspondence to Jum-Ran Kang. Kang, JR. General decay for viscoelastic plate equation with p-Laplacian and time-varying delay. Bound Value Probl 2018, 29 (2018). https://doi.org/10.1186/s13661-018-0942-x 74Dxx Plate equation General decay rate p-Laplacian Time-varying delay
Why does Newton's third law exist even in non-inertial reference frames?
While reviewing Newton's laws of motion, I came across the statement that Newton's laws hold only in inertial reference frames, except for the third law. Why is that?
Tags: newtonian-mechanics, forces, reference-frames, inertial-frames (asked by Rajath Krishna R)
Edited answer to address the question as asked. If we define a rest frame such that $$\begin{aligned} \mathbf{r} = \mathbf{R}_0 + \mathbf{r}' \\ \mathbf{v} = \mathbf{V}_0 + \mathbf{v}' \\ \mathbf{a} = \mathbf{A}_0 + \mathbf{a}' \end{aligned}$$ where $\mathbf{R}_0$ represents the displacement from the rest-frame origin to the moving-frame origin (and similarly for $\mathbf{V}_0$ and $\mathbf{A}_0$), then if $\mathbf{A}_0=0$ we have $\mathbf{F}=m\mathbf{a}=m\mathbf{a}'$. However, if $\mathbf{A}_0\neq0$, the force becomes $$\mathbf{F} = m\mathbf{A}_0+m\mathbf{a}'$$ which we can rewrite as $$\mathbf{F}-m\mathbf{A}_0 = m\mathbf{a}'$$ Defining $\mathbf{F}'=\mathbf{F}-m\mathbf{A}_0$ then gives $$\mathbf{F}'=m\mathbf{a}'$$ which has the same form as Newton's second law. If object A acts on object B, then the forces satisfy $\mathbf{F}'_{AB}=-\mathbf{F}'_{BA}$. Since both sides carry the same $-m\mathbf{A}_0$-type frame term, this reduces to $\mathbf{F}_{AB}=-\mathbf{F}_{BA}$, which is Newton's third law.
Original answer (based on a misreading of the question): The force is given by $$F=ma=m\frac{d^2x}{dt^2}$$ If we move to another inertial frame (and assuming non-relativistic speeds), we are really letting $x\to x+Vt$, where $V$ denotes the relative velocity of the frames. The time derivatives then become $$\frac{dx}{dt} \to \frac{dx}{dt}+V$$ $$\frac{d}{dt}\left(\frac{dx}{dt}\right)=\frac{d^2x}{dt^2} \to \frac{d^2x}{dt^2}$$ Thus the force is not changed under this change of frame. (Note that the transformation $x \to x+Vt$ only takes you to another inertial frame; the edited answer above treats the non-inertial case.)
Kyle Kanos
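A quick numerical illustration of the edited answer's relations (a minimal Python sketch; the masses, spring constant, positions, and frame acceleration below are arbitrary made-up values):

```python
import numpy as np

# Two bodies coupled by a spring (arbitrary made-up numbers).
m_A, m_B = 1.0, 2.0            # masses in kg
k, rest_len = 5.0, 1.0         # spring constant (N/m), rest length (m)
x_A, x_B = 0.0, 1.5            # positions (m): the spring is stretched by 0.5 m

# Interaction forces in the inertial frame: equal and opposite (third law).
stretch = (x_B - x_A) - rest_len
F_on_A = +k * stretch          # A is pulled toward B
F_on_B = -k * stretch          # B is pulled toward A
print(np.isclose(F_on_A, -F_on_B))                   # True

# Accelerations in the inertial frame (second law).
a_A, a_B = F_on_A / m_A, F_on_B / m_B

# The same motion described from a frame accelerating at A0 (made-up value).
A0 = 3.0                                             # m/s^2
a_A_p, a_B_p = a_A - A0, a_B - A0                    # a' = a - A0

# The bare second law F = m a' fails in the accelerating frame ...
print(np.isclose(F_on_A, m_A * a_A_p))               # False
# ... but F' = F - m A0 restores it, exactly as derived above.
print(np.isclose(F_on_A - m_A * A0, m_A * a_A_p))    # True
print(np.isclose(F_on_B - m_B * A0, m_B * a_B_p))    # True

# The A-B interaction forces themselves do not depend on the frame choice,
# so they remain equal and opposite: the third law survives.
print(np.isclose(F_on_A, -F_on_B))                   # True
```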
Interesting interpretation. I would put it exactly the other way around: in a noninertial frame, the first and second laws hold, but the third law doesn't. Let's say we're in a rotating frame, and in that frame, a baseball experiences a centrifugal force. There is no third-law partner for this force: the baseball doesn't create a force back on any other object. This is because the centrifugal force is not an interaction between two objects, so we can't have the third-law pattern of A on B, B on A. On the other hand, the first and second laws certainly apply to the baseball, provided that we include the centrifugal and Coriolis forces as forces. These fictitious forces also obey the law of vector addition, which is a fundamental law of Newtonian mechanics, although not traditionally considered one of Newton's laws. I suppose the opposite interpretation, as given in the question, occurs if you refuse to consider fictitious forces as forces. Then they don't violate Newton's third law, because they're not forces. (Dogs can't violate the law against murder, because the law only applies to people, and dogs are not considered people.) The first and second laws are then violated, because we refuse to put in the inertial forces that would have been needed in order to make them work.
Ben Crowell
The cutest way to see this is to restate Newton's third law as "no interaction can change the total momentum of the universe." Then, note that since an accelerating reference frame is accelerating with respect to whatever "base" inertial reference frame you're using, everything else seems to be accelerating away. Therefore, the net momentum of the universe is changing. Therefore, Newton's third law does not hold in this reference frame.
Jerry Schirmer
In a non-inertial frame, every object feels the physical force $\vec{F}_\textrm{phys}$ that it felt in the inertial frame, plus a force $\vec{F}_\textrm{non-inertial}$. The non-inertial force felt by an object may depend on its mass, position, time, and possibly other things. An object's acceleration is then given by $m \vec{a} = \vec{F}_\textrm{non-inertial} + \vec{F}_\textrm{phys}$. Thus Newton's second law, $m \vec{a} = \vec{F}_\textrm{phys}$, breaks down, and you need a correction for the non-inertial forces. Let's look at Newton's third law. It says $\vec{F}_{AB} = -\vec{F}_{BA}$. We know this holds true in the inertial frame. If we transform these forces to a non-inertial frame, the transformed coordinates will be different, but because of the way coordinate transformations work, it will still be true that $\vec{F}_{AB} = -\vec{F}_{BA}$ in the transformed coordinate system. Thus Newton's third law still holds.
Brian Moths
Machine-learning classification of texture features of portable chest X-ray accurately classifies COVID-19 lung infection
Lal Hussain ORCID: orcid.org/0000-0003-1103-49381,2, Tony Nguyen3, Haifang Li3, Adeel A. Abbasi1, Kashif J. Lone1, Zirun Zhao3, Mahnoor Zaib2, Anne Chen3 & Tim Q. Duong3
The large volume and suboptimal image quality of portable chest X-rays (CXRs) as a result of the COVID-19 pandemic could pose significant challenges for radiologists and frontline physicians. Deep-learning artificial intelligence (AI) methods have the potential to help improve diagnostic efficiency and accuracy for reading portable CXRs. The study aimed at developing an AI imaging analysis tool to classify COVID-19 lung infection based on portable CXRs. Public datasets of COVID-19 (N = 130), bacterial pneumonia (N = 145), non-COVID-19 viral pneumonia (N = 145), and normal (N = 138) CXRs were analyzed. Texture and morphological features were extracted. Five supervised machine-learning AI algorithms were used to classify COVID-19 from the other conditions. Two-class and multi-class classification were performed. Statistical analysis was done using unpaired two-tailed t tests with unequal variance between groups. Performance of the classification models was evaluated using receiver-operating characteristic (ROC) curve analysis. For the two-class classification, the accuracy, sensitivity and specificity were, respectively, 100%, 100%, and 100% for COVID-19 vs normal; 96.34%, 95.35% and 97.44% for COVID-19 vs bacterial pneumonia; and 97.56%, 97.44% and 97.67% for COVID-19 vs non-COVID-19 viral pneumonia. For the multi-class classification, the combined accuracy and AUC were 79.52% and 0.87, respectively. AI classification of texture and morphological features of portable CXRs accurately distinguishes COVID-19 lung infection in patients in multi-class datasets. Deep-learning methods have the potential to improve diagnostic efficiency and accuracy for portable CXRs.
In December 2019, in the Wuhan Hubei province of China, a cluster of cases of pneumonia with an unknown cause was reported [1]. The cause was eventually identified as severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2, previously named 2019 novel coronavirus or COVID-19), which has since caused major public health problems and a large global outbreak. According to recent statistics, there are millions of confirmed cases in the United States and India, and the number is still increasing. The WHO also declared on January 13, 2020 that COVID-19 was the sixth public health emergency of international concern, following H1N1 (2009), polio (2014), Ebola in West Africa (2014), Zika (2016) and Ebola in the Democratic Republic of Congo (2019) [2]. The novel coronaviral pneumonia was also found to be similar to the severe acute respiratory syndrome caused by the Middle East respiratory syndrome (MERS) coronavirus, and to be capable of causing a more severe form known as acute respiratory distress syndrome (ARDS) [3, 4]. Consensus criteria and guidelines were established with the aim of preventing transmission and facilitating diagnosis and treatment [2, 5, 6]. The rapid incidence of infection is due in part to the relatively slow onset of symptoms, which enables widespread transmission by asymptomatic carriers [7]. Along with the global connectivity of today's travel society, the infection readily spread worldwide [7], giving rise to a pandemic [8, 9].
Radiological imaging of COVID-19 pneumonia reveals destruction of the pulmonary parenchyma, including extensive consolidation and interstitial inflammation, as previously reported in other coronavirus infections [10, 11]. In total, interstitial lung disease (ILD) comprises more than 200 different types of chronic lung disorders characterized by inflammation of lung tissue, usually referred to as pulmonary fibrosis. The fibrosis causes lung stiffness, which reduces the ability of the air sacs (i.e., spaces within an organism where there is the constant presence of air) to extract and deliver oxygen into the bloodstream. This eventually can lead to the permanent loss of the ability to breathe. ILDs are histologically heterogeneous but often share similar clinical manifestations with each other and with other lung disorders, which makes differential diagnosis difficult. In addition, the large quantity of radiological data that radiologists are required to scrutinize (combined with a lack of strict clinical guidelines) leads to low diagnostic accuracy and high inter- and intra-observer variability, which has been reported to be as great as 50% [12]. The most commonly used diagnostic test for COVID-19 infection is the reverse transcription-polymerase chain reaction (RT-PCR) assay of nasopharyngeal swabs [13]. However, the high false-negative rate [14], long turnaround time, and shortage of RT-PCR assay kits in the early stages of the outbreak can restrict a prompt diagnosis of infected patients. Computed tomography (CT) and chest X-ray (CXR) are well suited to imaging the lungs in COVID-19 infection. In contrast to the swab test, CT and CXR reveal the spatial location of the suspected pathology as well as the extent of damage. The hallmark CXR findings are a bilateral distribution of peripheral hazy lung opacities, including air space consolidation [15]. The advantage of imaging is that it has good sensitivity, a fast turnaround time, and the ability to visualize the extent of infection in the lung. The disadvantage is low specificity: it is challenging to distinguish different types of lung infection, especially when the infection is severe. Computer-aided diagnostic (CAD) systems can assist radiologists to increase diagnostic accuracy. Currently, researchers use hand-crafted or learned features based on the texture, geometry, and morphological characteristics of the lung for detection. However, choosing a classifier that can optimally handle the properties of these feature spaces is both crucial and challenging. Traditional image recognition methods include Bayesian networks (BNs), support vector machines (SVMs), artificial neural networks (ANNs), k-nearest neighbors (kNN), AdaBoost, and decision trees (DTs). These machine-learning methods [16, 17] require hand-crafted features such as texture, SIFT, entropy, morphology, elliptic Fourier descriptors (EFDs), shape, geometry, and pixel density, combined with off-the-shelf classifiers, as explained in [18]. Such feature-based machine-learning (ML) methods are also known as non-deep learning methods and have many applications, for example in neurodegenerative diseases, cancer detection, and psychiatric diseases [17, 19,20,21,22].
However, a major limitation of non-deep learning methods is their dependence on the feature extraction step, which makes it difficult to identify the most relevant features needed to obtain the best results. To overcome these difficulties, artificial intelligence (AI) can be employed. AI technology in the field of medical imaging is becoming popular, especially with the advancement and development of deep learning [23,24,25,26,27,28,29,30,31,32]. Recently, [33] used Inf-Net for automatic segmentation of COVID-19 lung infection from CT images. Moreover, [18] employed momentum contrastive learning for few-shot COVID-19 diagnosis from chest CT images. There are many applications of deep convolutional neural networks (DCNN) and machine-learning algorithms to medical imaging problems [32, 34,35,36,37,38]; however, this study is specifically aimed at applying machine-learning algorithms with a feature extraction approach. The main advantage of this approach is the ability to learn adaptive image features and perform classification simultaneously. The general goal is to develop automated tools, by employing and optimizing machine-learning models together with texture and morphological features, for early detection and for distinguishing coronavirus-infected patients from non-infected patients. The proposed method is intended to help clinicians and radiologists with further diagnosis and with tracking disease progression. Once verified and tested, such an AI-based system could support the detection and management of patients affected by COVID-19. Furthermore, machine-learning image analysis tools can potentially support radiologists by providing an initial read or a second opinion. In this study, we employed machine-learning methods to classify texture features of portable CXRs with the aim of identifying COVID-19 lung infection. Texture and morphological features of COVID-19, bacterial pneumonia, non-COVID-19 viral pneumonia, and normal CXRs were compared. AI-based classification methods were used for differential diagnosis of COVID-19 lung infection. We tested the hypothesis that AI classification of texture features of CXRs can accurately detect COVID-19 lung infection. We applied five supervised machine-learning classifiers (XGB-L, XGB-Tree, CART, KNN and Naïve Bayes) to classify COVID-19 from bacterial pneumonia, non-COVID-19 viral pneumonia, and normal lung CXRs. Table 1 shows the results of AI classification of texture and morphological features for COVID-19 vs normal utilizing five different classifiers: XGB-L, XGB-Tree, CART (DT), KNN, and Naïve Bayes. All classifiers yielded essentially 100% accuracy by all performance measures, including with the top four ranked features (i.e., compactness, thin ratio, perimeter, standard deviation), indicating that there is a significant difference between the two groups.
Table 1 Performance of AI classification of texture and morphological features utilizing five different classifiers of COVID-19 (N = 130) vs normal (N = 138)
Table 2 shows the results of AI classification of texture and morphological features for COVID-19 vs bacterial pneumonia. All classifiers except KNN performed well by all performance measures. Specifically, the XGB-L and XGB-Tree classifiers yielded the highest classification accuracy (96.34% and 91.46%, respectively), while the KNN classifier performed the worst (accuracy of 71.95%).
With only the top four ranked features, the XGB-L and XGB-Tree classifiers yielded accuracies of 85.37% and 86.59%, respectively.
Table 2 Performance of AI classification of texture and morphological features utilizing five different classifiers of COVID-19 (N = 130) vs bacterial pneumonia (N = 145)
Table 3 shows the results of AI classification of texture and morphological features for COVID-19 vs non-COVID viral pneumonia. All classifiers except KNN performed well by all performance measures. Specifically, the XGB-L and XGB-Tree classifiers yielded the highest classification accuracy (97.56% and 95.12%, respectively), while the KNN classifier performed the worst (accuracy of 79.27%).
Table 3 Performance of AI classification of texture and morphological features utilizing five different classifiers of COVID-19 (N = 130) vs viral pneumonia (N = 145)
Table 4 shows the two-class classification using the XGB-L classifier. The results showed that the model classified COVID-19 from normal patients most accurately, followed by COVID-19 from bacterial pneumonia, and lastly COVID-19 from viral pneumonia.
Table 4 Two-class classification using XGB-linear with texture + morphological features for COVID-19 (N = 130) vs bacterial pneumonia (N = 145), COVID-19 vs non-COVID-19 viral (N = 145) and COVID-19 vs normal (N = 138)
Table 5 shows the results of the multi-class classification using the XGB-L classifier. For the multi-class classification problem, the average accuracy over all four classes is used to measure the performance of the classifier (i.e., combined accuracy and AUC). Multi-class classification was able to classify COVID-19 amongst the four groups, with a combined AUC of 0.87 and accuracy of 79.52%. With only the top two ranked features, a combined AUC of 0.82 and an accuracy of 66.27% were obtained. Sensitivity, specificity, positive predictive value, and negative predictive value were similarly high. As reflected in Tables 1, 2, 3 and 4, the two-class classification performance (i.e., COVID-19 vs normal, COVID-19 vs bacterial pneumonia, COVID-19 vs viral pneumonia) in terms of sensitivity and PPV was higher than 95%, while these measures for the multi-class setting (COVID-19 vs normal vs bacterial vs viral pneumonia) exceeded 74% and 83% for detecting COVID-19, respectively.
Table 5 Multi-class classification using XGB-linear with texture + morphological features
Feature ranking algorithms are mostly used for ranking features independently, without using any supervised or unsupervised learning algorithm. A specific method is used for feature ranking in which each feature is assigned a score, and features are then selected purely on the basis of these scores [39]. The finally selected distinct and stable features can be ranked according to these scores, and redundant features can be eliminated before further classification. We first extracted texture features based on GLCM and morphological features from COVID-19, normal, viral and bacterial pneumonia CXR images and then ranked them based on the empirical receiver-operating characteristic curve (EROC) and random classifier slope [40], which ranks features using the class separability criterion of the area between the EROC and the random classifier slope. The ranked features indicate the importance of each feature, which can help distinguish these classes, improve detection performance, and support decision making by radiologists.
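To make the ranking procedure concrete, a minimal single-feature ranking sketch is given below (Python/scikit-learn; the score |AUC − 0.5| is used here as a stand-in for the area between the empirical ROC curve and the random-classifier slope computed in [40], and the feature values are made-up toy data rather than the study's CXR features):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def rank_features_by_roc(X, y, feature_names):
    """Score each feature by |AUC - 0.5|, i.e. how far its empirical ROC curve
    departs from the random-classifier diagonal, and rank in descending order."""
    scores = []
    for j, name in enumerate(feature_names):
        auc = roc_auc_score(y, X[:, j])      # single-feature ROC AUC
        scores.append((name, abs(auc - 0.5)))
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy two-class example (1 = COVID-19, 0 = other) with made-up feature values.
rng = np.random.default_rng(0)
y = np.array([1] * 50 + [0] * 50)
perimeter = rng.normal(loc=np.where(y == 1, 1.8, 1.0), scale=0.5)  # informative
noise = rng.normal(size=100)                                       # uninformative
X = np.column_stack([perimeter, noise])

print(rank_features_by_roc(X, y, ["perimeter", "noise"]))
```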
Figure 1 shows the ranking features of COVID-19 vs bacterial infection, COVID-19 vs normal, and their multi-class features. The top four features from COVID-19 vs bacterial CXR based on AUC were: skewness, entropy, compactness, and thin ratio. The top four features from COVID-19 vs normal CXR based on AUC were: compactness, thin ratio, perimeter, and standard deviation. The top feature from the multi-class was by far perimeter. Ranking parameters: a COVID vs bacterial infection, b COVID-19 vs normal, and c multi-class feature ranking We employed an automated supervised learning AI classification of texture and morphological-based features on portable CXRs to distinguish COVID-19 lung infections from normal, and other lung infections. The major finding was that the multi-class classification was able to accurately identify COVID-19 from amongst the four groups with a combined AUC of 0.87 and accuracy of 79.52%. The hallmarks of COVID-19 lung infection on CXR are bilateral and peripheral hazy lung opacities and air space consolidation [15]. These features of COVID-19 lung infection likely stood out compared to other pneumonia, giving rise to distinguishable texture features. Our AI algorithm was able to distinguish COVID-19 vs normal CXR with 100% accuracy, COVID-19 vs bacterial pneumonia with 96.34% accuracy, and COVID-19 vs non-COVID-19 viral infection with 92.68% accuracy. These findings suggest that it is trivial to distinguish COVID-19 from normal CXR and the two viral infections were more similar than bacterial infection. With the multi-class classification, all performance measures dropped significantly (except normal CXR) as expected. Nonetheless, the combined AUC and accuracy remained high. These findings are encouraging and suggest that the multi-class classification is able to distinguish COVID-19 lung infection from other similar lung infections. The top four features from COVID-19 vs bacterial infection were skewness, entropy, compactness, and thin ratio. The top four features from COVID-19 vs normal were: compactness, thin ratio, perimeter, and standard deviation. The top feature from the multi-class was perimeter. Perimeter is the total count of pixels at the boundary of an image. It showed that the perimeter of COVID-19 lung CXRs differed significantly from other bacterial and viral infections as well as normal lung X-rays. These results together suggest that perimeter is a key distinguishable feature, consistent with a key observation that COVID-19 lung infection tends to be more peripheral and lateral together the boundaries of the lung. A few studies have reported CNN analysis of CXR and CT for classification of COVID-19 [41,42,43,44,45]. Li et al. performed a retrospective multi-center study using a deep-learning model to extract visual features from chest CT to distinguish COVID-19 from community acquired pneumonia (CAP) and non-pneumonia CT with a sensitivity of 90%, specificity 95%, and AUC 0.96 (p value < 0.001) [41]. Hurt et al. performed a retrospective study using a U-net (CNN), to predict pixel-wise probability maps for pneumonia only from a public dataset that comprised of 22,000 radiographs. For their classification of pneumonia, the area under the receiver-operator characteristic curve was 0.854 with a sensitivity of 82.8% and specificity of 72.6 [46]. Wang et al. developed a deep CNN to detect COVID-19 cases from non-COVID CXR. 
This study used interpretable AI to visualize the location of the abnormality and was able to distinguish COVID-19 from non-COVID-19 viral infection, bacterial infection, and normal with a sensitivity of 81.9%, 93.1%, and 73.9%, respectively, with an overall accuracy of 83.5% [38]. Gozes et al. developed a deep-learning algorithm to analyze CT images to detect COVID-19 patients from non-COVID-19 cases with 0.996 AUC (95% CI 0.989–1.00), 98.2% sensitivity and 92.2% specificity [43]. Apostolopoulos and Mpesiana [45] used deep learning with a transfer learning approach to extract features from X-rays to distinguish between COVID-19 and bacterial pneumonia, viral pneumonia, and normal with a sensitivity of 98.66%, specificity of 96.46%, and accuracy of 96.78%. Overall, most of these studies used two-class comparisons (i.e., pneumonia vs COVID-19, or pneumonia vs normal), mostly on CT, which is less suitable for contagious diseases. These previous studies computed two-class prediction performance and yielded good results, but did not achieve performance as high as our approach. The aim of this research was to improve prediction performance by extracting texture and morphological features from CXR images, since extracting the most relevant and appropriate features remains a challenging task in machine learning. The results reveal that the features extracted using our approach capture pertinent hidden information about COVID-19 lung infection, which improved both the two-class and multi-class classification. These features were then used as input to robust machine-learning classifiers, and the results obtained outperformed those of the traditional methods described above. There are several limitations of this study. This is a retrospective study with a small COVID-19 sample size. Portable CXR is sensitive but not specific, as the phenotypes of different lung infections are similar on CXR. We used only four classes (disease types). Future studies should expand to include additional lung disorders. In conclusion, deep learning of texture and morphological-based features accurately distinguishes CXRs of COVID-19 patients from those of normal subjects and patients with bacterial and non-COVID-19 viral pneumonia. This approach can be used to improve workflow, facilitate early detection and diagnosis of COVID-19, support effective triage of patients with or without the infectious disease, and provide efficient tracking of disease progression.
Limitations and future directions
This study specifically aimed to extract texture features and apply machine-learning algorithms to predict COVID-19 in a multi-class setting. The texture features correctly predicted COVID-19 in the multi-class setting; in the future, we will employ and optimize deep convolutional neural network models, including ResNet101, GoogLeNet, AlexNet and Inception-V3, and will use other modalities, clinical profiles and bigger datasets.
In this study, we used publicly available data of COVID-19, non-COVID, and normal chest CXR images. The COVID-19 images were downloaded from https://github.com/ieee8023/covid-chestxray-dataset [47] on Mar 31, 2020. The original download contained 250 COVID-19 and SARS scans, both CT and CXR, taken in multiple orientations. Two board-certified chest radiologists (one with 20+ years of experience) and one 2nd year radiology resident evaluated the images for quality and relevance.
Only COVID-19 CXRs taken in the anterior–posterior (AP) direction were included in this study, resulting in a final sample size of 130. The other dataset was taken from the Kaggle chest X-ray image (pneumonia) dataset (https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia) [42]. Although the Kaggle database has a large sample size, we randomly selected sample sizes comparable to that of the COVID-19 set. The samples chosen for bacterial pneumonia, non-COVID-19 viral pneumonia, and normal CXR were 145, 145, and 138, respectively. We first split the dataset into training and testing data with a 70%/30% ratio using a stratified sampling method. For feature selection, we then used only the training data rather than the whole dataset. Figure 2 below outlines the workflow and steps used in this study.
Flow of data and analysis
Figure 2 outlines the workflow, with the initial input of lung CXRs going through feature extraction for texture + morphological analysis, followed by the AI classifiers, to determine the sensitivity, specificity, PPV, NPV, accuracy, and AUC for the four groups of interest (COVID-19, bacterial pneumonia, viral pneumonia, and normal). These outputs are validated with a fivefold cross-validation technique. Finally, the data are statistically analyzed for significance using MATLAB 2018b and RStudio 1.2.5001.
The texture features are estimated from the grey-level co-occurrence matrix (GLCM), which captures the spatial correlation of image pixels. The \((u,v)\)th entry of the GLCM of an input image records how often pixels with intensity value \(u\) co-occur, in a defined spatial relationship, with pixels with intensity value \(v\). We extracted second-order features consisting of contrast, correlation, mean, entropy, energy, variance, inverse difference moment, standard deviation, smoothness, root mean square, skewness, kurtosis, and homogeneity, as previously used in [48,49,50,51,52,53,54].
Morphological features
Morphological features play an important role in the detection of malignant tissues. They convert image morphology into a set of quantitative values that can be used for classification [55]. The morphological feature-extracting method (MFEM) is a nonlinear filtering process whose basic purpose is to find valuable information in an image and transform it morphologically according to the requirements of segmentation [56]. The MFEM takes a binary cluster as input and finds the connected components in the clusters having an area greater than a certain threshold. Several features can be extracted from an image; for example, area can be calculated from the number of pixels of a region. Area and perimeter together can be used to calculate the values of other morphological features. The formulas in [50] can be used to calculate the values of the morphological features.
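A minimal sketch of this kind of texture and morphological feature extraction is given below (Python with scikit-image, whereas the study's analysis used MATLAB and RStudio; the exact definitions of compactness and thin ratio follow common conventions and are assumptions rather than the authors' implementation, and the image and mask are synthetic stand-ins for a real CXR):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from skimage.measure import label, regionprops

def texture_features(img_u8, levels=256):
    """Second-order GLCM features (a subset of those listed above)."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                                 # normalized co-occurrence matrix
    feats = {prop: graycoprops(glcm, prop)[0, 0]
             for prop in ("contrast", "correlation", "energy", "homogeneity")}
    feats["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    feats["mean"] = img_u8.mean()
    feats["std"] = img_u8.std()
    feats["skewness"] = ((img_u8 - img_u8.mean()) ** 3).mean() / (img_u8.std() ** 3 + 1e-12)
    return feats

def morphological_features(binary_mask):
    """Area/perimeter-based shape descriptors of the largest connected component."""
    props = max(regionprops(label(binary_mask)), key=lambda r: r.area)
    area, perim = props.area, props.perimeter
    compactness = perim ** 2 / (4.0 * np.pi * area)      # one common definition
    return {"area": area, "perimeter": perim,
            "compactness": compactness, "thin_ratio": 1.0 / compactness}

# Toy usage with a synthetic 8-bit "image" and a synthetic lung-region mask.
rng = np.random.default_rng(0)
img = (rng.random((64, 64)) * 255).astype(np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 20:44] = True
print(texture_features(img))
print(morphological_features(mask))
```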
We applied and compared five supervised machine-learning classification algorithms: XG boosting linear (XGB-L), XG boosting tree (XGB-Tree), classification and regression tree (CART), k-nearest neighbor (KNN) and Naïve Bayes (NB). In machine learning, an ensemble is a collection of multiple models and is generally more effective than a single base model. An ensemble combines different hypotheses in the hope of producing a better one; in other words, it obtains a strong learner from a combination of weak learners. Experimentally, ensemble methods provide more accurate results even when there is considerable diversity between the models. Boosting is one of the most common types of ensemble method; it works by discovering many weak classification rules on subsets of the training examples obtained by repeatedly resampling from the distribution.
XGBoost algorithms
Chen and Guestrin proposed XGBoost, a scalable machine-learning system, in 2016 [57]. The system quickly became popular and a de facto standard in applied machine learning, providing strong performance in supervised learning. Gradient boosting is the model underlying XGBoost; it combines a weak base learner with stronger learners in an iterative manner [58]. In this study, we used the XGBoost linear and tree boosters with the following optimization parameters. For XGB-linear, we initialized the parameters as lambda = 0, alpha = 0 and eta = 0.3, where lambda and alpha are the regularization terms on the weights and eta is the learning rate. For XGB-Tree, we initialized the parameters with a maximum tree depth of max-depth = 30, learning rate eta = 0.3, maximum loss reduction gamma = 1, minimum child weight = 1, and subsample = 1. For KNN, the number of nearest neighbors k = 5 was used. For CART, we initialized the parameters with minsplit = 20, complexity parameter cp = 0.01, and maximum depth = 30. For Naïve Bayes, we initialized the parameters with search method = grid, laplace = 0, and adjust = 1.
Classification and regression tree (CART)
A CART is a predictive algorithm used in machine learning to explain how the values of a target variable can be predicted from other values. It is a decision tree in which each fork is a split on a predictor variable and each terminal node holds a prediction for the target variable. The decision tree (DT) algorithm, first proposed by Breiman in 1984 [59], is a learning algorithm, predictive model and decision support tool of machine learning and data mining for large input data, which predicts the target value or class label based on several input variables. In a decision tree, the classifier compares the similarities in the dataset and partitions it into distinct classes. Wang et al. [60] used DTs for classifying data by choosing, at each split, an attribute that maximizes the separation of the data. Until the stopping criteria are met, the attributes of the dataset are split into several classes. The DT algorithm is constructed mathematically as:
$$ \overline{X} = \{ X_{1} ,X_{2} ,X_{3} , \ldots ,X_{m} \}^{{\text{T }}} , $$
$$ X_{i} = \left\{ {x_{1} ,x_{2} ,x_{3} , \ldots ,x_{ij} , \ldots ,x_{in} } \right\}, $$
$$ S = \left\{ {S_{1} ,S_{2} , \ldots ,S_{i} , \ldots ,S_{m} } \right\}. $$
Here m denotes the number of observations, n represents the number of independent variables, and S is the m-dimensional vector space of the variable forecast from \(\overline{X}\). \(X_{i}\) is the ith pattern vector of the n-dimensional independent variables, \(x_{i1} ,x_{i2} ,x_{i3} , \ldots ,x_{in}\) are the independent variables of the pattern vector \(X_{i}\), and T is the transpose symbol. The purpose of DTs is to forecast the observations of \(\overline{X}\).
From \(\overline{X}\), several DTs can be developed by different accuracy level; although, the best and optimum DT construction is a challenge due to the exploring space has enormous and large dimension. For DT, appropriate fitting algorithms can be developed which reflect the trade-off between complexity and accuracy. For partition of the dataset \(\overline{X}\), there are several sequences of local optimum decision about the feature parameters are used using the Decision Tree strategies. Optimal DT, \(T_{k0}\) is developed according to a subsequent optimization problem: $$ \hat{R}\left( {T_{k0} } \right) = \min \left\{ {\hat{R}\left( {T_{k0} } \right)} \right\}, k = 1,2,3, \ldots ,K, $$ $$ \hat{R}\left( T \right) = \mathop \sum \limits_{t \in T}^{k} \left\{ {r\left( t \right)p\left( t \right)} \right\} . $$ In the above equation, \(\hat{R}\left( T \right)\) represents an error level during the misclassification of tree \(T_{k}\), \(T_{k0}\) represented the optimal DT that minimizes an error of misclassification in the binary tree, T represent a binary tree \( \in \left\{ {T_{1} ,T_{2} , \ldots ,T_{k} ,t_{1} } \right\}\), the index of tree is represented by k, tree node with t, root node by t1, resubstituting an error by r(t) which misclassify node t, probability that any case drop into node t is represented with p(t). The left and right sets of partition of sub trees are denoted by \( T^{L} \; {\text{and}}\; T^{R}\). The result of feature plan portioning the tree T is formed. Naïve Bayes (NB) The NB [61] algorithm is based on Bayesian theorem [62] and it is suitable for higher dimensionality problems. This algorithm is also suitable for several independent variables whether they are categorical or continuous. Moreover, this algorithm can be the better choice for the average higher classification performance problem and have minimal computational time to construct the model. Naïve Bayes classification algorithm was introduced by Wallace and Masteller in 1963. Naïve Bayes relates with a family of probabilistic classifier and established on Bayes theorem containing compact hypothesis of independence among several features. Naïve Bayes is most ubiquitous classifier used for clustering in Machine Learning since 1960. Classification probabilities are able to compute using Naïve Bayes method in machine learning. Naïve Bayes is utmost general classification techniques due to highest performance than the other algorithm such as decision tree (DT), C-means (CM) and SVM. Bayes decision law is used to find the predictable misclassification ratio whereas assuming that true classification opportunity of an object belongs to every class is identified. NB techniques were greatly biased because its probability computation errors are large. To overcome this task, the solution is to reduce the probability valuation errors by Naïve Bayes method. Conversely, dropping probability computation errors did not provide the guarantee for achieving better results in classification performance and usually make it poorest because of its different bias-variance decomposition among classification errors and probability computation error [63]. Naïve Bayes is widely used in present advance developments [64,65,66,67] due to its better performance [68]. Naïve Bayes techniques need a large number of parameters during learning system or process. The maximum possibility of Naïve Bayes function is used for parameter approximation. 
NB represents conditional probability classifier which can be calculated using Bayes theorem: problem instance which is to be classified, described by a vector \(Y = \left\{ {Y_{1} , Y_{2} , Y_{3} , \ldots ,Y_{n} } \right\}\) shows n features spaces, conditional probability can be written as: $$ S(N_{k} |Y_{1} , Y_{2} , Y_{3} , \ldots Y_{n} ). $$ For each class \(N_{k}\) or each promising output, statistically Bayes theorem can be written as: $$ S\left( {N_{k} |Y} \right) = \frac{{S\left( {N_{k} } \right)S\left( {Y|N_{k} } \right)}}{S\left( Y \right)}. $$ Here, \(S\left( {N_{k} {|}Y} \right)\) represents the posterior probability while \(S\left( {N_{k} } \right)\) represents the preceding probability, \(S\left( {Y|N_{k} } \right)\) represents the likelihood and \(S\left( Y \right)\) represents the evidence. NB is represented mathematically as: $$ S\left( {N_{k} |Y_{1} , Y_{2} , Y_{3} , \ldots ,Y_{n} } \right) = \frac{1}{T}S\left( {N_{k} } \right)\mathop \prod \limits_{i = 1}^{n} S(Y_{i} |N_{k} ). $$ Here \(T = S\left( y \right)\) is scaling factor which is depends upon \((Y_{1} , Y_{2} , Y_{3} , \ldots ,Y_{n} )\), \(S\left( {N_{k} } \right)\) is a parameter used for the calculation of marginal probability and conditional probability for each attribute or instances is represented by \(S(Y_{i} |N_{k} )\). Naïve Bayes become most sensitive in the presence of correlated attributes. The existence of extremely redundant or correlated objects or features can bias the decision taken by Naïve Bayes classifier [67]. K-nearest neighbor (KNN) KNN is most widely used algorithm in the field of machine learning, pattern recognition and many other areas. Zhang [69] used KNN for classification problems. This algorithm is also known as instance based (lazy learning) algorithm. A model or classifier is not immediately built but all training data samples are saved and waited until new observations need to be classified. This characteristic of lazy learning algorithm makes it better than eager learning, that construct classifier before new observation needs to be classified. Schwenker and Trentin [70] investigated that this algorithm is also more significant when dynamic data are required to be changed and updated more rapidly. KNN with different distance metrics were employed. KNN algorithm works according to the following steps using Euclidean distance formula. Step I: To train the system, provide the feature space to KNN. Step II: Measure distance using Euclidean distance formula: $$ d\left( {x_{i} , y_{i} } \right) = \mathop \sum \limits_{i = 1}^{n} \sqrt {(x_{i - } y_{i} )^{2} } . $$ Step III: Sort the values calculated using Euclidean distance using \(d_{i} \le d_{i} + 1, \;{\text{where}}\; i = 1,2,3, \ldots ,k\). Step IV: Apply means or voting according to the nature of data. Step V: Value of K (i.e., number of nearest Neighbors) depends upon the volume and nature of data provided to KNN. For large data, the value of k is kept as large, whereas for small data the value of k is also kept small. In this study, these classification algorithms were performed using RStudio with typical default parameters for each of the classifiers (XGB-L, GXB-tree, CART, KNN, NB) with a fivefold cross-validation. As we divided our dataset into train and test sets, so while training a classifier on train data we used the K-fold cross-validation technique, which shuffles the data and splits it into k number of folds (groups). 
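A minimal sketch of this training and validation setup is given below (Python with scikit-learn and xgboost, whereas the study used RStudio; the hyperparameters echo those listed earlier, scikit-learn's ccp_alpha is only a rough analogue of CART's cp, and the feature matrix X and labels y are random toy stand-ins for the extracted CXR features):

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, roc_auc_score

# X: (n_samples, n_features) feature matrix from the texture/morphology step,
# y: class labels (0 = normal, 1 = bacterial, 2 = viral, 3 = COVID-19). Toy data here.
rng = np.random.default_rng(0)
X = rng.random((200, 17))
y = rng.integers(0, 4, size=200)

# Stratified 70%/30% train/test split, as described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30,
                                          stratify=y, random_state=0)

classifiers = {
    "XGB-L":    xgb.XGBClassifier(booster="gblinear", reg_lambda=0, reg_alpha=0,
                                  learning_rate=0.3),
    "XGB-Tree": xgb.XGBClassifier(booster="gbtree", max_depth=30, learning_rate=0.3,
                                  gamma=1, min_child_weight=1, subsample=1),
    "CART":     DecisionTreeClassifier(min_samples_split=20, ccp_alpha=0.01,
                                       max_depth=30),
    "KNN":      KNeighborsClassifier(n_neighbors=5),
    "NB":       GaussianNB(),
}

# Fivefold cross-validation on the training split only.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X_tr, y_tr, cv=cv, scoring="accuracy")
    print(f"{name}: fivefold CV accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")

# Fit one model on the full training split and evaluate on the held-out 30% test set.
best = classifiers["XGB-L"].fit(X_tr, y_tr)
y_prob = best.predict_proba(X_te)
print("test accuracy:", accuracy_score(y_te, best.predict(X_te)))
print("macro one-vs-rest AUC:", roc_auc_score(y_te, y_prob, multi_class="ovr"))
```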
In general, K-fold validation is performed by taking one group as the test data set and the remaining k − 1 groups as the training data, fitting and evaluating a model, and recording the chosen score on each fold. Since we used fivefold cross-validation, the training set is divided equally into five parts, of which one is used for validation and the other four for training the classifier on each fold.
Performance evaluation measures
The performance was evaluated with the following parameters. The sensitivity measure, also known as TPR or recall, is the proportion of people who test positive for the disease among those who actually have the disease. Mathematically, it is expressed as: $$ {\text{Sensitivity}} = \frac{{\mathop \sum \nolimits {\text{True}}\;{\text{ positive}}}}{{\mathop \sum \nolimits {\text{Condition}}\; {\text{positive}}}}, $$ $$ {\text{Sensitivity}} = \frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FN}}}}, $$ i.e., the probability of a positive test given that the patient has the disease. The TNR measure, also known as specificity, is the proportion of negatives that are correctly identified. Mathematically, it is expressed as: $$ {\text{Specificity}} = \frac{{\mathop \sum \nolimits {\text{True }}\;{\text{negative}}}}{{\mathop \sum \nolimits {\text{Condition}}\;{\text{negative}}}}, $$ $$ {\text{Specificity}} = \frac{{{\text{TN}}}}{{{\text{TN}} + {\text{FP}}}}, $$ i.e., the probability of a negative test given that the patient is well.
Positive predictive value (PPV)
PPV is mathematically expressed as: $$ {\text{PPV}} = \frac{{\sum {{\text{True}}\;{\text{positive}}} }}{{\sum {{\text{Predicted}}\;{\text{condition}}\;{\text{positive}}} }}, $$ $$ {\text{PPV}} = \frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FP}}}}, $$ where TP denotes the event that the test makes a positive prediction and the subject has a positive result under the gold standard, while FP is the event that the test makes a positive prediction and the subject has a negative result.
Negative predictive value (NPV)
NPV can be computed as: $$ {\text{NPV}} = \frac{{\sum {{\text{True }}\;{\text{negative}}} }}{{\sum {{\text{Predicted}}\; {\text{condition}}\;{\text{negative}}} }}, $$ $$ {\text{NPV}} = \frac{{{\text{TN}}}}{{{\text{TN}} + {\text{FN}}}}, $$ where TN indicates that the test makes a negative prediction and the subject also has a negative result, while FN indicates that the test makes a negative prediction but the subject has a positive result. The total accuracy is computed as: $$ {\text{Accuracy}} = \frac{{{\text{TP}} + {\text{TN}}}}{{{\text{TP}} + {\text{FP}} + {\text{FN}} + {\text{TN}}}}. $$
Receiver-operating characteristic (ROC) curve
The ROC curve is constructed from the sensitivity, i.e., the true-positive rate (TPR), and the false-positive rate (FPR, equal to 1 − specificity) of the COVID-19 and non-COVID subjects. The mean values for COVID-19 subjects are labeled 0 and those for non-COVID subjects are labeled 1. The resulting vector is passed to the ROC function, which plots each value against the sensitivity and specificity values. ROC analysis is one of the standard methods for computing and graphically representing the performance of a classifier. The ROC curve plots FPR on the x-axis and TPR on the y-axis, and the area under the curve (AUC) corresponds to a fraction of the unit square. The value of AUC lies between 0 and 1, where AUC > 0.5 indicates separation between the classes. A higher area under the curve represents a better diagnostic system [71]. The number of correctly identified positive cases divided by the total number of positive cases represents the TPR.
While the number of negative cases predicted as positive cases divided by the total number of negative cases represent FPR [72]. Training/testing data formulation The Jack-knife fivefold cross-validation (CV) technique was applied for the training and testing of data formulation and parameter optimization. It is one of the most well known, commonly practiced, and successfully used methods for validating the accuracy of a classifier using fivefold CV. The data are divided into fivefold in training, the fourfold participate, and classes of the samples for remaining folds are classified based on the training performed on fourfold. For the trained models, the test samples in the test fold are purely unseen. The entire process is repeated five times and each class sample is classified accordingly. Finally, the unseen samples classified labels that are to be used for determining the classification accuracy. This process is repeated for each combination of each systems' parameters and the classification performance have been reported for the samples as depicted in the Tables 1, 2, 3 and 4. Statistical analysis and performance measures Analyses examining differences in outcomes used unpaired two-tailed t tests with unequal variance. Receiver-operating characteristic (ROC) curve analysis was performed with COVID-19, normal, bacterial, and non-COVID-19 viral pneumonia as ground truth. The performance was evaluated by standard ROC analysis, including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, area under the receiver-operating curve (AUC) with 95% confidence interval, and significance with the P value. AUC with lower and upper bounds and accuracy were tabulated. MATLAB (R2018b, MathWorks, Natick, MA) and RStudio 1.2.5001 were used for statistical analysis. MERS: RT-PCR: CXR: Computer-aided diagnostic BNs: ANNs: Artificial neural networks kNN: K-nearest neighbors DTs: Adaboost, decision trees Scale-invariant Fourier transform EFDs: Elliptic Fourier descriptors DCNN: Deep convolutional neural network Cross-validation Receiver-operating characteristic Lu H, Stratton CW, Tang Y. Outbreak of pneumonia of unknown etiology in Wuhan, China: The mystery and the miracle. J Med Virol. 2020;92:401–2. Li Q, Guan X, Wu P, Wang X, Zhou L, Tong Y, et al. Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. N Engl J Med. 2020;382:2001316. Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395:497–506. Graham RL, Donaldson EF, Baric RS. A decade after SARS: strategies for controlling emerging coronaviruses. Nat Rev Microbiol. 2013;11:836–48. Chen N, Zhou M, Dong X, Qu J, Gong F, Han Y, et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet. 2020;395:507–13. Chen Z-M, Fu J-F, Shu Q, Chen Y-H, Hua C-Z, Li F-B, et al. Diagnosis and treatment recommendations for pediatric respiratory infection caused by the 2019 novel coronavirus. World J Pediatr. 2020;16:240–6. Biscayart C, Angeleri P, Lloveras S, Chaves TSS, Schlagenhauf P, Rodríguez-Morales AJ. The next big threat to global health? 2019 novel coronavirus (2019-nCoV): what advice can we give to travellers?—Interim recommendations January 2020, from the Latin-American society for Travel Medicine (SLAMVI). Travel Med Infect Dis. 2020;33:101567. Carlos WG, Dela Cruz CS, Cao B, Pasnick S, Jamil S. 
Novel Wuhan (2019-nCoV) coronavirus. Am J Respir Crit Care Med. 2020;201:P7-8. Munster VJ, Koopmans M, van Doremalen N, van Riel D, de Wit E. A novel coronavirus emerging in China—key questions for impact assessment. N Engl J Med. 2020;382:692–4. Chung M, Bernheim A, Mei X, Zhang N, Huang M, Zeng X, et al. CT imaging features of 2019 novel coronavirus (2019-nCoV). Radiology. 2020;295:202–7. Fang Y, Zhang H, Xu Y, Xie J, Pang P, Ji W. CT manifestations of two cases of 2019 novel coronavirus (2019-nCoV) pneumonia. Radiology. 2020;295:208–9. Sluimer I, Schilham A, Prokop M, van Ginneken B. Computer analysis of computed tomography scans of the lung: a survey. IEEE Trans Med Imaging. 2006;25:385–405. Xie X, Zhong Z, Zhao W, Zheng C, Wang F, Liu J. Chest CT for typical 2019-nCoV pneumonia: relationship to negative RT-PCR testing. Radiology. 2020;296:200343. Chan JF-W, Yuan S, Kok K-H, To KK-W, Chu H, Yang J, et al. A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: a study of a family cluster. Lancet. 2020;395:514–23. Wong HYF, Lam HYS, Fong AH-T, Leung ST, Chin TW-Y, Lo CSY, et al. Frequency and distribution of chest radiographic findings in COVID-19 positive patients. Radiology. 2019;296:201160. Fehr D, Veeraraghavan H, Wibmer A, Gondo T, Matsumoto K, Vargas HA, et al. Automatic classification of prostate cancer Gleason scores from multiparametric magnetic resonance images. Proc Natl Acad Sci. 2015;112:E6265–73. Orrù G, Pettersson-Yeo W, Marquand AF, Sartori G, Mechelli A. Using support vector machine to identify imaging biomarkers of neurological and psychiatric disease: a critical review. Neurosci Biobehav Rev. 2012;36:1140–52. Chen X, Yao L, Zhou T, Dong J, Zhang Y. Momentum contrastive learning for few-shot COVID-19 diagnosis from chest CT images. arXiv preprint arXiv:2006.13276 . Parmar C, Bakers FCH, Peters NHGM, Beets RGH. Deep learning for fully-automated localization and segmentation of rectal cancer on multiparametric. Sci Rep. 2017;7:1–9. Oakden-Rayner L, Carneiro G, Bessen T, Nascimento JC, Bradley AP, Palmer LJ. Precision radiology: predicting longevity using feature engineering and deep learning methods in a radiomics framework. Sci Rep. 2017;7:1648. Cruz JA, Wishart DS. Applications of machine learning in cancer prediction and prognosis. Cancer Inform. 2006;2:117693510600200. Doyle S, Hwang M, Shah K, Madabhushi A, Feldman M, Tomaszeweski J. Automated grading of prostate cancer using architectural and textural image features. In: 2007 4th IEEE International Symposium on Biomedical Imaging From Nano to Macro. IEEE; 2007. pp 1284–7 Wang J, Ding H, Bidgoli FA, Zhou B, Iribarren C, Molloi S, et al. Detecting cardiovascular disease from mammograms with deep learning. IEEE Trans Med Imaging. 2017;36:1172–81. Shen D, Wu G, Suk H-I. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017;19:221–48. Lee H, Tajmir S, Lee J, Zissen M, Yeshiwas BA, Alkasab TK, et al. Fully automated deep learning system for bone age assessment. J Digit Imaging. 2017;30:427–41. Gao XW, Hui R, Tian Z. Classification of CT brain images based on deep learning networks. Comput Methods Programs Biomed. 2017;138:49–56. Forsberg D, Sjöblom E, Sunshine JL. Detection and labeling of vertebrae in MR images using deep learning with clinical annotations as training data. J Digit Imaging. 2017;30:406–12. Zhang Q, Xiao Y, Dai W, Suo J, Wang C, Shi J, et al. Deep learning based classification of breast tumors with shear-wave elastography. 
Ultrasonics. 2016;72:150–7. Ortiz A, Munilla J, Górriz JM, Ramírez J. Ensembles of deep learning architectures for the early diagnosis of the Alzheimer's disease. Int J Neural Syst. 2016;26:1650025. Nie D, Zhang H, Adeli E, Liu L, Shen D. 3D deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients. Cham: Springer; 2016. p. 212–20. Ithapu VK, Singh V, Okonkwo OC, Chappell RJ, Dowling NM, Johnson SC. Imaging-based enrichment criteria using deep learning algorithms for efficient clinical trials in mild cognitive impairment. Alzheimer's Dement. 2015;11:1489–99. Anthimopoulos M, Christodoulidis S, Ebner L, Christe A, Mougiakakou S. Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans Med Imaging. 2016;35:1207–16. Fan D-P, Zhou T, Ji G-P, Zhou Y, Chen G, Fu H, et al. Inf-Net: automatic COVID-19 lung infection segmentation from CT images. IEEE Trans Med Imaging. 2020;39:2626–37. Cha KH, Hadjiiski L, Samala RK, Chan H-P, Caoili EM, Cohan RH. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets. Med Phys. 2016;43:1882–96. Ghafoorian M, Karssemeijer N, Heskes T, Bergkamp M, Wissink J, Obels J, et al. Deep multi-scale location-aware 3D convolutional neural networks for automated detection of lacunes of presumed vascular origin. NeuroImage Clin. 2017;14:391–9. Lekadir K, Galimzianova A, Betriu A, del Mar VM, Igual L, Rubin DL, et al. A convolutional neural network for automatic characterization of plaque composition in carotid ultrasound. IEEE J Biomed Heal Inform. 2017;21:48–55. Rajkomar A, Lingam S, Taylor AG, Blum M, Mongan J. High-throughput classification of radiographs using deep convolutional neural networks. J Digit Imaging. 2017;30:95–101. Samala RK, Chan H-P, Hadjiiski L, Helvie MA, Wei J, Cha K. Mass detection in digital breast tomosynthesis: deep convolutional neural network with transfer learning from mammography. Med Phys. 2016;43:6654–66. Wang H, Raton B. A comparative study of filter-based feature ranking techniques. IEEE IRI. 2010;1:43–8. Bradley AP. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit. 1997;30:1145–59. Li L, Qin L, Xu Z, Yin Y, Wang X, Kong B, et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology. 2020;296:200905. Wang L, Wong A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-Ray images. arXiv Prepr arXiv200309871. 2020. Gozes O, Frid-Adar M, Greenspan H, Browning PD, Zhang H, Ji W, et al. Rapid AI Development cycle for the coronavirus (COVID-19) pandemic: initial results for automated detection & patient monitoring using deep learning CT image analysis. arXiv preprint arXiv:2003.05037. Narin A, Kaya C, Pamuk Z. Automatic Detection of Coronavirus Disease (COVID-19) Using X-ray Images and Deep Convolutional Neural Networks. arXiv Prepr arXiv200310849. 2020. Apostolopoulos ID, Mpesiana TA. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med. 2020;43:635–40. Hurt B, Yen A, Kligerman S, Hsiao A. augmenting interpretation of chest radiographs with deep learning probability maps. J Thorac Imaging. 2020;1. Cohen JP, Morrison P, Dao L. COVID-19 Image Data Collection. arXiv Prepr arXiv200311597. 2020. Khalvati F, Wong A, Haider MA. 
Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models. BMC Med Imaging. 2015;15:27. Haider MA, Vosough A, Khalvati F, Kiss A, Ganeshan B, Bjarnason GA. CT texture analysis: a potential tool for prediction of survival in patients with metastatic clear cell carcinoma treated with sunitinib. Cancer Imaging. 2017;17:4. Guru DS, Sharath YH, Manjunath S. Texture features and KNN in classification of flower images. Int J Comput Appl. Special Issue on RTIPPR (1) 2010;21–9. Yu H, Scalera J, Khalid M, Touret A-S, Bloch N, Li B, et al. Texture analysis as a radiomic marker for differentiating renal tumors. Abdom Radiol. 2017;42:2470–8. Castellano G, Bonilha L, Li LM, Cendes F. Texture analysis of medical images. Clin Radiol. 2004;59:1061–9. Khuzi AM, Besar R, Zaki WMDW. Texture features selection for masses detection in digital mammogram. IFMBE Proc. 2008;21:629–32. Esgiar AN, Naguib RNG, Sharif BS, Bennett MK, Murray A. Fractal analysis in the detection of colonic cancer images. IEEE Trans Inf Technol Biomed. 2002;6:54–8. Masseroli M, Bollea A, Forloni G. Quantitative morphology and shape classification of neurons by computerized image analysis. Comput Methods Programs Biomed. 1993;41:89–99. Li YM, Zeng XP. A new strategy for urinary sediment segmentation based on wavelet, morphology and combination method. Comput Methods Programs Biomed. 2006;84:162–73. Chen T, Guestrin C. XGBoost. Proc 22nd ACM SIGKDD Int Conf Knowl Discov Data Min—KDD '16. New York: ACM Press; 2016. pp. 785–94. Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat. 2001;29:1189–232. Ariza-López FJ, Rodríguez-Avi J, Alba-Fernández MV. Complete control of an observed confusion matrix. In: International Geoscience Remote Sensors Symposium. IEEE; 2018. pp. 1222–5. Wang R, Kwong S, Wang X, Jiang Q. Continuous valued attributes. 45. Nahar J, Chen Y-PP, Ali S. Kernel-based Naive Bayes classifier for breast cancer prediction. J Biol Syst. 2007;15:17–25. Yamauchi Y, Mukaidono M. Probabilistic inference and Bayesian theorem based on logical implication. Lecture notes on computer science. Berlin: Springer; 1999. p. 334–42. Fang X. Naïve Bayes: inference-based Naïve Bayes cost-sensitive turning. Nai. 2013;25:2302–14. Zaidi NA, Du Y, Webb GI. On the effectiveness of discretizing quantitative attributes in linear classifiers. J Mach Learn Res. 2017;01. Zhang J, Chen C, Xiang Y, Zhou W, Xiang Y. Internet traffic classification by aggregating correlated naive bayes predictions. IEEE Trans Inf Forensics Secur. 2013;8:5–15. Chen C, Zhang G, Yang J, Milton JC, Alcántara AD. An explanatory analysis of driver injury severity in rear-end crashes using a decision table/Naïve Bayes (DTNB) hybrid classifier. Accid Anal Prev. 2016;90:95–107. Bermejo P, Gámez JA, Puerta JM. Knowledge-based systems speeding up incremental wrapper feature subset selection with Naive Bayes classifier. Knowl-Based Syst. 2014;55:140–7. Huang T, Weng RC, Lin C. Generalized Bradley-Terry models and multi-class probability estimates. J Mach Learn Res. 2006;7:85–115. Zhang P, Gao BJ, Zhu X, Guo L. Enabling fast lazy learning for data streams. In: Proceedings of IEEE international conference on data mining, ICDM. 2011; pp. 932–41. Schwenker F, Trentin E. Pattern classification and clustering: a review of partially supervised learning approaches. Pattern Recognit Lett. 2014;37:4–14. Hussain L, Ahmed A, Saeed S, Rathore S, Awan IA, Shah SA, et al. 
Prostate cancer detection using machine learning techniques by employing combination of features extracting strategies. Cancer Biomark. 2018;21:393–413. Rathore S, Hussain M, Khan A. Automated colon cancer detection using hybrid of novel geometric features and some traditional features. Comput Biol Med. 2015;65:279–96. Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan Lal Hussain, Adeel A. Abbasi & Kashif J. Lone Department of Computer Science and IT, Neelum Campus, University of Azad Jammu and Kashmir, Athmuqam, 13230, Azad Kashmir, Pakistan Lal Hussain & Mahnoor Zaib Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA Tony Nguyen, Haifang Li, Zirun Zhao, Anne Chen & Tim Q. Duong Lal Hussain Haifang Li Adeel A. Abbasi Kashif J. Lone Zirun Zhao Mahnoor Zaib Anne Chen Tim Q. Duong LH conceptualized the study, analyzed data and wrote the paper. TN conceptualized the study, and edited the paper. AAA edited the paper. KJL edited the paper. ZZ edited the paper. MZ edited and reviewed the paper. AC edited the paper. TQ conceptualized the study and edited the paper. All the authors read and approved the final manuscript. Correspondence to Lal Hussain. Availability of supporting data and materials These data are already available via https://github.com/ieee8023/covid-chestxray-dataset. Ethical approval and consent to participate Not applicable. Data were obtained from a publicly available, deidentified dataset. https://github.com/ieee8023/covid-chestxray-dataset Hussain, L., Nguyen, T., Li, H. et al. Machine-learning classification of texture features of portable chest X-ray accurately classifies COVID-19 lung infection. BioMed Eng OnLine 19, 88 (2020). https://doi.org/10.1186/s12938-020-00831-x Morphological Feature extraction
Express Letter High QScS beneath the Ontong Java Plateau Daisuke Suetsugu ORCID: orcid.org/0000-0001-9913-33291, Hajime Shiobara2, Hiroko Sugioka3, Aki Ito1, Takehi Isse2, Yasushi Ishihara1, Satoru Tanaka1, Masayuki Obayashi1, Takashi Tonegawa1, Junko Yoshimitsu1 & Takumi Kobayashi3 The Ontong Java Plateau (OJP) in the southwest Pacific is the largest oceanic large igneous provinces (LIP) on Earth. Detailed seismic structure of the plateau has not been understood well because of sparse seismic stations. We investigated seismic attenuation of the mantle beneath the plateau by analyzing data from temporary seismic stations on the seafloor and islands in and around the plateau. We analyzed the spectra of multiple ScS waves to determine the average attenuation of the mantle (QScS) beneath the plateau. We estimated the average QScS values for the paths with bounce points located in the plateau to be 309, which is significantly higher than the average (i.e., weaker attenuation than average) estimated in the western Pacific and is close to that of stable continents. We obtained positive residuals of 6 s for travel times of multiple ScS waves, which indicate that the average S velocity in the entire mantle beneath the OJP is low. While the positive residuals are at least partially attributable to the Pacific Large Low Shear Velocity Province (Pacific LLSVP), it is difficult to conclude whether low-velocity anomalies are required in the OJP upper mantle to explain the residuals from the multiple ScS analysis. The Ontong Java Plateau (OJP) is the most voluminous large igneous province (LIP) in the oceanic region of the Earth (Fig. 1), whose elevation is approximately 2000 m above the surrounding seafloor. The OJP is known to have been emplaced primarily at 120 and 90 Ma by massive volcanism (e.g., Coffin and Eldholm 1994; Neal et al. 1997) based on petrological and geochemical studies, but the cause of this volcanism remains controversial. Previous studies of seismic tomography using seismological data from islands in the region surrounding the OJP showed an anomalous mantle structure beneath the OJP, although the seismic images are not in agreement with each other. Richardson and Okal (2000) performed surface wave tomography using data from four temporary stations on islands in the northern margin of the OJP, which showed a low-velocity zone of − 5% down to 300 km beneath the entire OJP region. Based on an SKS splitting analysis, Klosko et al. (2001) interpreted the low-velocity zone obtained by Richardson and Okal (2000) as a rheologically strong and chemically distinct mantle root of the OJP. On the other hand, Covellone et al. (2015) conducted surface wave tomography using earthquake and ambient noise data from permanent seismic stations in the western Pacific. Their model has a high-velocity anomaly in the center of the OJP down to 100 km. The contradictory results obtained in the previous studies suggest that even first-order images, such as the presence of the low-velocity zone, remain to be resolved. Map showing seismological stations of the OJP array and permanent stations around the OJP. The black and small gray triangles denote the stations of the OJP array, and the open triangles denote permanent stations. Seismograms at the locations of the black and open triangles were analyzed in the present study. The small gray triangles denote stations with low signal-to-noise ratios of multiple ScS waves or stations that were terminated at the time of the event on Aug. 
31, 2016 The discrepancy in the previous seismic images is mainly due to a shortage of in situ geophysical observations. To improve the spatial resolution for the geophysical structure beneath the OJP, we deployed a new temporary seismological and electromagnetic observation network, referred to herein as the OJP array, in the OJP and its vicinity (Suetsugu et al. 2018). In the present study, we analyzed seismic attenuation in the mantle beneath the OJP using data from the OJP array, which could provide another constraint on the mantle structure beneath the OJP. Only one study has been performed on the seismic attenuation beneath the OJP (Gomer and Okal 2003) by analyzing multiple ScS waves (QScS), which indicated that the attenuation in the OJP mantle was weak (high QScS). However, whether attenuation is weak over the entire OJP region remains unclear, because attenuation could be analyzed for a single event–station pair because only one pair was available for the analysis of multiple ScS waves due to few seismological stations located in the OJP. While spatial resolution in global three-dimensional attenuation models have been improved recently (see review by Romanowicz and Mitchell 2015), they show variable results for the OJP region. Bhattacharyya et al. (1996) analyzed S and SS waves to determine the global distribution of Qs in the upper mantle. They showed that Qs beneath the OJP region is similar to the global average. Warren and Shearer (2002) analyzed spectra of P and PP waves to determine a global attenuation model, indicating that the OJP region has weaker attenuation. Attenuation studies using Rayleigh waves showed that the OJP region has an attenuation that is weaker than or close to the global average (Selby and Woodhouse 2002; Dalton et al. 2008). At present, global attenuation tomography may still have difficulty in resolving the seismic attenuation of the OJP. In the present study, we present QScS values estimated from multiple ScS waves recorded at 12 stations, including six stations of the OJP array for a deep earthquake. We examine whether the high QScS obtained by Gomer and Okal (2003) for a single event–station pair represents the seismic attenuation of the entire OJP region. We analyzed waveform data recorded by a temporary broadband seismic network (OJP array) on the OJP and its vicinity along with six permanent stations operated by the IRIS/IDA, IRIS/USGS, Geoscience Australia, and Pacific21 seismic networks. The OJP array, which consisted of 23 broadband ocean-bottom seismic (BBOBS) stations and two land-based broadband stations on the Chuuk and Kosrae islands (Fig. 1), was operated from late 2014 to early 2017. We analyzed waveform data of a deep earthquake that occurred in the New Ireland region of Papua New Guinea on Aug. 31, 2016, which was nearly at the end of the observation period. The hypocenter parameters determined by USGS are 3.685° S, 152.792° E, 476 km, and 6.8 for latitude, longitude, focal depth, and moment magnitude, respectively (referred to as the 2016 event, Fig. 1). We also analyzed the data used in Gomer and Okal (2003) for comparison with the event that occurred beneath the Solomon Islands on May 2, 1996 (referred to as the 1996 event). Figure 2 shows examples of multiple ScS waves for the 2016 event, which is visible up to ScS3 on the bandpass-filtered seismogram at periods between 0.01 and 0.05 Hz. Note that the BBOBS data at OJ13 have a signal-to-noise ratio that is comparable to that at the continental station CTAO for this event. 
a Transverse component seismograms for the 2016 event at OJ13 (top) and CTAO (bottom). b Schematic diagrams of multiple ScS ray paths We analyzed a multiple ScS phase pair (ScSn+1 and ScSn, sScSn+1 and sScSn) for each event–station pair to measure seismic attenuation and travel time, where n is the number of reflections at the core–mantle boundary. Multiple ScS waves are sensitive to structures near the source, station, and bounce points at the Earth's surface and the core–mantle boundary (CMB) (e.g., Liu and Tromp 2008). Analyzing the phase pair, the effects of structure near the source and station are substantially canceled, and the effects near the Earth's surface and the CMB are likely to remain. The effect of the Earth's surface is expected to be more enhanced than that near the CMB in measurements of seismic attenuation and travel time, because of much lower velocities near the Earth's surface compared to those near the CMB. Even so, the CMB effect should be also taken into consideration in the present study, because the lower mantle beneath the OJP is known as the Pacific Large Low Shear Velocity Province (Pacific LLSVP) of low shear velocity and strong attenuation (e.g., Garnero and McNamara 2008; Ritsema et al. 2010; Konishi et al. 2017). We used a spectral ratio method to measure QScS along the propagation path in the mantle from an event to each station (e.g., Jordan and Sipkin 1977; Nakanishi 1979; Suetsugu 2001). We applied the stacking procedure for multiple ScS spectra developed by Jordan and Sipkin (1977) to the spectral ratio method. In this method, QScS is estimated from the ratio of stacked spectra of a multiple ScS pair (ScSn+1/ScSn and sScSn+1/sScSn) for each event–station pair. We used a time window of 180 s of the rotated transverse component seismograms to compute the multiple ScS spectra. The starting time of the first phase was placed 40 s prior to the theoretical IASP91 arrival time (Kennett and Engdahl 1991). We chose this starting time to avoid errors in QScS caused by crustal reverberations (Isse and Nakanishi 1997). A 30% cosine taper was applied to two phases when the spectra were computed. We calculated noise power spectra from a 180-s-long record preceding the first phase and smoothed with a running average of 0.01 Hz, from which we estimated the standard errors for the weighting factor of stacking (Jordan and Sipkin 1977). Geometrical spreading was corrected for the amplitude of multiple ScS waves. The logarithm of the stacked spectral ratio A(f) is related to QScS by $$\ln A\left( f \right) = - \frac{\pi Tf}{{Q_{\text{ScS}} }} + \varepsilon ,$$ where T is the travel time difference between the two phases, which is determined by cross-correlating the two phases, f is frequency, and ε is an error term. We discarded the phase pairs for which the cross-correlation was less than 0.6. The QScS value was computed from Eq. (1) by applying a least-squares line fitting technique to ln A(f). The frequency range is basically from 0.01 to 0.05 Hz, whereas the highest and lowest limits of frequencies vary within 0.015 Hz based on the frequency-dependent signal-to-noise ratio at each event–station pair. Travel time residuals of ScSn+1–ScSn and sScSn+1–sScSn were also obtained by cross-correlating waveforms of a phase pair. The residuals were corrected for Earth's ellipticity and water depths at surface bounce points. Figure 3 shows examples of QScS measurements at OJ13 and HNR stations. 
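As a rough illustration of the line-fitting step in Eq. (1), the sketch below generates a synthetic log spectral ratio over the 0.01–0.05 Hz band and recovers QScS from the fitted slope. It is not part of the original analysis: the travel-time difference T, the noise level, and the target Q value are assumed placeholders, not measurements from the OJP array.

```python
# Minimal sketch of the spectral-ratio estimate of Q_ScS (synthetic values):
# fit a line to ln A(f) = -pi * T * f / Q_ScS + eps and recover Q_ScS
# from the least-squares slope, as in Eq. (1).
import numpy as np

T = 940.0                 # assumed ScS(n+1)-ScS(n) travel-time difference, s
Q_true = 309.0            # value used only to generate the synthetic ratio
f = np.linspace(0.01, 0.05, 30)                       # analysis band, Hz
noise = np.random.default_rng(1).normal(0.0, 0.05, f.size)
lnA = -np.pi * T * f / Q_true + noise                 # synthetic ln spectral ratio

# Least-squares line fit: slope = -pi * T / Q_ScS
slope, intercept = np.polyfit(f, lnA, 1)
Q_est = -np.pi * T / slope
print(f"estimated Q_ScS ~ {Q_est:.0f}")
```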
The slope of amplitude spectra with respect to the frequency is gentler at OJ13 than at HNR, indicating that QScS is higher (weaker attenuation) at OJ13 than that at HNR. Multiple ScS waveforms (a, d), amplitude spectra (b, e), and stacked spectral ratios (c, f) at OJ13 (a–c) and HNR (d–f) stations. In (a, d), the dotted curves denote the noise waveforms before arrivals of the multiple ScS waves. In (b, e), spectra of ScSn and sScSn are indicated by red curves and those of ScSn+1 and sScSn+1 by blue curves, those of noises are by dotted curves. In (c, f), the logarithm of the amplitude (left) and the phase (right) of the stacked spectral ratio are shown, respectively. Errors are estimated from noise spectra. A line fitting technique gives 331 ± 58 at OJ13 and 165 ± 54 at HNR We obtained QScS from eight land-based stations and four BBOBS stations in the northern part of the OJP array, as shown in Table 1. The operation of the BBOBS stations in the eastern part of the array had already been terminated at the time of the 2016 event. The BBOBS stations in the western and central parts of the array did not record multiple ScS waves with a good S/N ratio. As a result, we could determine QScS with bounce points of multiple ScS waves located only in the northern half of the OJP. Table 1 QScS, standard errors, travel time residuals, and theoretical travel time residuals calculated from S40RTS (Ritsema et al. 2010), and pairs of multiple ScS used in the present study Figure 4a, b illustrates the QScS values and the travel time residuals plotted at surface bounce points, respectively. The QScS values with the bounce points located in the OJP range from 213 to 413, whereas the standard errors are large (19–227) (Table 1), as estimated from the signal-to-noise ratio of the multiple ScS spectra at each station (Jordan and Sipkin 1977). The large standard errors are due to a large horizontal-component noise on the ocean-bottom seismograph (e.g., Suetsugu and Shiobara 2014). The average QScS value beneath the OJP calculated from QScS at each station is 309 ± 55, which is significantly higher than the average QScS computed from global one-dimensional models, such as the PREM (223, Dziewonski and Anderson 1981) or the QL6 model (233, Durek and Ekstrӧm 1996), or that for the western Pacific (156 ± 17) estimated by Sipkin and Jordan (1980), despite the significant variations of the observed QScS values. Travel time residuals with the bounce points located in the OJP range from 4.5 s to 6.9 s (6.0 s ± 0.8 s) with respect to those calculated from the IASP91 model, which is markedly larger than those with the bounce points outside the OJP (1.4–4.2 s). The large positive residuals beneath the OJP indicate that the average S velocity of the entire mantle is low beneath the OJP. QScS values (a) and travel time residuals (b) of multiple ScS waves estimated in the present study. Surface bounce points of ScS2, ScS3, sScS, sScS2, and sScS3 waves are denoted by circles, diamonds, crosses, solid triangles, and inverted triangles, respectively. The dotted lines are great circles from event to stations (projection of ScS wave paths on the surface). The colors of the symbols and the dotted curves represent QScS values in (a) and travel time residuals in (b). Stars and open triangles are epicenters and stations, respectively We estimated the QScS values to be 324 ± 34 and 218 ± 20 at PATS and CTAO, respectively, for the 1996 event, which is the event used by Gomer and Okal (2003). 
These values are close to the value of 366 (error bar from 253 to 634) at PATS and 177–200 at CTAO, as obtained by Gomer and Okal (2003). The relatively small difference between the two studies is due mainly to difference in of the methodologies of the two studies, such as a time-windowing method. Since there was no previous QScS study beneath the OJP in the past, except for Gomer and Okal (2003), we compared QScS values obtained in previous studies with those of the present study in broader regions, including the OJP. Sipkin and Jordan (1980) and Chan and Der (1988) estimated the QScS values from Fiji–Tonga events to central Japan as 173 ± 37 and 214 ± 42, respectively, which are higher than the average QScS value of 156 in the western Pacific (Sipkin and Jordan 1980). The QScS values obtained by the present study (309 ± 55) are even higher than those obtained by previous studies, probably because multiple ScS waves analyzed by the two previous studies sampled a broader region from the Fiji–Tonga region to Japan, including the OJP, than the present study. The QScS beneath the OJP is close to those beneath the stable continents of 280–333 (Chan and Der 1988; Revenaugh and Jordan 1991; Sipkin and Revenaugh 1994). The QScS values beneath the Solomon subduction zone and the Coral Sea are 100–200, which is significantly lower than those of the OJP. Very low QScS (109) are observed at the COEN station in the present study, probably because the multiple ScS phases travel long distances in the tectonically active region of the Papua New Guinea. While it is difficult to estimate Qs of the upper and lower mantle separately from multiple ScS waves of nearly vertical paths, the spectral ratio and travel time difference of sScS and ScS waves bear information on the Qs and S velocity anomaly in the upper mantle above the 2016 event (Additional file 1: Table S1, Figure S1). The average Qs in the upper 500 km above the source is estimated to be 98 ± 15, which is close to the Qs of 104 for the PREM. The travel time residual of sScS–ScS is 5.5 ± 0.6 s at stations located toward the OJP from the source, which corresponds to S velocity lower by 2.4% than that of the IASP91 model in the upper 500 km of the 2016 event. However, these values may not represent the upper mantle beneath the OJP, because the Qs and travel time residual from the sScS and ScS pairs are affected by the Papua New Guinea and Solomon Islands subduction zones with presumably strong lateral heterogeneities. Next, we referred to the existing Qs model of the lower mantle to estimate the Qs of the OJP upper mantle from the QScS value of 309 using a ray theory. Using the PREM Qs value of 312 for the lower mantle, the Qs of the upper mantle is estimated to be 303. Since the LLSVP is seated in the OJP lower mantle, the effect of the LLSVP on attenuation should be taken into consideration. Konishi et al. (2017) determined one-dimensional Vs and Qs models at depths greater than 2000 km using a waveform inversion technique beneath the western Pacific region, including the OJP. The Qs values beneath the OJP are approximately 260 at depths from 2000 km to 2850 km and 216 at depths from 2850 to the core–mantle boundary. Konishi et al. (2017) attributed the low Qs values to thermo-chemical anomalies of the Pacific LLSVP. Using the Qs of 260 at depths from 2000 km to the core–mantle boundary and the Qs of PREM in the rest of the lower mantle, we obtained a Qs value of 367 for the OJP upper mantle. 
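The layer decomposition used above can be expressed as a path-average (ray-theory) relation in which the whole-path attenuation operator t* = t_total/QScS is the sum of per-layer contributions t_i/Q_i. The sketch below implements that bookkeeping; the one-way travel times per layer are illustrative round numbers, not the values used in this study, so the printed result only roughly reproduces the quoted upper-mantle Qs.

```python
# Minimal sketch of the ray-theory (path-average) decomposition used to infer
# an upper-mantle Qs from a whole-path Q_ScS.
def upper_mantle_Q(Q_scs, layers_lower, t_upper):
    """layers_lower: list of (travel time [s], Qs) for lower-mantle layers."""
    t_total = t_upper + sum(t for t, _ in layers_lower)
    tstar_total = t_total / Q_scs                  # whole-path t* = t/Q
    tstar_lower = sum(t / q for t, q in layers_lower)
    return t_upper / (tstar_total - tstar_lower)

# Whole-path Q_ScS = 309; PREM-like lower mantle (Qs = 312) above 2000 km and
# LLSVP-like Qs = 260 below 2000 km (Konishi et al. 2017); travel times assumed.
print(upper_mantle_Q(309.0,
                     layers_lower=[(250.0, 312.0), (120.0, 260.0)],
                     t_upper=100.0))
```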
The Qs in the OJP upper mantle is higher than those computed from one-dimensional models, such as the PREM (134) and the laterally averaged SEMUCB-WM1 model (130, Karaoğlu and Romanowicz 2018). Considering the similarity of the QScS values beneath the OJP by the present study and those of stable continents reported in previous studies, as mentioned above, we estimated the upper mantle Qs values by a ray theory beneath the continents from the QScS values obtained in the previous studies (Chan and Der 1988; Revenaugh and Jordan 1991; Sipkin and Revenaugh 1994) by assuming a lower mantle Qs to be that of PREM and compared these values with the Qs estimated for the OJP upper mantle. The QScS of 280–333 for the stable continents resulted in an upper mantle Qs of 244–395, which is close to the Qs of the OJP upper mantle estimated above (303 and 367). The upper mantle Qs of the OJP is in the estimated range of those for stable continents. The travel time residuals of multiple ScS waves are 6.0 ± 0.8 s, which indicates that the average velocity of the entire mantle beneath the OJP is low. The LLSVP is located in the lower mantle at depths from 1000 km to the core–mantle boundary, and the observed positive residuals should be attributable, at least partially, to the low velocities of the LLSVP. We examined how large residuals can be explained by the LLSVP by calculating the theoretical residuals with the three-dimensional S velocity model S40RTS (Ritsema et al. 2010) using a ray theory. Although a geographical pattern of the observed residuals is reproduced in the theoretical residuals (largely positive residuals beneath the OJP and less positive residuals outside the OJP), the theoretical residuals are 3.3 ± 1.1 s, which is approximately half of the observed residuals. While the upper mantle of the S40RTS model has strong high-velocity (1.5–3% in the shallowest 100 km) and low-velocity anomalies (− 1 to − 3% at depths from 200 km to 300 km), the net effects on multiple ScS waves are negligible, because the effects are canceled for nearly vertical paths of the multiple ScS waves. The theoretical residuals are therefore mainly due to the broadly low-velocity anomalies of the LLSVP. Assuming that the remaining 2.7 s occurs in the entire upper mantle and the top 300 km depths, the velocity anomalies are − 0.9 ± 0.4% and − 1.9 ± 0.8%, respectively. There are two ways to interpret the remaining positive residuals. One is the positive residuals caused by the low-velocity zone in the upper mantle obtained by Richardson and Okal (2000), and the other is the positive residuals due to underestimated correction of the LLSVP. Amplitudes of velocity anomalies estimated by seismic tomography are well known to depend on the parameterization and regularization used in tomography. Some models have velocities as low as approximately − 2.5 to − 3% in the LLSVP (e.g., Lu and Grand 2016), whereas the S40RTS has velocities as low as approximately − 1.5 to − 2%. Determining the velocity structure in the upper mantle using multiple ScS studies is difficult. Seismic tomography using data from the OJP array is expected to provide a tight constraint on the velocity structure beneath the OJP. In summary, the QScS in the mantle beneath the northern OJP is estimated to be 309 ± 55, which is consistent with the result of Gomer and Okal (2003). This is higher than the average QScS in the western Pacific and that calculated from global one-dimensional Qs models and is close to the QScS beneath stable continents. 
Assuming the lower mantle Qs based on existing Qs models with LLSVP effects accounted for, the seismic attenuation in the upper mantle is probably weak beneath the OJP, as is that of stable continents. The travel times of multiple ScS waves are as large as 6 s. While the positive residuals are at least partially explained by the effect of the Pacific LLSVP in the lower mantle, it is difficult to conclude whether the low-velocity zone in the upper mantle is required to explain the positive residuals, which remains to be concluded by seismic tomography using data from in situ stations such as the OJP array. Data supporting the results of the present article are available upon request to the authors. OJP: Ontong Java Plateau BBOBS: broadband ocean-bottom seismograph PREM: Preliminary Reference Earth Model IRIS: Incorporated Research Institutions for Seismology International Deployment of Accelerometers USGS: US Geological Survey Bhattacharyya J, Masters G, Shearer PM (1996) Global lateral variations of shear wave attenuation in the upper mantle. J Geophys Res 101:22273–22289 Chan WW, Der ZA (1988) Attenuation of multiple ScS in various parts of the world. Geophys J Int 92:303–314 Coffin MF, Eldholm O (1994) Large igneous provinces: crustal structure, dimensions, and external consequences. Rev Geophys 32:1–36 Covellone BM, Savage B, Shen Y (2015) Seismic wave speed structure of the Ontong Java Plateau. Earth Planets Sci Lett 420:140–150 Dalton CA, Ekstrӧm G, Dziewonski AM (2008) The global attenuation structure of the upper mantle. J Geophys Res. https://doi.org/10.1029/2007JB005429 Durek JJ, Ekstrӧm G (1996) A radial model of anelasticity consistent with long-period surface-wave attenuation. Bull Seismol Soc Am 86:144–158 Dziewonski A, Anderson DL (1981) Preliminary reference Earth model. Phys Earth Planet Inter 25:297–356 Garnero EJ, McNamara AK (2008) Structure and dynamics of Earth's lower mantle. Science 320:626–628. https://doi.org/10.1126/science.1148028 Gomer B, Okal EA (2003) Multiple-SCS probing of the Ontong-Java Plateau. Phys Earth Planet Inter 138:317–331 Isse T, Nakanishi I (1997) The effect of the crust on the estimation of mantle Q from spectral ratios of multiple ScS phases. Bull Seismol Soc Am 87:778–781 Jordan TH, Sipkin SA (1977) Estimation of the attenuation operator for multiple ScS waves. Geophys Res Lett 4:167–170 Karaoğlu H, Romanowicz B (2018) Inferring global upper-mantle shear attenuation structure by waveform tomography using the spectral element method. Geophys J Int 213:1536–1558. https://doi.org/10.1093/gji/ggy030 Kennett BLN, Engdahl ER (1991) Traveltimes for global earthquake location and phase identification. Geophys J Int 105:429–465. https://doi.org/10.1111/j.1365-246X.1991.tb06724.x Klosko ER, Russo RM, Okal EA, Richardson WP (2001) Evidence for a rheologically strong chemical mantle root beneath the Ontong-Java Plateau. Earth Planets Sci Lett 186:347–361 Konishi K, Fuji N, Deschamps F (2017) Elastic and anelastic structure of the lowermost mantle beneath the western Pacific from waveform inversion. Geophys J Int 208:1290–1304. https://doi.org/10.1093/gji/ggw450 Liu Q, Tromp J (2008) Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods. Geophys J Int 174:265–286. https://doi.org/10.1111/j.1365-246X.2008.03798.x Lu C, Grand SP (2016) The effect of subducting slabs in global shear wave tomography. Geophys J Int 205:1074–1085. 
https://doi.org/10.1093/gji/ggw072 Nakanishi I (1979) Attenuation of multiple ScS waves in the Japanese arc. Phys Earth Planet Inter 19:337–347 Neal C, Mahoney JJ, Kroenke LW, Duncan RA, Petterson MG (1997) The Ontong Java Plateau. In: Mahoney JJ, Coffin MF (eds) Large igneous provinces: continental, oceanic, and planetary flood volcanism, 100th edn. American Geophysical Union, Washington, pp 183–216. https://doi.org/10.1029/gm100p0183 Revenaugh J, Jordan TH (1991) Mantle layering from ScS reverberations 1. Waveform inversion of zeroth-order reverberations. J Geophys Res 96:19749–19762 Richardson W, Okal EA, van der Lee S (2000) Rayleigh-wave tomography of the Ontong-Java Plateau. Phys Earth Planet Inter 118:29–51 Ritsema J, Deuss A, van Heijst HJ, Woodhouse JH (2010) S40RTS: a degree-40 shear-velocity model for the mantle from new Rayleigh wave dispersion, teleseismic traveltime and normal-mode splitting function measurements. Geophys J Int 184:1223–1236. https://doi.org/10.1111/j.1365-246X.2010.04884.x Romanowicz, BA, Mitchell, BJ (2015) Deep earth structure: Q of the Earth from crust to core. In: Schubert G (ed). Treatise on geophysics. 2nd edn. pp 789–827 Selby ND, Woodhouse JH (2002) The Q structure of the upper mantle: constraints from Rayleigh wave amplitudes. J Geophys Res. https://doi.org/10.1029/2001JB000257 Sipkin SA, Jordan TH (1980) Regional variation of Q ScS. Bull Seismol Soc Am 70:1071–1102 Sipkin SA, Revenaugh J (1994) Regional variation of attenuation and travel time in China from analysis of multiple-ScS phases. J Geophys Res 99:2687–2699 Suetsugu D (2001) A low Q ScS anomaly near the South Pacific Superswell. Geophys Res Lett 28:391–394 Suetsugu D, Shiobara H (2014) Broadband ocean bottom seismology. Ann Rev.Earth Planet Sci 42:27–43 Suetsugu D, Shiobara H, Sugioka H, Tada N, Ito A, Isse T, Baba K, Ichihara H, Ota T, Ishihara Y, Tanaka S, Obayashi M, Tonegawa T, Yoshimitsu J, Kobayashi T, Utada H (2018) The OJP array: seismological and electromagnetic observation on seafloor and islands in the Ontong Java Plateau. JAMSTEC Rep Res Dev 26:54–64. https://doi.org/10.5918/jamstecr.26.54 Warren LM, Shearer PM (2002) Mapping lateral variations in upper mantle attenuation by stacking P and PP spectra. J Geophys Res. https://doi.org/10.1029/2001JB001195 The authors are grateful to the captains and crews of R/V MIRAI of JAMSTEC and R/V HAKUHO-MARU of JAMSTEC for the installation and recovery cruises, respectively. Their devoted efforts led to the success of the OJP array observation. The authors thank IRIS Data Management Center for making their data available. We thank B. Romanowicz and an anonymous reviewer for their constructive comments, which substantially improved the present study. The present study was supported by a Grant-in-Aid for Scientific Research (15H03720) from the Japan Society for the Promotion of Science. The present study was supported by a Grant-in-Aid for Scientific Research (15H03720) from the Japan Society for the Promotion of Science and Grants for Operating Expenses of JAMSTEC and the University of Tokyo. 
Japan Agency for Marine-Earth Science and Technology, 2-15, Natsushima, Yokosuka, Kanagawa, 237-0061, Japan Daisuke Suetsugu, Aki Ito, Yasushi Ishihara, Satoru Tanaka, Masayuki Obayashi, Takashi Tonegawa & Junko Yoshimitsu Earthquake Research Institute, The University of Tokyo, 1-1-1, Yayoi, Bunkyo, Tokyo, 113-0032, Japan Hajime Shiobara & Takehi Isse Department of Planetology, Graduate School of Science, Kobe University, 1-1 Rokkodai-cho, Nada-ku, Kobe, Hyogo, 657-8501, Japan Hiroko Sugioka & Takumi Kobayashi Daisuke Suetsugu Hajime Shiobara Hiroko Sugioka Aki Ito Takehi Isse Yasushi Ishihara Satoru Tanaka Masayuki Obayashi Takashi Tonegawa Junko Yoshimitsu Takumi Kobayashi DA organized the seafloor observation, analyzed data, and wrote the manuscript. HSH, HSU, AI, TI, and TK performed the seafloor observation. YI, ST, MO, TT, and JY organized and performed the island observation. All authors read and approved the final manuscript. Correspondence to Daisuke Suetsugu. Additional file 1: Table A1. Qs and travel time residuals estimated from sScS-ScS pairs for the 2016 event. PATS, CHUK, KOSR, OJ09, OJ11, OJ12, OJ13 and MJR are located towards the OJP from the epicenter. Figure A1. Qs obtained from spectral ratio of sScS and ScS phases for the 2016 event (star). Surface bounce points of sScS phase are denoted by crosses. The dotted lines are surface projections of sScS ray paths in the upper 500 km of the mantle. The colors of the symbols and the dotted curves represent Qs values. Suetsugu, D., Shiobara, H., Sugioka, H. et al. High QScS beneath the Ontong Java Plateau. Earth Planets Space 71, 97 (2019). https://doi.org/10.1186/s40623-019-1077-8 Seismic attenuation Multiple ScS phases Large igneous provinces 4. Seismology
First Law of Thermodynamics
Thermodynamic System and Work Done
Thermodynamics is the branch of science which deals with the transformation of heat into mechanical work; that is, it mainly describes the inter-relationship between heat and mechanical energy.
Thermodynamic System
A thermodynamic system is an assembly of an extremely large number of particles having a certain pressure, volume and temperature. A thermodynamic system can exchange energy with its surroundings by heat transfer or by doing mechanical work. The state of a system is the physical condition of the system, described by its pressure, volume and temperature. A thermodynamic system may be an isolated or a closed system.
Thermal Equilibrium
A thermodynamic system is in thermal equilibrium if the temperature of all its parts is the same. So, a system in thermal equilibrium does not exchange heat between its different parts or with its surroundings.
Work Done
Consider an ideal gas enclosed in a cylinder fitted with a frictionless movable piston. Let the pressure exerted by the gas be P and the cross-sectional area of the piston be A. The force exerted by the gas on the piston is F = PA.
Work Done by Gas during Expansion
(Figure: work done by a gas during an expansion dx.) When the gas expands, the piston moves out through a small distance dx and the work done by the force is $$dW = F\,dx = PA\,dx$$ Since \(A\,dx = dV\), a small increase in the volume of the gas, the work done by the gas during the expansion is $$dW = P\,dV \dots (i)$$ When the volume of the gas changes from V1 to V2, the total work done W is obtained by integrating equation (i) between the limits V1 and V2: $$W = \int dW = \int _{V_1}^{V_2} P\, dV \dots (ii)$$ When the final volume V2 is greater than the initial volume V1, the change in volume V2 − V1 is positive; hence, during expansion of the gas, the work done by the system is taken as positive. When the gas is compressed, the final volume V2 is less than the initial volume V1, so the change in volume V2 − V1 is negative; hence, during compression of the gas, the work done by the system is taken as negative. (a) Indicator diagram showing the work done by gas. (b) Work done by gas on expansion at constant pressure.
Indicator Diagram
Let E be a point on the indicator diagram, and let P and V be the pressure and volume at that point. Let the volume increase by a small amount dV at constant P to a point F close to E. From the figure, the area of the small strip \( EFGH = HG \times EH = dV \times P = P\,dV \), where dV is the small volume represented by HG during which the pressure P = HE remains constant. \(\therefore\) The area of the strip EFGH = PdV = work done during the small change of volume dV. Thus, the work done during expansion from the initial state A(P1, V1) to the final state B(P2, V2) is obtained by adding the areas of such small strips: $$W = \text {area of ABCD}$$ Thus, the work done by a system is numerically equal to the area under the PV-diagram. So, $$W = \int _{V_1}^{V_2} P\, dV= \text {area of ABCD} $$ When the pressure P remains constant throughout the expansion, the work done by the gas is $$W = P(V_2 - V_1)$$ (Figure: work done during a cyclic process.)
Work Done in a Cyclic Process
If, after undergoing a series of processes, a system returns to its initial state, the process is called a cyclic process. Consider a system that expands from the initial state A to the state B along the path X and is then compressed from B back to A along the path Y, as shown in the figure.
The work done by the system when it expands from A to B is represented by the area between the curve AXB and the volume axis, that is $$W_1 = \text {area of AXBB'A'A}$$ The work done on the system when it is compressed from B to A is given by the area between the curve BYA and the volume axis, i.e. $$W_2 = -(\text {area of BYAA'B'B})$$ $$\therefore \text {Net work done in the cyclic process is}$$ $$W = W_1 + W_2$$ $$= \text {area of AXBB'A'A} - \text {area of BYAA'B'B}$$ $$= \text {area of AXBYA} $$ In summary: thermodynamics is the branch of science which deals with the transformation of heat into mechanical work; a thermodynamic system is in thermal equilibrium if the temperature of all its parts is the same; and if, after a series of processes, the system returns to its initial state, the process is called a cyclic process.
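The relation W = ∫P dV can be checked numerically. The following is a minimal sketch (not part of the original lesson) that computes the work done during an assumed isothermal ideal-gas expansion as the area under the P–V curve and compares it with the closed-form result W = nRT ln(V2/V1); all numerical values are arbitrary illustrative choices.

```python
# Work done by a gas as the area under its P-V curve, checked against the
# analytic isothermal result W = n R T ln(V2 / V1).  Values are illustrative.
import numpy as np

n, R, T = 1.0, 8.314, 300.0            # mol, J/(mol K), K
V = np.linspace(1e-3, 2e-3, 200)       # volume from V1 to V2, m^3
P = n * R * T / V                      # ideal-gas isotherm, Pa

# Trapezoid-rule area under the P-V curve: W = integral of P dV
W_numeric = np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V))
W_exact = n * R * T * np.log(V[-1] / V[0])
print(f"numerical: {W_numeric:.1f} J, analytic: {W_exact:.1f} J")
```

Because the gas expands (V2 > V1), both numbers come out positive, matching the sign convention stated above.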
DOI:10.1016/j.jocn.2004.06.008 The role of α2-agonists in neurosurgery @article{Cormack2005TheRO, title={The role of $\alpha$2-agonists in neurosurgery}, author={J. D. Cormack and R M Orme and T G Costello}, journal={Journal of Clinical Neuroscience}, J. Cormack, R. Orme, T. Costello Journal of Clinical Neuroscience Summary α 2 -agonists have been extensively used and studied in anaesthesia and intensive care medicine. A list of benefits includes anxiolysis, blood pressure stabilization, analgesia, anaesthetic sparing effects and sedation without respiratory depression or significant cognitive impairment. Fear of inadvertent hypotension, bradycardia or post-operative sedation, and the variability of the haemodynamic response to different doses or rates of administration, have meant that universal… doi.org Dexmedetomidine Imidazolines Cognition Disorders Bradycardia Respiratory Insufficiency Science of neurosurgery anxiolysis (emotion) Sedation procedure Off-label Drugs in Perioperative Medicine: Clonidine C. Gregoretti, P. Pelosi Clonidine has been widely used for >35 years to treat hypertension, and off-label for a variety of purposes, including withdrawal from long-term abuse of drugs or alcohol, as well as to reduce perioperative stress in patients who are at risk ofperioperative ischaemia and are in chronic pain therapy. Neuroprotection by dexmedetomidine S. Himmelseher, E. Kochs The current understanding of DEX-mediated neuroprotection is outlined, and this state of knowledge is compared with key criteria for the design of a clinical trial by experts in the field of anaesthetic neuroprotection. Dexmedetomidine Improves Cardiovascular and Ventilatory Outcomes in Critically Ill Patients: Basic and Clinical Approaches R. Castillo, M. Ibacache, +8 authors Aníbal Méndez Frontiers in Pharmacology Evidence supporting the use of dexmedetomidine in different settings is shown from its use in animal models of ischemia-reperfusion, and cardioprotective signaling pathways, and by a group of Chilean pharmacologists and clinicians who have worked for more than 10 years on DEX. The Perioperative Management of Pain from Intracranial Surgery A. Gottschalk, M. Yaster Although pain following intracranial surgery appears to be more intense than initially believed, it is readily treated safely and effectively with techniques that have proven useful following other types of surgery, including patient-controlled administration of opioids. Dexmedetomidine in Modern Neuroanesthesia Practice I. Kapoor, C. Mahajan, H. Prabhakar Current Anesthesiology Reports The purpose of this review is to discuss the role of dexmedetomidine in various procedures in clinical neurosciences. It also gives an overview of the various neurosurgical procedures where… Dexmedetomidine effects in different experimental sepsis in vivo models. Ioannis Dardalas, E. Stamoula, +8 authors C. Pourzitaki Overall results show evidence that DEX may decrease mortality and inhibit inflammation, as it enhances the activity of the immune system while reducing its systemic reaction and lowering cytokine concentrations. A randomized controlled study to compare the effectiveness of intravenous dexmedetomidine with placebo to attenuate the haemodynamic and neuroendocrine responses to fixation of skull pin head holder for craniotomy K. Mushtaq, Z. Ali, N. Shah, S. Syed, Imtiyaz A. Naqash, A. 
Ramzan It is demonstrated that, a single bolus dose of dexmedetomidine before induction of anesthesia attenuated the hemodynamic and neuroendocrinal responses to skull-pin insertion in patients undergoing craniotomy. The effect of dexmedetomidine on cerebral perfusion and oxygenation in healthy piglets with normal and lowered blood pressure anaesthetized with propofol-remifentanil total intravenous anaesthesia Mai Louise Grandsgaard Mikkelsen, R. Ambrus, +4 authors T. Eriksen Acta Veterinaria Scandinavica This study investigates the effect of dexmedetomidine on CPO in piglets with normal and lowered blood pressure during background anaesthesia with propofol-remifentanil TIVA and results show a significant decrease in cerebral oxygenation measurements in Piglets with lowering blood pressure. Dexmedetomidine: A Review of a Newer Sedative in Dentistry. A. Devasya, M. Sarpangala The Journal of clinical pediatric dentistry The study revealed that Dexmedetomidine being a new drug with its added advantages makes a better choice for sedation in dentistry, but with limited studies on DexmedETomidine, the recommendation to use the drug exclusively is still under debate. Dexmedetomidine Attenuates the Hemodynamic and Neuroendocrinal Responses to Skull-pin Head-holder Application During Craniotomy A. Uyar, H. Yağmurdur, Y. Fidan, Çiğdem Topkaya, H. Başar Journal of neurosurgical anesthesiology The results suggested that, a single bolus dose of dexmedetomidine before induction of anesthesia attenuated the hemodynamic and neuroendocrinal responses to skull-pin insertion in patients undergoing craniotomy. Alpha2‐adrenergic agents in anaesthesia R. Aantaa, M. Scheinin Acta anaesthesiologica Scandinavica Clonidine, a centrally acting antihypertensive agent, has attracted increasing interest as an adjunct to anaesthesia and the prospect of using specific antagonists to reverse the effects induced by a,adrenergic receptors adds to the attractiveness of this approach. Alpha‐2 and imidazoline receptor agonistsTheir pharmacology and therapeutic role Z. P. Khan, C. Ferguson, R. M. Jones The more selective alpha‐2 adrenoceptor agonists, dexmedetomidine and mivazerol, may also have a role in providing haemodynamic stability in patients who are at risk of peri‐operative ischaemia. Molecular pharmacology of α2-adrenoceptor subtypes R. Aantaa, A. Marjamäki, M. Scheinin This work has cloned and designated three human α2-adrenoceptor subtype genes, corresponding to the previously identified pharmacological receptor subtypes α2A, α2C and α2B, and identified some structurally and functionally important domains that are very well conserved. Intramuscular dexmedetomidine, a novel alpha2‐ adrenoceptor agonist, as premedication for minor gynaecological surgery R. Aantaa, J. Kanto, M. Scheinin The haemodynamic as well as the sedative effects of dexmedetomidine after surgery lasted until the end of the observation period, 4 h after the injection of the drug, indicating that intramuscular administration of this premedication agent may result in a longer than optimal duration of pharmacological actions in connection with short surgical procedures. Effect of Dexmedetomidine, a Selective and Potent α2‐Agonist, on Cerebral Blood Flow and Oxygen Consumption During Halothane Anesthesia in Dogs B. Karlsson, M. Forsman, O. Roald, M. Heier, P. 
Steen Anesthesia and analgesia The cerebral vasoconstrictive effect, combined with the 90% reduction in MAC for halothane, indicates that dexmedetomidine might be a useful adjunct to inhalation anesthetics during neurosurgery in situations where an increase in CBF should be avoided. Intracranial pressure effects of dexmedetomidine in rabbits. M. Zornow, M. Scheller, P. Sheehan, M. A. Strnat, M. Matsumoto The alpha 2-adrenergic receptor agonist dexmedetomidine produces an anesthetic state in a variety of species and was associated with a 14% decrease in sagittal sinus blood flow that was not statistically significant in these experiments. Dexmedetomidine, an α2‐Adrenergic Agonist, Decreases Cerebral Blood Flow in the Isoflurane‐Anesthetized Dog M. Zornow, J. E. Fleischer, M. Scheller, K. Nakakimura, J. Drummond 10 μg/kg of dexmedetomidine in isoflurane-anesthetized dogs is associated with a profound decrease in CBF and cardiac output in the face of an unaltered CMRo2, and there was no evidence of global cerebral ischemia. Cardiovascular and endocrine effects of clonidine premedication in neurosurgical patients D. Gaumann, E. Tassonyi, R. Rivest, M. Fathi, A. Reverdin Canadian journal of anaesthesia = Journal canadien d'anesthesie Though statistically significant, the observed inhibitory haemodynamic and endocrine effects of clonidine seem to be of minor clinical importance, and there is no advantage in the preanaesthetic administration ofClonidine to neurosurgical patients with normal cardiovascular status. Reversal of the Sedative and Sympatholytic Effects of Dexmedetomidine with a Specific α2-Adrenoceptor Antagonist Atipamezole: A Pharmacodynamic and Kinetic Study in Healthy Volunteers H. Scheinin, R. Aantaa, M. Anttila, P. Hakola, A. Helminen, S. Karhuvaara The sedative and sympatholytic effects of intramuscular dexmedetomidine were dose dependently antagonized by intravenous atipamezole, an [Greek small letter alpha]2-antagonist in healthy persons. Neuroprotection by the alpha 2-adrenoreceptor agonist dexmedetomidine in a focal model of cerebral ischemia. C. Maier, G. Steinberg, G. Sun, G. T. Zhi, M. Maze Results from this study indicate that postischemic administration of dexmedetomidine, in a dose that reduces the anesthetic requirements by 50%, has a neuroprotective effect in this model of focal cerebral ischemia.
CommonCrawl
Pressure (Atmospheric Thermodynamics)
Pressure (symbol: p or P) is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure (also spelled gage pressure) is the pressure relative to the ambient pressure. (The preferred spelling varies by country and even by industry; both spellings are often used within a particular industry or country, although industries in British English-speaking countries typically use the "gauge" spelling.) Various units are used to express pressure. Some of these derive from a unit of force divided by a unit of area; the SI unit of pressure, the pascal (Pa), for example, is one newton per square metre; similarly, the pound-force per square inch (psi) is the traditional unit of pressure in the imperial and US customary systems. Pressure may also be expressed in terms of standard atmospheric pressure; the atmosphere (atm) is equal to this pressure, and the torr is defined as 1/760 of this. Manometric units such as the centimetre of water, millimetre of mercury, and inch of mercury are used to express pressures in terms of the height of a column of a particular fluid in a manometer.
Pressure is the amount of force applied at right angles to the surface of an object per unit area. The symbol for it is p or P [Douglas G. Giancoli, Pearson Education, ISBN 9780130606204]. The IUPAC recommendation for pressure is a lower-case p [Blackwell Scientific Publications, ISBN 9780967855097]. However, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style. Mathematically:
p = \frac{F}{A},
where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface in contact. Pressure is a scalar quantity. It relates the vector surface element (a vector normal to the surface) with the normal force acting on it. The pressure is the scalar proportionality constant that relates the two normal vectors:
d\mathbf{F}_n = -p\,d\mathbf{A} = -p\,\mathbf{n}\,dA.
The minus sign comes from the fact that the force is considered towards the surface element, while the normal vector points outward. The equation has meaning in that, for any surface S in contact with the fluid, the total force exerted by the fluid on that surface is the surface integral over S of the right-hand side of the above equation. It is incorrect (although rather usual) to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The force given by the previous relationship to the quantity has a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is distributed to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume.
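To make the definition p = F/A concrete, here is a minimal Python sketch with made-up force and area values; it anticipates the thumbtack example later in the article, and none of the numbers come from the text itself.
    # Same normal force, different contact areas -> very different pressures (p = F / A).
    force_n = 10.0                    # assumed force of 10 newtons
    area_fingertip_m2 = 1.0e-4        # ~1 cm^2 fingertip contact area (assumption)
    area_tack_point_m2 = 1.0e-7       # ~0.1 mm^2 thumbtack point (assumption)

    print(force_n / area_fingertip_m2)   # 1.0e5 Pa, roughly atmospheric pressure
    print(force_n / area_tack_point_m2)  # 1.0e8 Pa, easily enough to mark a wall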
The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m2, or kg·m−1·s−2). This name for the unit was added in 1971; before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch and bar, are also in common use. The CGS unit of pressure is the barye (Ba), equal to 1 dyn·cm−2, or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre (g/cm2 or kg/cm2) and the like without properly identifying the force units. But using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is expressly forbidden in SI. The technical atmosphere (symbol: at) is 1 kgf/cm2 (98.0665 kPa, or 14.223 psi). Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre (J/m3, which is equal to Pa). Mathematically: p = \frac{F \times \text{distance}}{A \times \text{distance}} = \frac{\text{work}}{\text{volume}} = \frac{\text{energy (J)}}{\text{volume }(\text{m}^3)}. Some prefer the hectopascal (hPa) for atmospheric air pressure, which is equivalent to the older unit millibar (mbar). Similar pressures are given in kilopascals (kPa) in most other fields, where the hecto- prefix is rarely used. The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars (dbar) because pressure in the ocean increases by approximately one decibar per metre depth. The standard atmosphere (atm) is an established constant. It is approximately equal to typical air pressure at Earth mean sea level and is defined as 101,325 Pa. Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., centimetres of water, millimetres of mercury or inches of mercury). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's high density allows a shorter column (and so a smaller manometer) to be used to measure a given pressure. The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation p = ρgh, where g is the gravitational acceleration. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. When millimetres of mercury or inches of mercury are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units. One millimetre of mercury is approximately equal to one torr. The water-based units still depend on the density of water, a measured, rather than defined, quantity. These manometric units are still encountered in many fields. Blood pressure is measured in millimetres of mercury in most of the world, and lung pressures in centimetres of water are still common. Underwater divers use the metre sea water (msw or MSW) and foot sea water (fsw or FSW) units of pressure, and these are the standard units for pressure gauges used to measure pressure exposure in diving and in dive computers. A msw is defined as 0.1 bar (1 bar = 100,000 Pa, so 1 msw = 10,000 Pa) and is not the same as a linear metre of depth; 33.066 fsw = 1 atm, so 1 fsw = 101,325 Pa / 33.066 ≈ 3064.3 Pa.
Note that the pressure conversion from msw to fsw is different from the length conversion: 10 msw = 32.6336 fsw, while 10 m = 32.8083 ft. Gauge pressure is often given in units with "g" appended, e.g. "kPag", "barg" or "psig", and units for measurements of absolute pressure are sometimes given a suffix of "a", to avoid confusion, for example "kPaa", "psia". However, the US National Institute of Standards and Technology recommends that, to avoid confusion, any modifiers be instead applied to the quantity being measured rather than the unit of measure; for example, "a gauge pressure of 220 kPa" rather than "220 kPag". Differential pressure is expressed in units with "d" appended; this type of measurement is useful when considering sealing performance or whether a valve will open or close.
Presently or formerly popular pressure units include the following:
atmosphere (atm);
manometric units: centimetre, inch, millimetre (torr) and micrometre (mTorr, micron) of mercury; height of equivalent column of water, including millimetre (mm H2O), centimetre (cm H2O), metre, inch, and foot of water;
imperial and customary units: kip, short ton-force, long ton-force, pound-force, ounce-force, and poundal per square inch; short ton-force and long ton-force per square inch; fsw (feet sea water), used in underwater diving, particularly in connection with diving pressure exposure and decompression;
non-SI metric units: bar, decibar, millibar; msw (metres sea water), used in underwater diving, particularly in connection with diving pressure exposure and decompression; kilogram-force, or kilopond, per square centimetre (technical atmosphere); gram-force and tonne-force (metric ton-force) per square centimetre; barye (dyne per square centimetre); kilogram-force and tonne-force per square metre; sthene per square metre (pieze).
As an example of varying pressures, a finger can be pressed against a wall without making any lasting impression; however, the same finger pushing a thumbtack can easily damage the wall. Although the force applied to the surface is the same, the thumbtack applies more pressure because the point concentrates that force into a smaller area. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. Unlike stress, pressure is defined as a scalar quantity. The negative gradient of pressure is called the force density. Another example is a knife. If we try to cut with the flat edge, force is distributed over a larger surface area resulting in less pressure, and it will not cut. Whereas using the sharp edge, which has less surface area, results in greater pressure, and so the knife cuts smoothly. This is one example of a practical application of pressure. For gases, pressure is sometimes measured not as an absolute pressure, but relative to atmospheric pressure; such measurements are called gauge pressure. An example of this is the air pressure in an automobile tire, which might be said to be "220 kPa (32 psi)", but is actually 220 kPa (32 psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute pressure in the tire is therefore about 320 kPa (46.7 psi). In technical work, this is written "a gauge pressure of 220 kPa (32 psi)". Where space is limited, such as on pressure gauges, graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted.
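Because all of these units are fixed multiples of the pascal, conversion reduces to multiplying by a ratio of conversion factors. The short Python sketch below illustrates this using constants quoted above (1 atm = 101,325 Pa, 1 msw = 10,000 Pa, 33.066 fsw = 1 atm); the conversion table and function are illustrative, not part of any standard library.
    # Illustrative pressure-unit conversion; factors are "pascals per unit" taken from the article.
    PA_PER_UNIT = {
        "Pa":   1.0,
        "kPa":  1_000.0,
        "bar":  100_000.0,
        "atm":  101_325.0,           # standard atmosphere
        "torr": 101_325.0 / 760.0,   # definition of the torr
        "psi":  6_894.757,           # pound-force per square inch
        "msw":  10_000.0,            # metre sea water = 0.1 bar
        "fsw":  101_325.0 / 33.066,  # foot sea water, from 33.066 fsw = 1 atm
    }

    def convert(value, from_unit, to_unit):
        """Convert a pressure reading between any two units in the table."""
        return value * PA_PER_UNIT[from_unit] / PA_PER_UNIT[to_unit]

    print(convert(32, "psi", "kPa"))   # ~220.6 kPa, the tire example above
    print(convert(10, "msw", "fsw"))   # ~32.63 fsw, the diving conversion above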
In non-SI technical work, a gauge pressure of 32 psi is sometimes written as "32 psig", and an absolute pressure as "32 psia", though the other methods explained above that avoid attaching characters to the unit of pressure are preferred (NIST, Rules and Style Conventions for Expressing Values of Quantities, Sect. 7.4). Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on pressure vessels and the plumbing components of fluidics systems. However, whenever equation-of-state properties, such as densities or changes in densities, must be calculated, pressures must be expressed in terms of their absolute values. For instance, if the atmospheric pressure is 100 kPa, a gas (such as helium) at 200 kPa (gauge) (300 kPa absolute) is 50% denser than the same gas at 100 kPa (gauge) (200 kPa absolute). Focusing on gauge values, one might erroneously conclude the first sample had twice the density of the second one. In a static gas, the gas as a whole does not appear to move. The individual molecules of the gas, however, are in constant Brownian motion. Because we are dealing with an extremely large number of molecules and because the motion of the individual molecules is random in every direction, we do not detect any motion. If we enclose the gas within a container, we detect a pressure in the gas from the molecules colliding with the walls of our container. We can put the walls of our container anywhere inside the gas, and the force per unit area (the pressure) is the same. We can shrink the size of our "container" down to a very small point (becoming less true as we approach the atomic scale), and the pressure will still have a single value at that point. Therefore, pressure is a scalar quantity, not a vector quantity. It has magnitude but no direction sense associated with it. Pressure force acts in all directions at a point inside a gas. At the surface of a gas, the pressure force acts perpendicular (at right angles) to the surface. A closely related quantity is the stress tensor σ, which relates the vector force \mathbf{F} to the vector area \mathbf{A} via the linear relation \mathbf{F} = \sigma\mathbf{A}. This tensor may be expressed as the sum of the viscous stress tensor minus the hydrostatic pressure. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term "pressure" will refer only to the scalar pressure. According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress–energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in extremely dense objects such as neutron stars, although it has not been experimentally tested. Fluid pressure is most often the compressive stress at some point within a fluid. (The term fluid refers to both liquids and gases – for more information specifically about liquid pressure, see the section below.) Fluid pressure occurs in one of two situations:
an open condition, called "open channel flow", e.g. the ocean, a swimming pool, or the atmosphere;
a closed condition, called "closed conduit", e.g. a water line or gas line.
Pressure in open conditions usually can be approximated as the pressure in "static" or non-moving conditions (even in the ocean where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure.
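Returning briefly to the gauge/absolute distinction: because the density of an ideal gas scales with absolute pressure at fixed temperature, the comparison above is easy to reproduce numerically. The values in this sketch are the same illustrative numbers used in that example, not measurements.
    # Gas density is proportional to absolute pressure at fixed temperature, so
    # equation-of-state comparisons must use absolute, not gauge, values.
    p_atm = 100.0                        # ambient pressure, kPa (illustrative)
    p1_gauge, p2_gauge = 200.0, 100.0    # two gauge readings, kPa

    p1_abs = p1_gauge + p_atm            # 300 kPa absolute
    p2_abs = p2_gauge + p_atm            # 200 kPa absolute

    print(p1_abs / p2_abs)       # 1.5 -> first sample is 50% denser (correct)
    print(p1_gauge / p2_gauge)   # 2.0 -> "twice as dense" (the erroneous conclusion)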
Closed bodies of fluid are either "static", when the fluid is not moving, or "dynamic", when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics. The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli. Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid. The equation makes some assumptions about the fluid, such as the fluid being ideal Finnemore, John, E. and Joseph B. Franzini (2019). 9780072432022, McGraw Hill, Inc.. ISBN 9780072432022 and incompressible. An ideal fluid is a fluid in which there is no friction, it is inviscid (zero viscosity). The equation for all points of a system filled with a constant-density fluid is NCEES (2019). 9781932613599, NCEES. ISBN 9781932613599 \frac{p}{\gamma} + \frac{v^2}{2g} + z = \mathrm{const}, p = pressure of the fluid, {\gamma} = ρg = density · acceleration of gravity = specific weight of the fluid, v = velocity of the fluid, g = acceleration of gravity, z = elevation, \frac{p}{\gamma} = pressure head, \frac{v^2}{2g} = velocity head. Hydraulic head Turgor pressure Pythagorean cup Explosion or deflagration pressures Explosion or deflagration pressures are the result of the ignition of explosive , mists, dust/air suspensions, in unconfined and confined spaces. While pressures are, in general, positive, there are several situations in which negative pressures may be encountered: When dealing in relative (gauge) pressures. For instance, an absolute pressure of 80 kPa may be described as a gauge pressure of −21 kPa (i.e., 21 kPa below an atmospheric pressure of 101 kPa). When attractive intermolecular forces (e.g., van der Waals forces or ) between the particles of a fluid exceed repulsive forces due to thermal motion. These forces explain ascent of sap in tall plants. A negative pressure acts on water molecules at the top of any tree taller than 10 m, which is the pressure head of water that balances the atmospheric pressure. Intermolecular forces maintain cohesion of columns of sap that run continuously in xylem from the roots to the top leaves. The Casimir effect can create a small attractive force due to interactions with vacuum energy; this force is sometimes termed "vacuum pressure" (not to be confused with the negative gauge pressure of a vacuum). For non-isotropic stresses in rigid bodies, depending on how the orientation of a surface is chosen, the same distribution of forces may have a component of positive pressure along one surface normal, with a component of negative pressure acting along another surface normal. The stresses in an electromagnetic field are generally non-isotropic, with the pressure normal to one surface element (the normal stress) being negative, and positive for surface elements perpendicular to this. In the cosmological constant. Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill. Static pressure and stagnation pressure are related by: p_{0} = \frac{1}{2}\rho v^2 + p p_0 is the stagnation pressure v is the flow velocity p is the static pressure. The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. 
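Rearranging the stagnation-pressure relation gives the standard Pitot-tube velocity estimate, v = sqrt(2(p0 − p)/ρ). A brief Python sketch with invented readings illustrates the calculation; the density and pressure values are assumptions, not data from the text.
    from math import sqrt

    # Pitot-tube velocity from stagnation and static pressure: p0 = 0.5*rho*v^2 + p.
    rho_air = 1.225            # air density, kg/m^3 (typical sea-level value)
    p_static = 101_325.0       # static pressure, Pa (assumed reading)
    p_stagnation = 101_925.0   # stagnation pressure, Pa (assumed reading)

    v = sqrt(2.0 * (p_stagnation - p_static) / rho_air)
    print(v)   # ~31.3 m/s for a 600 Pa dynamic pressure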
Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures. Surface pressure and surface tension There is a two-dimensional analog of pressure – the lateral force per unit length applied on a line perpendicular to the force. Surface pressure is denoted by π: \pi = \frac{F}{l} and shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle's law, , at constant temperature. Surface tension is another example of surface pressure, but with a reversed sign, because "tension" is the opposite to "pressure". In an ideal gas, molecules have no volume and do not interact. According to the ideal gas law, pressure varies linearly with temperature and quantity, and inversely with volume: p = \frac{nRT}{V}, p is the absolute pressure of the gas, n is the amount of substance, T is the absolute temperature, V is the volume, R is the ideal gas constant. exhibit a more complex dependence on the variables of state.P. Atkins, J. de Paula Elements of Physical Chemistry, 4th Ed, W. H. Freeman, 2006. . Vapour pressure is the pressure of a vapour in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and have a tendency to evaporate into a gaseous form, and all have a tendency to condense back to their liquid or solid form. The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapour bubbles inside the bulk of the substance. liquid bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases. The vapor pressure that a single component in a mixture contributes to the total pressure in the system is called partial pressure. When a person swims under the water, water pressure is felt acting on the person's eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above the person and therefore greater pressure. The pressure a liquid exerts depends on its depth. Liquid pressure also depends on the density of the liquid. If someone was submerged in a liquid more dense than water, the pressure would be correspondingly greater. thus we can say that the depth, density and liquid pressure are directly proportionate. The pressure due to a liquid in liquid columns of constant density or at a depth within a substance is represented by the following formula: p = \rho gh, p is liquid pressure, g is gravity at the surface of overlaying material, ρ is density of liquid, h is height of liquid column or depth within a substance. Another way of saying the same formula is the following: p = \text{weight density} \times \text{depth}. This is derived from the definitions of pressure and weight density. Consider an area at the bottom of a vessel of liquid. The weight of the column of liquid directly above this area produces pressure. 
From the definition \text{weight density} = \frac{\text{weight}}{\text{volume}} we can express this weight of liquid as \text{weight} = \text{weight density} \times \text{volume}, where the volume of the column is simply the area multiplied by the depth. Then we have \text{pressure} = \frac{\text{force}}{\text{area}} = \frac{\text{weight}}{\text{area}} = \frac{\text{weight density} \times \text{volume}}{\text{area}}, \text{pressure} = \frac{\text{weight density} \times \text{(area} \times \text{depth)}}{\text{area}}. With the "area" in the numerator and the "area" in the denominator canceling each other out, we are left with \text{pressure} = \text{weight density} \times \text{depth}. Written with symbols, this is our original equation: p= \rho gh. The pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid. If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible – that is, their volume can hardly be changed by pressure (water volume decreases by only 50 millionths of its original volume for each atmospheric increase in pressure). Thus, except for small changes produced by temperature, the density of a particular liquid is practically the same at all depths. Atmospheric pressure pressing on the surface of a liquid must be taken into account when trying to discover the total pressure acting on a liquid. The total pressure of a liquid, then, is ρgh plus the pressure of the atmosphere. When this distinction is important, the term total pressure is used. Otherwise, discussions of liquid pressure refer to pressure without regard to the normally ever-present atmospheric pressure. It is important to recognize that the pressure does not depend on the amount of liquid present. Volume is not the important factor – depth is. The average water pressure acting against a dam depends on the average depth of the water and not on the volume of water held back. For example, a wide but shallow lake with a depth of exerts only half the average pressure that a small deep pond does (note that the total force applied to the longer dam will be greater, due to the greater total surface area for the pressure to act upon, but for a given section of each dam, the deep water will apply half the force of deep water). A person will feel the same pressure whether his/her head is dunked a metre beneath the surface of the water in a small pool or to the same depth in the middle of a large lake. If four vases contain different amounts of water but are all filled to equal depths, then a fish with its head dunked a few centimetres under the surface will be acted on by water pressure that is the same in any of the vases. If the fish swims a few centimetres deeper, the pressure on the fish will increase with depth and be the same no matter which vase the fish is in. If the fish swims to the bottom, the pressure will be greater, but it makes no difference what vase it is in. All vases are filled to equal depths, so the water pressure is the same at the bottom of each vase, regardless of its shape or volume. 
If water pressure at the bottom of a vase were greater than water pressure at the bottom of a neighboring vase, the greater pressure would force water sideways and then up the narrower vase to a higher level until the pressures at the bottom were equalized. Pressure is depth dependent, not volume dependent, so there is a reason that water seeks its own level. Restating this as energy equation, the energy per unit volume in an ideal, incompressible liquid is constant throughout its vessel. At the surface, gravitational potential energy is large but liquid pressure energy is low. At the bottom of the vessel, all the gravitational potential energy is converted to pressure energy. The sum of pressure energy and gravitational potential energy per unit volume is constant throughout the volume of the fluid and the two energy components change linearly with the depth.Streeter, V. L., Fluid Mechanics, Example 3.5, McGraw–Hill Inc. (1966), New York. Mathematically, it is described by Bernoulli's equation, where velocity head is zero and comparisons per unit volume in the vessel are \frac{p}{\gamma} + z = \mathrm{const}. Terms have the same meaning as in section Fluid pressure. Direction of liquid pressure An experimentally determined fact about liquid pressure is that it is exerted equally in all directions.Hewitt 251 (2006) If someone is submerged in water, no matter which way that person tilts his/her head, the person will feel the same amount of water pressure on his/her ears. Because a liquid can flow, this pressure isn't only downward. Pressure is seen acting sideways when water spurts sideways from a leak in the side of an upright can. Pressure also acts upward, as demonstrated when someone tries to push a beach ball beneath the surface of the water. The bottom of a boat is pushed upward by water pressure (buoyancy). When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure doesn't have a specific direction, force does. A submerged triangular block has water forced against each point from many directions, but components of the force that are not perpendicular to the surface cancel each other out, leaving only a net perpendicular point. This is why water spurting from a hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located. Then it curves downward due to gravity. If there are three holes in a bucket (top, bottom, and middle), then the force vectors perpendicular to the inner container surface will increase with increasing depth – that is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is \scriptstyle \sqrt{2gh}, where h is the depth below the free surface. This is the same speed the water (or anything else) would have if freely falling the same vertical distance h. P=p/\rho_0 is the kinematic pressure, where p is the pressure and \rho_0 constant mass density. The SI unit of P is m2/s2. Kinematic pressure is used in the same manner as kinematic viscosity \nu in order to compute Navier–Stokes equation without explicitly showing the density \rho_0. Navier–Stokes equation with kinematic quantities \frac{\partial u}{\partial t} + (u \nabla) u = - \nabla P + \nu \nabla^2 u. 
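The depth dependence of liquid pressure and the efflux speed quoted above can be checked with a few lines of code; the depths and the fresh-water density below are arbitrary example values.
    from math import sqrt

    g = 9.81            # gravitational acceleration, m/s^2
    rho_water = 1000.0  # density of fresh water, kg/m^3

    # Hydrostatic (liquid) pressure p = rho*g*h, ignoring atmospheric pressure.
    for depth_m in (1.0, 3.0, 10.0):
        print(depth_m, rho_water * g * depth_m)   # pressure grows linearly with depth

    # Speed of water leaving a hole a depth h below the free surface: v = sqrt(2*g*h),
    # the same speed it would have after freely falling through the height h.
    h = 3.0
    print(sqrt(2 * g * h))   # ~7.7 m/s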
CommonCrawl
Learning Conditional Information by Jeffrey Imaging on Stalnaker Conditionals
Mario Günther, ORCID: orcid.org/0000-0001-6208-448X
Journal of Philosophical Logic, volume 47, pages 851–876 (2018)
We propose a method of learning indicative conditional information. An agent learns conditional information by Jeffrey imaging on the minimally informative proposition expressed by a Stalnaker conditional. We show that the predictions of the proposed method align with the intuitions in Douven (Mind & Language, 27(3), 239–263, 2012)'s benchmark examples. Jeffrey imaging on Stalnaker conditionals can also capture the learning of uncertain conditional information, which we illustrate by generating predictions for the Judy Benjamin Problem.
Notes
Cf. [2,3,4,5]. [4, p. 213]. Cf. [14]. Note that Robert Stalnaker's theory of conditionals aims to account for both indicative and counterfactual conditionals. We set the complicated issue of this distinction aside in this paper. However, we want to emphasise that Douven's examples and the Judy Benjamin Problem only involve indicative conditionals. Here as elsewhere in the paper, the strict relation w′ <_w w″ is defined as w′ ≤_w w″ and not w″ ≤_w w′. For Stalnaker's presentation of his semantics see [15]. Cf. [11]. We assume here that there are only finitely many worlds. Note also that if α is possible, then there exists some w_α. We assume here that each world is distinguishable from any other world, i.e. for two arbitrary worlds, there is always a formula in \mathcal{L} such that the formula is true in one of the worlds, but false in the other. In other words, we consider no copies of worlds. Cf. [9]. In personal communication, Benjamin Eva and Stephan Hartmann mentioned that the idea behind Jeffrey imaging is already used in artificial intelligence research to model the retrieval of information. [13, p. 3] mentions the name 'Jeffrey imaging' without writing down a corresponding formula. [1, p. 262] says that [13] suggested "a new variant of standard imaging called retrieval by Jeffrey's logical imaging". However, the formalisation of Jeffrey's idea on p. 263 differs from mine in at least two respects. (i) An additional truth evaluation function occurs in the formalisation for determining whether a formula (i.e. 'query') is true at a world (i.e. 'term'). (ii) Instead of a parameter k locally governing the probability kinematics of each possible world, Crestani simply uses a global constraint on the posterior probability distribution.
Notice that the assumption of no additional information literally excludes that there is an epistemic reason, i. e. some belief apart from [α > γ]min, to change the probability of the antecedent. Douven [2] argues more precisely that the probability of the antecedent should only change if the antecedent is explanatorily relevant for the consequent. It is noteworthy that if the probability of the antecedent should intuitively change in one of Douven's examples, the explanatory relations always involve beliefs in additional propositions (apart from the conditional) given by the example's context description. Cf. [2, p. 8]. Note that the Sundowners Example seems to be somewhat artificial. It seems plausible that upon hearing her sister's conditional, Sarah would promptly ask "why?" in order to obtain some more contextual information, before setting her probability for sundowners and rain to 0. After all, she "thinks that they can always enjoy the view from inside". In [7], we extend the proposed method to the learning of causal information, which allows us to define an inference to the best explanation scheme, as Douven envisioned for the Ski Trip Example. Cf. [17, pp. 376–379]. The Appendix contains a model of [5]'s Jeweller Example. There, we show that our method also applies to examples where uncertain factual information is learned. This paper and [7] overlap insofar the latter contains parts of the proposed method of learning conditional information as a constituent of the adapted method. In [7], only the adapted method for learning causal information is applied to Douven's examples and the Judy Benjamin Problem; the proofs for Theorem 2 are not included. Crestani, F. (1998). Logical imaging and probabilistic information retrieval. In Crestani, F., Lalmas, M., van Rijsbergen, C.J. (Eds.) Information Retrieval: Uncertainty and Logics: Advanced Models for the Representation and Retrieval of Information (pp. 247–279). Boston: Springer. Douven, I. (2012). Learning conditional information. Mind & Language, 27(3), 239–263. Douven, I., & Dietz, R. (2011). A puzzle about stalnaker's hypothesis. Topoi, 30(1), 31–37. Douven, I., & Pfeifer, N. (2014). Formal epistemology and the new paradigm psychology of reasoning. Review of Philosophy and Psychology, 5, 199–221. Douven, I., & Romeijn, J.-W. (2011). A new resolution of the judy benjamin problem. Mind, 120(479), 637–670. Gärdenfors, P. (1988). Knowledge in flux. Cambridge: MIT Press. Günther, M. (2017). Learning conditional and causal information by jeffrey imaging on stalnaker conditionals. Organon F, 24(4), 456–486. Hartmann, S., & Rad, S.R. (2017). Learning indicative conditionals. Unpublished manuscript, 1–28. Jeffrey, R.C. (1965). The logic of decision. New York: Mc Graw-Hill. Lewis, D.K. (1973). Causation. Journal of Philosophy, 70(17), 556–567. Lewis, D.K. (1976). Probabilities of conditionals and conditional probabilities. The Philosophical Review, 85(3), 297–315. Rott, H. (2000). Two dogmas of belief revision. Journal of Philosophy, 97, 503–522. Sebastiani, F. (1998). Information retrieval, imaging and probabilistic logic. Computers and Artificial Intelligence, 17(1), 1–16. Stalnaker, R.C. (1975). A theory of conditionals. In Sosa, E. (Ed.) Causation and Conditionals (pp. 165–179). OUP. Stalnaker, R.C., & Thomason, R.H. (1970). A semantic analysis of conditional logic. Theoria, 36(1), 23–42. Van Benthem, J., & Smets, S. (2015). Dynamic logics of belief change. In Van Ditmarsch, H., Halpern, J. Y., Van der Hoek, W., Kooi, B. (Eds.) 
Handbook of Logics for Knowledge and Belief, chapter 7 (pp. 299–368): College Publications. van Fraassen, B.C. (1981). A problem for relative information minimizers in probability kinematics. The British Journal for the Philosophy of Science, 32(4), 375–379. Thanks to Hannes Leitgeb, Stephan Hartmann, Igor Douven, and Hans Rott for helpful discussions. Special thanks go to an anonymous referee for very constructive comments. I am grateful that I had the opportunity to present parts of this paper and obtain feedback at the Munich Centre for Mathematical Philosophy (LMU Munich), at the Inaugural Conference of the East European Network for Philosophy of Science (New Bulgarian University), at the International Rationality Summer Institute 2016 (Justus Liebig University), at the Centre for Advanced Studies Workshop on "Learning Conditionals" (LMU Munich), at the University of Bayreuth and the University of British Columbia. This research is supported by the Graduate School of Systemic Neurosciences. Munich Center for Mathematical Philosophy, Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität, München, Germany Mario Günther Correspondence to Mario Günther. A Possible Worlds Model of the Jeweller Example Following the presentation in [5], we consider the Jeweller Example. The Jeweller Example ([5, p. 654]) A jeweller has been shot in his store and robbed of a golden watch. However, it is not clear atthis point what the relation between these two events is; perhaps someone shot the jewellerand then someone else saw an opportunity to steal the watch. Kate thinks there is somechance that Henry is the robber (R). On the other hand, she strongly doubts that he iscapable of shooting someone, and thus, that he is the shooter (S). Now the inspector, afterhearing the testimonies of several witnesses, tells Kate: $$ \text{If Henry robbed the jeweller, then he also shot him.} $$ As a result, Kate becomes more confident that Henry is not the robber, whilst her probability forHenry having shot the jeweller does not change. We model Kate's belief state as the Stalnaker model \(\mathcal {M}_{St} = \langle W, R, \le , \le ^{\prime } V \rangle \) depicted in Fig. 8. W contains four elements covering the possible events of R,¬R,S,¬S, where R stands for "Henry is the robber", and S for "Henry has shot the jeweller". The example suggests that 0 < P(R) < 1 and P(S) = 𝜖 for a small 𝜖, and thus P(¬S) = 1 − 𝜖. The prescribed intuitions are that P∗(R) < P(R)and P∗(S) = P(S). We know about Kate's degrees of belief before receiving the conditional information that 0 < P(w1) + P(w2) < 1 and P(w1) + P(w3) = 𝜖, as well as P(w2) + P(w4) = 1 − 𝜖. Note that Kate is 'almost sure' that ¬S, and thus we may treat ¬S as 'almost factual' information. A Stalnaker model for Kate's belief state in the Jeweller Example. The blue arrow indicates the unique w((R>S)∧¬S)-world under ≤. The red arrows indicate that each world is its most similar ¬((R > S) ∧¬S)-world under ≤′. The teal arrows represent the transfer of (1 − 𝜖) ⋅ P(w), whilst the violet arrows represent the transfer of 𝜖 ⋅ P(w) Kate receives certain conditional information. She learns the minimally informative proposition [R > S] = {w1, w3, w4} such that P(R > S) = PR(S) = 1. By the law of total probability, P(R > ¬S) = PR(¬S) = 0. Taking her uncertain but almost factual information into account, Kate learns in total the minimally informative proposition [(R > S) ∧¬S], which is identical to {w4}. By P(R > S) = 1, P((R > S) ∧¬S) = P(¬S) = 1 − 𝜖. 
Note the tension expressed in P((R > S) ∧¬S) = 1 − 𝜖. It basically says that S is almost surely not the case and, under the supposition of R, we exclude the possibility of ¬S. Intuitively, the thought expressed by this statement should cast doubt as to whether R is the case. By ¬((R > S) ∧¬S) ≡ (R > ¬S) ∨ S, we also know that P(R > ¬S) ∨ S) = 𝜖. Note that the proposition [(R > S) ∧¬S] = {w4}(interpreted as minimally informative) specifies a similarity order ≤ such that w(R>S)∧¬S = w4 for all w. In contrast, the proposition [(R > ¬S) ∨ S] is minimally informative in a strong sense, since it does not exclude any world w. Hence, the 'maximally inclusive' proposition [(R > ¬S) ∨ S] = {w1, w2, w3, w4} specifies a similarity order ≤′≠ ≤according to which w(R>¬S)∨S = w for each w. We apply now Jeffrey imaging to the Jeweller Example, where k = 1 − 𝜖. $$\begin{array}{@{}rcl@{}} P^{(R > S) \land \neg S}_{1 - \epsilon}(w^{\prime}) = P^{*}(w^{\prime}) &=& \sum\limits_{w} \left( P(w) \cdot \left\{ \begin{array}{ll} 1 - \epsilon & \text{if \(w_{(R > S) \land \neg S} = w^{\prime}\)} \\ 0 & \text{otherwise} \end{array} \right\} \right.\\ &&\left.+P(w) \cdot \left\{ \begin{array}{ll} \epsilon & \text{if \(w_{(R > \neg S) \lor S} = w^{\prime}\)} \\ 0 & \text{otherwise} \end{array} \right\}\right) \end{array} $$ We obtain the following probability distribution after learning: $$\begin{array}{@{}rcl@{}} P^{*}_{1 - \epsilon}(w_{1}) \!&=&\! P^{*}_{1 - \epsilon}(R \land S) = \epsilon \cdot P(w_{1})\quad\quad P^{*}_{1 - \epsilon}(w_{2}) \,=\, P^{*}_{1 - \epsilon}(R \land \neg S) \,=\, \epsilon \cdot P(w_{2}) \\ P^{*}_{1 - \epsilon}(w_{3}) \!&=&\! P^{*}_{1 - \epsilon}(\neg R \land S) = \epsilon \cdot P(w_{3})\quad P^{*}_{1 - \epsilon}(w_{4}) \,=\, P^{*}_{1 - \epsilon}(\neg R \land \neg S)= (1 - \epsilon) \\ && \cdot (P(w_{1}) \,+\, P(w_{2}) + P(w_{3}) \\ && +P(w_{4})) + \epsilon \cdot P(w_{4})\\ \end{array} $$ The results almost comply with the prescribed intuitions. The intuition concerning the degree of belief in R is met: P∗(R) < P(R), since \(P^{*}_{1 - \epsilon }(w_{1}) + P^{*}_{1 - \epsilon }(w_{2}) < P(w_{1}) + P(w_{2})\). The intuition concerning the degree of belief in S is 'almost' met: \(P^{*}_{1 - \epsilon }(S) \approx P(S)\), for P(w1) + P(w3) = 𝜖 and \(P^{*}_{1 - \epsilon }(w_{1}) + P^{*}_{1 - \epsilon }(w_{3}) = P(w_{1}) \cdot \epsilon + P(w_{3}) \cdot \epsilon \approx \epsilon \). In words, the method gives us the result that Kate is now pretty sure that Henry is neither the shooter nor the robber. Günther, M. Learning Conditional Information by Jeffrey Imaging on Stalnaker Conditionals. J Philos Logic 47, 851–876 (2018). https://doi.org/10.1007/s10992-017-9452-z Issue Date: October 2018 Learning conditional information Stalnaker conditional Douven's examples Judy Benjamin problem Not logged in - 54.85.57.0
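The Jeweller Example update can also be reproduced with a few lines of Python. The sketch below hard-codes the similarity structure described in the appendix (every world's closest [(R > S) ∧ ¬S]-world is w4, and every world is its own closest world for the complementary proposition); the prior probabilities and the value of ε are illustrative choices, not values prescribed by the paper.
    # Jeffrey imaging on the Jeweller Example: transfer a share k = 1 - eps of each world's
    # probability to its closest [(R > S) & ~S]-world (here always w4) and keep the
    # remaining share at the world itself (its closest world for the complement).
    eps = 0.05                                                  # illustrative small epsilon
    prior = {"w1": 0.02, "w2": 0.48, "w3": 0.03, "w4": 0.47}    # P(S) = eps, 0 < P(R) < 1 (made up)

    closest_A = {w: "w4" for w in prior}    # A = [(R > S) & ~S] = {w4}
    closest_notA = {w: w for w in prior}    # complement: each world maps to itself
    k = 1 - eps                             # weight on the learned proposition A

    posterior = {w: 0.0 for w in prior}
    for w, p in prior.items():
        posterior[closest_A[w]] += k * p
        posterior[closest_notA[w]] += (1 - k) * p

    P_R = posterior["w1"] + posterior["w2"]   # R is true in w1, w2
    P_S = posterior["w1"] + posterior["w3"]   # S is true in w1, w3
    print(P_R, P_S)   # P*(R) = 0.025 (down from 0.5); P*(S) = 0.0025 (still negligible)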
CommonCrawl
Skip to main content Skip to sections Evolutionary Genomics pp 309-330 | Cite as Bayesian Molecular Clock Dating Using Genome-Scale Datasets Mario dos Reis Ziheng Yang First Online: 06 July 2019 Bayesian methods for molecular clock dating of species divergences have been greatly developed during the past decade. Advantages of the methods include the use of relaxed-clock models to describe evolutionary rate variation in the branches of a phylogenetic tree and the use of flexible fossil calibration densities to describe the uncertainty in node ages. The advent of next-generation sequencing technologies has led to a flood of genome-scale datasets for organisms belonging to all domains in the tree of life. Thus, a new era has begun where dating the tree of life using genome-scale data is now within reach. In this protocol, we explain how to use the computer program MCMCTree to perform Bayesian inference of divergence times using genome-scale datasets. We use a ten-species primate phylogeny, with a molecular alignment of over three million base pairs, as an exemplar on how to carry out the analysis. We pay particular attention to how to set up the analysis and the priors and how to diagnose the MCMC algorithm used to obtain the posterior estimates of divergence times and evolutionary rates. Molecular clock Bayesian analysis MCMC Fossil Phylogeny Primates Genome Download protocol PDF The molecular clock hypothesis, which states that the rate of molecular evolution is approximately constant with time, provides a powerful way to estimate the times of divergence of species in a phylogeny. Since its proposal over 50 years ago [1], the molecular clock hypothesis has been used countless times to calibrate molecular phylogenies to geological time, with the ultimate aim of dating the tree of life [2, 3]. Several statistical inference methodologies have been developed for molecular clock dating analyses; however, during the past decade, the Bayesian method has emerged as the method of choice [4, 5], and several Bayesian inference software packages now exist to carry out this type of analysis [6, 7, 8, 9, 10]. In this protocol, we will explain how to use the computer program MCMCTree to estimate times of species divergences using genome-scale datasets within the Bayesian inference framework. Bayesian inference is well suited for divergence time estimation because it allows the natural integration of information from the fossil record (in the form of prior statistical distributions describing the ages of nodes in a phylogeny) with information from molecular sequences to estimate node ages, or geological times of divergence, of a species phylogeny [6, 11]. Another advantage of the Bayesian clock dating method is that relaxed-clock models, which allow for violations of the molecular clock, can be easily implemented as the prior on the evolutionary rates for the branches in the phylogeny [6]. MCMCTree allows analyses to be carried out using two popular relaxed-clock models (the autocorrelated and independent log-normally distributed rates models [12, 13]), as well as under the strict molecular clock. Furthermore, MCMCTree allows the user to build flexible fossil calibrations based on various statistical distributions (such as the uniform, truncated-Cauchy, and skew-t, and skew-normal distributions [12, 14, 15]). 
But perhaps the main advantage of MCMCTree is the implementation of an approximate algorithm to calculate the likelihood [6, 16], which allows the computer analysis of genome-scale datasets to be completed in reasonable amounts of time. The disadvantage of the algorithm is that it only works on fixed tree topologies. Several software packages that perform co-estimation of times and tree topology, but which do not use the approximation, are available [8, 9, 17, 18]. In this protocol, we focus on how to carry out a clock dating analysis with MCMCTree, paying particular attention to diagnosing the MCMC algorithm (the workhorse algorithm within the Bayesian method). Theoretical details of the Bayesian clock dating methods implemented in the program MCMCTree are described in [12, 13, 14, 15, 16, 19]. For general introductions to Bayesian statistics and Bayesian molecular clock dating, the reader may consult [20, 21]. 2 Software and Data Files To run the protocol, you will need the MCMCTree and BASEML programs, which are part of the PAML software package for phylogenetic analysis [22]. The source code and compiled versions of the code are freely available from bit.ly/ziheng-paml. All the data files necessary to run the protocol can be obtained from github.com/mariodosreis/divtime. Please create a directory called divtime in your computer and download all the data files from the GitHub repository. This protocol was tested with PAML version 4.9e. You are assumed to have basic knowledge of the command line in Unix or Windows (also known as command prompt, shell, or terminal). Simple tutorials for users of Windows, Mac OS, and Linux are posted at bit.ly/ziheng-software. Install MCMCTree and BASEML in your computer system, and make sure you have the mcmctree and baseml executables in your system's path (see bit.ly/ziheng-paml for details on how to do this). Finally, it is helpful (but not indispensable) to have knowledge of the R statistical environment (www.r-project.org). R is quite useful to analyze the output of the program, perform convergence diagnostics, and create nice-looking plots. File R/analysis.R contains some examples for this tutorial. In this protocol, we will estimate the divergence times of nine primates and one scandentian (an out-group), using a very long alignment (over three million nucleotides long). This dataset was chosen because it can be analyzed very quickly with MCMCTree and it is thus suitable to illustrate the method. We also provide a dataset of 330 species (276 primates and 4 out-groups) with a shorter alignment, to illustrate time estimation in a taxon-rich dataset (see Sect. 5.5 for details). 2.1 Tree and Fossil Calibrations The phylogenetic tree of the ten species is shown in Fig. 1. The tree encompasses members of all the main primate lineages. The ten species were chosen because they have had their complete genomes sequenced. They are a subset of the 36 mammal species analyzed in [23]. File data/10s.tree contains the tree with fossil calibrations in Newick format, which is the format required by MCMCTree. The eight fossil calibrations are shown in Table 1. The calibrations are the same used to estimate primate divergence times in [24]. We discuss fossil calibrations in detail in the "Sampling from the Prior" section. The time unit in the analysis is 100 million years (My). Thus, the calibration B(0.075, 0.10) means the node age is constrained to be between 7.5 and 10 million years ago (Ma). Open image in new window The tree of ten species. 
Nodes with fossil calibrations are indicated with black dots (see Table 1 for calibration densities). Internal nodes are numbered from 11 to 19 according to the nomenclature used by MCMCTree.
Table 1. List of fossil calibrations used in this tutorial. Node numbers are as in Fig. 1, and the calibrations are from the primate analysis in [24].
Crown group: MCMCTree calibration
Chimp-human: B(0.075, 0.10, 0.01, 0.20)
Gorilla-human: B(0.10, 0.132, 0.01, 0.20)
Hominidae:
Catarrhini: B(0.25, 0.29, 0.01, 0.10)
Anthropoidea: ST(0.4754, 0.0632, 0.98, 22.85)
Strepsirrhini: S2N(0.698, 0.65, 0.0365, −3400, 0.650, 0.138, 11409)
Euarchonta: G(36, 36.9)
B(a, b, pL, pU) means the calibration is a uniform distribution between a and b, with probabilities pL and pU that the true node age is outside the calibration bounds. ST(location, scale, shape, df) means the calibration is a skew-t distribution. S2N(p, location1, scale1, shape1, location2, scale2, shape2) means the calibration is a p:1 − p mixture of two skew-normal distributions. G(α, β) means the calibration is a gamma distribution with shape α and rate β. See MCMCTree's manual for the full details on fossil calibration formats.
2.2 Molecular Sequence Data
The molecular data are an alignment of 5614 protein-coding genes from the ten species. All ambiguous codon sites were removed, and thus the alignment contains no missing data. The alignment was separated into two partitions: a partition consisting of all the first and second codon positions (2,253,316 nucleotides long) and a partition of third codon positions (1,126,658 nucleotides long). The alignment is a subset of the larger 36-mammal-species alignment in [23]. See also ref. 24. File 10s.phys in the data directory contains the alignment. The alignment is compressed into site patterns (a site pattern is a unique combination of character states in an alignment column) to save disk space.
3 Tutorial
We seek to obtain the posterior distribution (i.e., the estimates) of the divergence times (t) and the molecular evolutionary rates (r, μ, σ2) for the species in the phylogeny of Fig. 1. Here t = (t11, …, t19) are the nine species divergence times; r = (r1,12, …, r1,19, r2,12, …, r2,19) are the 2 × 8 = 16 molecular rates, one per branch and partition (i.e., there are eight branches in the tree and two partitions in the molecular data); and μ = (μ1, μ2) and σ2 = (\( {\sigma}_1^2,{\sigma}_2^2 \)) are the mean rates and the log-variance of the rates, for each partition. The posterior distribution is $$ f\left(\mathbf{t},\mathbf{r},\mu, {\sigma}^2|D\right)\propto f\left(\mathbf{t}\right)f\left(\mathbf{r}|\mathbf{t},\mu, {\sigma}^2\right)f\left(\mu \right)f\left({\sigma}^2\right)f\left(D|\mathbf{r},\mathbf{t}\right), $$ where f(t) is the prior on times; f(r|t, μ, σ2)f(μ)f(σ2) is the prior on the branch rates, mean rates, and variances of the log-rates; and f(D|t, r) is the molecular sequence likelihood. The prior on the times is constructed by combining the birth-death process with the fossil calibration densities (see ref. 13 for details). The prior on the rates is constructed under a model of rate evolution, assuming, in this tutorial, that the branch rates are independent draws from a log-normal distribution with mean μi and log-variance \( {\sigma}_i^2 \) [13]. Bayesian phylogenetic inference using MCMC is computationally expensive because of the repeated calculation of the likelihood on a sequence alignment. The time it takes to compute the likelihood is proportional to the number of site patterns in the alignment.
Thus, longer alignments take longer to compute. For genome-scale alignments, the computation time is prohibitive. MCMCTree implements an approximation to the likelihood that speeds computation time substantially, making analysis of genome-scale data feasible. The approximate likelihood method for clock dating was proposed by Thorne et al. [6] and extended within MCMCTree [16]. The method relies on approximating the log-likelihood surface on the branch lengths by its Taylor expansion. Write ℓ(bj) = log f(D| bj) for the log-likelihood as a function of the branch lengths bj = (bj,i = rj,iti) for the alignment partition j. The Taylor approximation is $$ \mathrm{\ell}\left({\mathbf{b}}_j\right)\approx \mathrm{\ell}\left({\widehat{\mathbf{b}}}_j\right)+{\left({\mathbf{b}}_j-{\widehat{\mathbf{b}}}_j\right)}^{\mathrm{T}}{\mathbf{g}}_j+\frac{1}{2}{\left({\mathbf{b}}_j-{\widehat{\mathbf{b}}}_j\right)}^{\mathrm{T}}{\mathbf{H}}_j\left({\mathbf{b}}_j-{\widehat{\mathbf{b}}}_j\right), $$ where \( {\widehat{\mathbf{b}}}_j \) are the maximum likelihood estimates (MLEs) of the branch lengths and gj and Hj are the gradient (vector of first derivatives) and Hessian (matrix of second derivatives) of the log-likelihood surface evaluated at the MLEs for the partition. The approximation can be improved by applying transformations to the branch lengths (see ref. 16 for details). To use the approximation, one first fixes the topology of the phylogeny, and then estimates the branch lengths for each alignment partition on the fixed tree by maximum likelihood. The gradient and Hessian of the log-likelihood are obtained for each partition at the same time as the MLEs of the branch lengths. Note that parameters of the substitution model—such as the transition/transversion ratio, κ, in the HKY model or the α parameter in the discrete gamma model of rate variation among sites—are estimated at this step. Thus, different substitution models will generate different approximations, because they will have different MLEs for the branch lengths, gradient, and Hessian. Note that the time it takes to compute the approximate likelihood depends only on the number of species (which determines the size of b and H) and not on the alignment length, that is, once g and H have been calculated, MCMC sampling on the approximation takes the same time regardless of the length of the original alignment. We will use the approximate likelihood method to speed up the computation of the likelihood on the large genome alignment. The general strategy for the analysis is as follows: Approximate likelihood calculation: First, we will calculate the gradient (g) and Hessian (H) matrix of the branch lengths on the unrooted tree. For this step, we will need to use the MCMCTree and BASEML programs (BASEML will carry out the actual computation of g and H). The substitution model is chosen at this step. MCMC sampling from the posterior: Once g and H have been calculated and we have decided on our priors, we can use MCMCTree to perform MCMC sampling from the posterior distribution of times and rates. We will then look at the summaries of the posterior (such as posterior mean times and rates and 95% credibility intervals). Convergence diagnostics: The MCMC algorithm is a stochastic algorithm that visits regions of the parameter space in proportion to the posterior distribution. Due to its very nature, it is possible that sometimes the MCMC chain is terminated before it has had a chance to explore the parameter space appropriately. 
The way to guard against this is to run the analysis two or more times and compare the summary statistics from the two (or more) MCMC chains. If the results from different runs are very similar, then convergence to the posterior distribution can be reasonably assumed. MCMC sampling from the prior: Finally, we will sample directly from the prior of times and rates. This is particularly important in Bayesian molecular clock dating because in most cases the prior on times may look quite different from the fossil calibration densities specified by the user. Thus, sampling from the prior allows the user to check the soundness of the prior actually used. Note that in this protocol we assume the user has chosen a suitable sequence alignment and a phylogenetic tree to carry out the analysis. For genome-scale alignments, it is important that the genes chosen among the various species are orthologous and that the alignment has been checked for accuracy. Several chapters in this volume can guide the user in this purpose. 3.2 Calculation of the Gradient and Hessian to Approximate the Likelihood Go into the gH directory, and open the mcmctree-outBV.ctl file using your favorite text editor. This control file contains the set of parameters necessary for MCMCTree to carry out the calculations of the gradient and Hessian needed for the approximate likelihood method. Figure 2 shows the contents of the mcmctree-outBV.ctl file. The gH/mcmctree-outBV.ctl file, with appropriate options to set up calculation of the gradient and Hessian matrix for the approximate likelihood method The first two items, seqfile and treefile, indicate the alignment and tree files to be used. The third item, ndata, indicates the number of partitions in the sequence file, in this case, two partitions. The fifth item, usedata, is very important, as it tells MCMCTree the type of analysis being carried out. The options are 0, to sample from the prior; 1, to sample from the posterior using exact likelihood; 2, to sample from the posterior using approximate likelihood; and 3, to prepare the data for calculation of g and H. The last is the option we will be using in this step. The next three items, model, alpha, and ncatG, set up the nucleotide substitution model, in this case the HKY + Gamma model [25]. Finally, the cleandata option tells MCMCTree whether to remove ambiguous data. Our alignment has no ambiguous sites, so this option has no effect in this case. Using a terminal, go to the gH directory and type $ mcmctree mcmctree-outBV.ctl (Don't type in the $ as this represents the command prompt!) This will start the MCMCTree program. MCMCTree will prepare several tmp????.* files and will then call the BASEML program to estimate g and H. For this step to work correctly, the baseml executable must be in your system's path. Once BASEML and MCMCTree have finished, you will notice a file called out.BV has been created. Figure 3 shows part of the contents of this file. The first line indicates the number of species (10), followed by the tree with branch lengths estimated under maximum likelihood for the first partition (first and second codon sites). Next, we have the MLEs of the 17 branch lengths (these are the same as in the tree but printed in a different order). Then we have the gradient, g1, the vector of 17 first derivatives of the likelihood at the branch length MLEs for partition 1. For small datasets, the gradient is usually zero. 
For large datasets, the likelihood surface is too sharp (i.e., bends downward sharply and it is very narrow at the MLEs), and the gradient is not zero for numerical issues. But this is fine. Next, we have the 17 × 17 Hessian matrix, H1, the matrix of second derivatives of the likelihood at the branch length MLEs for partition 1. If you scroll down the file, you will find the second block, with the tree, branch length MLEs, g2, and H2 for partition 2 (third codon positions). The gH/out.BV file produced by BASEML. The first line has the number of species (10), the second line has the tree topology with MLEs of branch lengths, and the MLEs of branch lengths are given again in the third line. The fourth line contains the gradient, g, followed by the Hessian, H, for partition 1. This file will be renamed in.BV and placed into the mcmc/ directory to carry out MCMC sampling using the approximate likelihood method 3.3 Calculation of the Posterior of Times and Rates 3.3.1 Control File and Priors Now that we have calculated g and H, we can proceed to MCMC sampling of the posterior distribution using the approximate likelihood method. Copy the gH/out.BV file into the mcmc directory, and rename it as in.BV. Now go into the mcmc directory. There you will find mcmctree.ctl, the necessary MCMCTree control file to carry out MCMC sampling from the posterior. Figure 4 shows the contents of the file. The first item, seed, is the seed for the random number generator used by the MCMC algorithm. Here it is set to −1, which tells MCMCTree to use the system's clock time as the seed. This is useful, as running the program multiple times will generate different outputs. The mcmc/mcmctree.ctl file necessary to sample from the posterior distribution using the approximate likelihood method The mcmcfile option tells MCMCTree where to save the parameters sampled (divergence times and rates) during the MCMC iterations. Here we will save them to a file named mcmc.txt. Once the MCMC sampling has completed, MCMCTree will read the sample from the mcmc.txt file and generate a summary of the MCMC output. This summary will be saved to a file called out.txt (outfile option). The option usedata is set to 2 here, which tells MCMCTree to calculate the likelihood approximately by using the g and H values saved in the in.BV file. Option clock sets the clock model. Here we use clock = 2, which assumes rates are identical, independent realizations from a log-normal distribution [7, 26]. Option RootAge sets the calibration on the root node of the phylogeny, if none are present in the tree file. In our case, we already have a calibration on the root, so this option has no effect. The next three options, model, alpha, and ncatG, have no effect as the substitution model was chosen during estimation of g and H. The following options are very important as they determine the prior used in the analysis. BDparams sets the prior on node ages for those nodes without fossil calibrations by using the birth-death process [12]. Here we use 1 1 0, which means node ages are uniformly distributed between present time and the age of the root. Options kappa_gamma and alpha_gamma set gamma priors for the κ and α parameters in the substitution model. These have no effect as we are using the likelihood approximation. Options rgene_gamma and sigma2_gamma set the gamma-Dirichlet prior on the mean substitution rate for partitions and for the rate variance parameter, σ2 [19]. 
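To get an intuition for what these gamma hyperpriors imply, their densities can be plotted in R. The following is a minimal sketch (not part of the tutorial's R/analysis.R script), using the shape and rate values set in the control file of Fig. 4 and discussed below:

# Gamma hyperpriors set by rgene_gamma (prior on the mean rate) and
# sigma2_gamma (prior on sigma^2); dgamma(x, shape, rate) has mean shape/rate.
par(mfrow = c(1, 2))
curve(dgamma(x, shape = 2, rate = 40), from = 0, to = 0.3,
      xlab = "mean rate (substitutions per site per 100 My)",
      ylab = "density", main = "Gamma(2, 40), mean 0.05")
curve(dgamma(x, shape = 1, rate = 10), from = 0, to = 1,
      xlab = expression(sigma^2),
      ylab = "density", main = "Gamma(1, 10), mean 0.1")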
The prior on the mean rate is Gamma(2, 40), which has mean 0.05 substitutions per site per time unit of 100 My. A symmetric Dirichlet distribution with concentration parameter equal to 1 is used to spread the rate prior across partitions (thus rgene_gamma = 2 40 1). See ref. 19 for details. The prior on σ2 is Gamma(1, 10), which has mean 0.1. A Dirichlet is also used to spread the prior across partitions. The final block of options, print, burnin, sampfreq, and nsample, control the length and sampling frequency of the MCMC. We will discard the first 20,000 iterations as the burn-in and then print parameter values to the mcmc.txt file every 100 iterations, to a maximum of 20,000 + 1 samples. Thus, our MCMC chain will run for a total of 20,000 + 20,000 × 100 = 2,020,000 iterations. 3.3.2 Running and Summarizing the MCMC Go into the mcmc directory and type $ mcmctree mcmctree.ctl This will start the MCMC sampling. First, MCMCTree will iterate the chain for a set number of iterations, known as the burn-in. During this period, the program will fine-tune the step sizes for proposing parameters in the chain. Once the burn-in is finished, sampling from the posterior will start. Figure 5 shows a screenshot of MCMCTree in action. The leftmost column indicates the progress of the sampling as a percentage of the total (5%, 10% of total iterations, and so on). The next numbers represent the acceptance proportions, which are close to 30% (this is the result of fine-tuning by the program). After the five acceptance proportions, the program prints a few parameters to the screen and, in the last columns, the log-likelihood and the time taken. Screenshot of MCMCTree's output during MCMC sampling of the posterior. Different runs of the program will give slightly different output values The above analysis takes about 2 min and 30 s to complete on a 2.2 GHz Intel Core i7 Processor. Once the analysis has finished, you will see that MCMCTree has created several new files in the mcmc directory. Rename mcmc.txt to mcmc1.txt and out.txt to out1.txt. Now, on the command line, type the same command again ($ mcmctree mcmctree.ctl). This will run the analysis a second time. The results should be slightly different to the previous run due to the stochastic nature of the algorithm. Once the second run has finished, rename mcmc.txt to mcmc2.txt and out.txt to out2.txt. If you want to conduct two runs simultaneously, you can create two directories (say r1/ and r2/) and copy the necessary files into them. Then open two terminal windows to start the runs from within each directory. Using your favorite text editor, open file out1.txt, which contains the summary of the first MCMC run. Scroll to the end of the file (see screenshot, Fig. 6). You will see the time used by the program (in my case 2:32), the posterior means of the parameters sampled, and three phylogenetic trees in Newick format. The first tree simply has internal nodes labelled with a number. This is useful to compare the tree with the posterior means of times at the end of the file. The second tree is the tree with branch lengths in absolute time units. The third tree is like the second but also includes the 95% credibility intervals (CIs) of the node ages. At the bottom of the file, you have a table with all the divergence times (from t_n11 to t_n19), the mean substitution rates for the two partitions (mu1 and mu2), the rate variation coefficients (sigma2_1 and sigma2_2), and finally the log-likelihood (lnL). The table gives the posterior means, equal-tail CIs, and high-posterior-density CIs.
For example, the posterior age of the root (node 11, Fig. 1) is 116.8 Ma (95% CI, 92.4–144.2 Ma), while the posterior age of the divergence between human and chimp (node 19, Fig. 1) is 8.52 Ma (95% CI, 7.58–9.81 Ma). The end of the mcmc/out.txt file produced by MCMCTree at the end of the MCMC sampling of the posterior You will also notice that MCMCTree created a file called FigTree.tre. This contains the posterior tree in Nexus format, suitable for plotting in the program FigTree (tree.bio.ed.ac.uk/software/figtree/). Figure 7 shows the posterior tree plotted in FigTree, with the time unit set to 1 My. The dated primate phylogeny with error bars (representing 95% CIs of node ages), drawn with FigTree. The time unit is 1 My 3.4 Convergence Diagnostics of the MCMC Diagnosing convergence of the MCMC chains is extremely important. Several software tools have been written for this purpose. For example, the user-friendly Tracer program (beast.bio.ed.ac.uk/tracer) can be used to read in the mcmc1.txt and mcmc2.txt files and calculate several convergence statistics. Here we will use R to perform basic convergence tests (check out file R/analysis.R). The first step to assess convergence is to compare the posterior means among the different runs. You can visually inspect the posterior means reported in the out1.txt and out2.txt files (Fig. 8), although this may be cumbersome. Figure 8a shows a plot, made with R, of posterior times for run 1 vs. those from run 2. You can see that the points fall almost perfectly on the y = x line, indicating that both runs have converged to the same distribution (hopefully the posterior!). Convergence diagnostic plots of the MCMC drawn with R (see R/analysis.R) Another useful statistic to calculate is the effective sample size (ESS). This gives the user an idea about whether an MCMC chain has been run long enough. Tracer calculates ESS automatically for all parameters. Function coda::effectiveSize in R will do the same. Figure 9 shows the posterior mean, ESS, posterior variance, and standard error of posterior means calculated with R for run 1 of the MCMC. The larger the ESS, the better. As a rule of thumb, one should seek ESS larger than 1000, although this may not always be practical in phylogenetic analysis. Note in Fig. 9 that some estimates have very low ESSs, while others have substantially higher ESSs. For example, t_n11 has ESS = 76.1, while t_n19 has ESS = 1261. Running the analysis again and increasing the total number of iterations (e.g., by increasing sampfreq or nsample) will lead to higher ESS values for all parameters. Calculations of posterior mean, ESS, posterior variance, and standard error of the posterior mean in R (see R/analysis.R) Let v be the posterior variance of a parameter. The standard error of the posterior mean of the parameter is S.E. = √(v/ESS). This is why having a large ESS is important: a large ESS leads to a small S.E. and thus to better estimates of the posterior mean. For example, for t_n11, the posterior mean is 116.8 Ma, with standard error 1.53 My (Fig. 9). That is, we have estimated the mean accurately to within 2 × 1.53 My = 3.06 My. To reduce the S.E. by half, you need to increase the ESS four times. Note that independent MCMC runs can be combined into a single run. Thus, you may save time by running several MCMC chains in parallel for computationally expensive analyses, although care must be taken to ensure each chain has run long enough to exit the burn-in phase and explore the posterior appropriately.
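The quantities reported in Fig. 9 can be reproduced with a few lines of R. This is a minimal sketch, assuming the coda package is installed and that mcmc1.txt was written by MCMCTree with a header row and the iteration number in its first column:

# Posterior mean, ESS, posterior variance, and S.E. of the posterior mean.
library(coda)
mcmc1 <- read.table("mcmc1.txt", header = TRUE)
pars  <- as.matrix(mcmc1[, -1])            # drop the iteration column
ess   <- effectiveSize(mcmc(pars))         # effective sample size per parameter
post.mean <- colMeans(pars)
post.var  <- apply(pars, 2, var)
se.mean   <- sqrt(post.var / ess)          # S.E. = sqrt(v / ESS)
round(cbind(post.mean, ess, post.var, se.mean), 4)

Parameters with a small ESS (such as t_n11 above) will show a correspondingly large standard error of the posterior mean.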
Trace plots and histograms are useful to spot problems and check convergence. Figure 8b, c shows trace plots for t_n19 and t_n11, respectively. The trace of t_n19, which has high ESS, looks like a "hairy caterpillar." Compare it to the trace of t_n11, which has low ESS. Visual inspection of a trace plot usually gives a sense of whether the parameter has an adequate ESS without calculating it. Note that both traces are trendless, that is, the traces oscillate around a mean value (the posterior mean). If you see a persistent trend in the trace (such as an increase or a decrease), that most likely means the MCMC did not converge to the posterior and needs a longer burn-in period. Figure 8d shows the smoothed histograms (calculated using density in R) for t_n11 for the two runs. Notice that the two histograms are slightly different. As the ESS becomes larger, histograms for different runs will converge in shape until becoming indistinguishable. If you see large discrepancies between histograms, that may indicate serious problems with the MCMC, such as lack of convergence due to short burn-in or the MCMC getting stuck in different modes of a multimodal posterior. 3.5 MCMC Sampling from the Prior Note that fossil calibrations (such as those of Table 1) are represented as statistical distributions of node ages. MCMCTree uses these distributions to construct the prior on times. However, the resulting time prior used by the program may be substantially different from the original fossil calibrations, because the program applies a truncation so that daughter nodes are younger than their ancestors [14, 27]. Thus, it is advisable to calculate the time prior explicitly by running the MCMC with no data so that it can be examined and compared with the fossil calibrations and the posterior. Go to the prior directory and type $ mcmctree mcmctree-pr.ctl This will start the MCMC sampling from the prior. File mcmctree-pr.ctl is identical to mcmc/mcmctree.ctl except that option usedata has been set to 0. Sampling from the prior is much quicker because the likelihood does not need to be calculated. It takes about 1 min on the Intel Core i7 for MCMCTree to complete the analysis. Rename files mcmc.txt and out.txt to mcmc1.txt and out1.txt, and run the analysis again. Rename the new files as appropriate. Check for convergence by calculating the ESS and plotting the traces and histograms. Figure 10 shows the prior densities of node ages obtained by MCMC sampling (shown in gray) vs. the posterior densities (shown in black). Notice that for four nodes t_n19, t_n18, t_n17, and t_n16, the posterior times "agree" with the prior, that is, the posterior density is contained within the prior density. For nodes t_n15, t_n13, and t_n11, there is some conflict between the prior and posterior densities. However, for nodes t_n14 and t_n12, there is substantial conflict between the prior and the posterior. In both cases the molecular data (together with the clock model) suggest the node age is much older than that implied by the calibrations. This highlights the problems in construction of fossil calibrations. Prior (gray) and posterior (black) density plots of node ages plotted with R (see R/analysis.R) Each fossil calibration represents the paleontologist's best guess about the age of a node. For example, the calibration for the human-chimp ancestor is B(0.075, 0.10, 0.01, 0.20); thus, the calibration is a uniform distribution between 7.5 and 10 million years ago (Ma). 
The bounds of the calibration are soft, that is, there is a set probability that the bound is violated. In this case the probabilities are 1% for the minimum bound and 20% for the maximum bound. The bound probabilities are asymmetrical because they reflect the nature of the fossil information. Minimum bounds are usually set with confidence because they are based on the age of the oldest fossil member of a clade. For example, the minimum of 7.5 Ma is based on the age of †Sahelanthropus tchadensis, recognized as the oldest fossil within the human lineage [28]. On the other hand, establishing maximum bounds is difficult, as absence of fossils for certain clades cannot be interpreted as evidence that the clade in question did not exist during a particular geological time [29]. Our maximum here of 10 Ma represents the paleontologist's informed guess about the likely oldest age of the clade; however, a large probability of 20% is given to allow for the fact that the node age could be older. The conflict between the prior and posterior seen in Fig. 10 evidences this. Note that when constructing the time prior, the Bayesian dating software must respect the constraints whereby daughter nodes must be younger than their parents. This means that calibration densities are truncated to accommodate the constraint, with the result that the actual prior used on node ages can be substantially different to the calibration density used (see Sect. 5.4). Detailed analyses of the interactions between fossil calibrations and the time prior and the effect of truncation are given in [14, 27]. 4 General Recommendations for Bayesian Clock Dating Extensive reviews of best practice in Bayesian clock dating are given elsewhere [4, 20, 21, 30, 31]. Here we give a few brief recommendations. 4.1 Taxon Sampling, Data Partitioning, and Estimation of Tree Topology In this tutorial we used a small phylogeny to illustrate Bayesian time estimation using approximate likelihood calculation. In practical data analysis, it may be desirable to analyze much larger phylogenies (see Sect. 5.5). In large phylogenies, there may be uncertainties in the relationships of some groups. The approximate method discussed here can only be applied to a fixed (known) tree topology. If the uncertainties in the tree are few so that just a handful of tree topologies appear reasonable, the approximate method can be used by analyzing each topology separately [23, 32]. This involves estimating g and H for each topology and then running separate MCMC chains on each topology to estimate the times. Several methods to co-estimate divergence times and tree topology are available [8, 9, 17, 18], although they do not implement the approximate likelihood method and are thus unsuitable for the analysis of genome-scale datasets. We note that partitioning of sites in genomic datasets may have important effects on divergence time estimation. The infinite-sites theory [13, 33] studies the asymptotic behavior of the posterior distribution of times when the amount of molecular data (measured by the number of partitions and the number of sites per partition) increases in a relaxed-clock dating analysis. This theory shows that increasing the number of sites per partition will have minimal effects on time estimation when the sequences per partition are moderately long (>1000 sites, say), but the precision improves when the number of partitions increases, eventually approximating a limit when the number of partitions is infinite. 
The theory also predicts that very different time estimates may be obtained if the same genomic sequence alignment is analyzed as one partition or as multiple partitions [34]. Furthermore, while more partitions tend to produce more precise time estimates, with narrow CIs, they may not necessarily be more reliable, depending on the correctness of the fossil calibrations and the appropriateness of the partitioning strategies. Unfortunately, it is hard to decide on a good partitioning strategy for genome-scale sequence data, despite efforts to design automatic partitioning strategies for phylogenetic analysis and divergence time estimation [34, 35, 36]. Commonly used approaches partition sites in the alignment by codon position or by protein-coding genes of different relative rates [23]. We recommend the use of the infinite-sites plot [14], in which uncertainty in divergence time estimates (measured as the CI width) is plotted against the posterior mean of times. If the scatter points fall on a straight line, the information due to the molecular sequence data has reached saturation, and uncertainty in the time estimates is predominantly due to uncertainties in the fossil calibrations. 4.2 Selection of Fossil Calibrations Fossil calibrations are one of the most important pieces of information needed to perform divergence time estimation and thus should be chosen after careful consideration of the fossil record, although this may involve some subjectivity [29]. Parham et al. [30] discuss best practice for construction of fossil calibrations. For example, minimum bounds on node ages are normally set to be the age of the oldest fossil member of the crown group. A small probability (say 2.5%) should be set for the probability that the node age violates the minimum bound (e.g., to guard against misidentified or incorrectly dated fossils). Specifying maximum bounds is more difficult, as absence of fossils for a given geological period is not evidence that the clade in question was absent during the period [31]. Current practice is to set the maximum bound to a reasonable value according to the expertise of the paleontologist (see ref. 29 for examples), although a large probability (say 10% or even 20%) may be required to guard against badly specified maximum bounds. Calibration densities based on statistical modeling of species diversification, fossil preservation, and discovery are also possible [15]. In so-called tip-dating approaches, fossil species are included as taxa in the analysis (which may or may not include morphological information for the fossil and extant taxa) [37, 38, 39]. Thus, in tip-dating, explicit specification of a fossil calibration density for a node age is not necessary. 4.3 Construction of the Time Prior The birth-death process with species sampling was used here to construct the time prior for nodes in the phylogeny for which fossil calibrations are not available. Varying the birth (λ), death (μ), and sampling (ρ) parameters can result in substantially different time priors. For example, using λ = μ = 1 and ρ = 0 leads to a uniform distribution prior on node ages. This diffuse prior appears appropriate for most analyses. Varying the values of λ, μ, and ρ is useful to assess whether the time estimates are robust to the time prior. Parameter configurations can be set up to generate time densities that result in young node ages or in very old node ages (see p. 381 in [20] for examples).
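One way to carry out such a robustness check is to sample from the prior (usedata = 0) under two different BDparams settings and overlay the resulting node-age densities in R. In this minimal sketch the directory names and the alternative parameter values (10 5 0.1) are only illustrative, and the node-age columns of mcmc.txt are assumed to be labelled t_n11, etc., as in the output summaries:

# Compare the prior density of the root age under two birth-death settings.
prior.a <- read.table("prior_bd_1_1_0/mcmc1.txt",    header = TRUE)  # BDparams = 1 1 0
prior.b <- read.table("prior_bd_10_5_0.1/mcmc1.txt", header = TRUE)  # BDparams = 10 5 0.1
plot(density(prior.a$t_n11), xlab = "root age (100 My)",
     main = "Prior on the root age")
lines(density(prior.b$t_n11), lty = 2)
legend("topright", c("BDparams = 1 1 0", "BDparams = 10 5 0.1"), lty = 1:2)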
4.4 Selection of the Clock Model In analysis of closely related species (such as the apes), the clock assumption appears to be appropriate for time estimation. A likelihood ratio test can be used to determine whether the strict clock is appropriate for a given dataset [40]. If the clock is rejected, then Bayesian molecular clock dating should proceed using one of the various relaxed-clock models available [7, 13]. In this case, Bayesian model selection may be used to choose the most appropriate relaxed-clock model [41], although the method is computationally expensive and thus only applicable to small datasets. The use of different relaxed-clock models (such as the autocorrelated vs. the independent log-normally distributed rates) may result in substantially different time estimates (see ref. 32 for an example). In such cases, repeating the analysis under the different clock models may be desirable. 5.1 Autocorrelated Rate Model Modify file mcmc/mcmctree.ctl and set clock = 3. This activates the autocorrelated log-normal rates model, also known as the geometric Brownian motion rates model [6, 13]. Run the MCMC twice and check for convergence. Compare the posterior times obtained with those obtained under the independent log-normal model (clock = 2). Are there any systematic differences in node age estimates between the two analyses? Which clock model produces the most precise (i.e., narrower CIs) divergence time estimates? 5.2 MCMC Sampling with Exact Likelihood Calculation Modify file mcmc/mcmctree.ctl and set clock = 2 (independent rates), usedata = 1 (exact likelihood), burnin = 200, sampfreq = 2, and nsample = 500. These last three options will lead to a much shorter MCMC chain, with a total of 1200 iterations. Run the MCMC sampling twice, and check for convergence using the ESS, histograms, and trace plots. How long does it take for the sampling to complete? Can you estimate how long it would take to run the analysis using 2,020,000 iterations, as long as for the approximate method of Sect. 3.3.2? Did the two chains converge despite the low number of iterations? 5.3 Change of Fossil Calibrations There is some controversy over whether †Sahelanthropus, used to set the minimum bound for the human-chimp divergence, is indeed part of the human lineage. The next (younger) fossil in the human lineage is †Orrorin which dates to around 6 Ma. Modify file data/10s.tree and change the calibration in the human-chimp node to B(0.057, 0.10, 0.01, 0.2). Also change the calibration on the root node to B(0.615, 1.315, 0.01, 0.05). Run the MCMC analysis with the approximate method and again sampling from the prior. Are there any substantial differences in the posterior distributions of times under the new fossil calibrations? Which nodes are affected? How bad is the truncation effect among the calibration densities and the prior? 5.4 Comparing Calibration Densities and Prior Densities This is a difficult exercise. Use R to plot the prior densities of times sampled using MCMC (the same as in Fig. 10). Now try to work out how to overlay the calibration densities onto the plots. For example, see Fig. 3 in [23] for an idea. First, write functions that calculate the calibration densities. The dunif function in R is useful to plot uniform calibrations. Functions sn::dsn and sn::dst (in the SN package) are useful to plot the skew-t (ST) and skew-normal (SN) distributions. Calibration type S2N (Table 1) is a mixture of two skew-normal distributions [15]. 
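As a starting point for this exercise, the calibration densities of Table 1 can be written as small R functions. This is a minimal sketch, assuming the sn package is installed; note that the plain dunif density ignores the soft 1%/20% tail probabilities of the B calibrations:

# Calibration densities from Table 1 (ages in units of 100 My).
library(sn)
# B(0.075, 0.10, 0.01, 0.20): uniform between 7.5 and 10 Ma (soft tails ignored)
d.humanchimp    <- function(x) dunif(x, min = 0.075, max = 0.10)
# ST(location, scale, shape, df): skew-t calibration on Anthropoidea
d.anthropoidea  <- function(x) dst(x, xi = 0.4754, omega = 0.0632, alpha = 0.98, nu = 22.85)
# S2N(p, loc1, scale1, shape1, loc2, scale2, shape2): mixture of two skew normals
d.strepsirrhini <- function(x)
  0.698 * dsn(x, xi = 0.65,  omega = 0.0365, alpha = -3400) +
  0.302 * dsn(x, xi = 0.650, omega = 0.138,  alpha = 11409)
# G(36, 36.9): gamma calibration on the root
d.euarchonta    <- function(x) dgamma(x, shape = 36, rate = 36.9)
curve(d.anthropoidea(x), from = 0.3, to = 0.8,
      xlab = "node age (100 My)", ylab = "calibration density")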
How do the sampled priors compare to the calibration densities? Are there any substantial truncation effects? 5.5 Time Estimation in a Supermatrix of 330 Species Good taxon sampling is critical to obtaining robust estimates of divergence times for clades. In the data/ directory, an alignment of the first and second codon positions from mitochondrial protein-coding genes from 330 species (326 primate and 4 out-group species) is provided, 330s.phys, with corresponding tree topology, 330s.tree. First, place the fossil calibrations of Table 1 on the appropriate nodes of the species tree. Then obtain the gradient and Hessian matrix for the 330-species alignment using the HKY + G model. Finally, estimate the divergence times on the 330-species phylogeny by using the approximate likelihood method. How does taxon sampling affect node age estimates when comparing the 10-species and 330-species trees? How does uncertainty in node ages in the large tree, which was estimated on a short alignment, compare with the estimates on the small tree, but with a large alignment? Zuckerkandl E, Pauling L (1965) Evolutionary divergence and convergence in proteins. In: Bryson V, Vogel HJ (eds) Evolving genes and proteins. Academic, New York, pp 97–166CrossRefGoogle Scholar Kumar S (2005) Molecular clocks: four decades of evolution. Nat Rev Genet 6:654–662PubMedCrossRefGoogle Scholar Bromham L, Penny D (2003) The modern molecular clock. Nat Rev Genet 4:216–224PubMedCrossRefGoogle Scholar dos Reis M, Donoghue PCJ, Yang Z (2016) Bayesian molecular clock dating of species divergences in the genomics era. Nat Rev Genet 17:71–80PubMedCrossRefGoogle Scholar Donoghue PCJ, Yang Z (2016) The evolution of methods for establishing evolutionary timescales. Philos Trans R Soc B Biol Sci 371:20160020CrossRefGoogle Scholar Thorne JL, Kishino H, Painter IS (1998) Estimating the rate of evolution of the rate of molecular evolution. Mol Biol Evol 15:1647–1657PubMedCrossRefGoogle Scholar Drummond AJ, Ho SYW, Phillips MJ et al (2006) Relaxed phylogenetics and dating with confidence. PLoS Biol 4:699–710CrossRefGoogle Scholar Ronquist F, Teslenko M, Van Der Mark P et al (2012) Mrbayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Syst Biol 61:539–542PubMedPubMedCentralCrossRefGoogle Scholar Lartillot N, Lepage T, Blanquart S (2009) PhyloBayes 3: a Bayesian software package for phylogenetic reconstruction and molecular dating. Bioinformatics 25:2286–2288PubMedCrossRefGoogle Scholar Heath TA, Holder MT, Huelsenbeck JP (2012) A Dirichlet process prior for estimating lineage-specific substitution rates. Mol Biol Evol 29:939–955PubMedCrossRefGoogle Scholar Kishino H, Thorne JL, Bruno WJ (2001) Performance of a divergence time estimation method under a probabilistic model of rate evolution. Mol Biol Evol 18:352–361PubMedCrossRefGoogle Scholar Yang Z, Rannala B (2006) Bayesian estimation of species divergence times under a molecular clock using multiple fossil calibrations with soft bounds. Mol Biol Evol 23:212–226PubMedCrossRefGoogle Scholar Rannala B, Yang Z (2007) Inferring speciation times under an episodic molecular clock. Syst Biol 56:453–466PubMedCrossRefGoogle Scholar Inoue J, Donoghue PCJ, Yang Z (2010) The impact of the representation of fossil calibrations on Bayesian estimation of species divergence times. 
Syst Biol 59:74–89PubMedCrossRefGoogle Scholar Wilkinson RD, Steiper ME, Soligo C et al (2011) Dating primate divergences through an integrated analysis of palaeontological and molecular data. Syst Biol 60:16–31PubMedCrossRefGoogle Scholar Dos Reis M, Yang Z (2011) Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times. Mol Biol Evol 8(7):2161–2172CrossRefGoogle Scholar Bouckaert R, Heled J, Kühnert D et al (2014) BEAST 2: a software platform for Bayesian evolutionary analysis. PLoS Comput Biol 10(4):e1003537PubMedPubMedCentralCrossRefGoogle Scholar Höhna S, Landis MJ, Heath TA et al (2016) RevBayes: Bayesian phylogenetic inference using graphical models and an interactive model-specification language. Syst Biol 65:726–736PubMedPubMedCentralCrossRefGoogle Scholar Dos Reis M, Zhu T, Yang Z (2014) The impact of the rate prior on Bayesian estimation of divergence times with multiple loci. Syst Biol 63:555–565PubMedPubMedCentralCrossRefGoogle Scholar Yang Z (2014) Molecular Evolution: A Statistical Approach. Oxford University Press, OxfordCrossRefGoogle Scholar Heath TA, Moore BR (2014) Bayesian inference of species divergence times. In: Chen M-H, Kuo L, Lewis PO (eds) Bayesian Phylogenetics: Methods, Algorithms, and Applications. CRC Press, Boca Raton, pp 277–318Google Scholar Yang Z (2007) PAML 4: phylogenetic analysis by maximum likelihood. Mol Biol Evol 24:1586–1591PubMedPubMedCentralCrossRefGoogle Scholar dos Reis M, Inoue J, Hasegawa M et al (2012) Phylogenomic datasets provide both precision and accuracy in estimating the timescale of placental mammal phylogeny. Proc Biol Sci 279:3491–3500PubMedPubMedCentralCrossRefGoogle Scholar dos Reis M, Gunnell G, Barba-Montoya J et al (2018) Using phylogenomic data to explore the effects of relaxed clocks and calibration strategies on divergence time estimation: primates as a test case. Syst Biol 67(4):594–615PubMedPubMedCentralCrossRefGoogle Scholar Yang Z (1996) Among-site rate variation and its impact on phylogenetic analyses. Trends Ecol Evol 11(9):367–372PubMedCrossRefGoogle Scholar Gillespie JH (1984) The molecular clock may be an episodic clock. Proc Natl Acad Sci U S A 81:8009–8013PubMedPubMedCentralCrossRefGoogle Scholar Warnock RCM, Yang Z, Donoghue PCJ (2012) Exploring uncertainty in the calibration of the molecular clock. Biol Lett 8:156–159PubMedCrossRefGoogle Scholar Zollikofer CPE, Ponce de León MS, Lieberman DE et al (2005) Virtual cranial reconstruction of Sahelanthropus tchadensis. Nature 434:755–759PubMedCrossRefGoogle Scholar Benton MJ, Donoghue PCJ (2007) Paleontological evidence to date the tree of life. Mol Biol Evol 24(1):26–53PubMedCrossRefGoogle Scholar Parham JF, Donoghue PCJ, Bell CJ et al (2012) Best practices for justifying fossil calibrations. Syst Biol 61(2):346–359PubMedCrossRefGoogle Scholar Ho SYW, Phillips MJ (2009) Accounting for calibration uncertainty in phylogenetic estimation of evolutionary divergence times. Syst Biol 58:367–380PubMedCrossRefGoogle Scholar Dos Reis M, Thawornwattana Y, Angelis K et al (2015) Uncertainty in the timing of origin of animals and the limits of precision in molecular timescales. Curr Biol 25:2939–2950PubMedPubMedCentralCrossRefGoogle Scholar Zhu T, Reis MD, Yang Z (2014) Characterization of the uncertainty of divergence time estimation under relaxed molecular clock models using multiple loci. 
Syst Biol 64(2):267–280PubMedPubMedCentralCrossRefGoogle Scholar Angelis K, Alvarez-Carretero S, dos Reis M et al (2018) An evaluation of different partitioning strategies for Bayesian estimation of species divergence times. Syst Biol 67(1):61–77PubMedCrossRefGoogle Scholar Lanfear R, Calcott B, Ho SYW et al (2012) PartitionFinder: combined selection of partitioning schemes and substitution models for phylogenetic analyses. Mol Biol Evol 29:1695–1701PubMedCrossRefGoogle Scholar Duchêne S, Molak M, Ho SYW (2014) ClockstaR: choosing the number of relaxed-clock models in molecular phylogenetic analysis. Bioinformatics 30:1017–1019PubMedCrossRefGoogle Scholar Heath TA, Huelsenbeck JP, Stadler T (2014) The fossilized birth-death process for coherent calibration of divergence-time estimates. Proc Natl Acad Sci U S A 111:E2957–E2966PubMedPubMedCentralCrossRefGoogle Scholar Ronquist F, Klopfstein S, Vilhelmsen L et al (2012) A total-evidence approach to dating with fossils, applied to the early radiation of the hymenoptera. Syst Biol 61:973–999PubMedPubMedCentralCrossRefGoogle Scholar O'Reilly JE, dos Reis M, Donoghue PCJ (2015) Dating tips for divergence-time estimation. Trends Genet 31(11):637–650PubMedCrossRefGoogle Scholar Felsenstein J (1981) Evolutionary trees from DNA sequences: a maximum likelihood approach. J Mol Evol 17:368–376PubMedCrossRefGoogle Scholar Lepage T, Bryant D, Philippe H et al (2007) A general comparison of relaxed molecular clock models. Mol Biol Evol 24:2669–2680PubMedCrossRefGoogle Scholar © The Author(s) 2019 Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 1. School of Biological and Chemical Sciences, Queen Mary University of London, London, UK 2. Department of Genetics, Evolution and Environment, University College London, London, UK dos Reis M., Yang Z. (2019) Bayesian Molecular Clock Dating Using Genome-Scale Datasets. In: Anisimova M. (eds) Evolutionary Genomics. Methods in Molecular Biology, vol 1910. Humana, New York, NY. First Online 06 July 2019
Long-term integrity protection of genomic data Johannes Buchmann1, Matthias Geihs1, Kay Hamacher1, Stefan Katzenbeisser1 & Sebastian Stammler ORCID: orcid.org/0000-0002-6458-58401 EURASIP Journal on Information Security volume 2019, Article number: 16 (2019) Cite this article Genomic data is crucial in the understanding of many diseases and for the guidance of medical treatments. Pharmacogenomics and cancer genomics are just two areas in precision medicine of rapidly growing utilization. At the same time, whole-genome sequencing costs are plummeting below $ 1000, meaning that a rapid growth in full-genome data storage requirements is foreseeable. While privacy protection of genomic data is receiving growing attention, integrity protection of this long-lived and highly sensitive data much less so.We consider a scenario inspired by future pharmacogenomics, in which a patient's genome data is stored over a long time period while random parts of it are periodically accessed by authorized parties such as doctors and clinicians. A protection scheme is described that preserves integrity of the genomic data in that scenario over a time horizon of 100 years. During such a long time period, cryptographic schemes will potentially break and therefore our scheme allows to update the integrity protection. Furthermore, integrity of parts of the genomic data can be verified without compromising the privacy of the remaining data. Finally, a performance evaluation and cost projection shows that privacy-preserving long-term integrity protection of genomic data is resource demanding, but in reach of current and future hardware technology and has negligible costs of storage. Full genome sequencing is becoming a standard medical procedure in the near future, not only in the assessment of many diseases but also in the research or consumer services setting. For example, in its recent annual report [1], the UK's chief medical officer called for a revolution of gene testing and wants whole-genome sequencing to become a standard procedure for National Health Service patients—not only for cancer treatment but also rare diseases testing, targeting of drugs etc. With decreasing sequencing costs, periodic and tissue specific sequencing will be the next step forward. Thus, storage requirements are ever increasing and long-term data protection schemes become more complex. While genomic privacy is attracting much attention recently [2–4], the assurance of genomic data integrity has almost not been discussed yet. Genomic data not only requires hundreds of gigabytes of storage but also needs to be secured against loss and tampering for at least a human life span. This paper is concerned with the integrity protection of genomic data for decades after data generation. As cryptographic primitives such as hash algorithms and signatures may become insecure in the future this undertaking is challenging. Endeavors like the 100,000 Genomes Project [5] in the UK show that one important scenario to consider is the outsourcing of genomic data storage to a trusted third party. The key challenge is to guarantee that none of the outsourced data gets ever modified, either by an outside attacker or even an insider, over a hundred years. In the future, doctors might get authorized access to parts of a patient's genome, stored in a national database, to support personalized medicine decisions. 
A renowned example from pharmacogenomics is the dosage determination for drug Warfarin based on just a few single-nucleotide polymorphisms (SNPs) [6–8]: for certain variants of CYP2C9, only a fifth of the normal dose is recommended. This prime example shows why even the change, or suppression, of a few entries in a database of genomic variants can have disastrous consequences on treatment decisions with implications for liability and legal procedures. On the technical side, cryptographic primitives like symmetric encryption schemes, digital signature schemes, or hash functions must be expected to break over time. For example, in 1997 the widely used symmetric encryption scheme DES was broken by brute force for the first time, and it can nowadays be broken for a small fee on crack.sh. Also in 1997, the results of Shor [9] showed that the RSA signature scheme is insecure against quantum computers. In 2004, Wang et al. [10] were the first to find collisions for the then popular hash functions MD5, HAVAL-128, and RIPEMD. Thus, long-term security needs to take future breaches of cryptographic primitives into account. In this paper, we propose a solution that allows genetic data to be stored in a database while guaranteeing integrity and authenticity over long time periods. Data may be stored in plain-text, encrypted, or secretly shared form. We examine a scenario in which a full set of raw sequencer reads, alignments, and genomic variant data files are generated and stored in a certified database (see Sections 2 and 3). We propose a long-term protection scheme (Section 4) that uses unconditionally hiding commitments, Merkle hash trees, and digital signatures for protecting the integrity of the data while preserving confidentiality. The scheme allows querying and proving of integrity and authenticity of specific positions in the genome while leaving the remaining data undisclosed. No information can be inferred about adjacent positions. The scheme supports updating the integrity protection in case one of the used cryptographic schemes (i.e., commitments, hashes, or signatures) is expected to become insecure in the near future. The integrity update procedure uses timestamping while it is guaranteed that no information is leaked to the involved timestamp servers. We also evaluate the performance of our scheme (Section 5) in a scenario with periodic updates of the timestamps, commitments, and hashes. Our performance evaluation shows that long-term integrity protection of a human genome of size 3·10^9 is feasible on current hardware. Furthermore, verification of the integrity of a small subset of genomic data is fast. Timestamping-based long-term integrity protection schemes for various use cases have been proposed in the literature [11, 12]. However, these schemes leak information to the involved timestamp services and therefore do not preserve long-term confidentiality of the protected data. Braun et al. [13] use unconditionally hiding commitments to combine long-term integrity with long-term confidentiality protection. However, they only consider the protection of a single large data item while genomic databases consist of a large number of relatively small data items. Computation and storage costs of their scheme scale unfavorably for such databases, because each data item needs to be protected by a separate signature-timestamp pair, which is costly to generate and store.
We resolve this issue by using Merkle Hash Trees [14] which enable us to protect a whole dataset with just a single signature-timestamp pair. As an alternative to computationally secure signature schemes, proposals for unconditionally secure signature schemes which do not rely on computational assumptions [15] exist as well. However, these schemes function inherently differently from their computationally secure counterparts and require a number of other strong assumptions, e.g., that data verifiers are known and active at scheme initialization. They are thus not applicable to the scenarios discussed here. In the field of genomic data security, the recent work by Bradley et al. [16] explores several methods for the integrity protection of genomic data. Merkle hash trees are also studied to deliver integrity protection of single positional mutations while keeping the remaining positions confidential. Instead of commitments, they use a similar approach by salting the leaf values before hashing. The authors argue that, without salting, up to 32 neighboring base nucleotide leaves could be revealed by learning the hashes along the path to the MHT root. However, the paper does not consider the long-term aspect of data storage, with cryptographic primitives becoming insecure over time. Achieving long-term security is the main focus of this work. The same can be said about recent works on blockchain-based integrity protection [17, 18]. While decentralized blockchain technology is a novel and promising approach to data integrity and time-stamping, it faces the same long-term security issues as any other scheme that does not include regular updates of hash functions. Hence, these works do not solve the problem of long-term protection. Recently, Bansarkhani et al. [19] explored long-term integrity of blockchains. When the time comes to replace a hash function, the authors propose to hash the whole blockchain and store this hash in a new block, resulting in extended data integrity. However, this approach is not applicable to the random-access queries that we will introduce, where we only want to prove the integrity of parts of the genomic data. Genomic data For completeness, we give a short overview of all relevant genomic file formats even though our actual scheme will only be applied to variant data (VCF files). The initial data produced by genome sequencers goes through several steps of processing to reach different levels of representation and abstraction. In our scenario, we are interested in storing genomic variations, which have high utility in personalized medicine. They allow random access to specific positions and, at the same time, protection of adjacent genomic positions. Sequencers produce short raw reads that, in a first step, are aligned to form a contiguous genome. Those aligned genomes can then be compared to a reference genome to deliver a more interpreted view, highlighting the genomic variation. Raw reads Typically, sequencing machines produce output in the FASTQ format, consisting of billions of small unaligned so-called reads (of nucleotides, making up the full DNA) together with a quality score for each nucleotide. FASTQ files are usually stored in compressed form [20]. Depending on coverage and read length, they are typically of size between 10 GB and 70 GB. Aligned reads Assembly of raw reads to a full genome is performed via an alignment of the short reads in FASTQ format to a reference genome (e.g., GRCh38 [21]).
The alignment information is most commonly stored in SAM/BAM [22] or CRAM [23] files. By applying lossy compression to quality scores, CRAM achieves the smallest file sizes [24]. For example, the 1000 Genomes Project [5] distributes CRAM files with quality scores compressed into 8 bins. Depending on coverage, file sizes vary between 3 GB and 14 GB for full genome alignments [25] (excluding high-coverage alignments). Variant calls Variant callsFootnote 2 of aligned genomes are usually stored in the variant call format (VCF) [26], or its binary counterpart BCF. They represent a difference against the reference genome and are thus an abstract representation in comparison to the aforementioned alignment formats. Coverage and read length do not play a role anymore, as each line in a VCF file represents a called mutation at a unique position of the reference genome. A human genome has approximately 4 to 5 million variations compared to a reference genome [27]. VCF files that store this information typically require a few hundred megabytes of storage. Usually, a single file per genome, or per chromosome, is produced. This translates to an average storage requirement of about 100 bytes per variation in VCF. Efficient random access Efficient random access for SAM, BAM, CRAM, and VCF files is realized by storing the data sorted by chromosome and position and then creating an index map, which stores for a chosen set of positions the corresponding location in the file. Data access scenarios The following scenarios describe different access patterns to genomic data for real-world applications. In particular, the first scenario motivates the solution developed in this work. Personalized medicine and testing A typical workflow in personalized medicine requires access to a few mutations in the genome during regular visits to a doctor or hospital. This random access to genomic variant data (e.g., stored in VCF) is roughly required at most once a month for older patients who routinely need to see a doctor. The same is true for ancestry and paternity tests, which primarily access tandem repeat variations. Cancer researchers need access to the full alignments (BAM/CRAM) of healthy and cancer tissue. That is, several full-genome datasets per patient are accessed. Pan-genome studies like genome-wide association studies (GWAS) will probably access whole BAM/CRAM files to produce study-specific input files, for each study participant's genome. We consider an application scenario for personalized medicine that involves a patient, a sequencing laboratory, a certified genome database and the patient's doctors and hospitals. The genome of the patient is stored in the certified database and the doctors regularly request parts of the patient's genome (e.g., to identify the best medication and dosage, or to detect possible genomic predispositions). The patient may also want to prove the authenticity of its genomic data towards a third party verifier (e.g., a judge in court in case of a law suit because of a wrong treatment). An overview of the application scenario is depicted in Fig. 1 and the details are described in the following subsections. Overview of the application scenario for our protection scheme When the genome of the patient is sequenced for the first time (e.g., at birth), the sequencing laboratory timestamps and signs the resulting FASTQ files. The laboratory then creates an alignment of those raw reads against some standardized current version of a human reference genome in the CRAM format. 
Additionally, variants are called and stored in a VCF file. Both the alignment and variants are timestamped and signed by the laboratory. The data is then transferred to the genome database, which will also conduct future integrity proof updates, without any interaction with the laboratory. From this point on, the laboratory is not involved in any further protocol. The data may be stored in blocks of plain-text, encrypted with a symmetric block-cipher, or secretly shared, since our scheme works on any kind of data blocks. The block cipher would need to be seekable, e.g., AES in counter mode, so that blocks can be decrypted individually. A position in the human genome takes ⌈log2(3·10^9)⌉ = 32 bits. A pseudorandom permutation could be applied to the 32-bit index of each block to hide the accessed positions. A detailed analysis of the different kinds of block storage is out of the scope of this work, and we focus on the long-term integrity of data blocks. Note that we do not consider the scenario of re-sequencing a human's genome and the subsequent regeneration of the genomic data. This case is discussed in the outlook Section 6.3. Consider a doctor who wants to identify the best medicine and dosage for their patient, or detect possible genomic predispositions that could influence future treatment. Such a procedure requires querying dozens (and in the future, possibly thousands) of variants from the most recently stored VCF file. A current real-world example is the medicine Warfarin, whose optimal dosage is highly dependent on a patient's genome (cf. motivation Section 1.1). More precisely, eight SNPs were identified that significantly influence a person's dosage-dependent response to the drug. If the data blocks are stored in encrypted form, the patient or a designated doctor or hospital would need to manage the secret keys to assist the decryption of retrieved data blocks. Protection goals and threat model We demand that a solution for holistic genomic data protection achieves the following protection goals: Integrity. The integrity of the genomic data as produced by the laboratory should be protected. That is, it should be infeasible for an adversarial entity to modify the data at rest or in transit without the modifications being detected at a subsequent data access. Confidentiality. The confidentiality of genomic data that is not revealed should be protected. An authorized querier should only learn the requested genomic data. That is, a patient or database must be able to prove the integrity of parts of the genomic data without leaking information about the remaining parts of the data. Authenticity. The database or patient should be able to prove authenticity of the genomic data to a third party verifier. We allow the querier to be adversarial, i.e., they may try to infer any additional information beyond the authorized parts of the genomic data from their interaction with the database. An adversary within the certified database may have full read and write access to the (possibly encrypted) genomic data blocks. We furthermore consider two cases: if the database provider can be trusted to keep the data confidential, it may be stored in plain text. Otherwise, it should be encrypted or secretly shared. Note that after initial data generation and signing by the laboratory, only the database and requesters are involved in any protocol.
Protection scheme To meet the above stated demands of long-term integrity and confidentiality protection, we have derived a protection scheme, which is described in this chapter. Full-retrieval data Unprocessed raw reads, e.g., stored in compressed FASTQ format, and resulting alignments, e.g., stored in CRAM format, are usually only accessed as a whole and a long-term protection scheme for that use case was proposed in [13]. The scheme presented here in Section 4.3 enhances the integrity protection scheme of [13], so that a large number of small data items can be protected together efficiently. Random access data As opposed to whole-data integrity proofs, our scheme provides random access integrity proofs of genomic variation data on the finest level possible—per position in the reference genome. We view genomic variation data like VCF/BCF files as a table G, where for each genome position i, G[ i] denotes the corresponding variant data entry in G. If there is no mutation at position i, we set G[ i] to 0. Note that we do not need to actually store those 0s as the absence of a variation implicitly represents a 0. However, the scheme also needs to create commitments for the absence of variants so that absence can also be proven. Since a human genome has about 3·109 positions, this is the size of table G and the number of commitments that have to be created, independent of the underlying data format. For genome data G, generated and signed by a sequencing laboratory, the scheme generates an integrity proof P. The validity period of such a proof is limited in time because the cryptographic primitives used for its generation have a limited validity period. Therefore, the proof is updated regularly. Furthermore, we describe how a partial integrity proof for a subset G′⊂G can be extracted from P, and how such a partial integrity proof is verified. Our scheme thus delivers random access to G′⊂G with integrity proofs while keeping the remaining data G∖G′ private. We also present a security analysis of the proposed scheme. The scheme uses components of the schemes Lincos [13] and Mops [12]. More information on the used cryptographic primitives (i.e., timestamps, commitments, hashes, and signatures) can be found in the respective publications. Scheme description Our scheme for long-term integrity protection of genomic data provides the algorithms Protect, Update, PartialProof, and Verify. Algorithm Protect generates the initial integrity proof when genomic data is stored. Algorithm Update updates the integrity proof if a used cryptographic primitive (e.g., the hash function) is threatened to become insecure. Algorithm PartialProof generates a partial integrity proof for verification of a subset of the genomic data. Algorithm Verify allows a verifier to verify the integrity of a given genomic dataset using a given partial integrity proof. Initial protection The initial integrity proof P for sequenced genome data G is generated by the sequencing laboratory using algorithm Protect (Algorithm 1). The algorithm obtains as input genome data G, an information-theoretic hiding commitment algorithm Com [28], a hash algorithm Hash, a signing algorithm Sign, and a time-stamping algorithm TS. The algorithm first uses algorithm Com to generate commitments and decommitments to all entries in G. The commitments can be used as placeholders for the data items, which itself do not leak information, and the decommitments can be used to prove the connection between the commitment and the corresponding data item. 
Then, it uses the hash algorithm Hash to compute a Merkle hash tree (MHT) [14] for the generated commitment values. The root node of the generated tree is then signed using algorithm Sign and timestamped using the trusted timestamp authority TS [29]. The output of the initial protection algorithm is an integrity proof P, which contains the commitments, the decommitments, the MHT, the signature, and the timestamp. In our algorithm listings we denote by MHT:(Hash,L)→T an algorithm that, on input of a hash algorithm Hash and a set of leaf nodes L, outputs an MHT T. Furthermore, we denote the root of an MHT T by T.r.
Protection update
Timestamps, hash values, and commitments have limited validity periods, which in turn limit the validity period of the corresponding integrity proof. The overall validity of an integrity proof is therefore prolonged regularly by the genome database by running Algorithm 2. The input parameter op∈{upCHT,upHT,upT} determines which primitives are updated; op=upCHT updates commitments, hashes, and timestamps; op=upHT updates only hashes and timestamps; and op=upT updates only timestamps. For op=upCHT, new information-theoretically hiding commitments are generated first. Then, a new MHT T is generated and finally the root of T is timestamped. The output of the update algorithm is an updated integrity proof P′. In the algorithm listings, we denote by AuthPath(T,i)→A an algorithm that, on input of an MHT T and a leaf index i, outputs the authentication path A from leaf node i to root node T.r.
Generate partial integrity proof
A data owner may want to create a partial integrity proof P′ for a subset G′⊂G such that P′ does not reveal any information about G∖G′. This can be done using Algorithm 3. The algorithm extracts from P all information relevant for proving the integrity of G′ and outputs it in the form of a partial integrity proof P′. In particular, the partial integrity proof contains the commitments corresponding to the positions contained in G′, the corresponding hash tree authentication paths, as well as the corresponding timestamps and signature.
A verifier receives partial genome data G′ and a corresponding partial integrity proof P′. Additionally, it uses a trusted verification algorithm Ver and reads the current time tn+1. It then uses Algorithm 4 to verify the integrity of G′. The trusted verification algorithm Ver is used for verifying the validity of timestamps, hashes, commitments, and signatures. It can be realized by leveraging trusted public key certificates that include verification parameters and validity periods. It must provide the following functionality. If VerTS(m,ts;t)=1, then ts is a valid timestamp for m at time t, meaning that the cryptographic algorithms used for generating the timestamp are considered secure at time t. The time that the timestamp ts refers to is denoted by ts.t. Hence, VerTS(m,ts;t)=1 means that it is safe to believe at time t that data m existed at time ts.t. Similarly, VerMHT(m,a,r;t)=1 means that at time t, a is a valid authentication path for m through a hash tree with root r. VerCom(m,c,d;t)=1 means that at time t, d is a valid decommitment from commitment c to message m. VerSign(m,σ;t)=1 means that at time t, σ is a valid signature for message m. We refer to Section 5.2 for more details on how the validity periods of the cryptographic primitives are derived.
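To make the structure of Protect (Algorithm 1) easier to follow, here is a non-normative Python sketch. It stands in a simple hash-based commitment for the information-theoretically hiding Halevi–Micali commitment actually used, and it treats Sign and TS as externally supplied callables; all names are our own.

```python
# Sketch of Protect: commit to every entry, hash the commitments into a
# Merkle tree, then sign and timestamp the root.
import hashlib, os

def commit(entry: bytes) -> tuple:
    # Placeholder commitment c = H(r || entry) with decommitment d = r.
    # (Only computationally hiding; the scheme uses Halevi-Micali commitments.)
    r = os.urandom(32)
    return hashlib.sha256(r + entry).digest(), r

def merkle_root(leaves) -> bytes:
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def protect(genome_entries, sign, timestamp):
    commitments, decommitments = zip(*(commit(e) for e in genome_entries))
    root = merkle_root(commitments)
    sigma = sign(root)                           # laboratory signature over the root
    ts = timestamp(root + sigma)                 # trusted timestamp authority
    return {"C": commitments, "D": decommitments,
            "root": root, "signature": sigma, "timestamp": ts}
```

In the real scheme the leaves are commitments to all 3·10⁹ positions, including the implicit zeros, and the whole tree—not just the root—is kept in P so that authentication paths can later be extracted for partial proofs.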
We use the following shorthand notations tNxTs(i), tNxHa(i), tNxCo(i) to denote update times with respect to a given partial integrity proof \(P^{\prime }= \left [\sigma,P^{\prime }_{1},\ldots,P^{\prime }_{n}\right ]\). By tNxTs(i) we denote the time of the next timestamp update after P′i, i.e., tNxTs(i) = min{tsj.t : j>i}. Likewise, by tNxHa(i) we denote the time of the next hash tree update after P′i, and by tNxCo(i) we denote the time of the next commitment update after P′i. The verification function Verify of the genome data protection scheme works as follows. It checks whether the integrity proof has been constructed correctly, and whether the cryptographic primitives have been updated before becoming invalid. We refer the reader to the next section for more details on the security of this scheme.
We now analyze the security of the proposed scheme and argue that it fulfills the requirements described in Section 3.3. We observe that a partial integrity proof P′ for genome data G′⊂G does not reveal any information about the remaining data G∖G′ by the following argument. Let \(P^{\prime } = (\sigma,P^{\prime }_{1},\ldots,P^{\prime }_{n})\) be a partial integrity proof for G′, where \(P^{\prime }_{i} = (\textsf {op}_{i},C^{\prime }_{i},D^{\prime }_{i},A^{\prime }_{i},T_{i}.r,\textsf {ts}_{i})\). We observe that for every i∈{1,…,n}, opi, \(C^{\prime }_{i}\), and \(D^{\prime }_{i}\) are independent of G∖G′ because of the information-theoretic hiding property of the commitments. Furthermore, \(A^{\prime }_{i}\) contains authentication paths that depend only on information-theoretically hiding commitments and thus reveals no information as long as the decommitment values are not revealed. Hence, also the tree root Ti.r, the timestamp tsi, and the signature σ are independent of G∖G′.
Next, we show that it is infeasible for an adversary, who cannot break any of the used cryptographic primitives within their validity period, to present a valid partial integrity proof P′ for partial genome data G′ if G′ has not been originally signed by the laboratory. For our security analysis, we consider an adversary that can potentially become computationally more powerful over time and use methods developed in [30–32] for arguing about the knowledge of an adversary at an earlier point in time. For this, we require that the timestamp, commitment, and hash algorithms chosen by the user are extractable. Thereby, we are able to show that if an adversary presents a valid integrity proof, then the signed data together with the signature must have been known at a point when the corresponding signature scheme was considered valid. If the signature is valid for the data, then it follows that the data is authentic. Here, we use the following notation to express the knowledge of the adversary. For any data m and time t, we write \(m \in \mathcal {K}[t]\) to denote that the adversary knows m at time t. We remark that for any t<t′, \(m \in \mathcal {K}[t]\) implies \(m \in \mathcal {K}[t^{\prime }]\).
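The two checks at the heart of this verification—opening a commitment against the queried entry and authenticating that commitment against the signed tree root—can be sketched as follows, continuing the simplified hash-based stand-ins used in the earlier sketch; the left/right ordering by leaf-index bits is one possible convention, not necessarily the paper's.

```python
# Sketch of the VerCom and VerMHT checks inside Verify.
import hashlib

def verify_path(leaf: bytes, path, index: int, root: bytes) -> bool:
    # Recompute the root from a leaf and its authentication path.
    node = leaf
    for sibling in path:
        if index % 2 == 0:
            node = hashlib.sha256(node + sibling).digest()
        else:
            node = hashlib.sha256(sibling + node).digest()
        index //= 2
    return node == root

def verify_entry(entry: bytes, c: bytes, d: bytes,
                 path, index: int, root: bytes) -> bool:
    if hashlib.sha256(d + entry).digest() != c:   # VerCom: does d open c to entry?
        return False
    return verify_path(c, path, index, root)      # VerMHT: does c authenticate to root?
```

Checking the laboratory's signature on the root and the chain of timestamps against the validity periods of the underlying algorithms (VerSign and VerTS above) completes the verification.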
Extractable timestamping [30, 32] guarantees that if at some time t, a timestamp ts and message m are known and ts is considered valid for m at time t, then m must have been known at time ts.t, or in the notation introduced above:
$$ (m,\textsf{ts}) \in \mathcal{K}[t] \land \textsf{Ver}_{\textsf{TS}}(m,\textsf{ts};t) \implies m \in \mathcal{K}[\textsf{ts}.t] \text{.} \qquad (1) $$
Moreover, extractable commitments [31] guarantee that if a commitment value is known at time t, and a message m and a valid decommitment value are known at a later time t′>t, then the message m was already known at commitment time t, i.e.:
$$ c \in \mathcal{K}[t] \land (m,d) \in \mathcal{K}[t^{\prime}] \land \textsf{Ver}_{\textsf{Com}}(m,c,d;t^{\prime}) \implies m \in \mathcal{K}[t] \text{.} \qquad (2) $$
Extractable hash trees [32] provide similar guarantees, i.e., for any hash tree root value r, message m, hash tree authentication path a, and times t,t′:
$$ r \in \mathcal{K}[t] \land (m,a) \in \mathcal{K}[t^{\prime}] \land \textsf{Ver}_{\textsf{MHT}}(m,a,r;t^{\prime}) \implies m \in \mathcal{K}[t] \text{.} \qquad (3) $$
Furthermore, we know that if a signature σ and a message m are known at some time t, and σ is considered valid for m at time t, then by the existential unforgeability of the signatures it follows that m is authentically signed [30, 33]:
$$ (m,\sigma) \in \mathcal{K}[t] \land \textsf{Ver}_{\textsf{Sign}}(m,\sigma;t) \implies m\ \text{is authentic} \text{.} \qquad (4) $$
Finally, it is known that signing the root of a Merkle tree preserves the integrity of the leaves. Furthermore, if the leaves are commitments, the authenticity of the committed messages is preserved. That is, for any hash tree root value r, signature σ, commitment c, hash tree authentication path a, message m, decommitment d, and times t,t′,t′′:
$$ (r,\sigma) \in \mathcal{K}[t] \land \textsf{Ver}_{\textsf{Sign}}(r,\sigma;t) \land (c,a) \in \mathcal{K}[t^{\prime}] \land \textsf{Ver}_{\textsf{MHT}}\left(c,a,r;t^{\prime}\right) \land (m,d) \in \mathcal{K}[t^{\prime\prime}] \land \textsf{Ver}_{\textsf{Com}}\left(m,c,d;t^{\prime\prime}\right) \implies m\ \text{is authentic} \text{.} \qquad (5) $$
We now show that it is infeasible to produce a valid integrity proof for genome data that is not authentically signed. Assume an adversary outputs (G′,P′) at some point in time tn+1 and let Ver be a verification function trusted by the verifier. We show that if P′ is a valid partial integrity proof for data G′ (i.e., Verify(Ver,G′,P′)=1), then the signature σ for G′ is not a forgery. Let \(P^{\prime } = (\sigma,P^{\prime }_{1},\ldots,P^{\prime }_{n})\), where \(P^{\prime }_{i} = (\textsf {op}_{i},C^{\prime }_{i},D^{\prime }_{i},A^{\prime }_{i},T_{i}.r,\textsf {ts}_{i})\). Define \(P^{\prime \prime }_{i} = (\sigma,P^{\prime }_{1},\ldots,P^{\prime }_{i})\) and ti=tsi.t. In the following, we show recursively for i∈[n,…,1], that given Verify(Ver,G′,P′)=1, statement \(\textsf {St}(i) = \langle (G^{\prime },P^{\prime \prime }_{i}) \in \mathcal {K}[t_{i+1}] \rangle \) holds. We observe that St(n) is trivially true because the adversary presents valid (G′,P′) at tn+1 by assumption. Next, we show that assuming St(i) holds, then also St(i−1) holds.
Given St(i), we observe that by VerTS([σ,(T1.r,…,Ti.r),(ts1,…,tsi−1)],tsi;tNxTs(i))=1 and (1), we have \([\sigma, (T_{1}.r, \ldots, T_{i}.r), (\textsf {ts}_{1},\ldots,\textsf {ts}_{i-1})] \in \mathcal {K}[t_{i}]\). Furthermore, by $$\textsf{Ver}_{\textsf{MHT}}(\textsf{CA}^{\prime}(i,j), A^{\prime}_{i}[j], T_{i}.r; t_{\textsf{NxHa}}(i)) = 1$$ and (3), we have \(\textsf {CA}^{\prime }(i,j) \in \mathcal {K}[t_{i}]\) for every j∈G′. Finally, by $$\textsf{Ver}_{\textsf{Com}}([G^{\prime}[j], D^{\prime}_{1}[j], \ldots, D^{\prime}_{i-1}[j]], C^{\prime}_{i}[j], D^{\prime}_{i}[j]; t_{\textsf{NxCo}}(i)) = 1$$ and (2) we have \([G^{\prime }[j], D^{\prime }_{1}[j], \ldots, D^{\prime }_{i-1}[j]] \in \mathcal {K}[t_{i}]\) for every j∈G′. Combined, we obtain \((G^{\prime },P^{\prime \prime }_{i-1}) \in \mathcal {K}[t_{i}]\), which means that St(i−1) holds. We observe that St(1), VerTS([σ,T1.r],ts1;tNxTs(1))=1, and (1) imply that \([\sigma, T_{1}.r] \in \mathcal {K}[t_{1}]\). Furthermore, by VerSign(T1.r,σ;t1)=1 and (4), we obtain that σ is genuine for T1.r. Finally, we observe that for every i∈G′, \(\textsf {Ver}_{\textsf {MHT}}(C^{\prime }_{1}[i], A^{\prime }_{1}[i], T_{1}.r; t_{\textsf {NxHa}}(1)) = 1\) and \(\textsf {Ver}_{\textsf {Com}}(G[i], C^{\prime }_{1}[i], D^{\prime }_{1}[i]; t_{\textsf {NxCo}}(1)) = 1\), and we obtain by (5) that σ is a genuine signature for G′.
In order to illustrate the applicability of our scheme to today's challenges in bioinformatics and medical informatics, we evaluate in the following the performance of the scheme described in Section 4.3.
Protection scenario
We focus on the following situation: a human genome is sequenced and protected for a human lifespan of 100 years. The scenario starts with sequencing the genomic data G in 2019 and creating an integrity proof P. Here, we are only interested in the protection of a single-genome dataset, that is, we do not consider additional genomic data generated due to resequencing. We assume that the lifetime of signature-based timestamps is based on the lifetime of the corresponding public key certificate, which is typically 2 years. For our commitments and hash functions, we assume a longer validity period of 10 years, as they are not dependent on secret parameters which may leak over time. The integrity protection update schedule is summarized in Table 1.
Table 1 Schedule for updating the integrity proof
Instantiation of cryptographic primitives
For our analysis, we instantiate the cryptographic algorithms of our protection scheme as follows. As hash functions, we use the ones from the SHA-2 hash function family [34], which are extractable if modeled as a random oracle [35]. As timestamp schemes, we employ signature-based timestamps [29] based on the XMSS signature scheme [36], which is a hash-based signature scheme conjectured secure against quantum computers. As commitment schemes, we use the construction proposed by Halevi and Micali [37], which uses a hash function and is extractable if the hash function is extractable [35]. When generating Merkle hash trees, we use an optimization where we take commitments to the data directly as the leaves of the hash trees in order to save one hash tree level. Cryptographic parameters are chosen based on the recommendations by Lenstra and Verheul [38, 39]. The chosen parameters are summarized in Table 2.
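To see what this schedule implies over the 100-year horizon (Table 1 is not reproduced here, so the exact update years are our reading of the stated 2-year and 10-year validity periods), a few lines of arithmetic give the number of updates of each kind:

```python
# Rough count of integrity-proof updates over the protection period
# (assumed reading of the schedule: timestamp renewal every 2 years,
# commitment/hash renewal every 10 years, starting in 2019).
LIFETIME = 100           # years, 2019-2119
TS_VALIDITY = 2          # signature-based timestamps (certificate lifetime)
HASH_COM_VALIDITY = 10   # hash functions and commitments

full_updates = LIFETIME // HASH_COM_VALIDITY - 1               # upCHT updates
ts_only_updates = LIFETIME // TS_VALIDITY - 1 - full_updates   # upT updates
print(full_updates, ts_only_updates)                           # 9 and 40
```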
Table 2 Parameter selection based on Lenstra and Verheul [38, 39]
We show the storage space consumed by an integrity proof P corresponding to genome data G containing 3·10⁹ entries, which is roughly the number of nucleotides of a human genome. We also show the storage space required by a partial integrity proof P′ corresponding to partial genome data G′ containing 1, 100, or 10⁵ entries. As the Warfarin example shows, current personalized medicine applications would only be concerned with a few dozen entries. To take future medical scientific advances into account, we choose to evaluate partial proofs of size up to 10⁵. We also measure the time it takes to generate the initial integrity proof, to update an integrity proof, and to verify a partial integrity proof. We remark that we measure the space consumed in terms of the size of the commitments, timestamps, and hashes to be stored. Likewise, we measure the time consumed for generating and updating an integrity proof in terms of the computation time required to generate the commitments, timestamps, and hashes. For the verification time, we sum up the time required for verification of the individual cryptographic elements. The time and sizes required for hashing, signing, and committing to a message of size 128 B are shown in Table 3. This is an upper bound on the average storage requirement for a variation in VCF, cf. Section 2.1.3. For XMSS, the height parameter is chosen as 10. The timings were taken on a computer with a 2.9 GHz Intel Core i5 CPU and 8 GB RAM running Java.
Table 3 Space and time required for storing, generating, and verifying hashes (SHA), commitments (HM), and signatures (XMSS)
Size of integrity proof
Figure 2 shows the storage space over time required for storing the full integrity proof. The size of the initial integrity proof in year 2019 is 391 GB. The size only increases minimally when updating the timestamps. When updating the commitments, hashes, and timestamps together, the size grows significantly. After the first such update, the size of the integrity proof is 782 GB. After 100 years, the size of the integrity proof is 5309 GB. Comparing this to the size of an average 600 MB VCF file shows that after 100 years, the integrity proof is roughly 10,000 times larger than the actual variant data.
Fig. 2 Size of integrity proof for whole-genome data G
For |G′|∈{1, 100, 10⁵}, Fig. 3 shows the size of a partial integrity proof P′ for G′ over time. As the number of elements covered by the partial integrity proof is considerably smaller, its size is also much smaller than that of the full integrity proof. For the largest partial proof parameter |G′| = 10⁵, the size of P′ ranges from 9.62 MB in 2019 to 130.67 MB in 2118, growing roughly linearly. For a fixed point in time and |G′|≥100, the size also grows roughly proportionally to |G′|.
Fig. 3 Size of partial integrity proof for partial genome data G′
Cost projection for integrity proof storage
Although it is impossible to predict long-term storage costs, we will nevertheless try to give a rough cost projection into the future. We examined two sources of historical hard disk prices and found that between 1980 and 2010 HDD storage costs per gigabyte roughly halved every 14 months [40], leading to a cost reduction by a factor of 10 roughly every 4 years. Since 2009, however, this rapid decline in storage costs has slowed down, showing a reduction in storage costs by a factor of only 4–5 over the last 10 years (Footnote 4).
However, new technologies like HAMR and MAMR [41] are on the horizon, which are expected to yield HDDs with a capacity of 40 TB by 2025, according to Western Digital [42]. We calculated yearly expenses for the storage of a full integrity proof, considering three cost-per-storage projection scenarios: no change in storage costs, and cost reductions by factors of R = 2 and 4 per 10 years. In view of past developments, we deem those rates conservative. We furthermore assumed that HDDs have to be replaced every 5 years and started with storage costs of $15 per TB (Footnote 4). The results can be seen in Fig. 4. The first year of storage costs 0.391 TB · $15/TB / 5 ≈ $1.15. From then on, while the amount of data increases, thanks to the exponential decline in costs, the overall yearly costs decline sharply for R = 2 and 4. For R = 1, the cost is proportional to the amount of storage (Fig. 2). Even in the unrealistic case that storage costs do not drop over 100 years, the costs still only grow to $15.55 yearly in 2119. For R = 2, the costs decline to 22 cents in 2069 and 2 cents in 2119. For R = 4, the costs reach 1 cent in 2069 and after that are well below 1 cent. To be fair, in reality, this data would probably be stored redundantly to protect against data loss, so the actual costs would need to be multiplied by the degree of redundancy.
Fig. 4 Projected yearly storage costs of integrity proofs. The calculation starts with initially $15 per TB, then reduces the costs per TB by a rate of R = 1, 2, and 4 per 10 years. HDDs are assumed to be replaced every 5 years
Computation time
The time required for the initial integrity proof generation in year 2019 is 5.85 h, for G with |G| = 3·10⁹. Figure 5 shows the time required for performing a commitment, timestamp, and hash update of the integrity proof. Computation time for each full update every 10 years is comparable to the computation time of the initial integrity proof. However, it should be considered that with more powerful computers in the future these update times can be expected to decrease significantly.
Fig. 5 Computation time for updating integrity proof for whole-genome data G
Figure 6 shows the time required for verifying a partial integrity proof P′ corresponding to partial genome data G′ with |G′|∈{1, 100, 10⁵}. The computation time required for verification of P′ of the largest partial size, generated in 2019, is 0.46 s. For P′ generated in 2119 the verification time is 5.37 s.
Fig. 6 Computation time for verifying partial integrity proof for partial genome data G′
Comparison with [13]
We briefly compare the performance of our scheme with the performance of the integrity protection scheme of [13]. We observe that for protecting a dataset with [13], for each data item, a separate commitment, decommitment, signature, and timestamp need to be generated and stored. This results in an initial proof generation time of 28,338 h (or 3.2 years) and a size of 14,283 GB. In comparison, our scheme generates the initial proof in 5.9 h and the proof has a size of 391 GB.
Conclusion and future work
We have evaluated a scenario where the integrity of genomic data is protected over a time span of 100 years. We first described a scenario in which genomic data is generated and accessed for medical treatment and analyzed the protection requirements. Next, we proposed a long-term integrity protection scheme suitable for this scenario. Then, we analyzed the performance of the proposed scheme for the given scenario.
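For concreteness, the storage-cost projection of Fig. 4 can be reproduced with a few lines; the $15/TB starting price, 5-year HDD replacement, and reduction factor R per decade are taken from the text, while the proof-size trajectory after 2019 would have to be read off Fig. 2 (only the initial 0.391 TB is quoted here).

```python
# Sketch of the yearly storage-cost projection for the integrity proof.
def yearly_storage_cost(size_tb: float, year: int, rate_per_decade: float,
                        start_year: int = 2019, usd_per_tb_start: float = 15.0,
                        hdd_lifetime_years: int = 5) -> float:
    # Price per TB falls by `rate_per_decade` every 10 years; hardware cost is
    # amortised over the assumed HDD lifetime.
    usd_per_tb = usd_per_tb_start / rate_per_decade ** ((year - start_year) / 10)
    return size_tb * usd_per_tb / hdd_lifetime_years

print(round(yearly_storage_cost(0.391, 2019, 2), 2))   # ~1.17, i.e. the ~$1.15 quoted
```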
We estimate that long-term integrity protection of a genome database with 3·10⁹ independently verifiable entries for 100 years requires approximately 5.3 TB of storage by 2119. We estimated the yearly storage costs of the integrity proof to start at $1.15 and, depending on the assumed reduction in general storage prices, reach $15.55 in 2119 (no reduction) or fall to negligible levels for reduction rates of R = 2 or 4 per 10 years. We therefore deem this 10,000-fold increase in storage relative to the actual variation data acceptable, considering the possible dangers of unprotected integrity and the low actual yearly costs. It takes approximately 5.9 h to generate the initial integrity proof and up to 6.3 h to update it when the cryptographic primitives used must be replaced. The size of a partial integrity proof for a genome subset of size 10⁵, assumed to be a future-proof choice for personalized medicine, is approximately 130 MB after 100 years, and its verification takes approximately 5 s. The computation times can be expected to decrease in the future when more powerful computers become available.
In Section 3.1 we explain that our scheme works on any data that is stored in blocks, including encrypted data. If the database is an untrusted cloud, it obviously makes sense not to store the data in plain text. To guarantee long-term confidentiality, only information-theoretically secure methods such as secret sharing should be used. This stems from the simple fact that an adversary who obtains data in encrypted form only has to wait until the encryption can be broken in the future. We leave it as future work to combine Oblivious RAM techniques [43] with our long-term integrity scheme to achieve better query pattern hiding.
Genome re-sequencing
Our scenario only considered a single production of genomic data, e.g., at birth. After that, only updated integrity proofs were generated. However, it is foreseeable that advanced sequencing technology will be used to re-sequence a human's genome periodically, e.g., every 10 years, once personalized medicine has gone mainstream. Additionally, it is already becoming standard procedure to sequence somatic cancer tissue of patients with certain types of cancers [44, 45]. More cancer types are likely to follow. Furthermore, once cancer is detected, a re-sequencing of cancer tissue every few weeks seems plausible in the future, to observe the development of the cancer's genome. Every (re)sequencing of either healthy or cancer tissue follows the alignment and variant calling procedures, so FASTQ, CRAM, and BCF files, or future enhanced versions thereof, are produced. How to provide long-term protection of this additional data, in combination with existing data, will be investigated in future work. It could also become feasible to redo the alignment and variant calling step once a new reference genome is agreed upon at a (supra)national health governance level. An open question is whether alignments against obsolete reference genomes could be safely deleted, since they could still be reproduced from the raw reads. This, however, is solely determined by medical needs and legislative issues (liability and regulatory mechanisms).
Omics data
Other data apart from the genome itself, typically summarized under the term omics, such as genome methylation pattern sequencing [46, 47], are receiving increasing attention in the area of precision medicine [48].
For these advanced but foreseeable areas, an all-encompassing data integrity solution needs to combine integrity proofs of newly generated and updated data, taken at different time intervals. Such a full solution, however, is beyond the scope of the present study and will be pursued in the future.
The timings presented in Section 5.3 were obtained by running implementations of the respective cryptographic algorithms. The source code for the timing measurements is available from the corresponding author on reasonable request.
Footnote 1: Achieved by the DESCHALL Project, the winners of the first $10,000 DES Challenge by RSA Security.
Footnote 2: In the context of genomics, the verb "to call" is often used in the sense of "to determine"; e.g., a variant call is a variant determined from the underlying data.
Footnote 3: Two in gene CYP2C9, one in gene GGCX, and five in gene VKORC1.
Footnote 4: On 28 July 2019, a 4 TB HDD was available for $64 and a 6 TB HDD for $90 at the price comparison website newegg.com.
D. S. C. Davies, Chief Medical Officer annual report 2016: Generation Genome - GOV.UK. Technical Report 8, Department of Health (July 2017). https://www.gov.uk/government/publications/chief-medical-officer-annual-report-2016-generation-genome. Accessed 4 July 2017. M. Naveed, E. Ayday, E. W. Clayton, J. Fellay, C. A. Gunter, J. -P. Hubaux, B. A. Malin, X. Wang, Privacy in the Genomic Era. ACM Comput. Surv. 48(1), 6–1644 (2015). https://doi.org/10.1145/2767007. Accessed 25 May 2016. M. Akgün, A. O. Bayrak, B. Ozer, M. Ş. Sağıroğlu, Privacy preserving processing of genomic data: a survey. J. Biomed. Inf. 56, 103–111 (2015). https://doi.org/10.1016/j.jbi.2015.05.022. Accessed 28 July 2016. T. Dugan, X. Zou, in 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE). A Survey of Secure Multiparty Computation Protocols for Privacy Preserving Genetic Tests, (2016), pp. 173–182. https://doi.org/10.1109/CHASE.2016.71. M. Caulfield, J. Davies, M. Dennys, L. Elbahy, T. Fowler, S. Hill, T. Hubbard, L. Jostins, N. Maltby, J. Mahon-Pearson, G. McVean, K. Nevin-Ridley, M. Parker, V. Parry, A. Rendon, L. Riley, C. Turnbull, K. Woods, The 100,000 Genomes Project Protocol (2017). https://doi.org/10.6084/m9.figshare.4530893.v2. https://figshare.com/articles/GenomicEnglandProtocol_pdf/4530893. M. Wadelius, L. Y. Chen, K. Downes, J. Ghori, S. Hunt, N. Eriksson, O. Wallerman, H. Melhus, C. Wadelius, D. Bentley, P. Deloukas, Common VKORC1 and GGCX polymorphisms associated with warfarin dose. Pharmacogenomics J. 5(4), 262–270 (2005). https://doi.org/10.1038/sj.tpj.6500313. Accessed 22 June 2017. T. I. W. P. Consortium, Estimation of the Warfarin Dose with Clinical and Pharmacogenetic Data. New Engl. J. Med. 360(8), 753–764 (2009). https://doi.org/10.1056/NEJMoa0809329. Accessed 26 July 2017. J. A. Johnson, L. H. Cavallari, Warfarin pharmacogenetics. Trends Cardiovasc. Med. 25(1), 33–41 (2015). https://doi.org/10.1016/j.tcm.2014.09.001. Accessed 26 July 2017. P. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 26(5), 1484–1509 (1997). https://doi.org/10.1137/S0097539795293172. X. Wang, D. Feng, X. Lai, H. Yu, Collisions for hash functions MD4, MD5, HAVAL-128 and RIPEMD (2004). Cryptology ePrint Archive, Report 2004/199. https://eprint.iacr.org/2004/199. M. Vigil, J. Buchmann, D. Cabarcas, C. Weinert, A.
Wiesmaier, Integrity, authenticity, non-repudiation, and proof of existence for long-term archiving: A survey. Comput. Secur.50:, 16–32 (2015). C. Weinert, D. Demirel, M. Vigil, M. Geihs, J. Buchmann, in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ASIA CCS '17. Mops: A modular protection scheme for long-term storage (ACMNew York, 2017), pp. 436–448. J. Braun, J. Buchmann, D. Demirel, M. Geihs, M. Fujiwara, S. Moriai, M. Sasaki, A. Waseda, in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ASIA CCS '17. Lincos: A storage system providing long-term integrity, authenticity, and confidentiality (ACMNew York, 2017), pp. 461–468. R. C. Merkle, in Advances in Cryptology — CRYPTO' 89 Proceedings, ed. by G. Brassard. A certified digital signature (SpringerNew York, 1990), pp. 218–238. C. M. Swanson, D. R. Stinson, in Information Theoretic Security, ed. by S. Fehr. Unconditionally secure signature schemes revisited (SpringerBerlin, 2011), pp. 100–116. T. Bradley, X. Ding, G. Tsudik, Genomic Security (Lest We Forget). IEEE Secur. Priv.15(5), 38–46 (2017). https://doi.org/10.1109/MSP.2017.3681055. Accessed 9 May 2018. E. Gaetani, L. Aniello, R. Baldoni, F. Lombardi, A. Margheri, V. Sassone, in Italian Conference on Cybersecurity (20/01/17). Blockchain-based database to ensure data integrity in cloud computing environments, (2017). http://ceur-ws.org/Vol-1816/paper-15.pdf. Accessed 20 July 2019. C. Esposito, A. D. Santis, G. Tortora, H. Chang, K. R. Choo, Blockchain: A Panacea for Healthcare Cloud-Based Data Security and Privacy?IEEE Cloud Comput.5(1), 31–37 (2018). https://doi.org/10.1109/MCC.2018.011791712. R. Bansarkhani, M. Geihs, J. Buchmann, PQChain: Strategic design decisions for distributed ledger technologies against future threats. IEEE Secur. Priv.16(04), 57–65 (2018). https://doi.org/10.1109/MSP.2018.3111246. J. K. Bonfield, M. V. Mahoney, Compression of FASTQ and SAM format sequencing data. PLoS ONE. 8(3), 59190 (2013). https://doi.org/10.1371/journal.pone.0059190. Accessed 21 June 2017. The Genome Reference Consortium, The Genome Reference Consortium. http://genomereference.org/. Accessed 31 July 2017. H. Li, B. Handsaker, A. Wysoker, T. Fennell, J. Ruan, N. Homer, G. Marth, G. Abecasis, R. Durbin, The sequence alignment/map format and SAMtools. Bioinformatics. 25(16), 2078–2079 (2009). https://doi.org/10.1093/bioinformatics/btp352. Accessed 20 Apr 2017. M. H. -Y. Fritz, R. Leinonen, G. Cochrane, E. Birney, Efficient storage of high throughput DNA sequencing data using reference-based compression. Genome Res.21(5), 734–740 (2011). https://doi.org/10.1101/gr.114819.110. Accessed 21 June 2017. S. Deorowicz, S. Grabowski, Data compression for sequencing data. Algoritm. Mol. Biol.8:, 25 (2013). https://doi.org/10.1186/1748-7188-8-25. Accessed 15 June 2017. 1000 Genomes Project, IGSR: The International Genome Sample Resource. http://www.internationalgenome.org/ Accessed 31 July 2017. P. Danecek, A. Auton, G. Abecasis, C. A. Albers, E. Banks, M. A. DePristo, R. E. Handsaker, G. Lunter, G. T. Marth, S. T. Sherry, G. McVean, R. Durbin, The variant call format and VCFtools. Bioinformatics. 27(15), 2156–2158 (2011). https://doi.org/10.1093/bioinformatics/btr330. Accessed 20 Apr 2017. The 1000 Genomes Project Consortium, A global reference for human genetic variation. Nature. 526(7571), 68–74 (2015). https://doi.org/10.1038/nature15393. Accessed 31 July 2017-07-31. S. Halevi, S. 
Micali, in Advances in Cryptology — CRYPTO '96, ed. by N. Koblitz. Practical and provably-secure commitment schemes from collision-free hashing (SpringerBerlin, 1996), pp. 201–215. C. Adams, P. Cain, D. Pinkas, R. Zuccherato, RFC 3161: Internet X.509 Public Key Infrastructure Time-Stamp Protocol (TSP) (2001). https://doi.org/10.17487/rfc3161. M. Geihs, D. Demirel, J. Buchmann, in 2016 14th Annual Conference on Privacy, Security and Trust (PST). A security analysis of techniques for long-term integrity protection, (2016). https://doi.org/10.1109/pst.2016.7906995. A. Buldas, M. Geihs, J. Buchmann, in Information Security and Privacy: 22nd Australasian Conference, ACISP 2017, Auckland, New Zealand, July 3–5, 2017, Proceedings, Part I, ed. by J. Pieprzyk, S. Suriadi. Long-term secure commitments via extractable-binding commitments (SpringerCham, 2017), pp. 65–81. A. Buldas, M. Geihs, J. Buchmann, in Provable Security, ed. by T. Okamoto, Y. Yu, M. H. Au, and Y. Li. Long-term secure time-stamping using preimage-aware hash functions (SpringerCham, 2017), pp. 251–260. S. Goldwasser, S. Micali, R. Rivest, A digital signature scheme secure against adaptive chosen-message attacks. SIAM J. Comput.17(2), 281–308 (1988). https://doi.org/10.1137/0217017. http://arxiv.org/abs/https://doi.org/10.1137/0217017. National Institute of Standards and Technology (NIST), FIPS PUB 180-4: Secure hash standard (SHS) (2015). M. Geihs, Long-term protection of integrity and confidentiality ? security foundations and system constructions. PhD thesis, Technische Universität, Darmstadt (2018). http://tubiblio.ulb.tu-darmstadt.de/108203/. J. Buchmann, E. Dahmen, A. Hülsing, in Post-Quantum Cryptography: 4th International Workshop, PQCrypto 2011, Taipei, Taiwan, November 29 – December 2, 2011. Proceedings, ed. by B. -Y. Yang. Xmss - a practical forward secure signature scheme based on minimal security assumptions (SpringerBerlin, 2011), pp. 117–129. S. Halevi, S. Micali, in Advances in Cryptology — CRYPTO '96: 16th Annual International Cryptology Conference Santa Barbara, California, USA August 18–22, 1996 Proceedings, ed. by N. Koblitz. Practical and provably-secure commitment schemes from collision-free hashing (SpringerBerlin, 1996), pp. 201–215. A. K. Lenstra, E. R. Verheul, Selecting cryptographic key sizes. J. Cryptol.14(4), 255–293 (2001). A. K. Lenstra, in Bidgoli, Hossein. Handbook of Information Security, Information Warfare, Social, Legal, and International Issues and Security Foundations. Vol. 2. Key lengths (Wiley, 2006), pp. 617–635. M. Komorowski, A History of Storage Cost (2009). https://www.mkomo.com/cost-per-gigabyte. Accessed 28 July 2019. Y. Shiroishi, K. Fukuda, I. Tagawa, H. Iwasaki, S. Takenoiri, H. Tanaka, H. Mutoh, N. Yoshikawa, Future Options for HDD Storage. IEEE Trans. Magn.45(10), 3816–3822 (2009). https://doi.org/10.1109/TMAG.2009.2024879. T. S. Ganesh, Western Digital Stuns Storage Industry with MAMR Breakthrough for Next-Gen HDDs (2017). https://www.anandtech.com/show/11925/western-digital-stuns-storage-industry-with-mamr-breakthrough-for-nextgen-hdds. Accessed 28 July 2019. E. Stefanov, M. van Dijk, E. Shi, C. Fletcher, L. Ren, X. Yu, S. Devadas, in Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security. CCS '13. Path ORAM: An Extremely Simple Oblivious RAM Protocol (ACMNew York, 2013), pp. 299–310. https://doi.org/10.1145/2508859.2516660. C. Tan, X. Du, KRAS mutation testing in metastatic colorectal cancer. World J. Gastroenterol. : WJG. 18(37), 5171–5180 (2012). 
https://doi.org/10.3748/wjg.v18.i37.5171. Accessed 28 July 2017. S. Kummar, P. M. Williams, C. -J. Lih, E. C. Polley, A. P. Chen, L. V. Rubinstein, Y. Zhao, R. M. Simon, B. A. Conley, J. H. Doroshow, Application of molecular profiling in clinical trials for advanced metastatic cancers. JNCI: J. Natl. Cancer Inst.107(4) (2015). https://doi.org/10.1093/jnci/djv003. Accessed 28 July 2017. B. E. Bernstein, A. Meissner, E. S. Lander, The mammalian epigenome. Cell. 128(4), 669–681 (2007). https://doi.org/10.1016/j.cell.2007.01.033. P. A. Jones, T. K. Archer, S. B. Baylin, S. Beck, S. Berger, B. E. Bernstein, J. D. Carpten, S. J. Clark, J. F. Costello, R. W. Doerge, M. Esteller, A. P. Feinberg, T. R. Gingeras, J. M. Greally, S. Henikoff, J. G. Herman, L. Jackson-Grusby, T. Jenuwein, R. L. Jirtle, Y. -J. Kim, P. W. Laird, B. Lim, R. Martienssen, K. Polyak, H. Stunnenberg, T. D. Tlsty, B. Tycko, T. Ushijima, J. Zhu, V. Pirrotta, C. D. Allis, S. C. Elgin, J. Rine, C. Wu, Moving AHEAD with an international human epigenome project. Nature. 454(7205), 711–715 (2008). https://doi.org/10.1038/454711a. Accessed 1 Aug 2017. I. S. Chan, G. S. Ginsburg, Personalized Medicine: Progress and Promise. Ann. Rev. Genom. Hum. Genet.12(1), 217–244 (2011). https://doi.org/10.1146/annurev-genom-082410-101446. Accessed 1 Aug 2017. The research reported in this paper has been supported by the German Federal Ministry of Education and Research (BMBF) [and by the Hessian Ministry of Science and the Arts] within CRISP (www.crisp-da.de), as well as by collaborations within the BMBF-funded HiGHmed consortium. This work has been co-funded by the DFG as part of project S6 within the CRC 1119 CROSSING. Matthias Geihs and Sebastian Stammler contributed equally to this work. Technische Universität Darmstadt, Department of Computer Science, Hochschulstraße 10, Darmstadt, 64289, Germany Johannes Buchmann, Matthias Geihs, Kay Hamacher, Stefan Katzenbeisser & Sebastian Stammler Johannes Buchmann Matthias Geihs Kay Hamacher Stefan Katzenbeisser Sebastian Stammler JB, KH, and SK played a major role in sparking the idea for this research and supervising it. KH and SS contributed with their background knowledge on genomic data formats and protection requirements. MG and SS together designed the protection scheme. MG evaluated the performance of the scheme. MG and SS are major contributors in writing the manuscript. KH and SK supported the revision of the manuscript. All authors read and approved the final manuscript. Correspondence to Sebastian Stammler. Matthias Geihs and Sebastian Stammler are equal contributors. Buchmann, J., Geihs, M., Hamacher, K. et al. Long-term integrity protection of genomic data. EURASIP J. on Info. Security 2019, 16 (2019). https://doi.org/10.1186/s13635-019-0099-x DOI: https://doi.org/10.1186/s13635-019-0099-x Long-term security Genomic privacy Genomic security Renewable cryptography
Can the buck always be passed to the highest level of clustering?
Christian Bottomley1, Matthew J. Kirby1, Steve W. Lindsay1 & Neal Alexander1
BMC Medical Research Methodology volume 16, Article number: 29 (2016) Cite this article
Clustering commonly affects the uncertainty of parameter estimates in epidemiological studies. Cluster-robust variance estimates (CRVE) are used to construct confidence intervals that account for single-level clustering, and are easily implemented in standard software. When data are clustered at more than one level (e.g. village and household) the level for the CRVE must be chosen. CRVE are consistent when used at the higher level of clustering (village), but since there are fewer clusters at the higher level, and consistency is an asymptotic property, there may be circumstances under which coverage is better from lower- rather than higher-level CRVE. Here we assess the relative importance of adjusting for clustering at the higher and lower level in a logistic regression model. We performed a simulation study in which the coverage of 95 % confidence intervals was compared between adjustments at the higher and lower levels. Confidence intervals adjusted for the higher level of clustering had coverage close to 95 %, even when there were few clusters, provided that the intra-cluster correlation of the predictor was less than 0.5 for models with a single predictor and less than 0.2 for models with multiple predictors. When there are multiple levels of clustering it is generally preferable to use confidence intervals that account for the highest level of clustering. This only fails if there are few clusters at this level and the intra-cluster correlation of the predictor is high.
Observations are often grouped in assortative clusters, so that two observations from the same cluster tend to be more similar than two selected at random. For example, members of the same household might share genetic and environmental risk factors such that the presence of a disease in one member is predictive of that in others in the same household. Clustering can influence the amount of uncertainty in parameter estimates. For the sample mean, the standard estimate of the variance must be inflated by a factor 1+ρ(n−1), where ρ is the intra-cluster correlation, which equals the ratio of the variance of cluster means to the total variance of the observations [1], and n is the number of observations per cluster. For measures of association between an outcome (y) and predictor (x), the effect of clustering in the outcome is complicated by the distribution of x across clusters—i.e., the degree of clustering in x—and it may not always inflate the variance. In a linear regression model the variance of the regression coefficient associated with the predictor is increased by 1+(n−1)ρ_x ρ_y relative to the OLS estimate [2, 3]. Thus clustering has no effect when either ρ_x or ρ_y is zero and a large effect when both are close to one. Generally, parameter estimates from generalised linear models, such as logistic regression, are consistent in the presence of clustering, provided that the relationship between the mean of the outcome and the predictor variables is correctly specified. But the standard variance estimates of the regression parameters that ignore clustering are not consistent, and therefore confidence intervals that are based on these variance estimates are incorrect [4].
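As a quick worked example of these two inflation factors (a sketch with our own function names; n denotes the number of observations per cluster):

```python
# Variance inflation due to clustering.
def deff_mean(rho: float, n: int) -> float:
    # Design effect for a cluster-sample mean: 1 + rho*(n - 1).
    return 1 + rho * (n - 1)

def deff_slope(rho_x: float, rho_y: float, n: int) -> float:
    # Inflation of the regression-coefficient variance: 1 + (n - 1)*rho_x*rho_y.
    return 1 + (n - 1) * rho_x * rho_y

print(deff_mean(0.01, 20))        # 1.19 for a modest ICC and 20 observations per cluster
print(deff_slope(1.0, 0.01, 20))  # 1.19 for a cluster-level predictor (rho_x = 1)
print(deff_slope(0.0, 0.01, 20))  # 1.00: an unclustered predictor gives no inflation
```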
Fortunately, it is possible to obtain consistent variance estimates for regression parameters using cluster-robust variance estimates (CRVE), which are consistent irrespective of the correlation structure within clusters, provided that observations between clusters are independent [4]. In particular, when there is more than one level of clustering (e.g., individuals clustered in households and households clustered in villages), then CRVE applied at the higher level are consistent, despite the correlation structure within the higher-level clusters (villages) being complicated by correlations within the lower-level clusters (households). Thus a researcher who is faced with multiple levels of clustering can obtain consistent confidence intervals by using CRVE at the highest level of clustering: Angrist and Pischke refer to this as 'passing the clustering buck' to the higher level [5].
Consistency, however, guarantees lack of bias only asymptotically, i.e., for sufficiently large sample sizes. Unfortunately, CRVE are biased when there are few clusters. Furthermore, the bias is usually downward, so that confidence intervals are too narrow [6]. There is therefore a trade-off: at the lower level of clustering there are more clusters, but observations from different clusters will be dependent; at the higher level, observations from different clusters are independent, but there are fewer clusters and the CRVE will be biased. In this study we explore this trade-off in the context of logistic regression. We use a random effects (conditional) model to simulate binary data that are clustered at two levels, and fit a marginal model to these data, using CRVE to adjust for clustering at either the higher or the lower level. Before we present the simulation, we describe the relationship between marginal and conditional models, and discuss the intra-cluster correlation as a measure of the degree of clustering.
Marginal and conditional models
We model the relationship between a binary outcome and a set of binary predictors in the presence of nested clusters, where the disease and predictors can vary in prevalence between clusters. For example, we might want to predict the probability of a disease based on certain risk factors, and the disease and risk factors are known to cluster in households and villages. In this example, households are the lower-level clusters, and they are nested in villages because members of a household belong to the same village. One approach used to account for clustering is to include random effects in the regression model. For example, we might model the effects of household and village as independent, normally distributed random variables z_jk and u_k and include these, together with the predictors x_1, …, x_p, in the model
$$ \log \left(\frac{\pi_{ijk}}{1-\pi_{ijk}} \right)=\alpha_{0}+\alpha_{1} x_{1ijk}+ \alpha_{2} x_{2ijk}+\ldots+\alpha_{p} x_{pijk}+u_{k}+z_{jk} \qquad (1) $$
where π_ijk is the probability of disease in individual i from household j and village k. We refer to this as the conditional model because the parameter estimates for the predictor variables are conditional on the village and household effects. The model can be fitted by integrating the likelihood over the distribution of the unobserved random effects of village and household, and then maximising this marginal likelihood.
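A minimal sketch of drawing clustered binary data from the conditional model (1), using a single household-level predictor; it is written in Python/NumPy rather than the R used for the study's own simulations, with the parameter values quoted in the Fig. 1 caption and an assumed value for the village-level standard deviation σ_v.

```python
# Simulate a binary outcome clustered in K villages x J households x I individuals
# from the random-effects (conditional) logistic model (1).
import numpy as np

rng = np.random.default_rng(1)

def simulate(K=5, J=20, I=5, alpha0=np.log(0.1 / 0.9), alpha1=np.log(2),
             sigma_v=np.log(2), sigma_h=np.log(2), p_x=0.5):
    rows = []
    for k in range(K):
        u_k = rng.normal(0, sigma_v)             # village random effect
        for j in range(J):
            z_jk = rng.normal(0, sigma_h)        # household random effect
            x_hh = rng.binomial(1, p_x)          # household-level binary predictor
            for i in range(I):
                eta = alpha0 + alpha1 * x_hh + u_k + z_jk
                y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
                rows.append((k, j, x_hh, y))     # village, household, x, y
    return np.array(rows)

data = simulate()
```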
A drawback of this approach is that it is necessary to assume distributions for the random effects, and the parameter estimates can be sensitive to the choice of distribution [7]. Alternatively, we can fit a marginal, or population average, logistic regression model that ignores clustering
$$ \log \left(\frac{\pi_{ijk}}{1-\pi_{ijk}} \right)=\beta_{0}+\beta_{1} x_{1ijk} + \beta_{2} x_{2ijk}+\ldots+\beta_{p} x_{pijk}. \qquad (2) $$
The parameters of this model can be estimated by fitting the model using maximum likelihood, ignoring the cluster effects. This is equivalent to solving a set of estimating equations (Eq. A-2 in the Appendix) that have been derived by setting the derivative of the log-likelihood to zero. Each parameter estimate is consistent, provided that the relationship between the probability of disease and the predictor variables is correctly specified, but the usual variance estimate based on the second derivative of the log-likelihood is not correct. For a single level of clustering, a cluster-robust variance estimate (CRVE) can be used instead (see Appendix). This estimate is unbiased as the number of clusters tends to infinity, but may be biased when the number of clusters is small. When there are multiple levels of clustering, a consistent variance estimate can be obtained by adjusting for clustering at the higher level—this implicitly accounts for lower-level clustering—but, since the number of higher-level clusters is often small, bias may be a concern.
The parameters, apart from the intercept, represent log odds ratios in both models, but they are interpreted differently in the two models. For example, for a single binary predictor x_1, β_1 is the difference in log odds comparing individuals with x_1=1 and x_1=0 across the whole population, while α_1 is the difference comparing x_1=1 and x_1=0 within a household. The odds ratio, unlike the risk difference and risk ratio, is not collapsible across strata; therefore α_1 and β_1 will be different unless α_1=β_1=0 or there is no variation between households and villages in disease risk. In general, the relationship between the two sets of parameters can be derived by imagining a dataset that consists of the entire population and that is generated by the random effects model. The parameters of the marginal model are the 'estimates' that are obtained when the marginal model is fitted to this dataset. Mathematically, this is equivalent to solving equation A-2 in the Appendix, after replacing Y_ij with E_α[Y_ij | x_ij] = π_ij [8]. Using this approach, Zeger et al. [8] derive the following relationship
$$ \beta \approx \alpha \left(1+c^{2}\left({\sigma_{h}^{2}}+{\sigma_{v}^{2}}\right)\right)^{-1/2} \qquad (3) $$
where α is the vector of parameters from the random effects model, β is the vector of parameters from the marginal model, σ_h and σ_v are the standard deviations of the household and village random effects, and \(c=16\sqrt {3}/(15\pi)\). From equation 3, it can be seen that the odds ratio is closer to the null in the marginal model than in the random effects model, and that the magnitude of the difference between the odds ratios depends on the amount of variation between clusters, both at the level of the household and the village.
Intra-cluster correlation
The variance of a regression parameter estimate depends on the amount of clustering in both the outcome and the predictor. The intra-cluster correlation, defined as the correlation between two observations from the same cluster, can be used to quantify the degree of clustering in both variables.
Mathematically, it is defined as
$$ \rho=\frac{\mathrm{E}(Z_{ij}-\mu)\left(Z_{i^{*}j}-\mu\right)}{\mathrm{E}\left(Z_{ij} - \mu\right)^{2}} \hspace{1cm} i^{*} \neq i \qquad (4) $$
where μ is the overall mean and the expectation is taken over all clusters and pairs of observations within clusters [1]. Assuming that observations are independent conditional on the cluster,
$$ \rho=\frac{\mathrm{E}\left(\mu_{j}-\mu\right)^{2}}{\mathrm{E}(Z_{ij}-\mu)^{2}} \qquad (5) $$
where μ_j is the mean for cluster j. Therefore ρ represents the ratio of the variance in cluster means to the overall variance of the observations. By definition, ρ=1 for cluster-level variables because all the variation is then between clusters, but ρ is less than 1 when variables pertain to lower-level units. For example, in a study where data are collected from different villages, village size would be a cluster-level variable with ρ=1, but for household and individual-level variables ρ<1. In fact, the intra-cluster correlation is usually considerably less than 1 for observations made on lower-level units. In a survey of binary and continuous outcomes recorded in cluster-based studies conducted in primary care, the median intra-cluster correlation was 0.01 and 90 % were less than 0.055 [9].
The intra-cluster correlation of the outcome can be calculated directly from the random effects model (Eq. 1) for given values of the parameters and covariates. The intra-cluster correlation can also be calculated for each of the predictors, but in this case, since the predictors are not defined by a stochastic model, it is calculated from an empirical version of Eq. 4. Note that Eq. 5 implies that ρ≥0, but for the predictors the intra-cluster correlation is calculated from the sample rather than the model; consequently the independence assumption necessary for Eq. 5 is not met and the intra-cluster correlation is not necessarily positive. In fact, it reaches a lower bound of −1/(n−1) when the prevalence of the predictor is the same in each of n clusters [3]. We will use the notation ρ_y to denote the intra-cluster correlation defined by the stochastic model for the outcome and ρ̂_x to denote the empirical intra-cluster correlation of a predictor.
We conducted a simulation study to explore the coverage of confidence intervals for the parameters of the marginal model. The parameter values used in the simulation are given in Table 1, and we estimated coverage for every combination of these parameters.
Table 1 Parameter values
For each parameter combination, we estimated coverage by simulating 10,000 samples from the population using the conditional model (Eq. 1). The marginal model (Eq. 2) was fitted to each sample, and we calculated confidence intervals unadjusted for clustering (CI_un), and intervals adjusted for clustering within households (CI_hh) and villages (CI_vil). We estimated the coverage for each type of interval by calculating the proportion of the 10,000 intervals that contained the true marginal log odds ratio, which was calculated by solving Eq. A-2 in the Appendix with Y_ij replaced by E_α[Y_ij | x_ij] = π_ij (see the previous section on marginal and conditional models). We used predictors of the outcome that varied in their degree of clustering within households and villages; the empirical intra-cluster correlation ρ̂_x used to characterise them can be computed as in the sketch below.
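A hedged sketch of that empirical intra-cluster correlation (Eq. 4), averaging over all within-cluster pairs; this is our own illustration, not the authors' code.

```python
# Empirical intra-cluster correlation of a variable, computed over all
# pairs of observations that share a cluster (Eq. 4).
import numpy as np
from itertools import combinations

def empirical_icc(values, clusters) -> float:
    values, clusters = np.asarray(values, float), np.asarray(clusters)
    mu = values.mean()
    total_var = ((values - mu) ** 2).mean()
    cross = [(values[i] - mu) * (values[j] - mu)
             for c in np.unique(clusters)
             for i, j in combinations(np.flatnonzero(clusters == c), 2)]
    return float(np.mean(cross) / total_var)

# A cluster-level variable (constant within clusters) gives ICC = 1:
print(empirical_icc([1, 1, 0, 0], [1, 1, 2, 2]))   # 1.0
```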
At the extremes, we explored predictors where the proportion positive for the predictor was the same in each village, such that \(\hat {\rho }_{x}^{(vil)} =-1/(n-1)\), and predictors where each village consists entirely of positives or negatives, such that \(\hat {\rho }_{x}^{(vil)}=1\). Table 2 shows, for each predictor, the proportion of individuals positive in each village. We used both household-level (x_1–x_4) and individual-level (x_5–x_7) predictors. The former are the same for all members of the household (e.g., household income) and the latter vary between household members (e.g., age). We fitted models with a single predictor and also multivariable models that included all the predictors simultaneously.
Table 2 Distribution of predictors (x_1–x_7) across villages (V1–V5) and the resulting intra-cluster correlation of the predictor
CI_hh and CI_vil were calculated using CRVE (see Appendix) with two corrections to adjust for downward bias. First, the CRVE was inflated by a factor of n/(n−1), where n is the number of clusters. Second, the confidence interval was calculated using a t-distribution with n−1 degrees of freedom as the reference distribution rather than a standard normal distribution. We did the simulations in R [10] using the rms package [11] to fit logistic regression models and calculate CRVE.
CI_un and CI_hh had close to 95 % coverage when there was limited village-level clustering in the outcome or predictor, but for both the coverage decreased as clustering in the outcome and predictor increased (Fig. 1). CI_vil performed well when the number of villages was large, and also when the number of villages was small (K=5), provided that the predictor was not too strongly clustered at the village level. For example, coverage was more than 85 % for \(\hat {\rho }_{x}^{(vil)}<0.5\) in models with a single predictor, and more than 85 % for \(\hat {\rho }_{x}^{(vil)}<0.2\) in models that included all predictors simultaneously. CI_vil was only outperformed by CI_hh when there was limited village-level clustering in the outcome \((\rho _{y}^{(vil)}<0.02)\) and the intra-cluster correlation of the predictor was close to 1 \((\hat {\rho }_{x}^{(vil)}\approx 1)\).
Fig. 1 Coverage of 95 % confidence intervals for the log odds ratio of a household-level predictor. The lines correspond to coverage of confidence intervals that adjust for clustering at the village level, the household level, or do not adjust for clustering. Coverage is presented as a function of the degree of village-level clustering in the outcome as measured by the intra-cluster correlation (ICC). The intra-cluster correlation of the predictor ranges between 0 (top) and 1 (bottom), for K=5 (left) or K=20 (right) villages. The remaining parameter values are α_0 = log(0.1/0.9), α_1 = σ_h = log(2), I=5, J=20
Our findings were similar irrespective of whether a household-level (Fig. 1) or individual-level predictor (Additional file 1: Figure S1) was used. They were also similar when all predictors (individual- and household-level) were included in the model simultaneously (Additional file 2: Figure S2 and Additional file 3: Figure S3), although the coverage of CI_vil was somewhat worse.
We illustrate our findings by analysing data from a randomised trial of a house screening intervention to reduce malaria in children aged 6 months to 10 years [12]. The intervention was evaluated in terms of its impact on the number of mosquitoes caught, anaemia, and malaria parasitaemia. The study also collected data on risk factors for malaria, including bed net use.
Here we will focus on the presence of malaria parasites in the child, and estimate its association with bed net use and house screening. We use data from the six largest villages (or residential blocks in urban areas) collected on 428 children living in 209 households. The protocol was approved by the Health Services and Public Health Research Board of the MRC UK and The Gambia Government and MRC Laboratories Joint Ethics Committee, and the Ethics Advisory Committee of Durham University. All participants provided consent.
At the household level, malaria was strongly clustered, as were the two predictors: the intra-cluster correlation was 0.47 for malaria, 0.79 for bed net use, and 1 for screening (by design). At the village level, malaria and bed net use were strongly clustered (intra-cluster correlations 0.28 and 0.33), but screening was not clustered because it was randomly allocated to households. The odds ratio for screening was 1.13 and the 95 % confidence interval adjusted for household clustering was 0.55 to 2.31. Since there are many households and screening is uncorrelated with village, we expect the coverage of CI_hh to be close to 95 %. The odds ratio for bed net use was 0.90. The confidence interval adjusted for household clustering was 0.50 to 1.63, and adjusted for village clustering it was 0.30 to 2.76. Because malaria and bed net use are both highly clustered at the village level, we expect CI_vil to have better coverage than CI_hh. To further explore the difference between the coverage of the two confidence intervals, we fitted a random effects model to the malaria data with bed net use as the predictor. We then simulated samples from this model to estimate the coverage of CI_hh and CI_vil for the marginal odds ratio, using the approach described in the previous section. As predicted, we found that the coverage of CI_hh (68 %) was considerably worse than that of CI_vil, which had reasonable coverage (85 %) despite the small number of villages.
In general, we recommend using CRVE to adjust for clustering at the higher level. From simulation studies, we found that the coverage was generally better when confidence intervals were adjusted for the higher level of clustering. Adjusting for the lower level of clustering only gave better coverage (i.e., a higher proportion of confidence intervals included the true odds ratio) when the number of higher-level clusters was small and the intra-cluster correlation of the predictor at this level was close to 1. Neither adjustment produced satisfactory coverage when, at the higher level, there were few clusters and the outcome and predictor were both highly correlated with cluster.
We used two simple adjustments to improve the coverage of confidence intervals: the variance estimate was multiplied by n/(n−1), and the t-distribution with n−1 degrees of freedom was used as the reference distribution rather than the standard normal distribution. Both adjustments are implemented in the svyset command in Stata. Other methods for adjusting confidence intervals might give better coverage, but are not currently implemented in routine software. Pan and Wall [13] suggest modifying the degrees of freedom used for the reference t-distribution, and a number of authors have proposed methods for correcting for the bias in the variance estimates [14–16]. Bootstrap methods, in which clusters are resampled with replacement, offer another approach, but do not perform better than CRVE [17].
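To show how such an adjusted interval can be assembled, here is a hedged Python/NumPy re-implementation of the sandwich formula given in the Appendix, together with the n/(n−1) inflation and t_{n−1} reference distribution described above (the study itself used R's rms package; all names here are ours).

```python
# Cluster-robust 95% confidence intervals for logistic regression coefficients.
# Passing household identifiers as `clusters` gives CI_hh; village identifiers give CI_vil.
import numpy as np
import statsmodels.api as sm
from scipy import stats

def cluster_robust_ci(y, X, clusters, level=0.95):
    X = sm.add_constant(np.asarray(X, float))
    y = np.asarray(y, float)
    clusters = np.asarray(clusters)
    beta = sm.Logit(y, X).fit(disp=0).params                    # point estimates ignore clustering
    p = 1 / (1 + np.exp(-X @ beta))
    bread = np.linalg.inv(X.T @ (X * (p * (1 - p))[:, None]))   # (sum_j X_j' V_j X_j)^{-1}
    meat = np.zeros((X.shape[1], X.shape[1]))
    ids = np.unique(clusters)
    for c in ids:                                               # sum_j X_j'(y_j - p_j)(y_j - p_j)'X_j
        s = X[clusters == c].T @ (y[clusters == c] - p[clusters == c])
        meat += np.outer(s, s)
    n = len(ids)
    cov = bread @ meat @ bread * n / (n - 1)                    # CRVE with small-sample inflation
    se = np.sqrt(np.diag(cov))
    tcrit = stats.t.ppf(0.5 + level / 2, df=n - 1)              # t_{n-1} reference distribution
    return beta - tcrit * se, beta + tcrit * se
```

Applied to data simulated as in the earlier sketch, the coverage of CI_hh and CI_vil can then be estimated by counting how often the true marginal log odds ratio falls inside the interval.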
The results we have presented here are from simulation, rather than algebraic demonstration. Nevertheless, the simulations cover wide ranges of the key parameters (\(\rho_{x}\) and \(\rho_{y}\) at the higher level of clustering), and our conclusions were not sensitive to the values used for the other parameters, except the number of higher-level clusters. For this parameter we present results for a small (K=5) and large (K=20) number. At K=20 the coverage was close to 95 % when confidence intervals were adjusted for higher-level clustering, and we expect coverage to improve if K>20. We chose not to explore with further granularity the region of parameter space where adjustment at the lower level of clustering is favourable (high \(\rho_{x}\), low \(\rho_{y}\) at the higher level of clustering and a small number of clusters at this level) because the region is small and the lower level has only a slight advantage here.

We have explored the performance of standard errors adjusted for clustering without adjusting the log odds ratio. In the framework of Generalised Estimating Equations (GEE) this is equivalent to assuming an 'independence' working correlation matrix. The log odds ratio can be estimated more efficiently (i.e., with smaller asymptotic variance) if the correlation structure is used to inform the estimate. For a single level of clustering, a constant correlation between observations from the same cluster is often assumed: the so-called 'exchangeable' correlation structure. When there are multiple levels of clustering one could assume a constant correlation at the higher level, but this is a crude approximation because the correlation between observations from the same higher-level cluster will depend on whether they also come from the same lower-level cluster. Several authors have therefore modelled the correlation structure that occurs when there is multi-level clustering and have demonstrated that this gives more efficient estimates compared to either the independence or the exchangeable structure [18–20]. While these methods provide benefit in terms of efficiency, the complicated correlation structure is not easily implemented in standard software, and the additional parameters can lead to problems with convergence, particularly when the number of clusters is small [20]. Furthermore, the loss of efficiency that results from assuming an independence structure is generally small [4, 21], except when the intra-cluster correlation of the outcome is large (\(\rho_{y}>0.3\)) and the predictor varies within clusters [22]. The relative simplicity of assuming an 'independence' correlation structure (i.e., the CRVE approach discussed in this manuscript) might therefore remain attractive to the applied researcher, even if the resulting estimate is not the most efficient.

CRVE are commonly used to construct confidence intervals that take account of clustering. When clustering occurs at multiple levels, CRVE can be used at the higher level of clustering, except if there are few clusters at this level and the intra-cluster correlation of the predictor is high.

Appendix

In a logistic regression model, the relationship between a binary variable \(Y_{ij}\) and predictors \(x_{1ij},\ldots,x_{pij}\) is

$$ \log\left(\frac{\pi_{ij}}{1-\pi_{ij}}\right)=\beta_{0}+\beta_{1}x_{1ij}+\beta_{2}x_{2ij}+\ldots+\beta_{p}x_{pij}=x'_{ij}\beta \qquad \text{(A-1)} $$

where \(\pi_{ij}=P(Y_{ij}=1\,|\,x_{ij})\) for observation i from cluster j.
Assuming responses are independent, the maximum likelihood estimate, \(\hat{\beta}\), is the solution to the equations

$$ U(\beta)=\sum_{j=1}^{n} X_{j}'\left(Y_{j}-\pi_{j}(\beta)\right)=0 $$

where \(Y_{j}\) is a column vector of responses in cluster j, and \(X_{j}\) is a matrix whose columns are the predictors of \(Y_{j}\). Since the \(Y_{j}\) are independent, it can be shown by the central limit theorem and using a Taylor expansion that, asymptotically, as the number of clusters (n) tends to infinity, \(\hat{\beta}\) is normally distributed with mean β and variance

$$ \left(\frac{\partial U'}{\partial \beta} \right)^{-1} \text{Var}(U(\beta)) \left(\frac{\partial U'}{\partial \beta} \right)^{-1} $$

where \(\text{Var}(U(\beta))=\sum_{j} X_{j}'\,\text{Var}(Y_{j})\,X_{j}\), \(\frac{\partial U'}{\partial \beta}=\sum_{j} X_{j}' V_{j} X_{j}\), and \(V_{j}\) is a diagonal matrix with elements \(\pi_{ij}(1-\pi_{ij})\). The so-called sandwich estimator, which is also referred to as the cluster robust variance estimate (CRVE), is obtained by replacing \(\pi_{ij}\) in \(V_{j}\) with \(\hat{\pi}_{ij}\) and using \((y_{j}-\hat{\pi}_{j})(y_{j}-\hat{\pi}_{j})'\) to estimate the covariance matrix \(\text{Var}(Y_{j})\).

Abbreviations

CRVE: Cluster robust variance estimate; OLS: Ordinary least squares; GEE: Generalised estimating equation; SD: Standard deviation

References

1. Eldridge SM, Ukoumunne OC, Carlin JB. The intra-cluster correlation coefficient in cluster randomised trials: a review of definitions. Int Stat Rev. 2009; 77(3):378–394.
2. Moulton BR. Random group effects and the precision of regression estimates. J Econ. 1986; 32(3):385–397.
3. Scott AJ, Holt D. The effect of two-stage sampling on ordinary least squares methods. J Am Stat Assoc. 1982; 77(380):848–854.
4. Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986; 73(1):13–22.
5. Angrist JD, Pischke JS. Mostly harmless econometrics. Woodstock, Oxfordshire: Princeton University Press; 2009.
6. Bell RM, McCaffrey DF. Bias reduction in standard errors for linear regression with multi-stage samples. Surv Methodol. 2002; 28(2):169–181.
7. Hubbard AE, Ahern J, Fleischer NL, Van der Laan M, Lippman SA, Jewell N, et al. To GEE or not to GEE: comparing population average and mixed models for estimating the associations between neighborhood risk factors and health. Epidemiology. 2010; 21(4):467–74.
8. Zeger SL, Liang KY, Albert PS. Models for longitudinal data: a generalized estimating equation approach. Biometrics. 1988; 44(4):1049–60.
9. Adams G, Gulliford MC, Ukoumunne OC, Eldridge S, Chinn S, Campbell MJ. Patterns of intra-cluster correlation from primary care research to inform study design and analysis. J Clin Epidemiol. 2004; 57(8):785–94.
10. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2015. http://www.r-project.org/. Accessed 24 Feb 2016.
11. Harrell FE. rms: Regression Modeling Strategies. R package version 4.4-0. 2015. http://cran.r-project.org/package=rms. Accessed 24 Feb 2016.
12. Kirby MJ, Ameh D, Bottomley C, Green C, Jawara M, Milligan PJ, et al. Effect of two different house screening interventions on exposure to malaria vectors and on anaemia in children in The Gambia: a randomised controlled trial. Lancet. 2009; 374(9694):998–1009.
13. Pan W, Wall MM. Small-sample adjustments in using the sandwich variance estimator in generalized estimating equations. Stat Med. 2002; 21(10):1429–1441.
14. McCaffrey DF, Bell RM. Improved hypothesis testing for coefficients in generalized estimating equations with small samples of clusters.
Stat Med. 2006; 25(23):4081–4098.
15. Fay MP, Graubard BI. Small-sample adjustments for Wald-type tests using sandwich estimators. Biometrics. 2001; 57(4):1198–1206.
16. Mancl LA, DeRouen TA. A covariance estimator for GEE with improved small-sample properties. Biometrics. 2001; 57(1):126–134.
17. Cameron AC, Miller DL. A practitioner's guide to cluster-robust inference. J Hum Resour. 2015; 50(2):317–372.
18. Qaqish BF, Liang KY. Marginal models for correlated binary responses with multiple classes and multiple levels of nesting. Biometrics. 1992; 48(3):939–50.
19. Chao EC. Structured correlation in models for clustered data. Stat Med. 2006; 25(14):2450–68.
20. Stoner JA, Leroux BG, Puumala M. Optimal combination of estimating equations in the analysis of multilevel nested correlated data. Stat Med. 2010; 29(4):464–73.
21. McDonald BW. Estimating logistic regression parameters for bivariate binary data. J R Stat Soc Ser B. 1993; 55(2):391–397.
22. Fitzmaurice GM. A caveat concerning independence estimating equations with multivariate binary data. Biometrics. 1995; 51(1):309–317.

Acknowledgements

This work was supported by funding from the United Kingdom Medical Research Council (MRC) and Department for International Development (DFID) (MR/K012126/1). The STOPMAL trial was funded by the UK Medical Research Council and registered as an International Standard Randomised Controlled Trial, number ISRCTN51184253. We would like to thank Richard Hayes for reviewing the manuscript.

Author information

MRC Tropical Epidemiology Group, London School of Hygiene & Tropical Medicine, Keppel Street, London, UK: Christian Bottomley, Matthew J. Kirby, Steve W. Lindsay & Neal Alexander

Correspondence to Christian Bottomley.

CB conducted the simulation study. CB and NA wrote the first draft of the manuscript. All authors reviewed the final draft of the manuscript.

Additional file 1: Figure S1. Coverage of 95 % confidence intervals for the log odds ratio of an individual-level predictor. The intra-cluster correlation of the predictor ranges between 0 (top) and 1 (bottom), for K=5 (left) or K=20 (right) villages. The remaining parameter values are \(\alpha_{0}=\log(0.1/0.9)\), \(\alpha_{1}=\sigma_{h}=\log(2)\), I=5, J=20. (TIFF 30617 kb)

Additional file 2: Figure S2. Coverage of 95 % confidence intervals for the log odds ratio of a household-level predictor from a logistic regression model that includes multiple predictors (\(x_{1}\)–\(x_{7}\)). The intra-cluster correlation of the predictor ranges between 0 (top) and 1 (bottom), for K=5 (left) or K=20 (right) villages. The remaining parameter values are \(\alpha_{0}=\log(0.1/0.9)\), \(\alpha_{1}=\cdots=\alpha_{7}=\sigma_{h}=\log(2)\), I=5, J=20. (TIFF 30617 kb)

Additional file 3: Figure S3. Coverage of 95 % confidence intervals for the log odds ratio of an individual-level predictor from a logistic regression model that includes multiple predictors (\(x_{1}\)–\(x_{7}\)). The intra-cluster correlation of the predictor ranges between 0 (top) and 1 (bottom), for K=5 (left) or K=20 (right) villages. The remaining parameter values are \(\alpha_{0}=\log(0.1/0.9)\), \(\alpha_{1}=\cdots=\alpha_{7}=\sigma_{h}=\log(2)\), I=5, J=20. (TIFF 30617 kb)

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Bottomley, C., Kirby, M.J., Lindsay, S.W. et al. Can the buck always be passed to the highest level of clustering? BMC Med Res Methodol 16, 29 (2016). doi:10.1186/s12874-016-0127-1

Keywords: Sandwich estimator · Robust variance estimate · Data analysis, statistics and modelling
EURASIP Journal on Bioinformatics and Systems Biology December 2016 , 2016:18 | Cite as Towards organizing health knowledge on community-based health services Mohammad Akbari Xia Hu Liqiang Nie Tat-Seng Chua Biomedical Informatics with Optimization and Machine Learning Online community-based health services accumulate a huge amount of unstructured health question answering (QA) records at a continuously increasing pace. The ability to organize these health QA records has been found to be effective for data access. The existing approaches for organizing information are often not applicable to health domain due to its domain nature as characterized by complex relation among entities, large vocabulary gap, and heterogeneity of users. To tackle these challenges, we propose a top-down organization scheme, which can automatically assign the unstructured health-related records into a hierarchy with prior domain knowledge. Besides automatic hierarchy prototype generation, it also enables each data instance to be associated with multiple leaf nodes and profiles each node with terminologies. Based on this scheme, we design a hierarchy-based health information retrieval system. Experiments on a real-world dataset demonstrate the effectiveness of our scheme in organizing health QA into a topic hierarchy and retrieving health QA records from the topic hierarchy. Consumer health information Community question-answering Information organization Information retrieval The emergence of online health information needs has given rise to the establishment of online health services. Broadly speaking, current online health services can be divided into two categories. The first is the professional health provider released sources, such as Yahoo! Health1 and WebMD2. These sources provide trustworthy and formally-written health information. They are usually well-structured in terms of health topics. The second category is the community-based health services (CHSs), such as HealthTap3 and HaoDF4. These services allow health seekers to freely post health-oriented questions, and encourage doctors to provide quality answers. Compared to the former sources, CHSs have some intrinsic properties. First, they are crowdsourcing data that are continually growing at a fast pace, and it is thus not practical to organize them manually. Second, they are unstructured and unlabeled in terms of topics, which greatly hinder their retrieval and browsing by user. Third, health seekers and doctors with diverse backgrounds tend to present the same concepts in colloquial style, which leads to a wide vocabulary gap. Together, these pose big challenges for data access and navigation. Recent efforts [1] indicate that organizing the community-contributed data into a hierarchical structure may enhance coarse-grained browsing and fined-grained search. Several practical systems and research efforts have been dedicated to organizing community-contributed data [1, 2]. Most of these efforts, however, suffer from the following limitations. First, they typically utilized predefined taxonomies in the form of tree structures and expect users or computers to assign data instances into these taxonomies based upon their understanding. However, the available taxonomies in health domain are usually too shallow with broad categorizations. For example, Yahoo! Answer5 partitions health data into only nine main categories which are too general to summarize the diverse health information. 
Some popular topics such as "pregnancy" cannot be directly browsed here, because they do not fall under the predefined fixed category structure. Besides, these fixed taxonomies usually face the problems of being too centralized, conservative, and ambiguous [3]. Moreover, manual assignment by health seeker is probably not applicable since they do not sufficiently understand their health problems. Second, existing efforts enable each data instance to be assigned into only one leaf node of the hierarchy. However, the health records are usually more verbose and complex, and probably convey multiple concerns. They hence should be assigned into more topic-level leaf nodes. Third, topic hierarchy construction approaches in general domain often annotate each node of the hierarchy with frequent occurrence terms or concepts. However, in vertical domain hierarchy construction, such as health domain, labeling nodes with standard terminologies is preferable, since it facilitates data reusability and exchange. Fourth, the existing efforts are unable to adaptively build the skeleton hierarchy. Specifically, the number of children for each given parent node and the number of layers in the entire hierarchy are either extracted from existing external structures or predefined by the so-called domain experts. They are often biased towards specific context or personal perspectives [4]. To overcome these limitations, we propose a top-down scheme that can organize the unstructured health records into a structured hierarchical tree. First, nodes in higher layers of the tree represent abstract topics. These nodes usually do not have clear definition and are thus difficult to be extracted automatically. On the other hand, even though the existing health-related taxonomies are very general, they still capture the high-level structures of the health domain well. We naturally leverage such prior domain knowledge to construct the higher layers of our hierarchy. Second, we propose an expanding approach to perform overlapping partitioning of each node to generate its children. Starting from the higher layer node, we try to obtain a hierarchy of its children. However, without termination criteria, the generated tree will be very huge in which each leaf node may contain only one health record. To address this problem, we propose a shrinkage approach to monitor and infer whether the node is specific enough before expansion. Following the breadth-first tree traversal trajectory, we alternatively employ expansion and shrinkage approaches to inspect each node and generate a proper hierarchy. In addition, all involved nodes are profiled with terminologies selected from the Unified Medical Language System (UMLS) Metathesaurus6. Based on our proposed organization scheme, we develop a hierarchy-based health information retrieval system. Health information search has attracted intensive attentions from industry and academia [5, 6, 7, 8, 9]. The effectiveness and efficiency of these efforts, however, are limited due to the inconsistent terms used in health domain and the need for exhaustive search in the entire data corpus. Our application adopts the topic-based matching and performs intelligent pruning of irrelevant branches of the generated hierarchy, and it can boost search performance significantly. The contributions of our work are threefold: To the best of our knowledge, this is the first work on automatic organization of community-contributed health data. 
With prior domain knowledge, we propose a top-down organization scheme where skeleton hierarchy is automatically determined, multiple relations are enabled, and nodes are profiled with terminologies. We propose a hierarchy-based health information retrieval system. Extensive evaluations demonstrate its promising performance. The remainder of this paper is organized as follows. Sections 2 and 3, respectively, detail our organization scheme and our hierarchy-based health information retrieval. Section 4 introduces the representation of QA records and similarity measures used. Experimental results and analysis are presented in Section 5. Section 6 reviews the related work, followed by our conclusion and future work in Section 7. 2 Top-down organization scheme This paper targets at generating a rooted, directed, and profiled tree H from a given data corpus that contains n health-related question answering (QA) records \(\mathcal {D}=\{x_{1},x_{2},\ldots,x_{n}\}\). Each node \(\mathcal {V}\) in H is a subset of \(\mathcal {D}\), representing a latent topic of semantically similar records. Notably, the root node \(\mathcal {V}_{0}\) involves all the records in \(\mathcal {D}\). The child nodes loosely partition their parent nodes, where overlapping is allowed. Figure 1 representatively shows the loose partitioning of the given parent node. From this figure, it can be seen that one health QA record can be assigned into two or more sibling nodes. Illustration of loose partition of parent node. Dashed and solid circles stand for nodes and health QA records, respectively 2.1 Incorporation of domain knowledge As aforementioned, the current health-related taxonomies are usually very general and shallow. For example, the taxonomies provided by WebMD and Yahoo! Health are almost flat. They typically capture the high-level categorizations and structures of health domain. They are user-oriented categories which model human expectation of abstract categories in health records. On the other hand, automatic extraction of high-level categories of a given corpus is non-trivial, since there are overlaps and inter-correlation between topics especially in health domain. Take the categories of "mental health" and "women's health" as an example; they are partially overlapped rather than being mutually exclusive and complementary. Regarding the aforementioned discussion, we employed such a domain knowledge to guide the construction of topic hierarchy and ensure that the generated structure is human readable and interpretable. While different kinds of domain knowledge may be available, in this paper, we assume that prior domain knowledge is available as a predefined hierarchy structure. The predefined hierarchy structure may include several layers of nodes labeled with keywords. To facilitate the formalization of our hierarchy generation, in this paper, we utilized a one-level tree structure which includes several child nodes following the root node. We utilize the categorization of healthexchange7 as our initial first layer following the root node. Having a predefined hierarchy structure, we construct a set of classifiers to categorize health QA records in the root node into these categories. To accomplish this task, we first extract a set of exemplar QA pairs to represent the semantic context of each category. To do so, we employ each category's name as a query and obtain the top 100 relevant QA pairs from HealthTap. To form negative samples, we randomly select 100 negative samples from the other categories. 
We then trained an SVM classifier using the samples for each category.

2.2 Expanding approach

Through incorporating domain knowledge, we have partitioned the root node into a list of high-level categories which correspond to user expectation of knowledge structure. Each category is viewed as a node in the first layer. This subsection details the expanding approach to further generate a fine-grained hierarchy. According to our definition, each node \(\mathcal{V}\) in the target hierarchy H is a set of health QA records. We assume that this collection of health QA records can be explained by a set of unobserved abstract groups, and each group contains a small set of semantically similar health QA records talking about the same health topic. We then naturally shift our expanding task into a topic modeling problem. The latent Dirichlet allocation (LDA) model [10] is utilized here, which is a generative model for discovering the unobserved abstract groups that occur in a data collection.

The main challenge in the expanding phase is to determine the proper number of children for each given node. Each child node should represent one aspect of the parent node, and complement its siblings instead of mutually overlapping with them. Our proposed expansion approach selects the number of children via a tuning procedure (sketched in the code example below). This procedure seeks the number of children that minimizes the LDA model's perplexity [10] on a held-out testing data set. On a hold-out set with m health QA records it is formulated as

$$ \text{perplexity}= \exp\left\{ -\frac{\sum_{i=1}^{m} \log p(d_{i})}{\sum_{i=1}^{m} l_{i}}\right\}, $$

where \(l_{i}\) is the length of health QA record \(d_{i}\). The lower the perplexity value is, the better is the ability of the corresponding trained model in capturing the text collection. Based on the proposed expanding approach, nodes in each layer are divided into subtopics that contain sets of more compact health QA records as compared to their parents. As a byproduct of expansion, we train an optimal LDA model for each node in our generated hierarchy, which is utilized to facilitate health QA record assignment and hierarchy-based retrieval.

2.3 Shrinking approach

Before expanding a given node, we need to estimate how specific the node is, which is the key to automatically determining the depth of the hierarchy and prevents further segmentation of homogeneous nodes. Common approaches predefine a fixed depth and divide the data collection continuously until the depth constraint is satisfied. Approaches of this kind generate balanced trees where all leaves have the same depth. However, they have two limitations. First, the generated hierarchies might be biased towards the experiences of the persons who predefine the depth. Second, the underlying assumption of these approaches is that all sibling nodes have the same complexity and generality, which is not true in the health domain. For example, the node talking about "cancer" is more general and should have deeper layers as compared to one representing "acne." We propose a shrinking approach to accomplish this task. Initially, we assume that the given node \(\mathcal{V}\) can be further expanded, by dividing it into two child nodes, \(\mathcal{A}\) and \(\mathcal{B}\). Obviously, \(\mathcal{V}\) equals the union of \(\mathcal{A}\) and \(\mathcal{B}\), i.e., \(\mathcal{V}=\mathcal{A} \cup \mathcal{B}\).
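Before turning to the similarity criterion for \(\mathcal{A}\) and \(\mathcal{B}\), the following is a minimal R sketch of the perplexity-based tuning described in Section 2.2. It is illustrative only, not the paper's code: `dtm_train` and `dtm_test` stand for document-term matrices built from a node's QA records, and the topicmodels package is just one possible LDA implementation.

```r
## Choose the number of children k for a node by minimizing held-out perplexity,
## then keep the winning LDA model for later record assignment and retrieval.
library(topicmodels)

candidate_k <- 2:10
perp <- sapply(candidate_k, function(k) {
  m <- LDA(dtm_train, k = k, control = list(seed = 1))
  perplexity(m, newdata = dtm_test)   # lower perplexity = better fit on held-out data
})

best_k     <- candidate_k[which.min(perp)]
node_model <- LDA(dtm_train, k = best_k, control = list(seed = 1))
```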
We then estimate the average similarity between these two nodes by $$\begin{array}{*{20}l} R(\mathcal{A}, \mathcal{B})=\frac{1}{|\mathcal{A}|\cdot|\mathcal{B}|} \sum\limits_{\mathbf{x}_{i} \in \mathcal{A}, \mathbf{x}_{j} \in \mathcal{B}} S(\mathbf{x}_{i}, \mathbf{x}_{j}), \end{array} $$ where S(x i ,x j ) is their similarity estimation. Based on the formulation of \(R(\mathcal {A}, \mathcal {B})\), we can intuitively have the normalized definitions of inter-node relation and intra-node relation as follows: $$ \left\{\begin{array}{ll} \text{inter}(\mathcal{A}, \mathcal{B})=\frac{R(\mathcal{A}, \mathcal{B})}{R(\mathcal{A}, \mathcal{V})} + \frac{R(\mathcal{A}, \mathcal{B})}{R(\mathcal{B}, \mathcal{V})},\\ \text{intra}(\mathcal{A}, \mathcal{B})=\frac{R(\mathcal{A}, \mathcal{A})}{R(\mathcal{A}, \mathcal{V})} + \frac{R(\mathcal{B}, \mathcal{B})}{R(\mathcal{B}, \mathcal{V})}. \end{array}\right. $$ The stronger the inter-node relation between \(\mathcal {A}\) and \(\mathcal {B}\) is, the more indivisible they are. On the other hand, a smaller intra-node relation indicates a more tighter consolidation of \(\mathcal {V}\), and hence, it is not necessary to split it further. In our work, if \(\text {inter}(\mathcal {A}, \mathcal {B})\) is larger than our threshold δ, we will terminate the expanding phase. The threshold is obtained empirically based on our experiments. 2.4 Health QA record assignment As aforementioned, health QA records usually involve multiple topics. For example, this question is selected from HealthTap, "what can cause breast cancer to 25 years old married girl within the first 3 months of pregnancy?" It explicitly talks about at least three topics: "breast cancer," "female health," and "pregnancy." Therefore, assigning such records into multiple and complementary child nodes is desired in health domain. Based on our LDA model, each health QA record in the parent node can be represented as a mixture of all its children topics with different weights, i.e., \(p(\mathcal {V}_{i}|\mathbf {x})\), denoting the probability of a health QA record x associated to a child node \(\mathcal {V}_{i}\). Some child nodes with larger probabilities capture the principle components of the given health QA record, while others play supporting roles. However, there is not an indisputable approach to determine how many nodes should be selected for the assignment. If we choose too many child nodes, we may bring in noise for those nodes that are not the principle topics of the given health QA record. If we choose too few, we lose relevant category information of the given health QA record. As a rule of thumb, we should select only the leading interpretable child nodes. An important observation reveals that the leading child nodes make significantly larger impact than other supporting child nodes. As Fig. 2 shows, there is a large gap between the impact of leading child nodes, i.e., v 1 and v 2, and that of the supporting child nodes, i.e., v 3, v 4, and v 5. This gap shows that the given QA record is highly relevant to the first two child nodes while it is less relevant to the last three child nodes. Hence, we assign the current health QA record, i.e., x, into just highly relevant child nodes, i.e., v 1 and v 2 in our example. Based on this observation, the number of leading child nodes can be heuristically selected according to Algorithm 1. The algorithm first calculates the difference between two adjacent values in the ranked list of child nodes (line 2). 
It then finds the maximum difference to compute the number of leading nodes for current health QA records (line 3). The complexity of this algorithm is O(nlogn). Similar approach was utilized to determine the leading roles from movies [11]. Illustration of leading nodes selection 2.5 Node profiling with terminologies Our LDA-based top-down scheme automatically extracts child nodes in the form of multinomial distributions of words from the parent node. In general, it is very difficult for users to understand a child node only based on the multinomial distribution of words, especially when they are not familiar with the context. Consequently, we need to generate meaningful labels for each node to ease understanding. In this section, we propose an approach for profiling nodes of the constructed hierarchy with medical terminology. Early literatures [10, 12, 13] on topic labeling generally either select the top statistical terms in the distribution as primitive labels or generate labels manually in a subjective manner. These approaches, however, are not applicable to CHSs due to the following reasons. First, frequent terms might not be medical concepts, such as "desktop." Second, terms are less descriptive than phrase-based concepts. Third, manual generation is time-consuming and error-prone. In addition, terms are not standardized and inconsistent. Therefore, it is essential to automatically profile nodes with phrase-based standard terminologies. Given one node, we initially assign part-of-speech tags to each word for all the health QA records associated with this node8. We then extract the noun phrases where their tag sequences match a fixed pattern, $$\begin{array}{*{20}l} &(\text{Adjective} | \text{Noun})^{*} (\text{Noun}\quad \text{Preposition})\\ & ? (\text{Adjective} | \text{Noun})^{*} \text{Noun}. \end{array} $$ A sequence matching this pattern ensures a noun phrase, such as the phrase "ineffective treatment of terminal lung cancer." We do basic post processing to link the variants of terms together, such as singularizing all plural variants. We select the top k frequent noun phrases \(\mathcal {C}=(c_{1},c_{2},\ldots,c_{k})\) and normalize them into authenticated terminologies in ULMS via a voting method. More specifically, we first use MetaMap tool9 to map each phrase into the ULMS terminology. It is worth highlighting that some distinct noun phrases may be mapped to the same terminology. For example, "painful neck" and "neck ache" are both normalized to "neck pain." We next use a voting strategy to rank terminology candidates \(\mathcal {T}=(t_{1},t_{2},\ldots t_{m})\) and produce the final labels by selecting the top ones, $$\begin{array}{*{20}l} \text{score}(t_{i}) = \sum\limits_{j=1}^{k} \text{vote}(c_{j}, t_{i}), \end{array} $$ where vote(c j ,t i ) is a binary form definition $$ \text{vote}(c_{j}, t_{i})= \left\{\begin{array}{ll} 1& \text{if \(t_{i}\) is terminology of \(c_{j}\)}\\ 0& \text{otherwise} \end{array},\right. $$ where Eq. (5) aggregates all the votes for each terminology phrase and Eq. (6) increases the score of a terminology if it can be inferred from a noun phrase. A ranking list of terminologies for each node can be generated and the top ones are truncated as labels. The above voting approach preserves two characteristics. First, it assigns higher score to medical terminologies which are relevant to frequent occurring noun phrases in the cluster. 
Second, by inferring medical terminologies using MetaMap tool, we indeed normalize noun phrases into a standard medical terminology, i.e., UMLS. 3 Hierarchy-based retrieval Reported by a national survey, which was conducted by the Pew Research Center10, retrieval is the main mode of acquiring health information by users. Keyword-based indexing and matching is the prevailing method of retrieval. However, it is not sufficient for healthcare domain because of the complex, inconsistent and ambiguous terms used by users. In fact, the same questions may be described in substantially different ways by two individual health seekers, even by the well-trained doctors. For example, the query "I want to get pregnant what is the first thing I should do in diet and supplementary term?" and the archived health QA record "what are the best vitamins for a woman who decides to have a child soon?" are too semantically similar and both talking about mothers' worries about pregnancy. However, they are not very syntactically similar to be matched. To boost the search performance, we propose a hierarchy-based retrieval application. It first deems the given query as a health QA record and performs health QA record assignment to the offline generated hierarchy. This is done by routing the given query from root level down to appropriate leaves of the tree. Obviously, this process plays an essential role in pruning the search space via routing the given query to the relevant branches. Meanwhile, the health QA record assignment actually employs the topic-based representation to semantically match the query to the relevant branches, which naturally tackles many of the limitations associated with term-based matching. For a given query, a small set of leaf nodes are located. However, the health QA records within these selected leaf nodes are still large that will easily overwhelm the health seekers. Therefore, ranking these health QA records and returning the top ones to the health seekers will enrich the users' search experiences. The existing ranking approaches generally fall into two classes [14, 15]. One is pseudo relevance feedback based [16, 17, 18], which treats a significant fraction of the top documents as pseudo-positive examples and collects some bottom documents as pseudo-negative examples. They then either learn a classifier or cluster the documents to perform ranking. The other class is graph based [19, 20, 21, 22] that propagates the initial ranking information over the whole graph until convergence. Inspired by [19], we adopt the graph-based random walk ranking method, which is formulated based on two assumptions: The relevance probability function is continuous and smooth in semantic space. This means that the relevance probabilities of semantically similar health QA records should be close. The final relevance probabilities should be close to the initialized ones for each health QA record. We construct a graph where the vertices are health QA records and the edges reflect their pairwise similarities. We first introduce some notations. We use W to denote the initialed similarity matrix and W ij , its (i,j)th element, indicates the similarity of x i and x j , estimated using Eq. (13). Let d ii denote the sum of the ith row of W, i.e., \(d_{ii}=\sum _{j}W_{ij}\). 
Then, the graph-based learning approach can be written as $$\begin{array}{*{20}l} \min\limits_{\mathbf{y}}\frac{1}{2}\sum\limits_{i,j}W_{ij}\left(\frac{y_{i}}{d_{ii}} - \frac{y_{j}}{d_{jj}}\right)^{2}+\lambda\sum\limits_{i}\frac{1}{d_{ii}}(y_{i}-\bar{y}_{i})^{2}, \end{array} $$ where λ is a weighting parameter and y i is the relevance probability of x i that we want to estimate. \(\bar {y}_{i}\) is the initialized relevant score estimated by Eq. (13). We can see that the smoothness assumption is enforced in the first term of the above equation, which enforces the relevance probabilities of semantically similar health QA records to be close. The second term reflects the second assumption, i.e., the probabilities we estimate should be close to the ranking-based probabilities. We use D to denote a diagonal matrix, with d ii to be its (i,i)th element; and let g denote \(\left [\frac {y_{1}}{d_{11}},\frac {y_{2}}{d_{22}},\ldots,\frac {y_{n}}{d_{nn}}\right ]^{T}\). Thus, Eq. (7) can be rewritten as, $$\begin{array}{*{20}l} \min_{\mathbf{g}} \mathbf{g}^{T}(\mathbf{D}-\mathbf{W})\mathbf{g} + \lambda (\mathbf{g}-\mathbf{D}^{-1}\bar{\mathbf{y}})^{T}\mathbf{D}(\mathbf{g}-\mathbf{D}^{-1}\bar{\mathbf{y}}). \end{array} $$ It can be derived that $$\begin{array}{*{20}l} \mathbf{y}=\frac{1}{1+\lambda}\mathbf{WD}^{-1}\mathbf{y}+\frac{\lambda}{1+\lambda}\bar{\mathbf{y}}. \end{array} $$ We can iterate the above equation and the convergence can be proven. With graph-based random walk ranking, we return an ordered list of health QA records to health seekers. 4 Features and similarity estimation To represent QA records, we extract lexical, syntactic, and semantic features. Weighted term kernel Φ 1 : Medical concepts usually convey more informative signals than others. It is reasonable to assign greater weights to these concepts. We propose a weighted bag-of-word approach to lexically represent health QA content. Specifically, medical concepts falling into certain UMLS semantic groups will be weighted twice [23]. These groups include disease or syndrome, body part organ or organ component, sign or symptoms, and neoplasm. These groups are chosen since they cover most of the medical concepts and the medical concepts within them are discriminative. Cosine similarity is then employed to calculate the lexical similarity between two QA records. Syntactic tree kernel Φ 2 : The tree kernel function is one of the most effective ways to represent the syntactic structure of a sentence [24]. The tree kernel was designed based on the idea of counting the number of tree fragments that are common to both parsing trees, and defined as $$\begin{array}{*{20}l} STKN(T_{i}, T_{j}) = \sum\limits_{n_{i} \in T_{i}} \sum\limits_{n_{j} \in T_{j}} C(n_{i}, n_{j}), \end{array} $$ where n i and n j are sets of nodes in two syntactic trees T 1 and T 2, and C(n i ,n j ) equals to the number of matched sub-trees rooted in nodes n i and n j , respectively. STKN is originally designed to measure the similarity between two sentences. However, health QA records usually includes multiple sentences. We thus generalize it to Φ 2 as $$\begin{array}{*{20}l} \Phi_{2}= \frac{\sum_{s_{i} \in d_{1}} \sum_{s_{j} \in d_{2}} STKN(T(s_{i}), T(s_{j}))}{\vert d_{1}\vert \vert d_{2} \vert}, \end{array} $$ where s i and s j are sentences from d 1 and d 2, respectively. In this way, we moderate the effects of the length of health QA records. Latent topic kernel Φ 3 : We explore the LDA-based high-level representation. 
For a collection of health QA records, LDA assigns semantically interrelated health concepts into the same latent group, which can be used to describe the underlying semantic structures of health data in the context of a hierarchical topic. In our work, each group is deemed as one feature dimension. Hence, for a given health QA record, it can be represented as a mixture of latent groups. The feature dimensions are determined via perplexity score. Traditionally, the Kullback-Leibler divergence (KL-divergence) is used to compute the similarity between two topic distributions. However, KL-divergence is asymmetry per se, which makes it difficult to be used as a similarity metric. To address the asymmetry of KL-divergence, we utilize the Jensen-Shannon divergence scores as follows: $$\begin{array}{*{20}l} \Phi_{3}= 0.5KL(p_{1} \Vert q)+0.5 KL(p_{2} \Vert q), \end{array} $$ where KL(.∥.) denotes KL-divergence score and q=0.5p 1+0.5p 2 [25]. To estimate the similarity between two QA records, we linearly fuse these three aspects, $$\begin{array}{*{20}l} & \Phi = \sum\limits_{i=1}^{3} \beta_{i} \Phi_{i}, \end{array} $$ where β i sums up to 1, and each of them is greater than 0. We conduct a grid search with step size 0.05 within [0,1] to tune β 1 and β 2 while β 3=1−β 1−β 2. The values that achieved the best results are selected. 5 Experiments 5.1 Experimental settings We collected approximately 109 thousand questions from HealthTap. For each question, we also collected its answers and tags, which are provided by doctors. Compared to normal documents, health questions are short and consist of only a few sentences. They thus do not provide sufficient word co-occurrences or shared contexts for effective similarity measurement. To compensate for this problem, we utilized corresponding answers and tags to contextualize the question parts. Note that for the hierarchy-based search, the newly incoming query contains only the question part. For the subsequent subjective evaluations, we invited three volunteers who majored in medicine. They were trained with short tutorials and a set of typical examples before their labeling. A majority voting scheme among the volunteers was adopted to alleviate the problem of ambiguity. 5.2 On hierarchy generation Currently, there are no widely accepted metrics to measure how well the generated hierarchy can explain the given data corpus. In our work, we propose objective and subjective approaches. We compare among three schemes: our scheme without domain knowledge, our scheme with domain knowledge, and hierarchical LDA (hLDA). The hLDA model [26] represents the distribution of topics within documents by organizing the topics into a tree. For hLDA, we assigned each health QA record into one child node based on the generative probability. We profiled each node with terminologies via mapping the top terms in each node to terminologies. 5.2.1 On objective hierarchy evaluation We objectively evaluate the generated hierarchies from local and global angles. Both of these two evaluation approaches view external standard medical knowledge structure as golden hierarchies. In our work, we chose medical subject headings11 (MeSH) as ground truth. It is a national library of medicines controlled vocabulary thesaurus. It consists of sets of terms naming descriptors in a hierarchical structure that permits searching at various levels of specificity. MeSH descriptors are arranged in both an alphabetic and a hierarchical structure. 
At the most general level of the hierarchical structure are very broad headings such as "anatomy" or "mental disorders." More specific headings are found at the narrower levels of the 12-level hierarchy, such as "ankle" and "conduct disorder." There are 27,149 descriptors in 2014 MeSH. For the local evaluation, we estimated the proportion of correct parent-child relations between labeled terminologies. We first formed a collection of relation tuples (parent, child) from the profiled hierarchies. We then inspected the correctness of each tuple in MeSH. However, there exist some parent-child relations in our generated hierarchies which cannot be identified exactly in MeSH. For example, terminology t i may be a grandchild of t j in MeSH, while it is a child of t j in the generated hierarchies. Therefore, local metrics are unable to comprehensively reflect the hierarchy cohesiveness. That motivates a global measure to estimate the cohesiveness, $$\begin{array}{*{20}l} \text{cohesivenss} =\frac{1}{M \cdot N} \sum\limits_{i=1}^{M} \sum\limits_{j=1}^{N} R(t_{i}, t_{j}), \end{array} $$ where t i is the terminology in the parent node of the generated hierarchy, while t j is the terminology in the adjacent child node. R(t i ,t j ) is calculated based on MeSH, $$ R(t_{i}, t_{j})= \left\{\begin{array}{ll} \frac{1}{2^{p}}& \text{if ancestor-child relations}\\ 0& \text{otherwise} \end{array},\right. $$ where p is the length of ancestor-child path between terminology t i and t j . The local and global evaluation results are presented in Table 1. It can be seen that our approaches extract much more relations between concepts from corpus. Moreover, our approaches outperform the hLDA in terms of local and global evaluations. The low cohesiveness values are caused by the fact that some parent-child terminologies are not represented in MeSH in the ancestral series. Finally, even though we use a very basic domain knowledge, it boosts the performance of the hierarchy generation, which validates the importance of domain knowledge for organizing medical data. Local and global evaluation results of the generated hierarchies Total tuples Correct tuples Accuracy (%) Cohesiveness hLDA 1.2 ×10−4 Ours without domain knowledge Ours with domain knowledge 5.2.2 On subjective hierarchy evaluation As a complementary evaluation approach, we subjectively validated the generated hierarchies. We asked the volunteers to not only focus on the high-level parent-child relations in terms of labeled terminologies, but also the fine-grained context of the generated hierarchy. Because the hierarchies are very large, we first segmented each hierarchy into several tree fragments based on three conditions: Each fragment contains at most two levels. At most four siblings are allowed. Fifty records were randomly sampled from each selected node to represent its context. The volunteers were required to go through all the health QA records in each fragment, which help them to grasp the contexts. After that, they were asked to annotate each fragment with ratings of "very satisfied," "satisfied," and "not satisfied." The results are presented in Table 2. As can be seen, our proposed schemes significantly outperform hLDA. Meanwhile, the hierarchy generated with domain knowledge can further reduce the "not satisfied" cases. We also evaluated the inter-volunteer agreement with the Kappa method [27]. The overall agreement value is 85.99%, while the fixed-marginal Kappa and free-marginal Kappa values are 0.7736 and 0.7899, respectively. 
They demonstrate that there are sufficient inter-volunteer agreements. Subjective evaluation of generated hierarchies # of fragments Not satisfied 5.2.3 On health QA records assignment Our scheme enables each health QA record to be assigned into multiple siblings. According to our statistics, on average each record is categorized into 1.7 child nodes. We aim to evaluate the precision and recall of our assignment approach. Precision equals to the number of correctly assigned child nodes over all assigned child nodes, while recall measures the fraction of principle topics of the given health QA record that are captured by the assigned child nodes. As aforementioned, for hLDA, each health QA record in the parent node was assigned into only one child node, which serves as a baseline to see how well our assignment approach performs. Specifically, we randomly selected 20 nodes and their child nodes from each of the three hierarchies. For each node, we randomly sampled 10 health QA records. Three volunteers were first asked to go through each node and their child nodes to understand what subtopic each child node stands for. In fact, this stage provides cues to the volunteers to which child nodes the given health QA record should be assigned. Suppose the volunteer thinks that the given health QA record should be assigned into v child nodes, while it was only correctly assigned into u, then the recall for this health QA record is u/v. Average recall over three volunteers was calculated for each health QA record. Naturally, we also obtained the assigning precision for each health QA record. Table 3 presents the results. It can be seen that our schemes show superiority over hLDA. Our scheme with domain knowledge achieves promising performance in terms of recall. Subjective evaluation of assignments of health QA records into hierarchies # of selected nodes # of Sampled records Recall (%) Precision (%) 5.2.4 On node profiling with terminologies It is well known that for the labeling task, precision is usually more important than recall. We thus adopted two metrics that are able to characterize precision from different aspects. The first one is average S @ K over all testing nodes, which measures the probability of finding a relevant terminology among the top K recommended candidate terms. To be specific, for each testing node, S @ K is assigned to 1 if a relevant terminology is positioned in the top K terms and 0 otherwise. The second one is average P @ K that measures the proportion of recommended terminologies that are relevant. It is formulated as \(P@K =\frac {\mid \mathcal {C} \cap \mathcal {R} \mid }{\mid \mathcal {C} \mid }\), where \(\mathcal {C}\) is a set of top K terminologies and \(\mathcal {R}\) is the manually labeled positive ones. The volunteers were required to label only top five suggested terminologies for each node, and they were labeled either as "positive" or "negative." Table 4 illustrates the results in terms of S @ K and P @ K. It can be seen that our methods consistently outperform hLDA in both S @ K and P @ K. This may be caused by the use of frequent terms in hLDA that are not medical terms. The evaluation results of node profiling with terminologies in terms of S @ K and p @ K S@1 (%) P@1 (%) 5.3 On hierarchy-based retrieval We comparatively evaluate the following unsupervised reranking methods: KB: term-based matching was implemented based on Apache Lucene12 via indexing all health QA records in our data corpus. PRF: pseudo-relevance feedback [16]. 
R_noDK: retrieval based on our scheme without domain knowledge. R_DK: retrieval based on our scheme with domain knowledge. To obtain the relevance ground truth of returned health QA record, we conducted a manual labeling procedure. Each health QA record was labeled by three volunteers to be very relevant (score 2), relevant (score 1), or irrelevant (score 0) with respect to the given query. We adopted NDCG @ n as our metric [28]. We randomly sampled 50 questions as queries. Figure 3 illustrates the experimental results with various NDCG depths. It can be observed that our proposed hierarchy-based retrieval approaches consistently outperform the other prevailing techniques. The possible reason may be the different search space. KB and PRF search over the entire data corpus, while ours route the given query to relevant leaf nodes that ensures the relevant search space in semantic topic level. The following graph-based random walk reranking further improves the precision. In addition, the R_DK approach performs better than R_noDK, because our scheme without domain knowledge is unable to precisely partition high-level groups. Performance comparison among search algorithms in terms of NDCG@N 6 Related work Related literatures on organizing user-generated contents can roughly be classified into three categories: pattern-based, statistical, and folksonomy-based approaches. The pattern-based approaches utilize predefined linguistic rules to identify concepts and their inter-relations, such as "is-a" and "whole-part." For example, Li et al. [29] defined a subsumption relation to extract ontological relations between complex concepts from text segments. Beyond hierarchy generation on individual data source, the effort in [2] concentrated on organizing information resources into a topic hierarchy from multiple independent sources. Statistical approaches either use hierarchical clustering methods or build a model to generate the hierarchy. For instance, Ming et al. [1] clustered web knowledge based on a predefined prototype hierarchy. Cimiano and Staab [30] constructed a hierarchy using agglomerative clustering and a hypernym oracle. Another example, Wang et al. [31] used generative model to cluster concepts for organizing information sources. Folksonomy-based approaches attempt to generate hierarchies in lights of the collaborative annotated tags. Tang et al. [32] presented an ontology learning method using generative probabilistic model. Tsui et al. [33] used heuristic rules and a concept-relation acquisition schema to convert folksonomies to taxonomy. Song et al. [34] proposed a hierarchical tag visualization approach based on greedy algorithm. They then iteratively selected an optimal tag from the ranking list and inserted it into the tree following the minimum-evolution criteria. However, most of these approaches are not suitable for CHSs due to the following issues. First, they usually allow each data instance to be assigned into only one leaf node. While each record in health domain usually covers more than one concern. Second, they label each node by a set of frequent concepts and terms instead of standard terminologies, which is not feasible for inter-system operations. Most importantly, the existing efforts do not consider flexible number of sub topics and layers for topic hierarchies. This paper presented a novel top-down hierarchy generation scheme that is able to automatically organize the community-contributed health data with prior domain knowledge. 
Each node in the generated hierarchy was labeled with terminologies. Meanwhile, each health record can be categorized into more than one leaf nodes. Based on the generated hierarchy, a search function was designed and implemented to boost health information retrieval performance. Our future work will focus on query-aware hierarchy generation. Specifically, given a natural language query, we will return a comprehensive hierarchy that covers various aspects expected by the query. 8 Endnotes 1 http://health.yahoo.net 2 http://www.webmd.com 3 https://www.healthtap.com 4 http://www.haodf.com 5 http://sg.answers.yahoo.com 6 http://www.nlm.nih.gov/research/umls/ 7 http://www.healthxchange.com.sg 8 http://nlp.stanford.edu/software/tagger.shtml 9 http://metamap.nlm.nih.gov/ 10 http://pewinternet.org/Reports/2013/Health-online.aspx 11 http://www.nlm.nih.gov/mesh/ 12 http://lucene.apache.org All authors contributed equally in this work. All authors read and approved the final manuscript. Ming, Z-Y, Wang, K, Chua, T-S (2010). Prototype hierarchy based clustering for the categorization and navigation of web collections. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval. ACM, (pp. 2–9).Google Scholar Zhu, X, Ming, Z-Y, Zhu, X, Chua, T-S (2013). Topic hierarchy construction for the organization of multi-source user generated contents. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval. ACM, (pp. 233–242).Google Scholar Nie, L, Zhao, Y, Wang, X, Shen, J, Chua, T-S (2014). Learning to recommend descriptive tags for questions in social forums. ACM Trans Inf Syst (TOIS), 32(1), 5. ACM.CrossRefGoogle Scholar Golder, SA, & Huberman, BA (2006). Usage patterns of collaborative tagging systems. J. Inform. Sci, 32(2), 198–208. Sage Publications.CrossRefGoogle Scholar Mankoff, J, Kuksenok, K, Kiesler, S, Rode, JA, Waldman, K (2011). Competing online viewpoints and models of chronic illness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, (pp. 589–598).Google Scholar Yang, S-H, White, RW, Horvitz, E (2013). Pursuing insights about healthcare utilization via geocoded search queries. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval. ACM, (pp. 993–996).Google Scholar White, RW, & Horvitz, E (2012). Studies of the onset and persistence of medical concerns in search logs. In Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval. ACM, (pp. 265–274).Google Scholar Cartright, M-A, White, RW, Horvitz, E (2011). Intentions and attention in exploratory health search. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval. ACM, (pp. 65–74).Google Scholar Luo, G, & Tang, C (2008). On iterative intelligent medical search. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM, (pp. 3–10).Google Scholar Blei, DM, Ng, AY, Jordan, MI (2003). Latent dirichlet allocation. J Mach Learn Res, 3, 993–1022.zbMATHGoogle Scholar Weng, C-Y, Chu, W-T, Wu, J-L (2009). Rolenet: Movie analysis from the perspective of social networks. IEEE Trans Multimed, 11(2), 256–271. IEEE.CrossRefGoogle Scholar Ramage, D, Heymann, P, Manning, CD, Garcia-Molina, H (2009). Clustering the tagged web. 
In Proceedings of the Second ACM International Conference on Web Search and Data Mining. ACM, (pp. 54–63).Google Scholar Mao, X-L, Ming, Z-Y, Zha, Z-J, Chua, T-S, Yan, H, Li, X (2012). Automatic labeling hierarchical topics. In Proceedings of the 21st ACM international conference on Information and knowledge management. ACM, (pp. 2383–2386).Google Scholar Nie, L, Yan, S, Wang, M, Hong, R, Chua, T-S (2012). Harvesting visual concepts for image search with complex queries. In Proceedings of the 20th ACM international conference on Multimedia. ACM, (pp. 59–68).Google Scholar Nie, L, Wang, M, Zha, Z, Li, G, Chua, T-S (2011). Multimedia answering: enriching text QA with media information. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval. ACM, (pp. 695–704).Google Scholar Liu, Y, Mei, T, Hua, X-S, Tang, J, Wu, X, Li, S (2008). Learning to video search rerank via pseudo preference feedback. In 2008 IEEE International Conference on Multimedia and Expo. IEEE, (pp. 297–300).Google Scholar Natsev, AP, Naphade, MR, Tes, J̌,iĆ (2005). Learning the semantics of multimedia queries and concepts from a small number of examples. In Proceedings of the 13th annual ACM international conference on Multimedia. ACM, (pp. 598–607).Google Scholar Yan, R, Hauptmann, A, Jin, R (2003). Multimedia search with pseudo-relevance feedback. In International Conference on Image and Video Retrieval. Springer, (pp. 238–247).Google Scholar Akbari, M, Nie, L, Chua, T-S (2015). aMM: Towards adaptive ranking of multi-modal documents. Int J Multimedia Inf Retr, 4(4), 233–245. Springer.CrossRefGoogle Scholar Nie, L, Akbari, M, Li, T, Chua, T-S (2014). A joint local-global approach for medical terminology assignment. In MedIR@ SIGIR, (pp. 24–27).Google Scholar Nie, L, Zhao, Y-L, Akbari, M, Shen, J, Chua, T-S (2015). Bridging the vocabulary gap between health seekers and healthcare knowledge. IEEE Trans Knowl Data Eng, 27(2), 396–409. IEEE.CrossRefGoogle Scholar Akbari, M, Huc, X, Liqianga, N, Chua, T-S (2016). From tweets to wellness: Wellness event detection from twitter streams. In Thirtieth AAAI Conference on Artificial Intelligence. https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/11931. Sondhi, P, Sun, J, Zhai, C, Sorrentino, R, Kohn, MS, Ebadollahi, S, Li, Y (2010). Medical case-based retrieval by leveraging medical ontology and physician feedback: Uiuc-ibm at imageclef 2010. In CLEF.Google Scholar Wang, K, Ming, Z, Chua, T-S (2009). A syntactic tree matching approach to finding similar questions in community-based qa services. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval. ACM, (pp. 187–194).Google Scholar Lin, J (1991). Divergence measures based on the Shannon entropy. IEEE Trans. Inform. Theory, 37(1), 145–151. IEEE.MathSciNetCrossRefzbMATHGoogle Scholar Blei, DM, Griffiths, TL, Jordan, MI, Tenenbaum, JB (2004). Hierarchical topic models and the nested chinese restaurant process. Advances in neural information processing systems, 16, 17. The MIT Press.Google Scholar Warrens, MJ (2010). Inequalities between multi-rater kappas. ADAC, 4(4), 271–286. Springer.MathSciNetCrossRefzbMATHGoogle Scholar Nie, L, Wang, M, Zha, Z-J, Chua, T-S (2012). Oracle in image search: a content-based approach to performance prediction. ACM Trans Graph (TOIS), 30(2), 13. ACM.Google Scholar Li, T, Chubak, P, Lakshmanan, LV, Pottinger, R (2012). 
What are the forces present in a coordinated turn?

During straight and level flight, coordinated flight is assumed when there is no net lateral force (no slip or skid). But this concept seems to break down when it comes to turning: in a coordinated or uncoordinated turn there will always be a net lateral force, namely the centripetal force provided by the horizontal component of the lift force. The centrifugal force (an inertial force) that books invoke in an attempt to explain it doesn't make sense to me, as centrifugal force is an imaginary force. So how is coordinated flight achieved in terms of the forces? Thanks in advance.

aerodynamics aircraft-physics flight-dynamics maneuver coordinated-flight

ymb1, Moonzarin Esha

Comments:
– Jan Hudec: Centrifugal force exists in the reference frame of the aircraft. If you do the analysis there, you need it; if you are doing it in the reference frame of the air, you don't. Since General Relativity, inertial forces are now considered just as real as any other forces, and include gravitational force, which by the principle of equivalence (the core postulate of GR) is locally indistinguishable from acceleration of the reference frame. Note I didn't say gravity, because gravity actually means the sum of all inertial forces (including the centrifugal force due to Earth's rotation).
– … doing the analysis in the aircraft frame of reference is usually a bit nicer, since there the aircraft is stationary, so all the aerodynamic and inertial forces cancel out. Plus it tells you what you'd feel if you were sitting in the aircraft, because you feel exactly the sum of inertial forces (a.k.a. local gravity; includes gravitational forces). Of course the inertial forces just correspond to the acceleration as viewed from the reference frame of the air, but since that frame is not in free fall, there are still some inertial forces and you have more cases than in the aircraft frame.
– Michael Hall: Oh boy, here we go again...
– Manu H: related: What does the balance ball actually indicate? and Is this vector diagram of the forces at play in turning flight correct?
– quiet flyer: Maybe this will inspire me to draw those diagrams...

To get a deeper understanding beyond the superficial, it helps to break down the forces and moments present on an aircraft that may affect its rigid-body motion:
Aerodynamic forces: the forces and moments exerted by the airflow on the aircraft.
Ground forces: the forces and moments exerted by the ground on the aircraft, transmitted through tyres and landing gear. Not applicable when it's flying.
Propulsion: forces and moments due to direct thrust. For simplification, let's assume that thrust acts in line with the forward axis.
Gravitational forces: gravity pulls the aircraft towards the ground. It's rather special because everything, be it the aircraft structure, you, me, or the accelerometers, gets pulled at the same rate ($g$)1. This is distinctly different from the first two types of forces, which are only present when airflow affects exposed areas, or when contact with the ground has been made.
Inertial forces: fictitious forces and moments that are required to maintain non-uniform motion. This includes your centripetal force. Inertial forces must always be equal to the sum of all the aforementioned external forces.
First of all, let's consider any turn a steady-state maneuver (thereby ignoring the transients like rolling in and rolling out), which means that the vector sum of all the external forces, including gravity, must sum to the inertial forces. As you've correctly pointed out, the sum of aerodynamic forces + gravitational forces must be equal to the centripetal force, which in the turning plane is provided by lift, side force and gravity. This must hold true for any steady-state turn, whether coordinated or not.

There are two ways to define a coordinated turn. With all engines operating, they are approximately equivalent:
A zero-sideslip turn
A ball-centered turn: we are going to use this definition

What ball-centered means is that there are no aerodynamic forces acting laterally on the airplane: lift must provide all the centripetal force required. Ball centered provides the best average feel for the occupants, since the forces are directly in line with the floor, and there's no side force causing sway. Since everything feels gravity at the same rate, neither the occupants nor the ball can detect gravity.

For the more math-oriented: in the inertial frame, Newton's second law is stated as:
$$\vec{F_{i}}+m\vec{g_i}=m\frac{d\vec{V_i}}{dt} \tag{1}$$
The entire right-hand side is considered to be the inertial forces. However, if the measurements occur in a rotating frame (on an airplane, for example), then we need to express everything in the body frame. The left-hand side is easy:
$$\vec{F_{b}} = C_{bi}\vec{F_{i}} \tag{2}$$
$$\vec{g_b} = C_{bi}\vec{g_i} \tag{3}$$
where $C_{bi}$ is the rotation matrix transforming a vector from the inertial frame to the body frame. The right-hand side requires some adjustments, because the Euler angles of the body themselves are changing:
$$\frac{d\vec{V_b}}{dt}=\frac{d(C_{bi}\vec{V_i})}{dt}$$
Apply the chain rule, and we have:
$$\frac{d\vec{V_b}}{dt}=\frac{dC_{bi}}{dt}\vec{V_i}+C_{bi}\frac{d\vec{V_i}}{dt} \tag{4}$$
It can be shown mathematically that the following identity holds:
$$\frac{dC_{bi}}{dt}\vec{V_i} = -\vec{\omega_b} \times \vec{V_b} \tag{5}$$
Substitute (5) into (4), and we get:
$$C_{bi}\frac{d\vec{V_i}}{dt} = \frac{d\vec{V_b}}{dt} + \vec{\omega_b} \times \vec{V_b}$$
Finally, if we multiply both sides of (1) by $C_{bi}$ and simplify with (2), (3) and (5), we get:
$$\vec{F_{b}}+m\vec{g_b}=m \left( \frac{d\vec{V_b}}{dt}+\vec{\omega_b} \times \vec{V_b} \right) \tag{6}$$
Equation (6) is Newton's second law in a rotating frame. The entire right-hand side is still the inertial forces (same as in (1)), except now we also have the cross product, which produces the centripetal inertial forces:
$$\vec{\omega_b} \times \vec{V_b} = \begin{bmatrix}p \\ q \\ r\end{bmatrix} \times \begin{bmatrix}u \\ v \\ w\end{bmatrix} = \begin{bmatrix} qw - rv \\ ru - pw \\ pv - qu \end{bmatrix} \tag{7}$$
You will readily identify terms like $ru$, which is very close to the familiar $a_c=\omega V$ for a restricted 2D centripetal motion (whence $p=0$ and $q=0$).
One final note: the right-hand side is fictitious in that these are not real forces! They are a result of the kinematic motion itself, and require the left-hand side (the real forces) to sustain them.
1: Technically, this is only true locally, because Earth is a sphere and does not exert a uniform field at different altitudes. But at the range of altitudes that airplanes will be flying, this is a rather good approximation.
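Not part of the original answer, but a minimal numerical sketch of the force balance above may help. It assumes example values for airspeed and bank angle and uses equations (6)–(7) to show that, for a steady level coordinated turn, the lateral body-axis aerodynamic force vanishes and the required lift equals weight divided by the cosine of the bank angle:

```python
# Hypothetical numerical check (not from the original answer): using eqs. (6)-(7),
# verify that in a steady, level, coordinated turn the lateral aerodynamic force
# is zero and the required lift is weight / cos(bank angle).
import numpy as np

g = 9.81                    # m/s^2
V = 70.0                    # true airspeed, m/s (assumed example value)
phi = np.radians(30.0)      # bank angle (assumed example value)

omega = g * np.tan(phi) / V                 # turn rate of a level coordinated turn, rad/s
w_b = np.array([0.0, omega * np.sin(phi), omega * np.cos(phi)])  # body rates p, q, r (pitch ~ 0)
V_b = np.array([V, 0.0, 0.0])               # body-axis velocity (small alpha/beta assumed)

a_b = np.cross(w_b, V_b)                    # steady state: dV_b/dt = 0, so this is the whole RHS of (6)
g_b = np.array([0.0, g * np.sin(phi), g * np.cos(phi)])          # gravity resolved into body axes

F_per_mass = a_b - g_b                      # (aerodynamic + thrust) force per unit mass, from (6)

print("side force / m  :", round(F_per_mass[1], 6))   # ~0  -> ball centered
print("normal force / m:", round(F_per_mass[2], 3))   # ~ -g/cos(phi), i.e. lift = weight/cos(phi)
print("load factor 1/cos(phi):", round(1.0 / np.cos(phi), 3))
```

With these assumed numbers the lateral component comes out as zero and the normal component as about -11.3 m/s², i.e. a load factor of ~1.15, which is just the familiar 1/cos(30°) result.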
It is helpful to understand a basic concept in physics here. The answer depends on what frame of reference you are measuring things in. In what is called an "inertial" frame of reference, only "real" forces need be considered, as only "real" forces cause acceleration (F = ma). In an inertial frame of reference the only real forces acting on an aircraft are the aerodynamic force (the pressure of the atmosphere on every square inch of the surface of the airframe) and the thrust produced by the engines. PERIOD.

But we all eat, sleep, walk, and fly on the surface of the earth, which is an accelerated frame of reference, due to the acceleration of gravity, the rotation of the earth, the motion of the earth around the sun, etc. In that accelerated frame of reference, in order to explain the apparent motion of any body, other "fictitious" forces, such as *centrifugal force, the force of gravity itself, and other inertial forces, must all be considered to make the math work out. (In other words, as mentioned in another answer, to make the forces cancel out when the aircraft is in a "steady state".)

*Note: Centrifugal force is often added to diagrams of aircraft in turns to help explain the apparent "balance" of forces for the aircraft that is stable in the diagram. But the force, and the "stability", are both fictitious, because the diagram is depicted in a turning and rotating frame of reference.

Basically, these fictitious forces must be added in to "subtract out" the acceleration of the frame of reference itself, because without considering them, the answer you get would only be the acceleration of the aircraft within an inertial (zero-g) frame of reference, and we generally want to know our acceleration in the frame we are otherwise examining (generally the earth frame).

Charles Bretana

Comments:
– Re "Basically, these fictitious forces must be added in to 'subtract out' the acceleration of the frame of reference itself... and we generally want to know our acceleration in the earth-frame." — this answer might be a bit more clear if it explained that to calculate the acceleration in the earth-frame (or the airmass frame), the only "fictitious" force that must be added is gravity, not the "centrifugal force" that we typically see included in diagrams in flight training manuals. These diagrams are typically based on the aircraft reference frame, not the ground or airmass reference frame.
– (Which is why they have no explanatory power. The only thing they really "explain" is that the apparent net force as seen from the aircraft reference frame is always zero. They don't actually explain why the ball sits where it does in a skidding or slipping turn.)
– Charles Bretana: @quiet flyer, I only mention centrifugal force as a "fictitious" force because I have seen documents and texts with diagrams that attempt to explain the balance of forces in turns, coordinated and uncoordinated, where the diagram is done in the frame of reference of the aircraft itself, which, obviously, IS an accelerated frame of reference.
– I have edited my answer to make this clear.
The silicon cycle impacted by past ice sheets

Jon R. Hawkings1, Jade E. Hatton ORCID: orcid.org/0000-0002-9408-79811, Katharine R. Hendry ORCID: orcid.org/0000-0002-0790-58952, Gregory F. de Souza3, Jemma L. Wadham1, Ruza Ivanovic ORCID: orcid.org/0000-0002-7805-60184, Tyler J. Kohler5, Marek Stibal5, Alexander Beaton6, Guillaume Lamarche-Gagnon1, Andrew Tedstone1, Mathis P. Hain ORCID: orcid.org/0000-0002-8478-18577,8, Elizabeth Bagshaw9, Jennifer Pike ORCID: orcid.org/0000-0001-9415-60039 & Martyn Tranter ORCID: orcid.org/0000-0003-2071-30941

Element cycles

Globally averaged riverine silicon (Si) concentrations and isotope composition (δ30Si) may be affected by the expansion and retreat of large ice sheets during glacial−interglacial cycles. Here we provide evidence of this based on the δ30Si composition of meltwater runoff from a Greenland Ice Sheet catchment. Glacier runoff has the lightest δ30Si measured in running waters (−0.25 ± 0.12‰), significantly lower than nonglacial rivers (1.25 ± 0.68‰), such that the overall decline in glacial runoff since the Last Glacial Maximum (LGM) may explain 0.06–0.17‰ of the observed ocean δ30Si rise (0.5–1.0‰). A marine sediment core proximal to Iceland provides further evidence for transient, low-δ30Si meltwater pulses during glacial termination. Diatom Si uptake during the LGM was likely similar to present day due to an expanded Si inventory, which raises the possibility of a feedback between ice sheet expansion, enhanced Si export to the ocean and reduced CO2 concentration in the atmosphere, because of the importance of diatoms in the biological carbon pump.

Silicon (Si) plays a crucial role in global biogeochemical cycles because it is an essential nutrient for a number of marine organisms, including some species of sponges, radiolarians, silicoflagellates, and diatoms1. Marine diatoms account for 35–70% of marine primary production2, and are therefore key in maintaining ecosystem health, the ocean biological pump, and atmospheric carbon fixation3. The input of Si from terrestrial weathering via rivers is crucial as it sustains diatom productivity over the ocean's Si residence time1.
Thus, understanding the sensitivity of the Si cycle in the past, and its likely response to future climate warming, is important for marine ecosystem change, biogeochemical carbon cycling, and by association the efficiency of the ocean's biological carbon pump. Variations in the δ30Si of natural waters, sediments, and biogenic silica reflect fractionation during continental and oceanic biogeochemical processing4. Lighter isotopes are incorporated into solids, for example during the precipitation of secondary weathering products and diatom frustule formation, thus inducing fractionation from parent material values and usually leading to accumulation of heavier isotopes in the dissolved Si phase4,5. The δ30Si of biogenic silica in marine sediment cores has been used as a proxy to explore past oceanic dissolved silica concentrations (DSi)6, infer diatom utilisation of Si7,8, and investigate changes in Si source inputs4. There has been a focus on the δ30Si change from the Last Glacial Maximum (LGM; ~21,000 years ago) to the present day, with marine biogenic opal records from Southern Ocean cores showing a shift in δ30Si of ~+0.5–1.0‰7,8. This has been explained by changes in dissolved silica utilisation in surface waters7, variation in terrestrial silica inputs4,9 and by oscillations in intermediate and deep-water DSi as a result of changes in oceanic circulation8. Until recently, modelling studies have assumed that riverine δ30Si input was uniform over glacial−interglacial timescales in their first-order interpretation of downcore diatom records4,10,11. However, past research indicates that this is unlikely, and that at least part of the δ30Si shift from the LGM to the present day can be explained by a change in the δ30Si and/or magnitude of input fluxes, due to temporal changes in terrestrial weathering regimes4,9,12,13. There is additional uncertainty in the interpretation of palaeo-records because downcore biogenic opal isotopic compositions are likely to be a complex mixture of changes in silicic acid utilisation and concentrations, the δ30Si of the whole ocean silicon isotope inventory, and more localised changes in inputs. This is problematic given that most core records currently come from the Southern Ocean. The role of the changing extent of ice sheets (i.e. palaeo-ice sheets, PIS) since the LGM in the Si cycle has yet to be fully considered14, despite their known impact on the global hydrological cycle15 and weathering of continental rocks16. Glaciers and ice sheets covered nearly 30% of the Earth's land surface at their greatest extent during the LGM, including much of North America and northwestern Eurasia17. Melting of this ice during deglaciation raised sea levels by ~130 m and exported large quantities of eroded sediment into the oceans18,19. The last deglaciation contained two major, rapid ice melt events: Meltwater Pulse 1a (MWP1a; ~14,000–15,000 BP), during which sea levels rose by 12–22 m in <350 years20, and Meltwater Pulse 1b (MWP1b; ~11,000 BP), during which sea levels may have risen by up to 10 m in ~500 years21. Glaciers and ice sheets are dynamic components of regional nutrient cycles22,23, exporting significant quantities of dissolved24 and labile amorphous silica (ASi) attached to fine-grained glacial suspended particulate matter (SPM)14, which are likely to impact primary productivity in surrounding oceans25.
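As a rough, back-of-the-envelope check (not from the original study) on the freshwater fluxes implied by these sea-level figures, converting eustatic sea-level rise to meltwater discharge only requires the global ocean surface area (~3.6 × 10^8 km²); the rise and duration values assumed below are taken from the MWP1a range quoted above:

```python
# Hypothetical back-of-envelope conversion from eustatic sea-level rise to
# mean freshwater flux; the rise/duration values are illustrative, from the text.
OCEAN_AREA_KM2 = 3.61e8   # approximate global ocean surface area, km^2

def mean_flux_km3_per_yr(sea_level_rise_m, duration_yr):
    """Mean meltwater flux (km^3/yr) for a given eustatic rise over a duration."""
    volume_km3 = sea_level_rise_m * 1e-3 * OCEAN_AREA_KM2  # 1 m rise ~ 3.6e5 km^3 of water
    return volume_km3 / duration_yr

# Meltwater Pulse 1a: ~12-22 m in <350 years (values quoted in the text)
print(mean_flux_km3_per_yr(12, 350))  # ~1.2e4 km^3/yr
print(mean_flux_km3_per_yr(22, 350))  # ~2.3e4 km^3/yr
```

These crude numbers are of the same order as the peak deglacial runoff (~21,600 km³ year−1) used in the flux estimates described in the Methods below.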
Large silica fluxes from glaciated regions likely lead to preferential growth of diatoms in downstream marine ecosystems compared to other nonsilicifying phytoplankton species24. However, silica fluxes and their associated δ30Si signature from the PIS have been overlooked in studies of ancient Si cycling26,27,28, despite evidence suggesting the role of terrigenous sediment delivery to the ocean is underappreciated in global elemental cycles29. Thus far it has been found that glacial rivers in Iceland have a distinctive low δ30Si12,13, but these data are based on point measurements rather than seasonal time series, and no data exist for large glacial systems more representative of PIS. Here, we present a unique time series of DSi and ASi concentrations and associated δ30Si composition for subglacial meltwaters exiting Leverett Glacier, a large (~600 km2)30 catchment of the Greenland Ice Sheet (GrIS). These data are used as a modern-day analogue for PIS runoff to investigate its potential influence on the Si cycle over glacial−interglacial and millennial timescales. Our findings suggest changing glacial silica fluxes could explain roughly 10–30% of the silicon isotope variation recorded in siliceous organisms since the LGM, instead of previously invoked changes in marine biological productivity or ocean circulation. Ice sheets are likely to have delivered large quantities of isotopically light silica to the oceans during periods of greater glacial activity, thereby augmenting the ocean's Si inventory and sustaining diatom productivity. Meltwater sampling Samples were collected from the proglacial river, ~1 km downstream from where subglacial meltwater exits the GrIS, from early May through July 2015. Leverett Glacier (67.06° N 50.15° W; Fig. 1) has a mean ablation season (May−September) discharge of ~150 m3 s−1 (Fig. 2, Supplementary Fig. 1), and is an order of magnitude larger than other glaciers studied for Si concentrations (DSi and ASi) and corresponding δ30Si to date. The bedrock geology is predominantly Precambrian Shield gneiss/granite, broadly similar to much of northern Canada and Scandinavia, covered by the Eurasian and North American ice sheets31 (Fig. 3). We contend that Leverett Glacier provides a first-order modern-day analogue for the PIS due to its size and bedrock type31. Location of Leverett Glacier. a Indicates the location of the study area in southwest Greenland. Samples were collected from the proglacial river of Leverett Glacier (b) as per Hawkings et al.22, 44, 64 and Lawson et al.83. Image (a) of Greenland is from Landsat (US Geological Survey, via Google), and image (b) of Leverett Glacier terminus is from DigitalGlobe (via Google) Hydrological and geochemical time series from Leverett Glacier proglacial river. a Bulk meltwater discharge, b suspended particulate matter concentration (SPM) and electrical conductivity (EC), c continuous pH time series, d Si isotope composition (δ30Si) of dissolved silica (DSi—closed symbols) and reactive amorphous silica (ASi—open symbols) with s.d., alongside suspended sediment and bedrock δ30Si range (n = 7) in the shaded grey region, e ASi concentrations, and f DSi concentrations. The approximate timing of meltwater outburst events is marked with orange shading Geological map of the Arctic with past ice sheet extent. 
a Arctic Polar Stereographic map with maximum palaeo-ice sheets extent (~21,000 years ago) indicated by a thick black line17, and insert (b) of the Leverett Glacier region (indicated by a red square on the west coast of Greenland in (a)) where samples were taken. The estimated Leverett Glacier catchment84 is shown in b by the filled grey area. Reproduced with the permission of OneGeology. Map created using QGIS Dissolved and amorphous silica concentrations in meltwaters DSi and ASi concentrations in Leverett Glacier runoff in 2015 were consistent with those previously reported for the GrIS14,24. Discharge-weighted mean concentrations were 20.8 (9.2–56.9) μM for DSi and 229 (69.8–336.6) μM for ASi. This equates to estimated DSi and ASi catchment fluxes of 30 (13–83) Mmol year−1 and 331 (101–488) Mmol year−1, within the same range as those reported at Leverett Glacier for the 2009–2012 period (396–1575 Mmol year−1)22. Meltwater outburst events (shaded red in Fig. 2) in response to supraglacial meltwater forcing of the subglacial system32 promote elevated DSi and ASi concentrations as Si-rich stored waters and sediments are flushed from the ice sheet bed. Si isotope composition of dissolved and amorphous Si We collected the first measurements of SPM δ30ASi (δ30Si of ASi; discharge-weighted mean of −0.21 ± 0.06‰, n = 11), extracted using a weak alkaline leach (see Methods and SI). Our δ30ASi values are lighter than those of local bedrock collected near the sampling site (0.00 ± 0.07‰; n = 3) and bulk suspended sediment δ30Si (−0.09 ± 0.07‰; n = 4; grey-shaded region in Fig. 2d) by ~0.1–0.2‰. The discharge-weighted mean for δ30DSi (δ30Si of DSi) is extremely light, at −0.25 ± 0.12‰ (n = 16), which is similar to δ30ASi, but significantly lower than the bedrock and SPM. Si isotope three-box model We used a modified version of a previously published three-box ocean model33 with an ensemble of 50 simulations (Supplementary Fig. 2) as a thought experiment to simulate plausible impacts of enhanced meltwater DSi and ASi fluxes from the LGM to the Holocene on the marine Si budget (Figs. 4 and 5; Methods). This model is used as a tool to see how the signal of changing glacial Si fluxes and their associated δ30Si composition would propagate into the ocean in the absence of any changes in marine Si cycling. We estimate a change in the DSi + ASi flux of −39 to +6%, and a change in δ30Si of the input flux of +0.15 to +0.43‰ from LGM conditions to present day, while MWP1a produces short-term decreases in the δ30Si of total Si input of ~0.1‰ to ~0.2‰ (Supplementary Table 1; Fig. 4). Glacial Si fluxes (and associated changes in nonglacial riverine fluxes) account for between 0.06 and 0.17‰ of the variation in the δ30DSi of the surface boxes and the deep ocean over the past ~21,000 years, with a further excursion of 0.01–0.08‰ during peak meltwater input (MPW1a and MWP1b). The results show relatively large changes in the ocean Si inventory over the deglaciation (Fig. 5), with an increase in the DSi concentration of up to 12.5 μM in the deep ocean at ~10,000 years before present, in response to deglacial meltwater inputs (Fig. 5). As Si input begins to decrease after the deglacial maximum, whole-ocean Si concentrations begin to decrease as well, with the strength of this decrease scaling directly with the LGM–present difference in total Si input flux. Model results further indicate that MWP1a20 and MWP1b21 impart a signature on marine DSi and δ30Si (Fig. 
5), even though they are relatively short-lived events of a few hundred years34. This is especially evident in the larger low-latitude surface box (essentially the surface ocean outside of the Southern Ocean), where MWP1a leads to an excursion in δ30Si of up to −0.08‰. Freshwater flux estimates and model input comparison. a Freshwater fluxes from glacial and nonglacial sources, with estimated uncertainty given in the orange- and blue-shaded regions, and b the glacial DSi + ASi input flux, c weighted δ30Si composition of the Si input flux and d nonglacial riverine DSi + ASi input flux over each model simulation. Coloured lines correspond to the model simulations in Fig. 5. Additional input data are detailed in Supplementary Table 1. Meltwater Pulse 1a (MWP1a) is highlighted by the shaded blue region, and Meltwater Pulse 1b (MWP1b) by the shaded grey region Modelled impact of glacial to interglacial ice sheet wastage on the oceanic Si cycle. a, b Reflect low-latitude surface results for DSi concentrations and δ30DSi anomalies, respectively, c, d reflect high latitude (i.e. Southern Ocean) DSi concentration and isotopic composition anomalies, respectively and e, f reflect deep ocean DSi concentration and isotopic composition anomalies, respectively. Results were generated from an ensemble of 50 model simulations (coloured lines) to sample uncertainty in input fluxes and δ30Si composition, and burial and export efficiency (Fig. 4). Input variables for glacial (DSi + ASi flux and respective δ30Si composition), nonglacial (DSi + ASi flux and respective δ30Si composition), export efficiency to depth and burial efficiency in sediments were chosen using a Latin Hypercube. Each simulation was run from a Last Glacial Maximum (LGM) baseline, which was a 100,000-year spin up under LGM conditions (see Methods). Simulations are displayed as anomalies from LGM conditions. The LGM is highlighted by a dashed line, Meltwater Pulse 1a (MWP1a) by the shaded blue region, and Meltwater Pulse 1b (MWP1b) by the shaded grey region. Transient model inputs are displayed in Fig. 4 and Supplementary Table 1 Si isotope record in an Arctic marine sediment core A high-resolution record of spicule δ30Si, extracted from a sediment core off southeast Iceland (RAPiD 10-1P; 62.98°N, 17.59°W; 1237 m water depth35) was used to investigate possible changes in isotopic inputs in the high-latitudes during a glacial termination, and corroborate box model findings (Fig. 5). The spicule δ30Si record reflects both DSi concentration and δ30DSi of the water in which it was formed6. This new record reveals high-frequency variability in spicule δ30Si during ice sheet collapse at ~14,500 and ~11,500 years before present (Fig. 6), concurrent with fluctuations in carbonate stable isotopes35, assessed from paired planktonic-benthic foraminiferal records. Excursions in the spicule δ30Si of more than −0.5‰ can be observed at both MWP1a and MWP1b. Si isotope core records from deep-sea sponge spicules and other palaeo-data. a Greenland NGRIP ice core oxygen isotope record85, b benthic δ18O and δ13C record from Cibicidoides in the RAPiD-10-1P core (SE Iceland)62, and c Si isotope record from sponge spicules for the RAPiD-10-1P core (blue) and the lower resolution KNR140 core for comparison (white; Carolina Slope8). 
The shaded regions indicate approximate timings of Meltwater Pulse event 1a (blue) and Meltwater Pulse event 1b (grey). A striking feature of the isotopic composition of ASi from the GrIS is the ~0.2‰ fractionation from the bedrock signature of 0‰ (0.00 ± 0.07‰; Fig. 2). The consistently lighter δ30ASi signature indicates it is a siliceous precipitate or secondary weathering product4. However, it is at the heavier end of previous measurements4, and we are still uncertain of its origin14. Previous studies have suggested ASi forms through mechanochemical action36,37, dissolution-precipitation38,39 and as a surface layer left from the preferential leaching of cations during mineral dissolution40. The mean δ30DSi is the lightest isotopic composition recorded for riverine waters (glacial and nonglacial). It is lower than the only other measurements of δ30DSi in glacial meltwaters from glaciers in Iceland (0.02 ± 0.18‰)12,13, and much lower than global rivers (mean = 1.25 ± 0.68‰)4. It is also lower than the estimated mean groundwater δ30DSi (0.19 ± 0.86‰)4, and most similar to measurements of hydrothermal fluids (−0.30 ± 0.15‰)4. The anomalously low δ30DSi signal from glacial meltwaters requires either a light δ30DSi source or a heavy δ30DSi sink. The latter has only been documented with acidic hydrofluoric leaching of basalts41; therefore, secondary phase dissolution or rapid leaching of the mineral surface is more likely the source. The glacial meltwaters are significantly undersaturated with respect to ASi (mean SIASi = −2.1)14 and have high pH (Fig. 2c), so ASi is very likely to dissolve in meltwaters. Further evidence of this is given by the similar discharge-weighted means of the δ30DSi and δ30ASi meltwater compositions. It is therefore possible that the lower δ30DSi composition from day ~170 onward is derived from the dissolution of ASi and the partial dissolution of other lighter secondary weathering products (e.g. clays)42, which may have an even lighter δ30Si signature than ASi43, explaining the lowest δ30DSi values observed. Dissolution of secondary weathering products is thought to occur in long residence time groundwaters43, and could reflect drainage of more isolated subglacial water sources further into the glacial catchment as the melt season progresses44. It is possible that glacial chemical weathering preferentially removes 28Si during initial dissolution of silicate surface layers5, that heavier 30Si isotopes have been stripped out during previous chemical weathering (e.g. when glaciers retreated over weathered soils during previous interstadials and interglacials), or that heavier isotopes are retained in the mineral weathering crust, balancing the long-term isotopic mass balance of the system. The time series of δ30DSi also shows a significant temporal shift of >1.3‰ toward lower values from early season low discharge to peak discharge waters, while δ30ASi shows little temporal variation (Fig. 2d; May−July). This likely reflects the weathering environment in which DSi is generated, as described above. The second lowest δ30DSi values were recorded on day 184 (−0.52‰), during a high discharge meltwater outburst event (~230 m3 s−1) characterised by a large spike in SPM concentration, electrical conductivity and pH (Fig. 2—red-shaded region). This indicates that flushing of long-term stored waters at the ice sheet bed is likely to contribute a very low δ30DSi signature.
The Si isotopic composition and high DSi + ASi concentration of GrIS meltwaters, combined with previous measurements of Icelandic glacier meltwaters12, suggest that glacially derived Si is a previously underappreciated source of light Si isotopes to the ocean, either as DSi24 or dissolvable ASi14. We hypothesise that over periods of time similar to, or longer than, the residence time of silicon in seawater (10–15,000 years)1,4, changes in glacial land coverage would affect the ocean's Si inventory and isotopic composition, for example over glacial cycles4. On shorter (e.g. millennial) temporal and spatial scales, the extremely low δ30Si of glacial meltwater could influence regional marine silicon cycling and isotopic budgets. The extremely light isotopic signature of GrIS runoff is likely to be broadly representative of PIS during deglaciation. Leverett Glacier is likely to be a crude analogue of ice sheet meltwaters because it is a large glacial catchment (~600 km2), has bedrock geology broadly representative of the shield bedrock that underlies much of the land on which the Eurasian and North American ice sheets sat (Fig. 3), displays characteristic GrIS catchment sediment export and therefore physical erosion dynamic45, and has a hydrological regime thought to be typical of large outlet glaciers46,47,48. The simulated temporal evolution of glacial and riverine Si inputs to the ocean from the LGM to the present day leads to a switch from glacially dominated Si supply to nonglacial riverine-dominated supply in the present (Fig. 4). All model ensemble members show a broad maximum of Si input during the deglaciation, with sharp peaks during the meltwater pulses (Fig. 4). The δ30Si value of total Si input into the ocean varies over the deglaciation, decreasing sharply during meltwater pulses of isotopically light glacial Si, but generally increasing over the deglaciation as isotopically heavier riverine Si becomes progressively more important than low δ30Si glacial Si input (Fig. 4; Supplementary Table 1). The three-box model used as a thought experiment to simulate potential oceanic response to changes in glacier silica fluxes (Supplementary Fig. 2) suggests an expanded ocean Si inventory at the LGM and a lower marine δ30Si signature (Fig. 5). This simple experiment includes only variable weathering fluxes in the absence of any changes in marine Si cycling, and as such it is only intended to estimate the magnitude of changes caused by glacial weathering for comparison to available data. The lighter surface δ30Si predicted at the LGM (by 0.06–0.18‰) explains a portion of the glacial to present-day δ30Si increase found in diatomaceous remains, which is usually quoted as 0.5–1.0‰4,10. Much of this increase is reliant on ASi dissolution and bioavailability to marine organisms. ASi dissolution is catalyzed by the presence of the alkali metals49,50, which are found in high dissolved concentrations in seawater. Furthermore, a recent study demonstrates rapid dissolution of ASi from glacial SPM in natural seawater (up to 25% in less than 30 days), at high sediment loading concentrations (1 gL−1 of SPM)14, corroborating previous evidence of synthetic ASi dissolution in artificial seawater51. The isotopic signature from ASi will likely imprint upon ocean waters upon dissolution. 
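The following paragraphs turn on how the partitioning between isotopically light glacial inputs and heavier riverine inputs sets the δ30Si of the total input flux to the ocean. The minimal flux-weighted mixing sketch below is not from the study; the flux magnitudes are hypothetical placeholders, while the two end-member δ30Si values follow the glacial and nonglacial means reported above:

```python
# Minimal sketch of a flux-weighted delta-30Si for combined Si inputs to the ocean.
# End-member isotope values follow the text (glacial ~ -0.25 permil, nonglacial
# rivers ~ +1.25 permil); the flux magnitudes below are purely illustrative.

def mixed_d30si(fluxes, deltas):
    """Flux-weighted mean delta-30Si of several input sources."""
    total = sum(fluxes)
    return sum(f * d for f, d in zip(fluxes, deltas)) / total

d30si_glacial, d30si_river = -0.25, 1.25   # permil, from the reported means

# Hypothetical example (arbitrary flux units): glacial Si input shrinking
# relative to riverine input between the LGM and the present day.
lgm_mix     = mixed_d30si([2.0, 5.0], [d30si_glacial, d30si_river])   # more glacial Si
present_mix = mixed_d30si([0.5, 6.0], [d30si_glacial, d30si_river])   # less glacial Si

print(round(lgm_mix, 2), round(present_mix, 2), round(present_mix - lgm_mix, 2))
# The weighted input delta-30Si rises as the glacial share declines, in the same
# direction as the LGM-to-present change estimated in the text.
```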
There is an increase in whole-ocean δ30Si between the LGM and present day, as a result of the change in partitioning between isotopically heavy nonglacial river waters and isotopically light glacial meltwaters, in all ensemble members. The change in whole-ocean δ30Si reflects the change in the isotopic composition of the inputs (~0.15 to ~0.45‰), when the model is run to equilibrium (Supplementary Fig. 3). This indicates the modern ocean might still be responding to LGM and deglaciation meltwater inputs, and could continue to do so for at least another ~20,000 years. The δ30Si value of Si exported from the surface of the high-latitude box (i.e. a simulated opal export flux; Fig. 5) evolves according to the change in whole-ocean δ30Si. This is as expected for the three-box model, in which no change in Si utilisation was simulated in order to isolate the effect of external inputs (i.e. glacial meltwater versus nonglacial meltwater inputs) on the isotopic composition recorded in the marine diatom record. The modelled change in external input (i.e. glacial vs. nonglacial runoff) can explain only around ~0.1‰ of LGM–present change in marine diatom δ30Si records in this box (i.e. up to 20% of the observed change in Southern Ocean diatom core records), which is analogous to the Southern Ocean. There are several possible candidates to explain the discrepancy between modelled and observed δ30Si. First, an obvious candidate for the observed change is a difference in Si utilisation between the LGM and today7. Second, changes in external inputs not modelled in our simulations such as the input of Si through the dissolution of aeolian dust, or change in the nonglacial weathering regime4, may have a further impact. The model predicted that whole-ocean DSi concentrations were higher during the deglaciation and likely higher during the LGM than present day (Fig. 5). A larger Si inventory has implications for CO2 drawdown and ecosystem function, via increased diatom productivity, possibly at the expense of other phytoplankton groups52, as has been observed in Greenlandic fjords24. This has important implications for marine biogeochemical cycles, as higher Si input favours the growth of diatoms relative to other phytoplankton groups53. This is likely to have an impact on the organic carbon export (due to opal ballasting), surface alkalinity (by changing the proportion of silicifiers to calcifying phytoplankton), and the "silica pump", which controls the ratio of nutrients reaching the deep ocean52,53,54,55. The model results further indicate that MWP1a and MWP1b34 impart a signature on marine DSi and δ30Si (Fig. 5), even though they are relatively short-lived events of a few hundred years34. MWP1a is especially notable as it leads to whole ocean excursions in lower δ30Si (up to −0.08‰) and elevated DSi concentrations (up to 1.5 μM) in all simulations (Fig. 5—shaded blue). There is likely some influence of iceberg calving in the sea level rise observed during MWP1a. Although inputs of ASi from iceberg rafted debris may be significant14, this is not currently accounted for in the flux calculations and δ30Si composition of inputs. However, there is a growing consensus that around half of the sea level rise from MWP1a comes from the interior of the Laurentide ice sheet56, with smaller contributions from Antarctica (likely <2 m57), Eurasia (~2.5 m58) and Greenland (~0.5 m). 
Runoff from melting terrestrial ice therefore likely made up the dominant portion of MWP1a freshwater flux and there is little evidence to suggest iceberg calving contributed anywhere near as much to sea level rise during this period. Iceberg Si inputs from large ice calving events (e.g. Heinrich events such as H1) are not included in our model but should be addressed in future research due to the potentially large associated Si fluxes14. While the whole ocean excursion during the meltwater pulse events is only of the order of the uncertainty on a δ30Si measurement, we would expect proximal downstream PIS effects to be more pronounced than whole ocean model simulations indicate (as per our sponge spicule record; Fig. 6). Although our model is a relatively crude representation of real-world complexity, it indicates that changes in continental ice sheet coverage and meltwater DSi/ASi input were likely of significant importance in the global Si (and hence carbon) cycle over these time periods and, by extension, over previous glacial cycles26. Data from the sponge spicule record of a marine sediment core proximal to Iceland reveal high-frequency variability in spicule δ30Si during ice sheet collapse, of up to −0.6‰ over ~300 years (Fig. 6). These changes may have been driven by a doubling of DSi concentrations (from approximately 20 to 40–50 μM6), variation in seawater δ30DSi at the time of spicule formation, or a combination of both increased DSi concentrations and lower seawater δ30DSi. Such significant and rapid changes would require a local source that is highly enriched in DSi and/or contributes low δ30DSi to ocean surface waters, which our data indicate could be of glacial origin14,59. Data from Icelandic glaciers, including a spot measurement from Skeiðarárjökull (which has a catchment area ~1400 km2; δ30DSi of 0.01‰ and DSi concentration of 70.4 µM), corroborate the observation that an enriched and light δ30DSi source from local glacial meltwaters is likely12,13. This interpretation is supported by the observation that the excursions in spicule δ30Si from our record coincide, within the age model uncertainty, with estimates of flux pulses derived from sea-level changes and collapse of the local Icelandic ice sheet 15,000 to 14,700 years before present59 (Fig. 6). Further evidence for this comes from the coincidence of the lowest Si isotope values with fluctuations in carbonate stable isotopes35, assessed from paired planktonic-benthic foraminiferal records60 (Fig. 6). This signal is consistent with high-frequency switching between the influence of Arctic seawater and glacially derived waters at our study site, with the isotopically high and low δ30Si waters arising from radiocarbon-depleted, Si-poor Arctic seawater61 and from comparatively well-ventilated subglacial input, respectively. Although there was likely reduced deep-water ventilation during the early part of the deglacial, this would likely have only influenced deeper waters (>2 km)62. Core evidence suggests that localised polar influence from meltwaters, sea-ice formation and brine rejection would likely have transferred a surface δ30Si signal to depth via overflow waters60,62. These overflow waters originated locally and from the Nordic Seas62, which would have been heavily influenced by ice melt from northern hemisphere ice sheets, with a corresponding light δ30Si composition and elevated Si concentrations.
The part played by ice sheets and glaciers in marine nutrient and nutrient isotope cycling is only just starting to be appreciated1,14,63. Here we show that these systems act as a significant source of isotopically light Si, either directly via dissolved silica or indirectly as dissolvable amorphous silica attached to SPM. This has the potential to impact the marine Si inventory over a range of different spatial and temporal scales, given the meltwater input from the wastage of large palaeo-ice sheets since the LGM. Our model indicates that the magnitude of meltwater-derived Si inputs is sufficient to drive significant changes in the ocean's Si inventory on glacial/interglacial and deglacial timescales, thereby modulating the productivity of diatoms relative to other primary producers. Results provide evidence for significant low δ30Si release during rapid ice sheet wastage in the δ30Si composition of a high latitude North Atlantic sediment core record, which corroborates the hypothesis that terrestrial ice cover impacted the oceanic Si cycle, derived from modern ice sheet data and modelling results. These findings highlight the important role played by glacial meltwater in the marine Si cycle, aiding in our interpretation of palaeoceanographic proxies and our understanding of past and present carbon cycling. Our data demonstrate the potential for a feedback between PIS growth and decay, increased Si delivery to the ocean and CO2 drawdown via stimulating the productivity of diatoms. Hydrological monitoring Leverett Glacier runoff was hydrologically gauged from the onset (28 May) to the end of the ablation season (15 September) 2015. Gauging of Leverett Glacier meltwater river has been extensively discussed by others (see, e.g., refs. 22,30,32,48,64,65.). Stage (for conversion to discharge), electrical conductivity and turbidity (a proxy for suspended sediment concentration) were logged every 10 min at a stable bedrock section of the river ~2.2 km downstream of the glacier portal. Permanent (fixed in place) and mobile (re-located to keep pace with river stage) temporary pressure transducers monitored stage, which was converted into discharge using a stage-discharge rating curve of 26 Rhodamine-WT dye dilution injections (R2 = 0.81 for permanent pressure transducer, and 0.84 for the mobile pressure transducer). Discharge was calculated by dividing the amount of dye injected by the area under the return curve. The errors associated with discharge measurements are ±12.1 % following the methods of Bartholomew et al.32. Calibrating turbidity (in mV) against 23 manually collected samples (using a USDH48) allowed formation of a continuous suspended sediment record (in g L−1). Around 300–500 mL of meltwater was filtered through a pre-weighed 47 mm 0.45 µm cellulose nitrate filter (Whatman®), with the amount of meltwater filtered accurately measured using a measuring cylinder. On return to labs in the UK, filters were oven dried overnight at 40 °C and reweighed to four decimal places. Suspended sediment concentration was plotted against the turbidity at sampling time points to derive a linear relationship (R2 = 0.73). The linear regression between suspended sediment and turbidity was used to derive suspended sediment concentrations at each 10-min interval over the measurement period. Errors associated with suspended sediment measurements are estimated to be ±6%30. All water samples were collected from the same location ~1 km from Leverett Glacier terminus (Fig. 
1) using 1 L HDPE bottles (NalgeneTM) and filtered immediately through 0.45 μm cellulose nitrate filters (Whatman®). Samples were stored in clean HDPE 30 ml Nalgene bottles and kept refrigerated until analysis. Cellulose nitrate filters were retained and also stored refrigerated until ASi and bulk SPM analysis. Dissolved silica (DSi) Dissolved silica (as silicic acid) was determined using LaChat QuickChem® 8500 series 2-flow injection analyser (QuickChem® Method 31-114-27-1-D). The methodological limit of detection was 0.3 μmol, precision was ±1.3% and accuracy was +2.1%, as determined from five replicates of a 250 μg L−1 (8.9 μmol) standard prepared by gravimetric dilution from a 1000 mg L−1 Si stock (CertiPur®). Amorphous silica (ASi) Amorphous silica was measured using the weak alkaline extraction method of DeMaster66, used to determine biogenic opal and, increasingly, inorganic amorphous silica in terrestrial soils and sediments67,68. The DeMaster66 method uses 0.1 M sodium carbonate (Na2CO3) solution, a weak base, which maximises the dissolution of amorphous Si with minimal impact on more refractory crystalline material. Approximately 30 mg of sample (accurately weighed) was placed in a clean 60 mL HDPE bottle (NalgeneTM) with 50 mL of 85 °C 0.1 M Na2CO3 solution. Bottles were placed in a hot water bath at 85 °C for the duration of the extraction. Aliquots of 1 mL were taken at 2, 3 and 5 h and stored refrigerated in a new, clean 2 mL microcentrifuge tube (polypropelene). Samples were measured using the same method as DSi (above) within 24 h immediately after dilution of 0.5 mL sample in 4.5 mL of 0.021 M HCl. At least two blanks were processed alongside samples. Precision was ±2.9% and accuracy was +0.4%, as determined from five replicates of a 250 μg L−1 (8.9 μmol) standard prepared by gravimetric dilution from a 1000 mg L−1 Si stock (CertiPur®). Amorphous silica was determined by using the intercept of the regression line. This method was also compared to the method of extracting total reactive silica for δ30ASi using 0.2 M sodium hydroxide (NaOH), to ensure consistency between %ASi results and silicon isotope composition (Supplementary Fig. 4). We are confident this method extracted mostly ASi as % extractable Si was lower or similar to the well-tested 0.1 M Na2CO3 method. Silicon isotope analysis To determine δ30ASi, approximately 10–30 mg of sample was air-dried in a laminar flow hood and accurately weighed into Teflon vials (Savillex). To this, 1 mL 0.2 M NaOH was added per mg of sediment. Samples were heated at 100 °C for 30 min to extract reactive silica (assumed to be mostly ASi; Supplementary Fig. 4). This method was compared to the method of extracting ASi using 0.1 M Na2CO3, to ensure consistency between %ASi results and silicon isotope composition (Supplementary Fig. 4). We are confident this method extracted mostly ASi as % extractable Si was lower or similar to the well-tested Na2CO3 method. Samples were then acidified with 8 N nitric acid (30 μL per 1 mL of solution), diluted 1 in 5, and then centrifuged for 5 min at 4000 rpm before being filtered through a 0.22 μm PES syringe filter (Pall Acrodiscs). A threshold minimum value for %ASi was used for samples (0.01% ASi), as samples with very low %ASi may have resulted in more refractory material also being extracted, skewing the δ30Si record. Bedrock (n = 3) and bulk SPM (n = 4) δ30Si were determined after alkaline fusion, adapted from the method of Georg et al.69. 
Bedrock samples (unsorted, coarse proglacial debris collected at the front of Leverett Glacier) were initially crushed using a sledgehammer on a metal plate (within multiple thick polyethylene bags), then ground for 1 min in a Fritsch Planetary Mono Mill Pulverisette 6 at 500 rpm, following the methods of Telling et al.70. Approximately 15 mg of bulk rock sample powder or suspended particulate material was subsequently accurately weighed into a silver crucible with ~200 mg of NaOH pellets (analytical grade). Crucibles were then placed in a muffle furnace, heated to 730 °C for 10 min to fuse, and allowed to cool for 10 min. Samples were added to 30 mL of deionised water (18.2 MΩ cm Milli-Q Millipore), left overnight and then sonicated for 10 min to aid final dissolution. Samples were acidified and diluted with Milli-Q water using a ratio of 2.1 mL 8 N HNO3 per 500 mL water, before analysis as below. Dissolved samples were preconcentrated in Teflon vials by evaporating on a hotplate at 90 °C until approximately 2 mL of sample remained. All samples (equivalent 7.2 μg Si) were purified using precleaned BioRad exchange resin AG50W-X12 (in H+ form) columns and eluted with MQ water3, before being spiked with a Mg solution (Inorganic Ventures). Freshwater samples and their bracketing standards were additionally spiked with 50 μL 0.01 M sulphuric acid (Romil-UpA) to reduce mass bias resulting from high anion loading71. Silicon isotope composition (28Si, 29Si, 30Si) was determined by mass spectrometry using a Thermo ScientificTM Neptune PlusTM High Resolution MC-ICP-MS in the Bristol Isotope Group laboratories at the University of Bristol (Supplementary Table 3). Machine blanks were <1% of the signal on 28Si, and procedural blanks were below the limit of detection. A standard-sample-standard bracketing procedure with Mg-doping was used to correct for mass bias72. International reference NBS-28 was used as the bracketing standard and sample results were calculated using the δ30Si notation for deviations of 30Si/28Si from this bracketing standard (Eq. (1)). $$\delta^{30}\mathrm{Si} = \left[\frac{\left({}^{30}\mathrm{Si}/{}^{28}\mathrm{Si}\right)_{\mathrm{sample}} - \left({}^{30}\mathrm{Si}/{}^{28}\mathrm{Si}\right)_{\mathrm{NBS28}}}{\left({}^{30}\mathrm{Si}/{}^{28}\mathrm{Si}\right)_{\mathrm{NBS28}}}\right] \times 1000.$$ δ30DSi internal precision was typically ±0.10‰ (2σ) for δ30Si and ±0.05‰ (2σ) for δ29Si. The long-term reproducibility was determined by analysis of two international reference standards, characterised by a number of research groups. The mean for diatomite was +1.26‰ ± 0.11‰ (2σ) for 27 measurements73 and the average for LMG08 (sponge) was −3.33‰ ± 0.15‰ (2σ) for 53 measurements74. External reproducibility of freshwater δ30Si was assessed using a lake water standard (RMR4) from the NERC Isotope Geosciences Laboratory UK (NIGL), which had mean δ29Si and δ30Si values of +0.46‰ ± 0.02‰ (2σ) and +0.91‰ ± 0.03‰ (2σ) respectively (n = 3), in good agreement with previous measurements from NIGL75. In a three-isotope plot, all samples measured during the study fall along a straight line with a gradient of 0.523 ± 0.025 (a gradient of 0.526 indicates mass-dependent fractionation; Supplementary Fig. 5).
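As a small illustration of Eq. (1) (not part of the original methods), the snippet below converts raw isotope ratios into δ30Si and δ29Si relative to a bracketing standard; all of the ratio values are invented placeholders for demonstration only:

```python
# Illustrative delta-notation calculation following Eq. (1); all ratio values
# below are invented placeholders, not measured data.

def delta(sample_ratio, standard_ratio):
    """Per-mil deviation of a sample isotope ratio from the bracketing standard."""
    return (sample_ratio - standard_ratio) / standard_ratio * 1000.0

# Hypothetical 30Si/28Si and 29Si/28Si ratios for a sample and the bracketing standard
r30_std, r29_std = 0.033532, 0.050804
r30_sample, r29_sample = 0.033515, 0.0507906

d30 = delta(r30_sample, r30_std)
d29 = delta(r29_sample, r29_std)
print(round(d30, 3), round(d29, 3), round(d29 / d30, 3))
# For mass-dependent fractionation, delta-29Si is roughly half of delta-30Si;
# the study reports a three-isotope gradient of 0.523 +/- 0.025.
```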
Flux estimates for the silicon isotope three-box model Glacial meltwater flux from the palaeo ice sheets into the oceans is calculated using the ICE-6G_C (VM5a) reconstructed ice mass loss leading to sea level rise from the LGM to present day (Supplementary Fig. 6)34,76. The timestep of the reconstruction is 500 years. Thus, although shorter meltwater pulse events (such as MWP1a and MWP1b) are not precisely resolved in time, their meltwater contribution to the global oceans is captured in the longer term. In addition to this deglaciation meltwater, we include a flux from modelled precipitation minus evaporation (i.e. meltwater runoff that is balanced by snow and ice accumulation) over the palaeo ice sheets77, which was calculated from the best 30 members of an ensemble of simulations validated against both the preindustrial and LGM climate. At the LGM, this yields an ice sheet runoff estimate of ~7700 ± 770 km3 year−1, peaking at 21,600 ± 2160 km3 year−1 during the period encompassing Meltwater Pulse 1a (MWP1a), and falling to ~1400 ± 140 km3 year−1 at present78. These values give a reasonable first-order approximation of changes in glacial runoff over the last 21,000 years. Riverine runoff input fluxes are calculated using two end members: modern-day runoff (37,288 ± 1846 km3 year−1)79 and the percentage difference in precipitation−evaporation simulated over non-ice covered land area for the modern day compared to LGM using the same 30 climate model ensemble as above (−24.6%; 28,115 ± 1391 km3 year−1)77. Changes in runoff at each time point during the deglaciation are approximated by scaling to the percentage land ice cover34 from these two end members. Estimates for other major silica (DSi + ASi) input fluxes (groundwater, aeolian dust, hydrothermal and sea floor weathering) are taken from Tréguer et al.1 and Frings et al.4. Aeolian dust fluxes are known to change significantly from the glaciation to present day80 but are kept constant in our simulations to allow for evaluation of glacier meltwater effect only. Riverine DSi and ASi concentrations and associated δ30Si composition are taken from Durr et al.81 (DSi), Conley82 (ASi) and Frings et al.4 (δ30DSi and δ30ASi composition) to calculate fluxes and δ30Si of riverine inputs at each model time step. Glacial meltwater DSi and ASi fluxes and δ30Si composition are taken from samples measured at Leverett Glacier in this paper (Supplementary Table 1). We estimate a change in the DSi + ASi flux of −11 (−39 to +6)%, and a change in weighted δ30Si of the input flux of +0.33 (+0.23 to +0.47)‰ from LGM conditions to present day, using these mass balance calculations (with minimum and maximum values; Supplementary Table 1; Fig. 4). Silicon isotope three-box model framework We adapt the three-box ocean model of Sarmiento and Toggweiler33 to simulate the deglacial oceanic cycle of Si and its isotopes in an open-system ocean, i.e. with Si inputs into and outputs from the ocean. The inputs to the ocean are computed as described above (note Heinrich event H1 is not included in our model due to the uncertainties in associated freshwater fluxes), while outputs are parameterised as a temporally constant fraction of export that is lost from the ocean by burial in sediment (see Model ensemble below). 
The model was run at a 1-year time step with a time-transient Si input (magnitude and weighted δ30Si of the input flux) dependent on the balance between ice sheet meltwater flux and riverine flux over the last 21,000 years (from LGM to present day; as below; e.g. Supplementary Tables 1, 2). The three-box model of Sarmiento and Toggweiler33 splits the ocean into a deep-ocean box (96.8% of ocean volume) and two surface-ocean boxes, one representing the low latitudes (2.2% of ocean volume, 85% of the ocean surface) and one representing the high latitudes (1% of ocean volume, 15% of the ocean surface). These boxes are connected by a simplified representation of the ocean circulation as represented in Supplementary Fig. 2. The model's Si cycle is driven by Si uptake in the two surface boxes, which is parameterised as a first-order function of Si concentration, with the rate constants kl and kh (see Supplementary Table 1). This uptake drives an export of Si into the deep ocean box. For a given steady-state Si concentration in the surface ocean boxes, a version of the model in which all Si taken up is exported to the deep ocean produces identical results (in terms of isotopic and concentration response) to a version in which 50% of the Si taken up is redissolved in the surface boxes (following e.g. Tréguer and De La Rocha1). A small fraction fb of this export flux does not dissolve within the deep ocean but is lost to burial in sediment, representing the output term that, in equilibrium, balances the input of Si from external sources. In addition to the glacial and riverine fluxes of Si to the ocean discussed above, other external sources of Si are also included (detailed in the section Flux estimates for silicon isotope three-box model), and are assumed to be constant over the deglaciation in order to isolate the effect of changes in glacial/riverine Si input on the oceanic Si system (Supplementary Table 2). These temporally constant fluxes are simulated following Frings et al.4, and include input of Si from aeolian deposition, groundwater discharge, hydrothermalism and seafloor weathering, each of which contributes Si to different boxes of the model (see Supplementary Fig. 2). Si isotopes are handled in the model by carrying a tracer of 30Si. The only process that produces isotope fractionation in the model is the uptake of Si in the two surface boxes; this fractionation is simulated by scaling the rate constant of 30Si uptake by the fractionation factor α = 0.9989 (i.e. an isotope effect of −1.1‰) relative to that for the Si tracer. No isotope fractionation during dissolution is modelled. The isotope composition of all input fluxes to the ocean (including the temporally variable glacial and riverine fluxes) is simulated as a temporally constant value (see Supplementary Table 2) based on Frings et al.4. We carry out an ensemble of 50 simulations in which a Latin Hypercube sampling method is used to choose a range of possible temporal evolutions of Si input fluxes from glacial and riverine fluxes, with the ranges derived as described above. With one exception, all parameters of the ocean-interior Si cycle are left unchanged in the ensemble. The uptake rate constants kl and kh are explicitly left unchanged so as to avoid any changes in the relative utilisation of Si between simulations, since we wish to quantify the degree to which the δ30Si of exported particulate Si may change over the deglaciation without any change in Si utilisation. 
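A minimal sketch of this kind of three-box Si-isotope model is given below for orientation only. The box volumes, exchange rates, uptake constants, burial fraction and input values are placeholders chosen for readability; they are not the published configuration of Supplementary Table 2 or the circulation of Supplementary Fig. 2.

```python
import numpy as np

# Toy three-box Si-isotope model in the spirit described above (illustration only).
# Boxes: 0 = low-latitude surface, 1 = high-latitude surface, 2 = deep ocean.
V = np.array([0.022, 0.010, 0.968]) * 1.33e18    # [m3] box volumes (placeholder split)
k = np.array([1.0, 0.5, 0.0])                    # [1/yr] first-order Si uptake in surface boxes
ALPHA = 0.9989                                   # 30Si fractionation on uptake (-1.1 per mil)
F_B = 0.065                                      # fraction of export buried (lost from ocean)
MIX = np.array([2.0e15, 5.0e15, 0.0])            # [m3/yr] exchange of each surface box with deep

def d30(si30, si, r_std=0.033532):
    return (si30 / si / r_std - 1.0) * 1e3

def step(si, si30, input_mol, input_d30, r_std=0.033532, dt=1.0):
    """One time step (years) for total-Si and 30Si inventories [mol] in the 3 boxes."""
    # biological uptake in the surface boxes; 30Si is taken up more slowly (ALPHA < 1)
    up = k * si * dt
    up30 = ALPHA * k * si30 * dt
    si, si30 = si - up, si30 - up30
    si[2] += (1 - F_B) * up.sum()                # exported Si redissolves in the deep box,
    si30[2] += (1 - F_B) * up30.sum()            # except the buried fraction F_B
    # crude exchange with the deep box restores surface concentrations
    for i in (0, 1):
        ex = MIX[i] * (si[2] / V[2] - si[i] / V[i]) * dt
        ex30 = MIX[i] * (si30[2] / V[2] - si30[i] / V[i]) * dt
        si[i] += ex; si[2] -= ex
        si30[i] += ex30; si30[2] -= ex30
    # external input (rivers + meltwater), delivered to the low-latitude surface box
    si[0] += input_mol * dt
    si30[0] += input_mol * r_std * (1 + input_d30 / 1e3) * dt
    return si, si30

# spin-up toward quasi-steady state with a constant placeholder input
si = np.array([1e15, 5e14, 1.2e17]); si30 = si * 0.033532
for _ in range(20000):
    si, si30 = step(si, si30, input_mol=6.0e12, input_d30=1.0)
print([f"{d30(a, b):+.2f}" for a, b in zip(si30, si)])   # d30Si of the three boxes
```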
The one uncertain parameter of the oceanic Si cycle that does change between simulations is the burial fraction fb. We constrain the range of possible values that fb may plausibly take in the context of this model by conducting a sensitivity test using a Latin Hypercube sampling approach: we run a 150-member model ensemble in which fb is varied concurrently with a range of Si input fluxes corresponding to the uncertainties on the modern Si flux to the ocean (riverine fluxes from Frings et al.4; glacial fluxes extrapolated from this study as in the main simulations). The resulting dependency of the whole-ocean mean Si concentration on fb (Supplementary Fig. 7) is used to determine an uncertainty range for fb. As can be seen, the modern mean-ocean Si concentration of ~92 μM [86] is reproduced for values of fb between 0.056 and 0.073, and we thus apply this range to the model ensemble. For each member of the ensemble, the model was spun up with the ensemble member's specific values of fb and LGM input fluxes for 100,000 years, followed by a 21,000-year simulation of the deglaciation. These results are presented in Fig. 5. Continuation of the simulations for a further 79,000 years (i.e. a total of 100,000-year post-spin up) allows us to assess the long-term response of the model. These results are presented in Supplementary Fig. 3. The data used in this article are available from the corresponding author ([email protected]) on request. This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication. Treguer, P. J. & De La Rocha, C. L. The world ocean silica cycle. Annu. Rev. Mar. Sci. 5, 477–501 (2013). Nelson, D. M. et al. Production and dissolution of biogenic silica in the ocean: Revised global estimates, comparison with regional data and relationship to biogenic sedimentation. Glob. Biogeochem. Cy. 9, 359–372 (1995). Egge, J. K. & Aksnes, D. L. Silicate as regulating nutrient in phytoplankton competition. Mar. Ecol. Prog. Ser. 83, 281–289 (1992). Frings, P. J. et al. The continental Si cycle and its impact on the ocean Si isotope budget. Chem. Geol. 425, 12–36 (2016). Ziegler, K., Chadwick, O. A., Brzezinski, M. A. & Kelly, E. F. Natural variations of δ30Si ratios during progressive basalt weathering, Hawaiian Islands. Geochim. Cosmochim. Acta 69, 4597–4610 (2005). Hendry, K. R. & Robinson, L. F. The relationship between silicon isotope fractionation in sponges and silicic acid concentration: modern and core-top studies of biogenic opal. Geochim. Cosmochim. Acta 81, 1–12 (2012). De La Rocha, C. L., Brzezinski, M. A., DeNiro, M. J. & Shemesh, A. Silicon-isotope composition of diatoms as an indicator of past oceanic change. Nature 395, 680–683 (1998). Hendry, K. R., Robinson, L. F., McManus, J. F. & Hays, J. D. Silicon isotopes indicate enhanced carbon export efficiency in the North Atlantic during deglaciation. Nat. Commun. 5, 3107 (2014). Article ADS PubMed CAS Google Scholar Georg, R. B., Reynolds, B. C., Frank, M. & Halliday, A. N. Mechanisms controlling the silicon isotopic compositions of river waters. Earth Planet. Sci. Lett. 249, 290–306 (2006). Horn, M. G., Beucher, C. P., Robinson, R. S. & Brzezinski, M. A. Southern ocean nitrogen and silicon dynamics during the last deglaciation. Earth Planet. Sci. Lett. 310, 334–339 (2011). Beucher, C. P., Brzezinski, M. A. & Crosta, X. 
Silicic acid dynamics in the glacial sub-Antarctic: implications for the silicic acid leakage hypothesis. Global Biogeochem. Cy. 21, GB3015 (2007). Opfergelt, S. et al. Riverine silicon isotope variations in glaciated basaltic terrains: implications for the Si delivery to the ocean over glacial-interglacial intervals. Earth Planet. Sci. Lett. 369, 211–219 (2013). Georg, R. B. et al. Silicon isotope variations accompanying basalt weathering in Iceland. Earth Planet. Sci. Lett. 261, 476–490 (2007). Hawkings, J. R. et al. Ice sheets as a missing source of silica to the polar oceans. Nat. Commun. 8, 14198 (2017). Article ADS PubMed PubMed Central CAS Google Scholar Jones, I. W. et al. Modelled glacial and non-glacial HCO3(-), Si and Ge fluxes since the LGM: little potential for impact on atmospheric CO2 concentrations and a potential proxy of continental chemical erosion, the marine Ge/Si ratio. Glob. Planet. Chang. 33, 139–153 (2002). Vance, D., Teagle, D. A. H. & Foster, G. L. Variable Quaternary chemical weathering fluxes and imbalances in marine geochemical budgets. Nature 458, 493–496 (2009). Ehlers, J., Gibbard, P. L. & Hughes, P. D. Quaternary Glaciations—Extent and Chronology: A Closer Look (Elsevier, Amsterdam, 2011). Knudson, K. P. & Hendy, I. L. Climatic influences on sediment deposition and turbidite frequency in the Nitinat Fan, British Columbia. Mar. Geol. 262, 29–38 (2009). Jaeger, J. M. & Koppes, M. N. The role of the cryosphere in source-to-sink systems. Earth-Sci. Rev. 153, 43–76 (2016). Deschamps, P. et al. Ice-sheet collapse and sea-level rise at the Bolling warming 14,600 years ago. Nature 483, 559–564 (2012). Peltier, W. R. & Fairbanks, R. G. Global glacial ice volume and Last Glacial Maximum duration from an extended Barbados sea level record. Quat. Sci. Rev. 25, 3322–3337 (2006). Hawkings, J. R. et al. The effect of warming climate on nutrient and solute export from the Greenland Ice Sheet. Geochem. Perspect. Lett. 1, 94–104 (2015). Wadham, J. et al. The potential role of the Antarctic Ice Sheet in global biogeochemical cycles. Earth Env. Sci. T. R. Soc. 104, 55–67 (2013). Meire, L. et al. High export of dissolved silica from the Greenland Ice Sheet. Geophys. Res. Lett. 43, 9173–9182 (2016). Arrigo, K. R. et al. Melting glaciers stimulate large summer phytoplankton blooms in southwest Greenland waters. Geophys. Res. Lett. 44, 6278–6285 (2017). Cermeno, P. et al. Continental erosion and the Cenozoic rise of marine diatoms. Proc. Natl. Acad. Sci. USA 112, 4239–4244 (2015). De La Rocha, C. L. & Bickle, M. J. Sensitivity of silicon isotopes to whole-ocean changes in the silica cycle. Mar. Geol. 217, 267–282 (2005). Georg, R. B., West, A. J., Basu, A. R. & Halliday, A. N. Silicon fluxes and isotope composition of direct groundwater discharge into the Bay of Bengal and the effect on the global ocean silicon isotope budget. Earth Planet. Sci. Lett. 283, 67–74 (2009). Jeandel, C. & Oelkers, E. H. The influence of terrigenous particulate material dissolution on ocean chemistry and global element cycles. Chem. Geol. 395, 50–66 (2015). Cowton, T. et al. Rapid erosion beneath the Greenland ice sheet. Geology 40, 343–346 (2012). Bouysse, P. Geological Map of the World at 1: 35 000 000, 3rd edn (CCGM-CGMW, Paris, France, 2014). Bartholomew, I. et al. Supraglacial forcing of subglacial drainage in the ablation zone of the Greenland ice sheet. Geophys. Res. Lett. 38, L08502 (2011). Sarmiento, J. L. & Toggweiler, J. R. 
A new model for the role of the oceans in determining atmospheric PCO2. Nature 308, 621 (1984). Peltier, W. R., Argus, D. F. & Drummond, R. Space geodesy constrains ice age terminal deglaciation: the global ICE-6G_C (VM5a) model. J. Geophys. Res.—Sol. Ea. 120, 450–487 (2015). Thornalley, D. J. R., Elderfield, H. & McCave, I. N. Intermediate and deep water paleoceanography of the northern North Atlantic over the past 21,000 years. Paleoceanography 25, PA1211 (2010). Lin, I. J. & Somasund, P. Alterations in properties of samples during their preparation by grinding. Powder Technol. 6, 171–179 (1972). Sánchez-Soto, P. J. et al. Effects of dry grinding on the structural changes of kaolinite powders. J. Am. Ceram. Soc. 83, 1649–1657 (2000). Hellmann, R. et al. Unifying natural and laboratory chemical weathering with interfacial dissolution-reprecipitation: a study based on the nanometer-scale chemistry of fluid-silicate interfaces. Chem. Geol. 294, 203–216 (2012). Ruiz-Agudo, E., Putnis, C. V. & Putnis, A. Coupled dissolution and precipitation at mineral-fluid interfaces. Chem. Geol. 383, 132–146 (2014). Casey, W. H. et al. Leaching and reconstruction at the surfaces of dissolving chain-silicate minerals. Nature 366, 253–256 (1993). Chemtob, S. M. et al. Silicon isotope systematics of acidic weathering of fresh basalts, Kilauea Volcano, Hawai'i. Geochim. Cosmochim. Acta 169, 63–81 (2015). Crompton, J. W. et al. Clay mineral precipitation and low silica in glacier meltwaters explored through reaction-path modelling. J. Glaciol. 61, 1061–1078 (2015). Georg, R. B., Zhu, C., Reynolds, B. C. & Halliday, A. N. Stable silicon isotopes of groundwater, feldspars, and clay coatings in the Navajo Sandstone aquifer, Black Mesa, Arizona, USA. Geochim. Cosmochim. Acta 73, 2229–2241 (2009). Hawkings, J. et al. The Greenland Ice Sheet as a hotspot of phosphorus weathering and export in the Arctic. Glob. Biogeochem. Cy. 30, 191–210 (2016). Overeem, I. et al. Substantial export of suspended sediment to the global oceans from glacial erosion in Greenland. Nat. Geosci. 10, 859–863 (2017). Chandler, D. M. et al. Evolution of the subglacial drainage system beneath the Greenland Ice Sheet revealed by tracers. Nat. Geosci. 6, 195–198 (2013). Chu, V. W. Greenland ice sheet hydrology: a review. Prog. Phys. Geogr. 38, 19–54 (2014). Tedstone, A. J. et al. Greenland ice sheet motion insensitive to exceptional meltwater forcing. Proc. Natl. Acad. Sci. USA 110, 19719–19724 (2013). Dove, P. M., Han, N., Wallace, A. F. & De Yoreo, J. J. Kinetics of amorphous silica dissolution and the paradox of the silica polymorphs. Proc. Natl. Acad. Sci. USA 105, 9903–9908 (2008). Article ADS PubMed Google Scholar Icenhower, J. P. & Dove, P. M. The dissolution kinetics of amorphous silica into sodium chloride solutions: effects of temperature and ionic strength. Geochim. Cosmochim. Acta 64, 4193–4203 (2000). Kato, K. & Kitano, Y. Solubility and dissolution rate of amorphous silica in distilled and sea water at 20°C. J. Oceanogr. Soc. Jpn. 24, 147–152 (1968). Yool, A. & Tyrrell, T. Role of diatoms in regulating the ocean's silicon cycle. Global Biogeochem. Cy. 17, 1103 (2003). Durkin, C. A. et al. Silicic acid supplied to coastal diatom communities influences cellular silicification and the potential export of carbon. Limnol. Oceanogr. 58, 1707–1726 (2013). Dugdale, R. C. & Wilkerson, F. P. Silicate regulation of new production in the equatorial Pacific upwelling. Nature 391, 270–273 (1998). Dugdale, R. C., Wilkerson, F. P. & Minas, H. J. 
The role of a silicate pump in driving new production. Deep-Sea Res. Pt. I 42, 697–719 (1995). Gregoire, L. J., Payne, A. J. & Valdes, P. J. Deglacial rapid sea level rises caused by ice-sheet saddle collapses. Nature 487, 219 (2012). Golledge, N. R. et al. Antarctic contribution to meltwater pulse 1A from reduced Southern Ocean overturning. Nat. Commun. 5, 5107 (2014). Patton, H. et al. Deglaciation of the Eurasian ice sheet complex. Quat. Sci. Rev. 169, 148–172 (2017). Norðdahl, H. & Ingólfsson, Ó. Collapse of the Icelandic ice sheet controlled by sea-level rise? arktos 1, 1–18 (2015). Thornalley, D. J. R. et al. The deglacial evolution of North Atlantic deep convection. Science 331, 202–205 (2011). Thornalley, D. J. R. et al. A warm and poorly ventilated deep Arctic Mediterranean during the last glacial period. Science 349, 706–710 (2015). Thornalley, D. J. R., Elderfield, H. & McCave, I. N. Reconstructing North Atlantic deglacial surface hydrography and its link to the Atlantic overturning circulation. Glob. Planet. Chang. 79, 163–175 (2011). Chen, T. et al. Ocean mixing and ice-sheet control of seawater 234U238U during the last deglaciation. Science 354, 626–629 (2016). Hawkings, J. R. et al. Ice sheets as a significant source of highly reactive nanoparticulate iron to the oceans. Nat. Commun. 5, 3929 (2014). Sole, A. et al. Winter motion mediates dynamic response of the Greenland Ice Sheet to warmer summers. Geophys. Res. Lett. 40, 3940–3944 (2013). DeMaster, D. J. The supply and accumulation of silica in the marine-environment. Geochim. Cosmochim. Acta 45, 1715–1732 (1981). Clymans, W. et al. Amorphous silica analysis in terrestrial runoff samples. Geoderma 167−68, 228–235 (2011). Sauer, D. et al. Review of methodologies for extracting plant-available and amorphous Si from soils and aquatic sediments. Biogeochemistry 80, 89–108 (2006). Georg, R. B., Reynolds, B. C., Frank, M. & Halliday, A. N. New sample preparation techniques for the determination of Si isotopic compositions using MC-ICPMS. Chem. Geol. 235, 95–104 (2006). Telling, J. et al. Rock comminution as a source of hydrogen for subglacial ecosystems. Nat. Geosci. 8, 851–855 (2015). Hughes, H. J. et al. Controlling the mass bias introduced by anionic and organic matrices in silicon isotopic measurements by MC-ICP-MS. J. Anal. At. Spectrom. 26, 1892–1896 (2011). Cardinal, D. et al. Isotopic composition of silicon measured by multicollector plasma source mass spectrometry in dry plasma mode. J. Anal. At. Spectrom. 18, 213–218 (2003). Reynolds, B. C. et al. An inter-laboratory comparison of Si isotope reference materials. J. Anal. At. Spectrom. 22, 561–568 (2007). Hendry, K. R., Rickaby, R. E. M. & Allen, C. S. Changes in micronutrient supply to the surface Southern Ocean (Atlantic sector) across the glacial termination. Geochem. Geophy. Geosys. 12, QO9007 (2011). Panizzo, V. N. et al. Insights into the transfer of silicon isotopes into the sediment record. Biogeosciences 13, 147–157 (2016). Ivanovic, R. F. et al. Transient climate simulations of the deglaciation 21–9 thousand years before present (version 1)—PMIP4 Core experiment design and boundary conditions. Geosci. Model Dev. 9, 2563–2587 (2016). Gregoire, L. J., Valdes, P. J., Payne, A. J. & Kahana, R. Optimal tuning of a GCM using modern and glacial constraints. Clim. Dynam. 37, 705–719 (2011). Bliss, A., Hock, R. & Radić, V. Global response of glacier runoff to twenty-first century climate change. J. Geophys. Res.—Earth 119, 717–730 (2014). Dai, A. & Trenberth, K. E. 
Estimates of freshwater discharge from continents: latitudinal and seasonal variations. J. Hydrometeorol. 3, 660–687 (2002). Petit, J. R. et al. Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica. Nature 399, 429–436 (1999). Durr, H. H. et al. Global spatial distribution of natural riverine silica inputs to the coastal zone. Biogeosciences 8, 597–620 (2011). Conley, D. J. Riverine contribution of biogenic silica to the oceanic silica budget. Limnol. Oceanogr. 42, 774–777 (1997). Lawson, E. C. et al. Greenland Ice Sheet exports labile organic carbon to the Arctic oceans. Biogeosciences 11, 4015–4028 (2014). Palmer, S., Shepherd, A., Nienow, P. & Joughin, I. Seasonal speedup of the Greenland Ice Sheet linked to routing of surface water. Earth Planet. Sci. Lett. 302, 423–428 (2011). Andersen, K. K. et al. High-resolution record of Northern Hemisphere climate extending into the last interglacial period. Nature 431, 147–151 (2004). Garcia, H. E., Locarnini, R. A., Boyer, T. P., Antonov, J. I., Baranova, O.K., Zweng, M.M., Reagan, J.R., Johnson, D.R. (2014) World Ocean Atlas 2013, Volume 4: Dissolved Inorganic Nutrients (phosphate, nitrate, silicate). S. Levitus, Ed., A. Mishonov Technical Ed.; NOAA Atlas NESDIS 76, 25 pp We thank all of those who assisted with fieldwork at LG, Dr. Fotis Sgouridis and Mr. James Williams in LOWTEX laboratories at the University of Bristol, Dr. Chris Coath in Bristol Isotope Group laboratories at the University of Bristol, Prof. Nicholas McCave and Dr. David Thornalley who provided sediment core samples, and Dr. Lauren Gregoire for assistance in preparing the ice mass loss data. We are grateful to Prof. Rob Raiswell and three anonymous reviewers for their constructive comments on the manuscript. This research is part of the UK NERC-funded DELVE (NERC grant NE/I008845/1) and a Leverhulme Trust Research Grant (RPG-2016-439) to J.L.W., and the ICYLAB grant (ERC grant ERC-2015-Stg—678371_ICY-LAB) to K.R.H. The Leverhulme Trust, via a Leverhulme research fellowship to J.L.W., the Royal Society, via a university research fellowship (UF120084) to K.R.H., and the Czech Science Foundation (GACR), via a junior grant (15-17346Y) to M.S., provided additional support. G.F.d.S. was supported by the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie (grant agreement #708407), R.I. was funded by an NERC Independent Research Fellowship (NE/K008536/1), and T.J.K. was supported by Charles University Research Centre (programme no. 204069). Bristol Glaciology Centre, School of Geographical Sciences, University Road, Bristol, BS8 1SS, UK Jon R. Hawkings, Jade E. Hatton, Jemma L. Wadham, Guillaume Lamarche-Gagnon, Andrew Tedstone & Martyn Tranter School of Earth Sciences, University of Bristol, Bristol, BS8 1RJ, UK Katharine R. Hendry Institute of Geochemistry and Petrology, ETH Zurich, Clausiusstrasse 25, 8092, Zürich, Switzerland Gregory F. de Souza School of Earth and Environment, University of Leeds, Leeds, LS2 9JT, UK Ruza Ivanovic Department of Ecology, Charles University, Viničná 7, 12844, Prague 2, Czech Republic Tyler J. Kohler & Marek Stibal National Oceanography Centre, University of Southampton Waterfront Campus, European Way, Southampton, SO14 3ZH, UK Alexander Beaton Earth and Planetary Sciences, University of California, Santa Cruz, CA, 95064, USA Mathis P. 
Hain Ocean and Earth Science, National Oceanography Centre Southampton, University of Southampton Waterfront Campus, European Way, Southampton, SO14 3ZH, UK School of Earth and Ocean Sciences, Cardiff University, Main Building, Park Place, Cardiff, CF10 3AT, UK Elizabeth Bagshaw & Jennifer Pike All authors made a significant contribution to the research presented here. J.L.W., K.R.H., M.T. and J.R.H. conceived the project. J.R.H., J.E.H., T.J.K., M.S., G.L.-G., A.B., E.B. and A.T. collected the field data. J.E.H., K.R.H. and J.R.H. undertook lab analysis. J.R.H., J.E.H., K.R.H. and J.L.W. wrote the manuscript. K.R.H., J.P. and M.T. provided significant help and invaluable advice in lab analysis. G.F.d.S., M.P.H. and R.I. aided in model development, data input, set up and output interpretation. Correspondence to Jon R. Hawkings. Hawkings, J.R., Hatton, J.E., Hendry, K.R. et al. The silicon cycle impacted by past ice sheets. Nat Commun 9, 3210 (2018). https://doi.org/10.1038/s41467-018-05689-1
What is the CFT dual to pure gravity on AdS$_3$?

Pure $2+1$-dimensional gravity in $AdS_3$ (parametrized as $S= \int d^3 x \frac{1}{16 \pi G} \sqrt{-g} (R+\frac{2}{l^2})$) is a topological field theory closely related to Chern-Simons theory, and at least naively seems like it may be renormalizable on-shell for certain values of $l/G$. This is a theory which has been studied by many authors, but I can't seem to find a consensus as to what the CFT dual is. Here's what I've gathered from a cursory literature search:

Witten (2007) suggests that the dual is the monster theory of Frenkel, Lepowsky, and Meurman for a certain value of $l/G$; his argument applies when the central charges $c_L$ and $c_R$ are both multiples of $24$. In his argument, he assumes holomorphic factorization of the boundary CFT, which seems to be fairly controversial. His argument does produce approximately correct entropy for BTZ black holes, but a case can be made that black hole states shouldn't exist at all if the CFT is holomorphically factorized. He also gave a PiTP talk on the subject. Witten himself is unsure if this work is correct.

In a recent 2013 paper, McGough and H. Verlinde claim that "The edge states of 2+1-D gravity are described by Liouville theory", citing 5 papers to justify this claim. All of those are before Witten's 2007 work. Witten's work does mention Liouville theory, and has some discussion, but he doesn't seem to believe that this is the correct boundary theory, and Liouville theory is at any rate not compatible with holomorphic factorization. This paper also claims that "pure quantum gravity...is unlikely to give rise to a complete theory." Similar assertions are made in a few other papers.

Another proposal was made in Castro et al. (2011), relating this to minimal models such as the Ising model. Specifically, they claim that the partition function for the Ising model is equal to that of pure gravity at $l=3G$, and make certain claims about higher spin cases.

It doesn't seem to me that all of these can simultaneously be true. There could be some way to mitigate the differences between the proposals, but my scan of the literature didn't point to anything. It seems to me that no one agrees on the correct theory. I'm not even sure if these are the only proposals, but they're the ones that I'm aware of. First, are my above statements regarding the three proposals accurate? Also, is there any consensus in the majority of the HET community as to whether pure quantum gravity theories in $AdS_3$ exist, and if so what their CFT duals are? Finally, if there is no consensus, what are the necessary conditions for each of the proposals to be correct?

quantum-field-theory research-level quantum-gravity conformal-field-theory ads-cft Colin McFaul asked Nov 1 '13 at 0:59

Without reading your whole question and just answering the title: I think that is still a (very interesting) open problem. See e.g., Five Problems in Quantum Gravity - Andrew Strominger http://arxiv.org/abs/arXiv:0906.1313

On very general grounds [15], we expect that 3D AdS gravity should be dual to a 2D CFT with central charge $c = 3l/2G$. Solving the theory is equivalent to specifying this CFT. It was suggested in [23] that, rather than directly quantizing the Einstein-Hilbert action, this CFT might simply be deduced by various consistency requirements.
Namely, the central charge must be $c = 3l/2G$, Z must be modular invariant (since these are large diffeomorphisms) and its pole structure must reflect the fact that there are no perturbative excitations. Adding the additional assumption of holomorphic factorization (i.e. decoupling of the left and right movers in the CFT), it was shown [23] that Z is uniquely determined to be a certain modular form $Z_{\rm ext}$. Unfortunately $Z_{\rm ext}$ does not agree with the Euclidean sum-over-geometries [25], which indicates that the assumption is not valid for pure gravity. Modular invariance and the restriction on the pole structure are still strong, if not uniquely determining, hints on the form of Z for pure gravity. Determining Z for pure 3D quantum Einstein gravity - if it exists - is an important open problem.

ungerade

Be aware that I could easily not be up to date, so take that with a grain of salt. Hopefully a more knowledgeable user will add something. – ungerade Nov 1 '13 at 1:22
Thanks. Two of the three papers I cited are more recent than 2009, but assuming it hasn't changed, this at least negatively answers the question "is there a consensus?". – user32020 Nov 1 '13 at 1:23
Search references Uspekhi Mat. Nauk: Personal entry: Uspekhi Mat. Nauk, 1987, Volume 42, Issue 3(255), Pages 93–152 (Mi umn2537) This article is cited in 65 scientific papers (total in 66 papers) The $\bar\partial$-equation in the multidimensional inverse scattering problem R. G. Novikov, G. M. Henkin Full text: PDF file (3392 kB) References: PDF file HTML file Russian Mathematical Surveys, 1987, 42:3, 109–180 Bibliographic databases: UDC: 517.9+530.1 MSC: 81U40, 81Q05, 35J10 Citation: R. G. Novikov, G. M. Henkin, "The $\bar\partial$-equation in the multidimensional inverse scattering problem", Uspekhi Mat. Nauk, 42:3(255) (1987), 93–152; Russian Math. Surveys, 42:3 (1987), 109–180 Citation in format AMSBIB \Bibitem{NovHen87} \by R.~G.~Novikov, G.~M.~Henkin \paper The~$\bar\partial$-equation in the multidimensional inverse scattering problem \jour Uspekhi Mat. Nauk \issue 3(255) \pages 93--152 \mathnet{http://mi.mathnet.ru/umn2537} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=896879} \adsnasa{http://adsabs.harvard.edu/cgi-bin/bib_query?1987RuMaS..42..109N} \transl \jour Russian Math. Surveys \crossref{https://doi.org/10.1070/RM1987v042n03ABEH001419} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=A1987N637200005} Linking options: http://mi.mathnet.ru/eng/umn2537 http://mi.mathnet.ru/eng/umn/v42/i3/p93 This publication is cited in the following articles: P. G. Grinevich, S. P. Novikov, "Two-dimensional "inverse scattering problem" for negative energies and generalized-analytic functions. I. Energies below the ground state", Funct. Anal. Appl., 22:1 (1988), 19–27 R. G. Novikov, "Multidimensional inverse spectral problem for the equation $-\Delta\psi+(v(x)-Eu(x))\psi=0$", Funct. Anal. Appl., 22:4 (1988), 263–272 R Beals, R R Coifman, Inverse Probl, 5:2 (1989), 87 R Weder, Inverse Probl, 7:3 (1991), 461 Ricardo Weder, "Generalized limiting absorption method and multidimensional inverse scattering theory", Math Meth Appl Sci, 14:7 (1991), 509 G. Eskin, J. Ralston, "Inverse backscattering in two dimensions", Comm Math Phys, 138:3 (1991), 451 A.G. Ramm, O.L. Weaver, "3D inverse scattering", Computers & Mathematics with Applications, 22:4-5 (1991), 1 Masaru Ikehata, "A special green's function for the biharmonic operator and its application to an inverse boundary value problem", Computers & Mathematics with Applications, 22:4-5 (1991), 53 David Colton, Lassi Päivärinta, "The uniqueness of a solution to an inverse scattering problem for electromagnetic waves", Arch Rational Mech Anal, 119:1 (1992), 59 G. Eskin, J. Ralston, "Inverse backscattering", J Anal Math, 58:1 (1992), 177 V. V. Dubrovskii, "On stability of inverse problems of spectral analysis for equations of mathematical physics", Russian Math. Surveys, 49:3 (1994), 183–184 Volker Enss, Ricardo Weder, "The geometrical approach to multidimensional inverse scattering", J Math Phys (N Y ), 36:8 (1995), 3902 G Hachem, Inverse Probl, 11:1 (1995), 123 G. Eskin, J. Ralston, "Inverse scattering problem for the Schrödinger equation with magnetic potential at a fixed energy", Comm Math Phys, 173:1 (1995), 199 Silke Arians, "Geometric approach to inverse scattering for the Schrödinger equation with magnetic and electric potentials", J Math Phys (N Y ), 38:6 (1997), 2761 Gulmaro Corona Corona, "Generalized Gel'fand–Levitan integral equation for two block Ablowitz–Kaup–Newell–Segur systems", J Math Phys (N Y ), 40:9 (1999), 4393 O. M. 
Kiselev, "Asymptotic behaviour of the solution of the two-dimensional Dirac system with rapidly oscillating coefficients", Sb. Math., 190:2 (1999), 233–254 R. G. Novikov, "Approximate Inverse Quantum Scattering at Fixed Energy in Dimension 2", Proc. Steklov Inst. Math., 225 (1999), 285–302 P. G. Grinevich, "Scattering transformation at fixed non-zero energy for the two-dimensional Schrödinger operator with potential decaying at infinity", Russian Math. Surveys, 55:6 (2000), 1015–1083 Grant Karamyan, "The inverse scattering problem for the acoustic equation in a half-space", Inverse Probl, 18:6 (2002), 1673 Yu. V. Zasorin, "Potential Renormalization Method for a Model of the Hartree–Fock–Slater Type", Theoret. and Math. Phys., 130:3 (2002), 375–382 Grant Karamyan, "Inverse Scattering in a Half Space with Passive Boundary", Communications in Partial Differential Equations, 28:9-10 (2003), 1627 Grant Karamyan, "The inverse scattering problem with impedance boundary in a half-space", Inverse Probl, 20:5 (2004), 1485 O. M. Kiselev, "Asymptotics of solutions of higher-dimensional integrable equations and their perturbations", Journal of Mathematical Sciences, 138:6 (2006), 6067–6230 Alexandru Tamasan, "On the scattering method for the -equation and reconstruction of convection coefficients", Inverse Problems, 20:6 (2004), 1807 Ricardo Weder, Dimitri Yafaev, "On inverse scattering at a fixed energy for potentials with a regular behaviour at infinity", Inverse Probl, 21:6 (2005), 1937 R G Novikov, "Formulae and equations for finding scattering data from the Dirichlet-to-Neumann map with nonzero background potential", Inverse Probl, 21:1 (2005), 257 Xiaosheng Li, "Inverse Scattering Problem for the Schrödinger Operator with External Yang–Mills Potentials in Two Dimensions at Fixed Energy", Communications in Partial Differential Equations, 30:4 (2005), 451 David Dos Santos Ferreira, Carlos E. Kenig, Johannes Sjöstrand, Gunther Uhlmann, "Determining a Magnetic Schrödinger Operator from Partial Cauchy Data", Comm Math Phys, 271:2 (2007), 467 A S Fokas, "Nonlinear Fourier transforms, integrability and nonlocality in multidimensions", Nonlinearity, 20:9 (2007), 2093 Mikko Salo, "Recovering first order terms from boundary measurements", J Phys Conf Ser, 73 (2007), 012020 I. A. Taimanov, S. P. Tsarev, "Two-dimensional Schrödinger operators with fast decaying potential and multidimensional $L_2$-kernel", Russian Math. Surveys, 62:3 (2007), 631–633 Isozaki, H, "The partial derivative-theory for inverse problems associated with Schrodinger operators on hyperbolic spaces", Publications of the Research Institute For Mathematical Sciences, 43:1 (2007), 201 I. A. Taimanov, S. P. Tsarev, "Two-dimensional rational solitons and their blowup via the Moutard transformation", Theoret. and Math. Phys., 157:2 (2008), 1525–1541 Shari Moskow, John C Schotland, "Convergence and stability of the inverse scattering series for diffuse waves", Inverse Problems, 24:6 (2008), 065005 Burov V.A., Alekseenko N.V., Rumyantseva O.D., "Mnogochastotnoe obobschenie algoritma Novikova dlya resheniya obratnoi dvumernoi zadachi rasseyaniya", Akusticheskii zhurn., 55:6 (2009), 784–798 G Uhlmann, "Electrical impedance tomography and Calderón's problem", Inverse Problems, 25:12 (2009), 123011 I. A. Taimanov, S. P. 
Tsarev, "On the Moutard transformation and its applications to spectral theory and soliton equations", Journal of Mathematical Sciences, 170:3 (2010), 371–387 Lassi Päivärinta, Mikko Salo, Gunther Uhlmann, "Inverse scattering for the magnetic Schrödinger operator", Journal of Functional Analysis, 259:7 (2010), 1771 R G Novikov, "New global stability estimates for the Gel'fand–Calderon inverse problem", Inverse Probl, 27:1 (2011), 015001 Hiroshi Isozaki, Evgeny Korotyaev, "Inverse Problems, Trace Formulae for Discrete Schrödinger Operators", Ann. Henri Poincaré, 2011 Kenig C.E., Salo M., Uhlmann G., "Reconstructions From Boundary Measurements on Admissible Manifolds", Inverse Probl. Imaging, 5:4 (2011), 859–877 Burov V.A., Kasatkina E.E., Poberezhskaya A.Yu., Bogatyrev A.V., Rumyantseva O.D., "Osobennosti rascheta protsessov rasseyaniya na kontrastnykh i silno pogloschayuschikh dvukh- i trekhmernykh neodnorodnostyakh", Akusticheskii zhurnal, 57:5 (2011), 665–680 Matteo Santacesaria, "New global stability estimates for the Calderón problem in two dimensions", J. Inst. Math. Jussieu, 2012, 1 Grinevich P.G. Novikov R.G., "Faddeev Eigenfunctions for Point Potentials in Two Dimensions", Phys. Lett. A, 376:12-13 (2012), 1102–1106 R. G. Novikov, M. Santacesaria, "Monochromatic Reconstruction Algorithms for Two-dimensional Multi-channel Inverse Problems", International Mathematics Research Notices, 2012 Bazulin E.G., "O vozmozhnosti ispolzovaniya v ultrazvukovom nerazrushayuschem kontrole metoda maksimalnoi entropii dlya polucheniya izobrazheniya rasseivatelei po naboru ekhosignalov", Akusticheskii zhurnal, 59:2 (2013), 235–235 M. I. Isaev, R. G. Novikov, "Stability estimates for recovering the potential by the impedance boundary map", St. Petersburg Math. J., 25:1 (2014), 23–41 V. A. Burov, A. S. Shurup, D. I. Zotov, O. D. Rumyantseva, "Simulation of a functional solution to the acoustic tomography problem for data from quasi-point transducers", Acoust. Phys, 59:3 (2013), 345 I. A. Taimanov, S. P. Tsarev, "Faddeev eigenfunctions for two-dimensional Schrödinger operators via the Moutard transformation", Theoret. and Math. Phys., 176:3 (2013), 1176–1183 M. I. Isaev, "Exponential Instability in the Inverse Scattering Problem on the Energy Interval", Funct. Anal. Appl., 47:3 (2013), 187–194 Novikov R.G., "Approximate Lipschitz Stability for Non-Overdetermined Inverse Scattering at Fixed Energy", J. Inverse Ill-Posed Probl., 21:6 (2013), 813–823 A. V. Kazeykina, "Absence of Solitons with Sufficient Algebraic Localization for the Novikov–Veselov Equation at Nonzero Energy", Funct. Anal. Appl., 48:1 (2014), 24–35 Gunther Uhlmann, "Inverse problems: seeing the unseen", Bull. Math. Sci, 2014 M.I. Isaev, R.G. Novikov, "Effectivized Hölder-logarithmic stability estimates for the Gel'fand inverse problem", Inverse Problems, 30:9 (2014), 095006 A. D. Agaltsov, R. G. Novikov, "Riemann–Hilbert problem approach for two-dimensional flow inverse scatteringa)", J. Math. Phys, 55:10 (2014), 103502 R. G. Novikov, "An iterative approach to non-overdetermined inverse scattering at fixed energy", Sb. Math., 206:1 (2015), 120–134 V. A. Burov, D. I. Zotov, O. D. Rumyantseva, "Reconstruction of the sound velocity and absorption spatial distributions in soft biological tissue phantoms from experimental ultrasound tomography data", Acoust. Phys, 61:2 (2015), 231 Novikov R.G., "Formulas For Phase Recovering From Phaseless Scattering Data At Fixed Frequency", 139, no. 
8, 2015, 923–936 Novikov R.G., "Explicit Formulas and Global Uniqueness for Phaseless Inverse Scattering in Multidimensions", J. Geom. Anal., 26:1 (2016), 346–359 Grinevich P.G. Novikov R.G., "Moutard transform approach to generalized analytic functions with contour poles", Bull. Sci. Math., 140:6 (2016), 638–656 Ferreira David Dos Santos, Kurylev Ya., Lassas M., Salo M., "The Calderón problem in transversally anisotropic geometries", J. Eur. Math. Soc., 18:11 (2016), 2579–2626 E. L. Lakshtanov, B. R. Vainberg, "A test for the existence of exceptional points in the Faddeev scattering problem", Theoret. and Math. Phys., 190:1 (2017), 77–90 B. Berndtsson, S. V. Kislyakov, R. G. Novikov, V. M. Polterovich, P. L. Polyakov, A. E. Tumanov, A. A. Shananin, C. L. Epstein, "Gennadi Markovich Henkin (obituary)", Russian Math. Surveys, 72:3 (2017), 547–570 M. I. Belishev, "Boundary control and tomography of Riemannian manifolds (the BC-method)", Russian Math. Surveys, 72:4 (2017), 581–644 Lakshtanov E. Vainberg B., "Recovery of l-P-Potential in the Plane", J. Inverse Ill-Posed Probl., 25:5 (2017), 633–651
Internal rapid stabilization of a 1-D linear transport equation with a scalar feedback

Christophe Zhang, Chair for Applied Analysis (Alexander von Humboldt Professorship), Friedrich-Alexander Universität Nürnberg, Cauerstr. 11, 91058 Erlangen * Corresponding author: Christophe Zhang Received September 2020 Revised November 2020 Published March 2022 Early access March 2021 Fund Project: This work was partially supported by ANR project Finite4SoS (ANR-15-CE23-0007), the French Corps des Mines, and the Chair of Applied Analysis, Alexander von Humboldt Professorship, Friedrich-Alexander Universität Nürnberg

We use a variant of the backstepping method to study the stabilization of a 1-D linear transport equation on the interval $(0,L)$, by controlling the scalar amplitude of a piecewise regular function of the space variable in the source term. We prove that if the system is controllable in a periodic Sobolev space of order greater than $1$, then the system can be stabilized exponentially in that space and, for any given decay rate, we give an explicit feedback law that achieves that decay rate. The variant of the backstepping method used here relies mainly on the spectral properties of the linear transport equation, and leads to some original technical developments that differ substantially from previous applications.

Keywords: Backstepping, system equivalence, Fredholm transformations, transport equation, stabilization, rapid stabilization, internal control. Mathematics Subject Classification: 35L02, 93D15, 93B17, 93B30, 93D23. Citation: Christophe Zhang. Internal rapid stabilization of a 1-D linear transport equation with a scalar feedback. Mathematical Control & Related Fields, 2022, 12 (1) : 169-200. doi: 10.3934/mcrf.2021006
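The control configuration described in the abstract can be pictured with a crude numerical experiment. In the sketch below the system is assumed to take the form $y_t + y_x = u(t)a(x)$ on $(0,L)$ with periodic boundary conditions (suggested by the periodic Sobolev spaces mentioned above, but an assumption); the spatial profile, the feedback and all numbers are illustrative placeholders, not the explicit feedback law constructed in the paper.

```python
import numpy as np

# Rough numerical sketch (illustrative assumptions, not the paper's construction):
# y_t + y_x = u(t) * a(x) on (0, L), periodic in x, with a scalar control u(t)
# multiplying a fixed piecewise regular spatial profile a(x).

L_dom, N, dt = 1.0, 200, 0.002
dx = L_dom / N
x = np.linspace(0.0, L_dom, N, endpoint=False)
a = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)      # assumed piecewise regular profile
y = np.sin(2 * np.pi * x / L_dom)                  # initial state

def step(y, u):
    """First-order upwind step for y_t + y_x = u * a(x), periodic in x."""
    dydx = (y - np.roll(y, 1)) / dx
    return y + dt * (-dydx + u * a)

for _ in range(5000):
    u = -2.0 * float(y @ a) * dx                   # simple damping feedback, illustrative only
    y = step(y, u)

print("L2 norm of the state after 10 time units:", np.sqrt(float(y @ y) * dx))
```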
To expand algebraic expressions, we need to use the distributive property. The reverse of expansion, called factorization or factoring, consists of using the distributive property in reverse and writing the expression as a product of simpler ones. For example, we can write \[x^{2}-2x-3=(x-3)(x+1).\] We say that $x-3$ and $x+1$ are factors of $x^{2}-2x-3$.

Why factorization is important

Factorization has many applications. Here are a number of them.

Solving equations: If we can put an equation in a factored form $P\cdot Q=0$ (where $P$ and $Q$ are two expressions), then solving the problem reduces to solving two independent and often simpler equations $P=0$ and $Q=0$. (Recall that if $ab=0$ then $a=0$ or $b=0$.) For example, because $x^{2}-2x-3=(x-3)(x+1)$, instead of solving \[x^{2}-2x-3=0\] we can solve \[x-3=0\quad\text{or}\quad x+1=0,\] which shows at once \[x=3\quad\text{or}\quad x=-1.\] Similarly, because \[x^{3}-8x^{2}+17x-10=(x-1)(x-2)(x-5)\] we immediately conclude \[x^{3}-8x^{2}+17x-10=0\Rightarrow x=1\quad\text{or}\quad x=2\quad\text{or}\quad x=5.\]

Evaluating expressions: It is often easier to evaluate polynomials in the factored form. For example, evaluating $x^{3}-8x^{2}+17x-10$ reduces to only three subtractions and two multiplications if we use its factored form $(x-1)(x-2)(x-5)$.

Determining the sign of the output: Sometimes we need to find out for which values of the variable (often denoted by $x$) a polynomial is positive and for which values it is negative. Doing so is often much easier if we use the factored form.

Simplifying rational expressions: Another application of factorization is the simplification of rational expressions. For example, $(x^{2}-1)/(x^{2}-2x-3)$ can be simplified as \[\frac{x^{2}-1}{x^{2}-2x-3}=\frac{(x-1)\cancel{(x+1)}}{(x-3)\cancel{(x+1)}}=\frac{x-1}{x-3}\qquad(x\neq-1)\]

However, factorization is not always possible, and when it is possible, it does not necessarily yield simpler factors. For example, $x^{5}-32$ can be factored into $x-2$ and $x^{4}+2x^{3}+4x^{2}+8x+16$, but obviously we would rather solve $x^{5}-32=x^{5}-2^{5}=0$ than solve the following two equations \[x-2=0,\quad x^{4}+2x^{3}+4x^{2}+8x+16=0.\]

Now let's review some techniques of factorization: common factors, factoring expressions with fractional exponents, factorization of polynomials of the form $x^{2}+bx+c$, special factorization formulas, and factorization by grouping terms.

Common Factors

When there is a factor common to every term of an expression, we can simply factor it out by applying the distributive property in reverse.

Factor each expression: (a) $12x^{3}-15x^{2}$ (b) $4x^{5}-8x^{4}-16x^{3}-20x^{2}$ (c) $(2x-5)(3x-7)+4(3x-7)$

(a) The greatest common factor of the terms $12x^{3}$ and $-15x^{2}$ is $3x^{2}$, so we have \[12x^{3}-15x^{2}=3x^{2}(4x-5)\]
(b) The greatest common factor of all terms is $4x^{2}$, so we have \[4x^{5}-8x^{4}-16x^{3}-20x^{2}=4x^{2}\left(x^{3}-2x^{2}-4x-5\right).\]
(c) The greatest common factor of $(2x-5)(3x-7)$ and $4(3x-7)$ is $(3x-7)$. Thus
\begin{align*} (2x-5)(3x-7)+4(3x-7) & =(3x-7)\left[(2x-5)+4\right]\\ & =(3x-7)(2x-1) \end{align*}

Factoring Expressions with Fractional Exponents

In calculus we sometimes need to factor expressions with fractional or negative exponents. In this case, we factor out the common factor with the smallest exponent.

Factor each expression: (a) $x^{3/2}+4x^{1/2}-7x^{-1/2}$ (b) $(x-4)^{-3/5}+5(x-4)^{2/5}$

(a) The common factor is $x$ and its smallest exponent is $-1/2$.
Thus \[x^{3/2}+4x^{1/2}-7x^{-1/2}=x^{-1/2}(x^{2}+4x-7).\] Note that $x^{-1/2}x^{2}=x^{(2-1/2)}=x^{3/2}$ and $x^{-1/2}x=x^{(1-1/2)}=x^{1/2}$.
(b) The common factor is $x-4$ and its smallest exponent is $-3/5$. Thus
\begin{align*}
(x-4)^{-3/5}+5(x-4)^{2/5} & =(x-4)^{-3/5}\left[1+5(x-4)\right]\\
 & =(x-4)^{-3/5}(5x-19).
\end{align*}
Note that $(x-4)^{-3/5}\cdot(x-4)=(x-4)^{(1-3/5)}=(x-4)^{2/5}$.

Factorization of Polynomials of the Form $x^{2}+bx+c$

Because \[(x+p)(x+q)=x^{2}+(p+q)x+pq,\] we can factor polynomials of the form $x^{2}+bx+c$ if we can find numbers $p$ and $q$ such that \[p+q=b\quad\text{and}\quad pq=c.\] In this section, we choose $p$ and $q$ by trial and error, but technically we can choose them systematically (see the Section on Solutions and Roots).

Example. Factor each expression:
(a) $x^{2}+3x+2$
(b) $x^{2}-2x-15$

Solution.
(a) We need to find $p$ and $q$ such that $p+q=3$ and $pq=2$. By trial and error we find that they are 2 and 1. Thus the factorization is \[x^{2}+3x+2=(x+2)(x+1).\]
(b) We need to choose $p$ and $q$ such that $p+q=-2$ and $pq=-15$. By trial and error we find that they are $-5$ and $3$. Thus \[x^{2}-2x-15=(x-5)(x+3).\]

Special Factorization Formulas

If there is no common factor, to factor algebraic expressions we can sometimes use the following formulas that we reviewed in Section 1.12 and reverse the process.
1. $A^{2}-B^{2}=(A+B)(A-B)$ (Difference of Squares)
2. $A^{2}\pm2AB+B^{2}=(A\pm B)^{2}$ (Perfect Square)
3. $A^{3}-B^{3}=(A-B)(A^{2}+AB+B^{2})$ (Difference of Cubes)
4. $A^{3}+B^{3}=(A+B)(A^{2}-AB+B^{2})$ (Sum of Cubes)
Mnemonic for memorizing the Sum/Difference of Cubes formulas: remember a cube of SOAP (the signs in the factorization are Same, Opposite, Always Positive).

Example. Factor each expression:
(a) $9x^{2}-49$
(b) $x^{4}-16$
(c) $x^{2}+10x+25$
(d) $x^{3}-27$
(e) $8x^{3}+64$

Solution.
(a) Rewriting as a difference of squares, \[9x^{2}-49=(3x)^{2}-7^{2}.\] Let $A=3x$ and $B=7$ in $A^{2}-B^{2}=(A-B)(A+B)$. Then
\begin{align*}
(3x)^{2}-7^{2} & =A^{2}-B^{2}\\
 & =(A-B)(A+B)\\
 & =(3x-7)(3x+7).
\end{align*}
(b) Let $A=x^{2}$ and $B=4$, then
\begin{align*}
x^{4}-16 & =A^{2}-B^{2}\\
 & =(x^{2}-4)(x^{2}+4).
\end{align*}
We note that we can rewrite $x^{2}-4$ as $x^{2}-2^{2}$, so we can use the Difference of Squares formula and factor it as \[x^{2}-2^{2}=(x-2)(x+2).\] Therefore
\begin{align*}
x^{4}-16 & =(x^{2}-4)(x^{2}+4)\\
 & =(x-2)(x+2)(x^{2}+4).
\end{align*}
(c) Let $A=x$ and $B=5$. Because the middle term $10x=2AB$, the polynomial is a perfect square. By the Perfect Square formula we have \[x^{2}+10x+25=(x+5)^{2}.\]
(d) Let $A=x$ and $B=3$, then
\begin{align*}
x^{3}-27 & =A^{3}-B^{3}\\
 & =(A-B)(A^{2}+AB+B^{2})\\
 & =(x-3)(x^{2}+3x+9).
\end{align*}
(e) Here the terms $8x^{3}$ and 64 have the common factor 8, so first of all we factor it out: \[8x^{3}+64=8(x^{3}+8).\] Then we can use the Sum of Cubes formula with $A=x$ and $B=2$:
\begin{align*}
x^{3}+8 & =A^{3}+B^{3}\\
 & =(A+B)(A^{2}-AB+B^{2})\\
 & =(x+2)(x^{2}-2x+4).
\end{align*}
Therefore
\begin{align*}
8x^{3}+64 & =8(x^{3}+8)\\
 & =8(x+2)(x^{2}-2x+4).
\end{align*}

More special factorization formulas

In general we can factor the sum or difference of two $n$th powers ($n$ an integer) as follows.
Difference, even exponent: $A^{2n}-B^{2n}=(A^{n}-B^{n})(A^{n}+B^{n})$
Difference, even or odd exponent: $A^{n}-B^{n}=(A-B)(A^{n-1}+A^{n-2}B+A^{n-3}B^{2}+\cdots+AB^{n-2}+B^{n-1})$
If we replace $B$ with $-B$ in the above formula and $n$ is odd, then we get the following formula (note that if $n$ is even, we won't get a new formula).
Sum, odd exponent: $A^{n}+B^{n}=(A+B)(A^{n-1}-A^{n-2}B+A^{n-3}B^{2}-\cdots-AB^{n-2}+B^{n-1})$
Sum, even exponent: $A^{n}+B^{n}$ cannot, in general, be factored.

Factorization by Grouping Terms

Sometimes there is no common factor (other than $\pm1$) to all terms of a polynomial. However, the polynomial can sometimes be factored if we suitably group the terms that have common factors. This strategy may work for polynomials with at least four terms.
For example, we can factor the polynomial $x^{3}+3x^{2}+4x+12$ if we group the first two terms together and the last two terms together and then factor each group, namely
\begin{align*}
x^{3}+3x^{2}+4x+12 & =(x^{3}+3x^{2})+(4x+12)\\
 & =x^{2}(x+3)+4(x+3)\\
 & =(x+3)(x^{2}+4).
\end{align*}
In the last step, we have factored out the common factor $x+3$.

Example. Factor $2x^{3}+x^{2}-18x-9$.

Solution.
\begin{align*}
2x^{3}+x^{2}-18x-9 & =(2x^{3}+x^{2})-(18x+9) & {\small \text{ (Group terms)}}\\
 & =x^{2}(2x+1)-9(2x+1) & {\small \text{ (Factor out the common factor of each group)}}\\
 & =(2x+1)(x^{2}-9) & {\small \text{ (Factor out } (2x+1))}\\
 & =(2x+1)(x-3)(x+3) & {\small \text{ (Use the Difference of Squares formula)}}
\end{align*}
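None of these factorizations has to be taken on trust: a computer algebra system will check them in a few lines. The sketch below uses Python with the sympy library (an illustrative choice, not something the text above depends on); the polynomials are simply the examples worked out in this section.

    from sympy import symbols, factor, solve, expand

    x = symbols('x')

    # Factor some of the polynomials used as examples above.
    print(factor(x**2 - 2*x - 3))             # (x - 3)*(x + 1)
    print(factor(x**3 - 8*x**2 + 17*x - 10))  # (x - 5)*(x - 2)*(x - 1)
    print(factor(2*x**3 + x**2 - 18*x - 9))   # (x - 3)*(x + 3)*(2*x + 1)

    # Factoring gives the roots of the corresponding equation directly.
    print(solve(x**2 - 2*x - 3, x))           # [-1, 3]

    # Expanding undoes the factorization, confirming the identity.
    print(expand((x - 3)*(x + 1)))            # x**2 - 2*x - 3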
CommonCrawl
Placebo effects are weak: regression to the mean is the main reason ineffective treatments appear to work

"Statistical regression to the mean predicts that patients selected for abnormalcy will, on the average, tend to improve. We argue that most improvements attributed to the placebo effect are actually instances of statistical regression." "Thus, we urge caution in interpreting patient improvements as causal effects of our actions and should avoid the conceit of assuming that our personal presence has strong healing powers." McDonald et al. (1983)

In 1955, Henry Beecher published "The Powerful Placebo". I was in my second undergraduate year when it appeared, and for many decades after that I took it literally. Beecher looked at 15 studies and found that, on average, 35% of patients got "satisfactory relief" when given a placebo. This number got embedded in pharmacological folk-lore. He also mentioned that the relief provided by placebo was greatest in patients who were most ill.

Consider the common experiment in which a new treatment is compared with a placebo, in a double-blind randomised controlled trial (RCT). It's common to call the responses measured in the placebo group the placebo response. But that is very misleading, and here's why. The responses seen in the group of patients that are treated with placebo arise from two quite different processes. One is the genuine psychosomatic placebo effect. This effect gives genuine (though small) benefit to the patient. The other contribution comes from the get-better-anyway effect. This is a statistical artefact and it provides no benefit whatsoever to patients. There is now increasing evidence that the latter effect is much bigger than the former.

How can you distinguish between real placebo effects and the get-better-anyway effect? The only way to measure the size of genuine placebo effects is to compare in an RCT the effect of a dummy treatment with the effect of no treatment at all. Most trials don't have a no-treatment arm, but enough do that estimates can be made. For example, a Cochrane review by Hróbjartsson & Gøtzsche (2010) looked at a wide variety of clinical conditions. Their conclusion was: "We did not find that placebo interventions have important clinical effects in general. However, in certain settings placebo interventions can influence patient-reported outcomes, especially pain and nausea, though it is difficult to distinguish patient-reported effects of placebo from biased reporting."

In some cases, the placebo effect is barely there at all. In a non-blind comparison of acupuncture and no acupuncture, the responses were essentially indistinguishable (despite what the authors and the journal said). See "Acupuncturists show that acupuncture doesn't work, but conclude the opposite". So the placebo effect, though a real phenomenon, seems to be quite small. In most cases it is so small that it would be barely perceptible to most patients. Most of the reason why so many people think that medicines work when they don't isn't a result of the placebo response: it's the result of a statistical artefact.

Regression to the mean is a potent source of deception

The get-better-anyway effect has a technical name, regression to the mean. It has been understood since Francis Galton described it in 1886 (see Senn, 2011 for the history). It is a statistical phenomenon, and it can be treated mathematically (see references, below). But when you think about it, it's simply common sense.
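A minimal simulation makes the point concrete. The sketch below (Python, with entirely invented numbers) gives each 'patient' a symptom score that simply fluctuates at random, picks the day on which each patient feels worst, as if that were the day they sought treatment, and then looks again two weeks later. No treatment of any kind is applied, yet the average 'improvement' is large.

    import random

    random.seed(1)
    n_patients, n_days = 1000, 120

    improvements = []
    for _ in range(n_patients):
        # Fluctuating pain score (roughly 0-10), with no trend and no treatment.
        scores = [random.gauss(5, 2) for _ in range(n_days)]
        # The patient 'goes for treatment' on their worst day (within the first 100 days).
        worst_day = max(range(100), key=lambda d: scores[d])
        later = scores[worst_day + 14]            # score two weeks later
        improvements.append(scores[worst_day] - later)

    # Prints a large average 'improvement' (roughly 5 points on the 10-point scale),
    # produced entirely by selecting patients at their worst moment.
    print(sum(improvements) / n_patients)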
You tend to go for treatment when your condition is bad, and when you are at your worst, then a bit later you're likely to be better. The great biologist Peter Medawar comments thus: "If a person is (a) poorly, (b) receives treatment intended to make him better, and (c) gets better, then no power of reasoning known to medical science can convince him that it may not have been the treatment that restored his health" (Medawar, P.B. (1969:19). The Art of the Soluble: Creativity and originality in science. Penguin Books: Harmondsworth).

This is illustrated beautifully by measurements made by McGorry et al. (2001). Patients with low back pain recorded their pain (on a 10 point scale) every day for 5 months (they were allowed to take analgesics ad lib). The results for four patients are shown in their Figure 2. On average they stay fairly constant over five months, but they fluctuate enormously, with different patterns for each patient. Painful episodes that last for 2 to 9 days are interspersed with periods of lower pain or none at all. It is very obvious that if these patients had gone for treatment at the peak of their pain, then a while later they would feel better, even if they were not actually treated. And if they had been treated, the treatment would have been declared a success, despite the fact that the patient derived no benefit whatsoever from it. This entirely artefactual benefit would be the biggest for the patients that fluctuate the most (e.g. those in panels a and d of the figure).

Figure 2 from McGorry et al, 2000. Examples of daily pain scores over a 6-month period for four participants. Note: Dashes of different lengths at the top of a figure designate an episode and its duration.

The effect is illustrated well by an analysis of 118 trials of treatments for non-specific low back pain (NSLBP), by Artus et al. (2010). The time course of pain (rated on a 100 point visual analogue pain scale) is shown in their Figure 2. There is a modest improvement in pain over a few weeks, but this happens regardless of what treatment is given, including no treatment whatsoever.

FIG. 2 Overall responses (VAS for pain) up to 52-week follow-up in each treatment arm of included trials. Each line represents a response line within each trial arm. Red: index treatment arm; Blue: active treatment arm; Green: usual care/waiting list/placebo arms. ____: pharmacological treatment; – – – -: non-pharmacological treatment; . . .. . .: mixed/other.

The authors comment "symptoms seem to improve in a similar pattern in clinical trials following a wide variety of active as well as inactive treatments", and "The common pattern of responses could, for a large part, be explained by the natural history of NSLBP". In other words, none of the treatments work.

This paper was brought to my attention through the blog run by the excellent physiotherapist, Neil O'Connell. He comments "If this finding is supported by future studies it might suggest that we can't even claim victory through the non-specific effects of our interventions such as care, attention and placebo. People enrolled in trials for back pain may improve whatever you do. This is probably explained by the fact that patients enrol in a trial when their pain is at its worst which raises the murky spectre of regression to the mean and the beautiful phenomenon of natural recovery." O'Connell has discussed the matter in a recent paper, O'Connell (2015), from the point of view of manipulative therapies.
That's an area where there has been resistance to doing proper RCTs, with many people saying that it's better to look at "real world" outcomes. This usually means that you look at how a patient changes after treatment. The hazards of this procedure are obvious from Artus et al., Fig. 2, above. It maximises the risk of being deceived by regression to the mean. As O'Connell commented: "Within-patient change in outcome might tell us how much an individual's condition improved, but it does not tell us how much of this improvement was due to treatment."

In order to eliminate this effect it's essential to do a proper RCT with control and treatment groups tested in parallel. When that's done the control group shows the same regression to the mean as the treatment group, and any additional response in the latter can confidently be attributed to the treatment. Anything short of that is whistling in the wind.

Needless to say, the suboptimal methods are most popular in areas where real effectiveness is small or non-existent. This, sad to say, includes low back pain. It also includes just about every treatment that comes under the heading of alternative medicine. Although these problems have been understood for over a century, it remains true that "It is difficult to get a man to understand something, when his salary depends upon his not understanding it." Upton Sinclair (1935)

Responders and non-responders?

One excuse that's commonly used when a treatment shows only a small effect in proper RCTs is to assert that the treatment actually has a good effect, but only in a subgroup of patients ("responders") while others don't respond at all ("non-responders"). For example, this argument is often used in studies of anti-depressants and of manipulative therapies. And it's universal in alternative medicine. There's a striking similarity between the narrative used by homeopaths and those who are struggling to treat depression. The pill may not work for many weeks. If the first sort of pill doesn't work, try another sort. You may get worse before you get better. One is reminded, inexorably, of Voltaire's aphorism "The art of medicine consists in amusing the patient while nature cures the disease".

There is only a handful of cases in which a clear distinction can be made between responders and non-responders. Most often what's observed is a smear of different responses to the same treatment, and the greater the variability, the greater is the chance of being deceived by regression to the mean. For example, Thase et al. (2011) looked at responses to escitalopram, an SSRI antidepressant. They attempted to divide patients into responders and non-responders. An example (Fig 1a in their paper) is shown. The evidence for such a bimodal distribution is certainly very far from obvious. The observations are just smeared out. Nonetheless, the authors conclude "Our findings indicate that what appears to be a modest effect in the grouped data – on the boundary of clinical significance, as suggested above – is actually a very large effect for a subset of patients who benefited more from escitalopram than from placebo treatment." I guess that interpretation could be right, but it seems more likely to be a marketing tool. Before you read the paper, check the authors' conflicts of interest.

The bottom line is that analyses that divide patients into responders and non-responders are reliable only if that can be done before the trial starts. Retrospective analyses are unreliable and unconvincing.
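The two points above (enrolment at the patient's worst moment, and the need for a parallel control group) can be combined in one more small sketch, again in Python and again with invented numbers. Both arms show a big within-patient 'improvement'; only the difference between the arms estimates the treatment effect, and the individual changes are just a smear, not two neat groups of responders and non-responders.

    import random

    random.seed(2)

    def mean_change(n, true_effect):
        # Each patient enrols when their fluctuating score is at its worst,
        # so the baseline is inflated by selection.
        changes = []
        for _ in range(n):
            baseline = max(random.gauss(5, 2) for _ in range(20))
            followup = random.gauss(5, 2) - true_effect   # settles back, minus any real effect
            changes.append(baseline - followup)
        return sum(changes) / n

    placebo_arm = mean_change(500, true_effect=0.0)
    treatment_arm = mean_change(500, true_effect=0.5)

    print(round(placebo_arm, 2))                  # large 'improvement' with no treatment at all
    print(round(treatment_arm, 2))                # slightly larger improvement
    print(round(treatment_arm - placebo_arm, 2))  # close to 0.5, the only part due to treatment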
Some more reading

Senn, 2011 provides an excellent introduction (and some interesting history). The subtitle is "Here Stephen Senn examines one of Galton's most important statistical legacies – one that is at once so trivial that it is blindingly obvious, and so deep that many scientists spend their whole career being fooled by it." The examples in this paper are extended in Senn (2009), "Three things that every medical writer should know about statistics". The three things are regression to the mean, the error of the transposed conditional and individual response.

You can read slightly more technical accounts of regression to the mean in McDonald & Mazzuca (1983) "How much of the placebo effect is statistical regression" (two quotations from this paper opened this post), and in Stephen Senn (2015) "Mastering variation: variance components and personalised medicine". In 1988 Senn published some corrections to the maths in McDonald (1983).

The trials that were used by Hróbjartsson & Gøtzsche (2010) to investigate the comparison between placebo and no treatment were looked at again by Howick et al. (2013), who found that in many of them the difference between treatment and placebo was also small. Most of the treatments did not work very well.

Regression to the mean is not just a medical deceiver: it's everywhere

Although this post has concentrated on deception in medicine, it's worth noting that the phenomenon of regression to the mean can cause wrong inferences in almost any area where you look at change from baseline. A classical example concerns the effectiveness of speed cameras. They tend to be installed after a spate of accidents, and if the accident rate is particularly high in one year it is likely to be lower the next year, regardless of whether a camera had been installed or not. To find the true reduction in accidents caused by installation of speed cameras, you would need to choose several similar sites and allocate them at random to have a camera or no camera. As in clinical trials, looking at the change from baseline can be very deceptive.

Statistical postscript

Lastly, remember that if you avoid all of these hazards of interpretation, and your test of significance gives P = 0.047, that does not mean you have discovered something. There is still a risk of at least 30% that your 'positive' result is a false positive. This is explained in Colquhoun (2014), "An investigation of the false discovery rate and the misinterpretation of p-values". I've suggested that one way to solve this problem is to use different words to describe P values, something like this:
P > 0.05: very weak evidence
P = 0.05: weak evidence, worth another look
P = 0.01: moderate evidence for a real effect
P = 0.001: strong evidence for a real effect
But notice that if your hypothesis is implausible, even these criteria are too weak. For example, if the treatment and placebo are identical (as would be the case if the treatment were a homeopathic pill) then it follows that 100% of positive tests are false positives.

It's worth mentioning that the question of responders versus non-responders is closely related to the classical topic of bioassays that use quantal responses. In that field it was assumed that each participant had an individual effective dose (IED). That's reasonable for the old-fashioned LD50 toxicity test: every animal will die after a sufficiently big dose. It's less obviously right for ED50 (effective dose in 50% of individuals).
The distribution of IEDs is critical, but it has very rarely been determined. The cumulative form of this distribution is what determines the shape of the dose-response curve for fraction of responders as a function of dose. Linearisation of this curve, by means of the probit transformation, used to be a staple of biological assay. This topic is discussed in Chapter 10 of Lectures on Biostatistics. And you can read some of the history on my blog about Some pharmacological history: an exam from 1959.

Tagged acupuncture, alternative medicine, CAM, chiropractic, osteopathy, physiotherapy, placebo, placebo effect, regression to the mean, statistics | 31 Comments

Two more cases of hype in glamour journals: magnets, cocoa and memory

In the course of thinking about metrics, I keep coming across cases of over-promoted research. An early case was "Why honey isn't a wonder cough cure: more academic spin". More recently, I noticed these examples: "Effect of Vitamin E and Memantine on Functional Decline in Alzheimer Disease" (spoiler: very little), published in the Journal of the American Medical Association; "Primary Prevention of Cardiovascular Disease with a Mediterranean Diet", in the New England Journal of Medicine (which had the second highest altmetric score in 2013); and "Sleep Drives Metabolite Clearance from the Adult Brain", published in Science.

In all these cases, misleading press releases were issued by the journals themselves and by the universities. These were copied out by hard-pressed journalists and made headlines that were certainly not merited by the work. In the last three cases, hyped-up tweets came from the journals. The responsibility for this hype must eventually rest with the authors. The last two papers came second and fourth in the list of highest altmetric scores for 2013. Here are two more very recent examples. It seems that every time I check a highly tweeted paper, it turns out that it is very second rate. Both papers involve fMRI imaging, and since the infamous dead salmon paper, I've been a bit sceptical about them. But that is irrelevant to what follows.

Boost your memory with electricity

That was a popular headline at the end of August. It referred to a paper in Science magazine: "Targeted enhancement of cortical-hippocampal brain networks and associative memory" (Wang, JX et al, Science, 29 August, 2014). This study was promoted by Northwestern University under the headline "Electric current to brain boosts memory". And Science tweeted along the same lines. Science's link did not lead to the paper, but rather to a puff piece, "Rebooting memory with magnets". Again all the emphasis was on memory, with the usual entirely speculative stuff about helping Alzheimer's disease. But the paper itself was behind Science's paywall. You couldn't read it unless your employer subscribed to Science.

All the publicity led to much retweeting and a big altmetrics score. Given that the paper was not open access, it's likely that most of the retweeters had not actually read the paper. When you read the paper, you found that it is mostly not about memory at all. It was mostly about fMRI. In fact the only reference to memory was in a subsection of Figure 4, and that evidence looks desperately unconvincing to me. The test of significance gives P = 0.043. In an underpowered study like this, the chance of this being a false discovery is probably at least 50%. A result like this means, at most, "worth another look". It does not begin to justify all the hype that surrounded the paper.
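The arithmetic behind that 'at least 50%' figure is easy to sketch, following the approach in Colquhoun (2014). The numbers below are illustrative assumptions, not values taken from the paper: a prior probability of 0.1 that the effect is real, and a power of 0.4 for an underpowered test.

    # False discovery risk: of all results that come out 'significant',
    # what fraction come from experiments where there is really no effect?
    alpha = 0.05   # significance threshold
    power = 0.4    # assumed power of the underpowered study (illustrative)
    prior = 0.1    # assumed prior probability that the effect is real (illustrative)

    false_positives = (1 - prior) * alpha   # no real effect, but 'significant'
    true_positives = prior * power          # real effect, detected

    fdr = false_positives / (false_positives + true_positives)
    print(round(fdr, 2))   # about 0.53: roughly a coin toss that the 'discovery' is false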
The journal, the university's PR department, and ultimately the authors, must bear the responsibility for the unjustified claims. Science does not allow online comments following the paper, but there are now plenty of sites that do. NHS Choices did a fairly good job of putting the paper into perspective, though they failed to notice the statistical weakness. A commenter on PubPeer noted that Science had recently announced that it would tighten statistical standards. In this case, they failed. The age of post-publication peer review is already reaching maturity.

Boost your memory with cocoa

Another glamour journal, Nature Neuroscience, hit the headlines on October 26, 2014, in a paper that was publicised in a Nature podcast and a rather uninformative press release: "Enhancing dentate gyrus function with dietary flavanols improves cognition in older adults" (Brickman et al., Nat Neurosci. 2014. doi: 10.1038/nn.3850). The journal helpfully lists no fewer than 89 news items related to this study. Mostly they were something like "Drinking cocoa could improve your memory" (Kat Lay, in The Times). Only a handful of the 89 reports spotted the many problems.

A puff piece from Columbia University's PR department quoted the senior author, Dr Small, making the dramatic claim that "If a participant had the memory of a typical 60-year-old at the beginning of the study, after three months that person on average had the memory of a typical 30- or 40-year-old."

Like anything to do with diet, the paper immediately got circulated on Twitter. No doubt most of the people who retweeted the message had not read the (paywalled) paper. The links almost all led to inaccurate press accounts, not to the paper itself. But some people actually read the paywalled paper and post-publication review soon kicked in. Pubmed Commons is a good site for that, because Pubmed is where a lot of people go for references. Hilda Bastian kicked off the comments there (her comment was picked out by Retraction Watch). Her conclusion was this: "It's good to see claims about dietary supplements tested. However, the results here rely on a chain of yet-to-be-validated assumptions that are still weakly supported at each point. In my opinion, the immodest title of this paper is not supported by its contents." (Hilda Bastian runs the Statistically Funny blog, "The comedic possibilities of clinical epidemiology are known to be limitless", and also a Scientific American blog about risk, Absolutely Maybe.)

NHS Choices spotted most of the problems too, in "A mug of cocoa is not a cure for memory problems". And so did Ian Musgrave of the University of Adelaide, who wrote "Most Disappointing Headline Ever (No, Chocolate Will Not Improve Your Memory)". Here are some of the many problems:
- The paper was not about cocoa. Drinks containing 900 mg cocoa flavanols (as much as in about 25 chocolate bars) and 138 mg of (−)-epicatechin were compared with much lower amounts of these compounds.
- The abstract, all that most people could read, said that subjects were given "high or low cocoa–containing diet for 3 months". But it wasn't a test of cocoa: it was a test of a dietary "supplement".
- The sample was small (37 people altogether, split between four groups), and therefore under-powered for detection of the small effect that was expected (and observed).
- The authors declared the result to be "significant" but you had to hunt through the paper to discover that this meant P = 0.04 (hint: it's 6 lines above Table 1).
- That means that there is around a 50% chance that it's a false discovery.
- The test was short: only three months.
- The test didn't measure memory anyway. It measured reaction speed. They did test memory retention too, and there was no detectable improvement. This was not mentioned in the abstract. Neither was the fact that exercise had no detectable effect.
- The study was funded by the Mars bar company. They, like many others, are clearly looking for a niche in the huge "supplement" market.
- The claims by the senior author, in a Columbia promotional video, that the drink produced "an improvement in memory" and "an improvement in memory performance by two or three decades" seem to have a very thin basis indeed. As has the statement that "we don't need a pharmaceutical agent" to ameliorate a natural process (aging). High doses of supplements are pharmaceutical agents.

To be fair, the senior author did say, in the Columbia press release, that "the findings need to be replicated in a larger study—which he and his team plan to do". But there is no hint of this in the paper itself, or in the title of the press release "Dietary Flavanols Reverse Age-Related Memory Decline". The time for all the publicity is surely after a well-powered study, not before it. The high altmetrics score for this paper is yet another blow to the reputation of altmetrics. One may well ask why Nature Neuroscience and the Columbia press office allowed such extravagant claims to be made on such a flimsy basis.

What's going wrong?

These two papers have much in common. Elaborate imaging studies are accompanied by poor functional tests. All the hype focusses on the latter. This led me to the speculation (in PubMed Commons) that what actually happens is as follows. Authors do a big imaging (fMRI) study. Glamour journal says coloured blobs are no longer enough and refuses to publish without functional information. Authors tag on a small human study. Paper gets published. Hyped-up press releases are issued that refer mostly to the add-on. Journal and authors are happy. But science is not advanced.

It's no wonder that Dorothy Bishop wrote "High-impact journals: where newsworthiness trumps methodology". It's time we forgot glamour journals. Publish open access on the web with open comments. Post-publication peer review is working. But boycott commercial publishers who charge large amounts for open access. It shouldn't cost more than about £200, and more and more are essentially free (my latest will appear shortly in Royal Society Open Science).

Hilda Bastian has an excellent post about the dangers of reading only the abstract, "Science in the Abstract: Don't Judge a Study by its Cover".

I was upbraided on Twitter by Euan Adie, founder of Altmetric.com, because I didn't click through the altmetric symbol to look at the citations: "shouldn't have to tell you to look at the underlying data David" and "you could have saved a lot of Google time". But when I did do that, all I found was a list of media reports and blogs, pretty much the same as Nature Neuroscience provides itself. More interesting, I found that my blog wasn't listed and neither was PubMed Commons. When I asked why, I was told "needs to regularly cite primary research. PubMed, PMC or repository links". But this paper is behind a paywall. So I provide (possibly illegally) a copy of it, so anyone can verify my comments. The result is that altmetric's dumb algorithms ignore it. In order to get counted you have to provide links that lead nowhere.
So here's a link to the abstract (only) in Pubmed for the Science paper, http://www.ncbi.nlm.nih.gov/pubmed/25170153, and here's the link for the Nature Neuroscience paper, http://www.ncbi.nlm.nih.gov/pubmed/25344629.

It seems that altmetrics doesn't even do the job that it claims to do very efficiently. It worked. By later in the day, this blog was listed in both Nature's metrics section and by altmetric.com. But comments on PubMed Commons were still missing. That's bad, because it's an excellent place for post-publication peer review.

Tagged Academia, Alzheimer's, badscience, cocoa, false discovery rate, false positives, hype, Journalism, magnetic, memory, perverse incentives, statistics | 5 Comments

Why you should ignore altmetrics and other bibliometric nightmares

This discussion seemed to be of sufficient general interest that we submitted it as a feature to eLife, because this journal is one of the best steps into the future of scientific publishing. Sadly the features editor thought that "too much of the article is taken up with detailed criticisms of research papers from NEJM and Science that appeared in the altmetrics top 100 for 2013; while many of these criticisms seems valid, the Features section of eLife is not the venue where they should be published". That's pretty typical of what most journals would say. It is that sort of attitude that stifles criticism, and that is part of the problem. We should be encouraging post-publication peer review, not suppressing it. Luckily, thanks to the web, we are now much less constrained by journal editors than we used to be.

Scientists don't count: why you should ignore altmetrics and other bibliometric nightmares

David Colquhoun1 and Andrew Plested2
1 University College London, Gower Street, London WC1E 6BT
2 Leibniz-Institut für Molekulare Pharmakologie (FMP) & Cluster of Excellence NeuroCure, Charité Universitätsmedizin, Timoféeff-Ressowsky-Haus, Robert-Rössle-Str. 10, 13125 Berlin, Germany.

Jeffrey Beall is a librarian at Auraria Library, University of Colorado Denver. Although not a scientist himself, he, more than anyone, has done science a great service by listing the predatory journals that have sprung up in the wake of pressure for open access. In August 2012 he published "Article-Level Metrics: An Ill-Conceived and Meretricious Idea". At first reading that criticism seemed a bit strong. On mature consideration, it understates the potential that bibliometrics, altmetrics especially, have to undermine both science and scientists.

Altmetrics is the latest buzzword in the vocabulary of bibliometricians. It attempts to measure the "impact" of a piece of research by counting the number of times that it's mentioned in tweets, Facebook pages, blogs, YouTube and news media. That sounds childish, and it is. Twitter is an excellent tool for journalism. It's good for debunking bad science, and for spreading links, but too brief for serious discussions. It's rarely useful for real science. Surveys suggest that the great majority of scientists do not use twitter (7–13%). Scientific works get tweeted about mostly because they have titles that contain buzzwords, not because they represent great science.

What and who is Altmetrics for?

The aims of altmetrics are ambiguous to the point of dishonesty; they depend on whether the salesperson is talking to a scientist or to a potential buyer of their wares.
At a meeting in London, an employee of altmetric.com said "we measure online attention surrounding journal articles", "we are not measuring quality …", "this whole altmetrics data service was born as a service for publishers", and "it doesn't matter if you got 1000 tweets . . . all you need is one blog post that indicates that someone got some value from that paper".

These ideas sound fairly harmless, but in stark contrast, Jason Priem (an author of the altmetrics manifesto) said one advantage of altmetrics is that it's fast: "Speed: months or weeks, not years: faster evaluations for tenure/hiring". Although conceivably useful for disseminating preliminary results, such speed isn't important for serious science (the kind that ought to be considered for tenure) which operates on the timescale of years. Priem also says "researchers must ask if altmetrics really reflect impact". Even he doesn't know, yet altmetrics services are being sold to universities, before any evaluation of their usefulness has been done, and universities are buying them. The idea that altmetrics scores could be used for hiring is nothing short of terrifying.

The problem with bibliometrics

The mistake made by all bibliometricians is that they fail to consider the content of papers, because they have no desire to understand research. Bibliometrics are for people who aren't prepared to take the time (or lack the mental capacity) to evaluate research by reading about it, or in the case of software or databases, by using them. The use of surrogate outcomes in clinical trials is rightly condemned. Bibliometrics are all about surrogate outcomes.

If instead we consider the work described in particular papers that most people agree to be important (or that everyone agrees to be bad), it's immediately obvious that no publication metrics can measure quality. There are some examples in How to get good science (Colquhoun, 2007). It is shown there that at least one Nobel prize winner failed dismally to fulfil arbitrary bibliometric productivity criteria of the sort imposed in some universities (another example is in Is Queen Mary University of London trying to commit scientific suicide?). Schekman (2013) has said that science "is disfigured by inappropriate incentives. The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best." Bibliometrics reinforce those inappropriate incentives. A few examples will show that altmetrics are one of the silliest metrics so far proposed.

The altmetrics top 100 for 2013

The superficiality of altmetrics is demonstrated beautifully by the list of the 100 papers with the highest altmetric scores in 2013. For a start, 58 of the 100 were behind paywalls, and so unlikely to have been read except (perhaps) by academics.

The second most popular paper (with the enormous altmetric score of 2230) was published in the New England Journal of Medicine. The title was Primary Prevention of Cardiovascular Disease with a Mediterranean Diet. It was promoted (inaccurately) by the journal in a tweet. Many of the 2092 tweets related to this article simply gave the title, but inevitably the theme appealed to diet faddists, with plenty of tweets along the same lines. The interpretations of the paper promoted by these tweets were mostly desperately inaccurate. Diet studies are anyway notoriously unreliable.
As John Ioannidis has said "Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome." This sad situation comes about partly because most of the data comes from non-randomised cohort studies that tell you nothing about causality, and also because the effects of diet on health seem to be quite small. The study in question was a randomized controlled trial, so it should be free of the problems of cohort studies. But very few tweeters showed any sign of having read the paper. When you read it you find that the story isn't so simple. Many of the problems are pointed out in the online comments that follow the paper. Post-publication peer review really can work, but you have to read the paper. The conclusions are pretty conclusively demolished in the comments, such as: "I'm surrounded by olive groves here in Australia and love the hand-pressed EVOO [extra virgin olive oil], which I can buy at a local produce market BUT this study shows that I won't live a minute longer, and it won't prevent a heart attack." We found no tweets that mentioned the finding from the paper that the diets had no detectable effect on myocardial infarction, death from cardiovascular causes, or death from any cause. The only difference was in the number of people who had strokes, and that showed a very unimpressive P = 0.04. Neither did we see any tweets that mentioned the truly impressive list of conflicts of interest of the authors, which ran to an astonishing 419 words. "Dr. Estruch reports serving on the board of and receiving lecture fees from the Research Foundation on Wine and Nutrition (FIVIN); serving on the boards of the Beer and Health Foundation and the European Foundation for Alcohol Research (ERAB); receiving lecture fees from Cerveceros de España and Sanofi-Aventis; and receiving grant support through his institution from Novartis. Dr. Ros reports serving on the board of and receiving travel support, as well as grant support through his institution, from the California Walnut Commission; serving on the board of the Flora Foundation (Unilever). . . " And so on, for another 328 words. The interesting question is how such a paper came to be published in the hugely prestigious New England Journal of Medicine. That it happened is yet another reason to distrust impact factors. It seems to be another sign that glamour journals are more concerned with trendiness than quality. One sign of that is the fact that the journal's own tweet misrepresented the work. The irresponsible spin in this initial tweet from the journal started the ball rolling, and after this point, the content of the paper itself became irrelevant. The altmetrics score is utterly disconnected from the science reported in the paper: it more closely reflects wishful thinking and confirmation bias. The fourth paper in the altmetrics top 100 is an equally instructive example. This work was also published in a glamour journal, Science. The paper claimed that a function of sleep was to "clear metabolic waste from the brain". It was initially promoted (inaccurately) on Twitter by the publisher of Science. After that, the paper was retweeted many times, presumably because everybody sleeps, and perhaps because the title hinted at the trendy, but fraudulent, idea of "detox". Many tweets were variants of "The garbage truck that clears metabolic waste from the brain works best when you're asleep". But this paper was hidden behind Science's paywall. 
It's bordering on irresponsible for journals to promote on social media papers that can't be read freely. It's unlikely that anyone outside academia had read it, and therefore few of the tweeters had any idea of the actual content, or the way the research was done. Nevertheless it got "1,479 tweets from 1,355 accounts with an upper bound of 1,110,974 combined followers". It had the huge altmetrics score of 1848, the highest altmetric score in October 2013. Within a couple of days, the story fell out of the news cycle.

It was not a bad paper, but neither was it a huge breakthrough. It didn't show that naturally-produced metabolites were cleared more quickly, just that injected substances were cleared faster when the mice were asleep or anaesthetised. This finding might or might not have physiological consequences for mice. Worse, the paper also claimed that "Administration of adrenergic antagonists induced an increase in CSF tracer influx, resulting in rates of CSF tracer influx that were more comparable with influx observed during sleep or anesthesia than in the awake state". Simply put, giving the sleeping mice a drug could reduce the clearance to wakeful levels. But nobody seemed to notice the absurd concentrations of antagonists that were used in these experiments: "adrenergic receptor antagonists (prazosin, atipamezole, and propranolol, each 2 mM) were then slowly infused via the cisterna magna cannula for 15 min". Use of such high concentrations is asking for non-specific effects. The binding constant (concentration to occupy half the receptors) for prazosin is less than 1 nM, so infusing 2 mM is working at a million times greater than the concentration that should be effective. Most drugs at this sort of concentration have local anaesthetic effects, so perhaps it isn't surprising that the effects resembled those of ketamine.

The altmetrics editor hadn't noticed the problems and none of them featured in the online buzz. That's partly because to find this out you had to read the paper (the antagonist concentrations were hidden in the legend of Figure 4), and partly because you needed to know the binding constant for prazosin to see this warning sign. The lesson, as usual, is that if you want to know about the quality of a paper, you have to read it. Commenting on a paper without knowing anything of its content is liable to make you look like a jackass.

A tale of two papers

Another approach that looks at individual papers is to compare some of one's own papers. Sadly, UCL shows altmetric scores on each of your own papers. Mostly they are question marks, because nothing published before 2011 is scored. But two recent papers make an interesting contrast. One is from DC's side interest in quackery, one was real science. The former has an altmetric score of 169, the latter has an altmetric score of 2.

The first paper was "Acupuncture is a theatrical placebo", which was published as an invited editorial in Anesthesia and Analgesia [download pdf]. The paper was scientifically trivial. It took perhaps a week to write. Nevertheless, it got promoted on Twitter, because anything to do with alternative medicine is interesting to the public. It got quite a lot of retweets. And the resulting altmetric score of 169 put it in the top 1% of all articles altmetric have tracked, and the second highest ever for Anesthesia and Analgesia.
As well as the journal's own website, the article was also posted on the DCScience.net blog (May 30, 2013) where it soon became the most viewed page ever (24,468 views as of 23 November 2013), something that altmetrics does not seem to take into account.

Compare this with the fate of some real, but rather technical, science. My [DC] best scientific papers are too old (i.e. before 2011) to have an altmetrics score, but my best score for any scientific paper is 2. This score was for Colquhoun & Lape (2012) "Allosteric coupling in ligand-gated ion channels". It was a commentary with some original material. The altmetric score was based on two tweets and 15 readers on Mendeley. The two tweets consisted of one from me ("Real science; The meaning of allosteric conformation changes http://t.co/zZeNtLdU"). The only other tweet was an abusive one from a cyberstalker who was upset at having been refused a job years ago. Incredibly, this modest achievement got it rated "Good compared to other articles of the same age (71st percentile)".

Conclusions about bibliometrics

Bibliometricians spend much time correlating one surrogate outcome with another, from which they learn little. What they don't do is take the time to examine individual papers. Doing that makes it obvious that most metrics, and especially altmetrics, are indeed an ill-conceived and meretricious idea. Universities should know better than to subscribe to them.

Although altmetrics may be the silliest bibliometric idea yet, much of this criticism applies equally to all such metrics. Even the most plausible metric, counting citations, is easily shown to be nonsense by simply considering individual papers. All you have to do is choose some papers that are universally agreed to be good, and some that are bad, and see how metrics fail to distinguish between them. This is something that bibliometricians fail to do (perhaps because they don't know enough science to tell which is which). Some examples are given by Colquhoun (2007) (more complete version at dcscience.net).

Eugene Garfield, who started the metrics mania with the journal impact factor (JIF), was clear that it was not suitable as a measure of the worth of individuals. He has been ignored and the JIF has come to dominate the lives of researchers, despite decades of evidence of the harm it does (e.g. Seglen (1997) and Colquhoun (2003)). In the wake of the JIF, young, bright people have been encouraged to develop yet more spurious metrics (of which 'altmetrics' is the latest). It doesn't matter much whether these metrics are based on nonsense (like counting hashtags) or rely on counting links or comments on a journal website. They won't (and can't) indicate what is important about a piece of research: its quality.

People say: I can't be a polymath. Well, then don't try to be. You don't have to have an opinion on things that you don't understand. The number of people who really do have to have an overview, of the kind that altmetrics might purport to give, those who have to make funding decisions about work that they are not intimately familiar with, is quite small. Chances are, you are not one of them. We review plenty of papers and grants. But it's not credible to accept assignments outside of your field, and then rely on metrics to assess the quality of the scientific work or the proposal. It's perfectly reasonable to give credit for all forms of research outputs, not only papers. That doesn't need metrics.
It's nonsense to suggest that altmetrics are needed because research outputs are not already valued in grant and job applications. If you write a grant for almost any agency, you can include your CV. If you have a non-publication based output, you can always include it. Metrics are not needed. If you write software, give the numbers of downloads. Software normally garners citations anyway if it's of any use to the greater community.

When AP recently wrote a criticism of Heather Piwowar's altmetrics note in Nature, one correspondent wrote: "I haven't read the piece [by HP] but I'm sure you are mischaracterising it". This attitude summarizes the too-long-didn't-read (TLDR) culture that is increasingly becoming accepted amongst scientists, and which the comparisons above show is a central component of altmetrics. Altmetrics are numbers generated by people who don't understand research, for people who don't understand research. People who read papers and understand research just don't need them and should shun them.

But all bibliometrics give cause for concern, beyond their lack of utility. They do active harm to science. They encourage "gaming" (a euphemism for cheating). They encourage short-term eye-catching research of questionable quality and reproducibility. They encourage guest authorships: that is, they encourage people to claim credit for work which isn't theirs. At worst, they encourage fraud. No doubt metrics have played some part in the crisis of irreproducibility that has engulfed some fields, particularly experimental psychology, genomics and cancer research. Underpowered studies with a high false-positive rate may get you promoted, but tend to mislead both other scientists and the public (who in general pay for the work). The waste of public money that must result from following up badly done work that can't be reproduced, but that was published for the sake of "getting something out", has not been quantified, but it must be counted against bibliometrics, and it sadly overcomes any advantages from rapid dissemination. Yet universities continue to pay publishers to provide these measures, which do nothing but harm.

And the general public has noticed. It's now eight years since the New York Times brought to the attention of the public that some scientists engage in puffery, cheating and even fraud. Overblown press releases written by journals, with the connivance of university PR wonks and of the authors, sometimes go viral on social media (and so score well on altmetrics). Yet another example, from the Journal of the American Medical Association, involved an overblown press release from the journal about a trial that allegedly showed a benefit of high doses of Vitamin E for Alzheimer's disease. This sort of puffery harms patients and harms science itself.

We can't go on like this. What should be done?

Post-publication peer review is now happening, in comments on published papers and through sites like PubPeer, where it is already clear that anonymous peer review can work really well. New journals like eLife have open comments after each paper, though authors do not seem to have yet got into the habit of using them constructively. They will.

It's very obvious that too many papers are being published, and that anything, however bad, can be published in a journal that claims to be peer reviewed. To a large extent this is just another example of the harm done to science by metrics: the publish-or-perish culture.
Attempts to regulate science by setting "productivity targets" are doomed to do as much harm to science as they have done in the National Health Service in the UK. This has been known to economists for a long time, under the name of Goodhart's law. Here are some ideas about how we could restore the confidence of both scientists and the public in the integrity of published work.
- Nature, Science, and other vanity journals should become news magazines only. Their glamour value distorts science and encourages dishonesty.
- Print journals are overpriced and outdated. They are no longer needed. Publishing on the web is cheap, and it allows open access and post-publication peer review. Every paper should be followed by an open comments section, with anonymity allowed. The old publishers should go the same way as the handloom weavers. Their time has passed.
- Web publication allows proper explanation of methods, without the page, word and figure limits that distort papers in vanity journals. This would also make it very easy to publish negative work, thus reducing publication bias, a major problem (not least for clinical trials).
- Publish or perish has proved counterproductive. It seems just as likely that better science will result without any performance management at all. All that's needed is peer review of grant applications.
- Providing more small grants rather than fewer big ones should help to reduce the pressure to publish which distorts the literature.
- The 'celebrity scientist', running a huge group funded by giant grants, has not worked well. It's led to poor mentoring and, at worst, fraud. Of course huge groups sometimes produce good work, but too often at the price of exploitation of junior scientists.
- There is a good case for limiting the number of original papers that an individual can publish per year, and/or total funding. Fewer but more complete and considered papers would benefit everyone, and counteract the flood of literature that has led to superficiality.
- Everyone should read, learn and inwardly digest Peter Lawrence's The Mismeasurement of Science.

A focus on speed and brevity (cited as major advantages of altmetrics) will help no-one in the end. And a focus on creating and curating new metrics will simply skew science in yet another unsatisfactory way, and rob scientists of the time they need to do their real job: generate new knowledge. It has been said "Creation is sloppy; discovery is messy; exploration is dangerous. What's a manager to do? The answer in general is to encourage curiosity and accept failure. Lots of failure." And, one might add, forget metrics. All of them.

This piece was noticed by the Economist. Their 'Writing worth reading' section said "Why you should ignore altmetrics (David Colquhoun) Altmetrics attempt to rank scientific papers by their popularity on social media. David Colquohoun [sic] argues that they are "for people who aren't prepared to take the time (or lack the mental capacity) to evaluate research by reading about it.""

Jason Priem, of ImpactStory, has responded to this article on his own blog. In Altmetrics: A Bibliographic Nightmare? he seems to back off a lot from his earlier claim (cited above) that altmetrics are useful for making decisions about hiring or tenure. Our response is on his blog.
The Scholarly Kitchen blog carried another paean to metrics. A vigorous discussion followed. The general line that I've followed in this discussion, and those mentioned below, is that bibliometricians won't qualify as scientists until they test their methods, i.e. show that they predict something useful. In order to do that, they'll have to consider individual papers (as we do above). At present, articles by bibliometricians consist largely of hubris, with little emphasis on the potential to cause corruption. They remind me of articles by homeopaths: their aim is to sell a product (sometimes for cash, but mainly to promote the authors' usefulness). It's noticeable that all of the pro-metrics articles cited here have been written by bibliometricians. None have been written by scientists.

Dalmeet Singh Chawla, a bibliometrician from Imperial College London, wrote a blog on the topic. (Imperial, at least in its Medicine department, is notorious for abuse of metrics.)

29 January 2014. Arran Frood wrote a sensible article about the metrics row in Euroscientist.

2 February 2014. Paul Groth (a co-author of the Altmetrics Manifesto) posted more hubristic stuff about altmetrics on Slideshare. A vigorous discussion followed.

5 May 2014. Another vigorous discussion on the ImpactStory blog, this time with Stacy Konkiel. She's another non-scientist trying to tell scientists what to do. The evidence that she produced for the usefulness of altmetrics seemed pathetic to me.

7 May 2014. A much-shortened version of this post appeared in the British Medical Journal (BMJ blogs).

Tagged Academia, acupuncture, altmetrics, badscience, bibliobollocks, bibliometrics, open access, peer review, publication, regulation | 20 Comments

Blogs lead in critical thinking, but newspapers still matter

Here is a record of a couple of recent newspaper pieces. Who says the mainstream media don't matter any longer? Blogs may be in the lead now when it comes to critical analysis. The best blogs have more expertise and more time to read the sources than journalists. But the mainstream media get the message to a different, and much larger, audience.

The Observer ran a whole-page interview with me as part of their "Rational Heroes" series. I rather liked their subtitle [pdf of article]: "Professor of pharmacology David Colquhoun is the take-no-prisoners debunker of pseudoscience on his unmissable blog". It was pretty accurate apart from the fact that the picture was labelled as "DC in his office". Actually it was taken (at the insistence of the photographer) in Lucia Sivilotti's lab. Photo by Karen Robinson.

The astonishing result of this was that on Sunday the blog got a record 24,305 hits. Normally it gets 1,000-1,400 hits a day between posts, fewer on Sunday, and the previous record was around 7,000 a day. A week later it was still twice normal. It remains to be seen whether the eventual plateau stays up. I also gained around 1,000 extra followers on twitter, though some dropped away quite soon, and 100 or so people signed up for email updates. The dead tree media aren't yet dead, I'm happy to say.

Perhaps as a result of the foregoing piece, I got asked to write a column for The Observer, at barely 48 hours' notice. This is the best I could manage in the time. The web version has links. This attracted the usual "it worked for me" anecdotes in the comments, but I spent an afternoon answering them. It seems important to have a dialogue, not just to lecture the public.
In fact when I read a regular scientific paper, I now find myself looking for the comment section. That may say something about the future of scientific publishing. It is for others to judge how successfully I engage with the public, but I was quite surprised to discover that UCL's public engagement unit, @UCL_in_public, has blocked me on twitter. Hey ho. They have 1,574 followers and I have 7,597. I wish them the best of luck.

Tagged Academia, Anti-science, antiscience, badscience, CAM, cancer, communication, public engagement, quackademia, science communication, Universities | 4 Comments

Is Queen Mary University of London trying to commit scientific suicide?

Academic staff are going to be fired at Queen Mary University of London (QMUL). It's possible that universities may have to contract a bit in hard times, so what's wrong? What's wrong is that the victims are being selected in a way that I can describe only as insane. The criteria they use are guaranteed to produce a generation of second-rate spiv scientists, with a consequent progressive decline in QMUL's reputation.

The firings, it seems, are nothing to do with hard financial times, but are a result of QMUL's aim to raise its ranking in university league tables. In the UK university league table, a university's position is directly related to its government research funding. So they need to do well in the 2014 'Research Excellence Framework' (REF). To achieve that they plan to recruit new staff with high research profiles, take on more PhD students and post-docs, obtain more research funding from grants, and get rid of staff who are not doing 'good' enough research. So far, that's exactly what every other university is trying to do. This sort of distortion is one of the harmful side-effects of the REF. But what's particularly stupid about QMUL's behaviour is the way they are going about it.

You can assess your own chances of survival at QMUL's School of Biological and Chemical Sciences from the following table, which is taken from an article by Jeremy Garwood (Lab Times Online, July 4, 2012). The numbers refer to the four-year period from 2008 to 2011.

[Table of minimum thresholds by category of staff: research output quantity (number of papers), research output quality (number of high-quality papers), and research income (£), in total and as principal investigator (at least 200,000).]

In addition to the three criteria, 'Research Output – quality', 'Research Output – quantity', and 'Research Income', there is a minimum threshold of 1 PhD completion for staff at each academic level. All this data is "evidenced by objective metrics; publications cited in Web of Science, plus official QMUL metrics on grant income and PhD completion." To survive, staff must meet the minimum threshold in three out of the four categories, except as follows: demonstration of activity at an exceptional level in either 'research outputs' or 'research income', termed an 'enhanced threshold', is "sufficient" to justify selection regardless of levels of activity in the other two categories. And what are these enhanced thresholds? For research quantity: a mere 26 published items with at least 11 as significant author (no distinction between academic level); for research quality: a modest 6 items published in numerically-favoured journals (e.g. impact factor > 7). Alternatively you can buy your survival with a total 'Research Income' of £1,000,000 as PI.
The university notes that the above criteria "are useful as entry standards into the new school, but they fall short of the levels of activity that will be expected from staff in the future. These metrics should not, therefore, be regarded as targets for future performance." This means that those who survived the redundancy criteria will simply have to do better. But what is to reassure them that it won't be their turn next time should they fail to match the numbers? To help them, Queen Mary is proposing to introduce 'D3' performance management (www.unions.qmul.ac.uk/ucu/docs/d3-part-one.doc). Based on more 'administrative physics', D3 is shorthand for 'Direction × Delivery × Development.' Apparently "all three are essential to a successful team or organisation. The multiplication indicates that where one is absent/zero, then the sum is zero!" D3 is based on principles of accountability: "A sign of a mature organisation is where its members acknowledge that they face choices, they make commitments and are ready to be held to account for discharging these commitments, accepting the consequences rather than seeking to pass responsibility." Inspired? I presume the D3 document must have been written by an HR person. It has all the incoherent use of buzzwords so typical of HR. And it says "sum" when it means "product" (oh dear, innumeracy is rife). The criteria are utterly brainless. The use of impact factors for assessing people has been discredited at least since Seglen (1997) showed that the number of citations that a paper gets is not perceptibly correlated with the impact factor of the journal in which it's published. The reason for this is the distribution of the number of citations for papers in a particular journal is enormously skewed. This means that high-impact journals get most of their citations from a few articles. The distribution for Nature is shown in Fig. 1. Far from being gaussian, it is even more skewed than a geometric distribution; the mean number of citations is 114, but 69% of papers have fewer than the mean, and 24% have fewer than 30 citations. One paper has 2,364 citations but 35 have 10 or fewer. ISI data for citations in 2001 of the 858 papers published in Nature in 1999 show that the 80 most-cited papers (16% of all papers) account for half of all the citations (from Colquhoun, 2003) The Institute of Scientific Information, ISI, is guilty of the unsound statistical practice of characterizing a distribution by its mean only, with no indication of its shape or even its spread. School of Biological and Chemical Sciences-QMUL is expecting everyone has to be above average in the new regime. Anomalously, the thresholds for psychologists are lower because it is said that it's more difficult for them to get grants. This undermines even the twisted logic applied at the outset. All this stuff about skewed distributions is, no doubt, a bit too technical for HR people to understand. Which, of course, is precisely why they should have nothing to do with assessing people. At a time when so may PhDs fail to get academic jobs we should be limiting the numbers. But QMUL requires everyone to have a PhD student, not for the benefit of the student, but to increase its standing in league tables. That is deeply unethical. The demand to have two papers in journals with impact factor greater than seven is nonsense. In physiology, for example, there are only four journals with an impact factor greater that seven and three of them are review journals that don't publish original research. 
The two best journals for electrophysiology are Journal of Physiology (impact factor 4.98, in 2010) and Journal of General Physiology (IF 4.71). These are the journals that publish papers that get you into the Royal Society or even Nobel prizes. But for QMUL, they don't count. I have been lucky to know well three Nobel prize winners. Andrew Huxley. Bernard Katz, and Bert Sakmann. I doubt that any of them would pass the criteria laid down for a professor by QMUL. They would have been fired. The case of Sakmann is analysed in How to Get Good Science, [pdf version]. In the 10 years from 1976 to 1985, when Sakmann rose to fame, he published an average of 2.6 papers per year (range 0 to 6). In two of these 10 years he had no publications at all. In the 4 year period (1976 – 1979 ) that started with the paper that brought him to fame (Neher & Sakmann, 1976) he published 9 papers, just enough for the Reader grade, but in the four years from 1979 – 1982 he had 6 papers, in 2 of which he was neither first nor last author. His job would have been in danger if he'd worked at QMUL. In 1991 Sakmann, with Erwin Neher, got the Nobel Prize for Physiology or Medicine. The most offensive thing of the lot is the way you can buy yourself out if you publish 26 papers in the 4 year period. Sakmann came nowhere near this. And my own total, for the entire time from my first paper (1963) until I was elected to the Royal Society (May 1985) was 27 papers (and 7 book chapters). I would have been fired. Peter Higgs had no papers at all from the time he moved to Edinburgh in 1960, until 1964 when his two paper's on what's now called the Higgs' Boson were published in Physics Letters. That journal now has an impact factor less than 7 so Queen Mary would not have counted them as "high quality" papers, and he would not have been returnable for the REF. He too would have been fired. The encouragement to publish large numbers of papers is daft. I have seen people rejected from the Royal Society for publishing too much. If you are publishing a paper every six weeks, you certainly aren't writing them, and possibly not even reading them. Most likely you are appending your name to somebody else's work with little or no checking of the data. Such numbers can be reached only by unethical behaviour, as described by Peter Lawrence in The Mismeasurement of Science. Like so much managerialism, the rules provide an active encouragement to dishonesty. In the face of such a boneheaded approach to assessment of your worth, it's the duty of any responsible academic to point out the harm that's being done to the College. Richard Horton, in the Lancet, did so in Bullying at Barts. There followed quickly letters from Stuart McDonald and Nick Wright, who used the Nuremburg defence, pointing out that the Dean (Tom Macdonald) was just obeying orders from above. That has never been as acceptable defence. If Macdonald agreed with the procedure, he should be fired for incompetence. If he did not agree with it he should have resigned. It's a pity, because Tom Macdonald was one of the people with whom I corresponded in support of Barts' students who, very reasonably, objected to having course work marked by homeopaths (see St Bartholomew's teaches antiscience, but students revolt, and, later, Bad medicine. Barts sinks further into the endarkenment). In that case he was not unreasonable, and, a mere two years later I heard that he'd taken action. To cap it all, two academics did their job by applying a critical eye to what's going on at Queen Mary. 
They wrote to the Lancet under the title Queen Mary: nobody expects the Spanish Inquisition "For example, one of the "metrics" for research output at professorial level is to have published at least two papers in journals with impact factors of 7 or more. This is ludicrous, of course—a triumph of vanity as sensible as selecting athletes on the basis of their brand of track suit. But let us follow this "metric" for a moment. How does the Head of School fair? Zero, actually. He fails. Just consult Web of Science. Take care though, the result is classified information. HR's "data" are marked Private and Confidential. Some things must be believed. To question them is heresy." Astoundingly, the people who wrote this piece are now under investigation for "gross misconduct". This is behaviour worthy of the University of Poppleton, as pointed out by the inimitable Laurie Taylor, in Times Higher Education (June 7) The rustle of censorship It appears that last week's edition of our sister paper, The Poppleton Evening News, carried a letter from Dr Gene Ohm of our Biology Department criticising this university's metrics-based redundancy programme. We now learn that, following the precedent set by Queen Mary, University of London, Dr Ohm could be found guilty of "gross misconduct" and face "disciplinary proceedings leading to dismissal" for having the effrontery to raise such issues in a public place. Louise Bimpson, the corporate director of our ever-expanding human resources team, admitted that this response might appear "severe" but pointed out that Poppleton was eager to follow the disciplinary practices set by such soon-to-be members of the prestigious Russell Group as Queen Mary. Thus it was only to be expected that we would seek to emulate its espousal of draconian censorship. She hoped this clarified the situation. David Bignell, emeritus professor of zoology at Queen Mary hit the nail on the head. "These managers worry me. Too many are modest achievers, retired from their own studies, intoxicated with jargon, delusional about corporate status and forever banging the metrics gong. Crucially, they don't lead by example." What the managers at Queen Mary have failed to notice is that the best academics can choose where to go. People are being told to pack their bags and move out with one day's notice. Access to journals stopped, email address removed, and you may need to be accompanied to your (ex)-office. Good scientists are being treated like criminals. What scientist in their right mind would want to work at QMUL, now that their dimwitted assessment methods, and their bullying tactics, are public knowledge? The responsibility must lie with the principal, Simon Gaskell. And we know what the punishment is for bringing your university into disrepute. Send an email. You may want to join the many people who have already written to QMUL's principal, Simon Gaskell ([email protected]), and/or to Sir Nicholas Montagu, Chairman of Council, [email protected]. Sunday 1 July 2012. Since this blog was posted after lunch on Friday 29th June, it has had around 9000 visits from 72 countries. Here is one of 17 maps showing the origins of 200 of the hits in the last two days The tweets about QMUL are collected in a Storify timeline. I'm reminded of a 2008 comment, on a post about the problems imposed by HR, In-human resources, science and pizza. Thanks for that – I LOVED IT. It's fantastic that the truth of HR (I truly hate that phrase) has been so ruthlessly exposed. Should be part of the School Handbook. 
Any VC who stripped out all the BS would immediately retain and attract good people and see their productivity soar. That's advice that Queen Mary should heed. Part of the reason for that popularity was Ben Goldacre's tweet, to his 201,000 followers "destructive, unethical and crude metric incentives in academia (spotlight QMUL) bit.ly/MFHk2H by @david_colquhoun" 3 July 2012. I have come by a copy of this email, which was sent to Queen Mary by a senior professor from the USA (word travels fast on the web). It shows just how easy it is to destroy the reputation of an institution. Sir Nicholas Montagu, Chairman of Council, and Principal Gaskell, I was appalled to read the criteria devised by your University to evaluate its faculty. There are so flawed it is hard to know where to begin. Your criteria are antithetical to good scientific research. The journals are littered with weak publications, which are generated mainly by scientists who feel the pressure to publish, no matter whether the results are interesting, valid, or meaningful. The literature is flooded by sheer volume of these publications. Your attempt to require "quality" research is provided by the requirement for publications in "high Impact Factor" journals. IF has been discredited among scientists for many reasons: it is inaccurate in not actually reflecting the merit of the specific paper, it is biased toward fields with lots of scientists, etc. The demand for publications in absurdly high IF journals encourages, and practically enforces scientific fraud. I have personally experienced those reviews from Nature demanding one or two more "final" experiments that will clinch the publication. The authors KNOW how these experiments MUST turn out. If they want their Nature paper (and their very academic survival if they are at a brutal, anti-scientific university like QMUL), they must get the "right" answer. The temptation to fudge the data to get this answer is extreme. Some scientists may even be able to convince themselves that each contrary piece of data that they discard to ensure the "correct" answer is being discarded for a valid reason. But the result is that scientific misconduct occurs. I did not see in your criteria for "success" at QMUL whether you discount retracted papers from the tally of high IF publications, or perhaps the retraction itself counts as yet another high IF publication! Your requirement for each faculty to have one or more postdocs or students promotes the abusive exploitation of these individuals for their cheap labor, and ignores the fact that they are being "trained" for jobs that do not exist. The "standards" you set are fantastically unrealistic. For example, funding is not graded, but a sharp step function – we have 1 or 2 or 0 grants and even if the average is above your limits, no one could sustain this continuously. Once you have fired every one of your faculty, which will almost certainly happen within 1-2 rounds of pogroms, where will you find legitimate scientists who are willing to join such a ludicrous University? 4 July 2012. Professor John F. Allen is Professor of Biochemistry at Queen Mary, University of London, and distinguished in the fields of Photosynthesis, Chloroplasts, Mitochondria, Genome function and evolution and Redox signalling. He, with a younger colleague, wrote a letter to the Lancet, Queen Mary: nobody expects the Spanish Inquisition. It is an admirable letter, the sort of thing any self-respecting academic should write. But not according to HR. 
On 14 May, Allen got a letter from HR, which starts thus. Dear Professor Allen I am writing to inform you that the College had decided to commence a factfinding investigation into the below allegation: That in writing and/or signing your name to a letter entitled "Queen Mary: nobody expects the Spanish Inquisition," (enclosed) which was published in the Lancet online on 4th May 2012, you sought to bring the Head of School of Biological and Chemical Sciences and the Dean for Research in the School of Medicine and Dentistry into disrepute. Sam Holborn HR Consultant- Science & Engineering Download the entire letter. It is utterly disgraceful bullying. If anyone is bringing Queen Mary into disrepute, it is Sam Holborn and the principal, Simon Gaskell. Here's another letter, from the many that have been sent. This is from a researcher in the Netherlands. Dear Sir Nicholas, I am addressing this to you in the hope that you were not directly involved in creating this extremely stupid set of measures that have been thought up, not to improve the conduct of science at QMUL, but to cheat QMUL's way up the league tables over the heads of the existing academic staff. Others have written more succinctly about the crass stupidity of your Human Resources department than I could, and their apparent ignorance of how science actually works. As your principal must bear full responsibility for the introduction of these measures, I am not sending him a copy of this mail. I am pretty sure that his "principal" mail address will no longer be operative. We have had a recent scandal in the Netherlands where a social psychology professor, who even won a national "Man of the Year" award, as well as as a very large amount of research money, was recently exposed as having faked all the data that went into a total number of articles running into three figures. This is not the sort of thing one wants to happen to one's own university. He would have done well according to your REF .. before he was found out. Human Resources departments have gained too much power, and are completely incompetent when it comes to judging academic standards. Let them get on with the old dull, and gobbledigook-free, tasks that personnel departments should be carrying out. Here's another letter. It's from a member of academic staff at QMUL, someone who is not himself threatened with being fired. It certainly shows that I'm not making a fuss about nothing. Rather, I'm the only person old enough to say what needs to be said without fear of losing my job and my house. Dear Prof. Colquhoun, I am an academic staff member in SBCS, QMUL. I am writing from my personal email account because the risks of using my work account to send this email are too great. I would like to thank you for highlighting our problems and how we have been treated by our employer (Queen Mary University of London), in your blog. I would please urge you to continue to tweet and blog about our plight, and staff in other universities experiencing similarly horrific working conditions. I am not threatened with redundancy by QMUL, and in fact my research is quite successful. Nevertheless, the last nine months have been the most stressful of all my years of academic life. The best of my colleagues in SBCS, QMUL are leaving already and I hope to leave, if I can find another job in London. Staff do indeed feel very unfairly treated, intimidated and bullied. I never thought a job at a university could come to this. Thank you again for your support. 
It really does matter to the many of us who cannot really speak out openly at present. In a later letter, the same person pointed out "There are many of us who would like to speak more openly, but we simply cannot." "I have mortgage . . . . Losing my job would probably mean losing my home too at this point." "The plight of our female staff has not even been mentioned. We already had very few female staff. And with restructuring, female staff are more likely to be forced into teaching-only contracts or indeed fired"." "total madness in the current climate – who would want to join us unless desperate for a job!" "fuss about nothing" – absolutely not. It is potentially a perfect storm leading to teaching and research disaster for a university! Already the reputation of our university has been greatly damaged. And senior staff keep blaming and targeting the "messengers"." Througn the miracle of WiFi, this is coming from Newton, MA. The Lancet today has another editorial on the Queen Mary scandal. "As hopeful scientists prepare their applications to QMUL, they should be aware that, behind the glossy advertising, a sometimes harsh, at times repressive, and disturbingly unforgiving culture awaits them." That sums it up nicely. 24 July 2012. I'm reminded by Nature writer, Richard van Noorden (@Richvn) that Nature itself has wriiten at least twice about the iniquity of judging people by impact factors. In 2005 Not-so-deep impact said "Only 50 out of the roughly 1,800 citable items published in those two years received more than 100 citations in 2004. The great majority of our papers received fewer than 20 citations." "None of this would really matter very much, were it not for the unhealthy reliance on impact factors by administrators and researchers' employers worldwide to assess the scientific quality of nations and institutions, and often even to judge individuals." And, more recently, in Assessing assessment" (2010). 29 July 2012. Jonathan L Rees. of the University of Edinburgh, ends his blog: "I wonder what career advice I should offer to a young doctor circa 2012. Apart from not taking a job at Queen Mary of course. " How to select candidates I have, at various times, been asked how I would select candidates for a job, if not by counting papers and impact factors. This is a slightly modified version of a comment that I left on a blog, which describes roughly what I'd advocate After a pilot study the entire Research Excellence Framework (which attempts to assess the quality of research in every UK university) made the following statement. "No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs" It seems that the REF is paying attention to the science not to bibliometricians. It has been the practice at UCL to ask people to nominate their best papers (2 -4 papers depending on age). We then read the papers and asked candidates hard questions about them (not least about the methods section). It's a method that I learned a long time ago from Stephen Heinemann, a senior scientist at the Salk Institute. It's often been surprising to learn how little some candidates know about the contents of papers which they themselves select as their best. One aim of this is to find out how much the candidate understands the principles of what they are doing, as opposed to following a recipe. Of course we also seek the opinions of people who know the work, and preferably know the person. 
Written references have suffered so much from 'grade inflation' that they are often worthless, but a talk on the telephone to someone that knows both the work, and the candidate, can be useful, That, however, is now banned by HR who seem to feel that any knowledge of the candidate's ability would lead to bias. It is not true that use of metrics is universal and thank heavens for that. There are alternatives and we use them. Incidentally, the reason that I have described the Queen Mary procedures as insane, brainless and dimwitted is because their aim to increase their ratings is likely to be frustrated. No person in their right mind would want to work for a place that treats its employees like that, if they had any other option. And it is very odd that their attempt to improve their REF rating uses criteria that have been explicitly ruled out by the REF. You can't get more brainless than that. This discussion has been interesting to me, if only because it shows how little bibliometricians understand how to get good science. Tagged Academia, assessment, citations, David Bignell, impact factor, metrics, Quality assessment, Queen Mary, Queen Mary University of London, Simon Gaskell, Tom Macdonald, Universities | 76 Comments Open access, peer review, grants and other academic conundrums Open access is in the news again. Index on Censorship held a debate on open data on December 6th. The video of of the meeting is now on YouTube. A couple of dramatic moments in the video: At 48 min O'Neill & Monbiot face off about "competent persons" (and at 58 min Walport makes fun of my contention that it's better to have more small grants rather than few big ones, on the grounds that it's impossible to select the stars). The meeting has been written up on the Bishop Hill Blog, with some very fine cartoon minutes. (I love the Josh cartoons -pity he seems to be a climate denier, spoken of approvingly by the unspeakable James Delingpole.) It was gratifying that my remarks seemed to be better received by the scientists in the audience than they were by some other panel members. The Bishop Hill blog comments "As David Colquhoun, the only real scientist there and brilliant throughout, said "Give them everything!" " Here's a subsection of the brilliant cartoon minutes The bit about "I just lied -but he kept his job" referred to the notorious case of Richard Eastell and the University of Sheffield. We all agreed that papers should be open for anyone to read, free. Monbiot and I both thought that raw data should be available on request, though O'Neill and Walport had a few reservations about that. A great deal of time and money would be saved if data were provided on request. It shouldn't need a Freedom of Information Act (FOIA) request, and the time and energy spent on refusing FOIA requests is silly. It simply gives the impression that there is something to hide (Climate scientists must be ruthlessly honest about data). The University of Central Lancashire spent £80,000 of taxpayers' money trying (unsuccessfully) to appeal against the judgment of the Information Commissioner that they must release course material to me. It's hard to think of a worse way to spend money. A few days ago, the Department for Business, Innovation and Skills (BIS) published a report which says (para 6.6) "The Government . . . is committed to ensuring that publicly-funded research should be accessible free of charge." That's good, but how it can be achieved is less obvious. Scientific publishing is, at the moment, an unholy mess. 
It's a playground for profiteers. It runs on the unpaid labour of academics, who work to generate large profits for publishers. That's often been said before, recently by both George Monbiot (Academic publishers make Murdoch look like a socialist) and by me (Publish-or-perish: Peer review and the corruption of science). Here are a few details.

Extortionate cost of publishing

Mark Walport has told me that the Wellcome Trust is currently spending around £3m pa on OA publishing costs and, looking at the Wellcome papers that find their way to UKPMC, we see that around 50% of this content is routed via the "hybrid option"; 40% via the "pure" OA journals (e.g. PLoS, BMC etc), and the remaining 10% through researchers self-archiving their author manuscripts.

I've found some interesting numbers, with help from librarians, and through access to The Journal Usage Statistics Portal (JUSP). UCL pays Elsevier the astonishing sum of €1.25 million for access to its journals. And that's just one university. That price doesn't include any print editions at all, just web access, and there is no open access. You have to have a UCL password to see the results. Elsevier has, of course, been criticised before, and not just for its prices.

Elsevier publish around 2700 scientific journals. UCL has bought a package of around 2100 journals. There is no possibility to pick the journals that you want. Some of the journals are used heavily ("use" means access of full text on the web). In 2010, the most heavily used journal was The Lancet, followed by four Cell Press journals. But notice the last bin: most of the journals are hardly used at all. Among all Elsevier journals, 251 were not accessed even once in 2010. Among the 2068 journals bought by UCL, 56 were never accessed in 2010, and the most frequent number of accesses per year is between 1 and 10 (the second bin in the histogram, below). 60 percent of journals had 300 or fewer usages in 2010. Above 300, the histogram tails on up to 51,878 accesses for The Lancet; the remaining 40 percent of journals are represented by the last bin (in red). The distribution is exceedingly skewed. The median is 187 (i.e. half of the journals had fewer than 187 usages in 2010), but the mean number of usages, which is misleading for such a skewed distribution, was 662.

UCL bought 65 journals from NPG in 2010. They get more use than Elsevier, though surprisingly three of them were never accessed in 2010, and 17 had fewer than 1000 accesses in that year. The median usage was 2412, better than most. The leader, needless to say, was Nature itself, with 153,321.

The situation is even more extreme for 248 OUP journals, perhaps because many of the journals are arts or law rather than science. The most frequent (modal) usage was zero (54 journals), followed by 1 to 10 accesses (42 journals). 64 percent of journals had fewer than 200 usages, and the 36 percent with over 200 are pooled in the last (red) bin. The histogram extends right up to 16,060 accesses for Brain. The median number of usages in 2010 was 66.

So far I haven't been able to discover the costs of the contracts with OUP or Nature Publishing Group. It seems that the university has agreed to confidentiality clauses. This itself is a shocking lack of transparency. If I can find the numbers I shall – watch this space. Almost all of these journals are not open access.

The academics do the experiments, most often paid for by the taxpayer.
They write the paper (and now it has to be in a form that is almost ready for publication without further work), they send it to the journal, where it is sent for peer review, which is also unpaid. The journal sells the product back to the universities for a high price, where the results of the work are hidden from the people who paid for it.

It's even worse than that, because often the people who did the work and wrote the paper have to pay "page charges". These vary, but can be quite high. If you send a paper to the Journal of Neuroscience, it will probably cost you about $1000. Other journals, like the excellent Journal of Physiology, don't charge you to submit a paper (unless you want a colour figure in the print edition, £200), but the paper is hidden from the public for 12 months unless you pay $3000. The major medical charity, the Wellcome Trust, requires that the work it funds should be available to the public within 6 months of publication. That's nothing like good enough to allow the public to judge the claims of a paper which hits the newspapers the day that it's published. Nevertheless it can cost the authors a lot. Elsevier journals charge $3000 except for their most-used journals. The Lancet charges £400 per page, and Cell Press journals charge $5000 for this unsatisfactory form of open access.

The outcry about hidden results has resulted in a new generation of truly open access journals that are open to everyone from day one. But if you want to publish in them you have to pay quite a lot. Furthermore, although all these journals are free to read, most of them do not allow free use of the material they publish. Most are operating under all-rights-reserved copyrights. In 2009 under 10 percent of open access journals had a true Creative Commons licence. Nature Publishing Group has a true open access journal, Nature Communications, but it costs the author $5000 to publish there. The Public Library of Science journals are truly open access, but the author is charged $2900 for PLoS Medicine, though PLoS One costs the author only $1350. A 2011 report considered the transition to open access publishing but it doesn't even consider radical solutions, and makes unreasonably low estimates of the costs of open access publishing.

Scam journals have flourished under the open access flag

Open access publishing has, so far, almost always involved paying a hefty fee. That has brought the rats out of the woodwork, and one gets bombarded daily with offers to publish in yet another open access journal. Many of these are simply scams. You pay, we put it on the web, and we won't fuss about quality. Luckily there is now a guide to these crooks: Jeffrey Beall's List of Predatory, Open-Access Publishers. One that I hear from regularly is Bentham Open Journals (a name that is particularly inappropriate for anyone at UCL). Jeffrey Beall comments "Among the first, large-scale gold OA publishers, Bentham Open continues to expand its fleet of journals, now numbering over 230. Bentham essentially operates as a scholarly vanity press." They undercut real journals. A research article in The Open Neuroscience Journal will cost you a mere $800. Although these journals claim to be peer-reviewed, their standards are suspect. In 2009, a nonsensical computer-generated spoof paper was accepted by a Bentham journal (for $800).

What can be done about publication, and what can be done about grants?

Both grants and publications are peer-reviewed, but the problems need to be discussed separately.
Peer review of papers by journals

One option is clearly to follow the example of the best open access journals, such as PLoS. The cost of $3000 to $5000 per paper would have to be paid by the research funder, often the taxpayer. It would be money subtracted from the research budget, but it would retain the present peer review system and should cost no more if the money that was saved on extortionate journal subscriptions were transferred to research budgets to pay the bills, though there is little chance of this happening. The cost of publication would, in any case, be minimised if fewer papers were published, which is highly desirable anyway.

But there are real problems with the present peer review system. It works quite well for journals that are high in the hierarchy. I have few grumbles myself about the quality of reviews, and sometimes I've benefitted a lot from good suggestions made by reviewers. But for the user, the process is much less satisfactory, because peer review has next to no effect on what gets published in journals. All it influences is which journal the paper appears in. The only effect of the vast amount of unpaid time and effort put into reviewing is to maintain a hierarchy of journals. It has next to no effect on what appears in Pubmed. For authors, peer review can work quite well, but from the point of view of the consumer, peer review is useless. It is a myth that peer review ensures the quality of what appears in the literature.

A more radical approach

I made some more radical suggestions in Publish-or-perish: Peer review and the corruption of science. It seems to me that there would be many advantages if people simply published their own work on the web, and then opened the comments. For a start, it would cost next to nothing. The huge amount of money that goes to publishers could be put to better uses. Another advantage would be that negative results could be published. And proper full descriptions of methods could be provided because there would be no restrictions on length. Under that system, I would certainly send a draft paper to a few people I respected for comments before publishing it. Informal consortia might form for that purpose.

The publication bias that results from non-publication of negative results is a serious problem, mainly, but not exclusively, for clinical trials. It is mandatory to register a clinical trial before it starts, but many of the results never appear (see, for example, Deborah Cohen's report for Index on Censorship). Although trials now have to be registered before they start, there is no check on whether or not the results are published. A large number of registered trials do not result in any publication, and this publication bias can cost thousands of lives. It is really important to ensure that all results get published.

The ArXiv model

There are many problems that would have to be solved before we could move to self-publication on the web. Some have already been solved by physicists and mathematicians. Their archive, ArXiv.org, provides an example of where we should be heading. Papers are published on the web at no cost to either user or reader, and comments can be left. It is an excellent example of post-publication peer review. Flame wars are minimised by requiring users to register, and to show they are bona fide scientists before they can upload papers or comments. You may need endorsement if you haven't submitted before.

Peer review of grants

The problems for grants are quite different from those for papers.
There is no possibility of doing away with peer review for the award of grants, however imperfect the process may be. In fact candidates for the new Wellcome Trust Investigator Awards were alarmed to find that the short-listing of candidates for these awards was done without peer review. The Wellcome Trust has been enormously important for the support of medical and biological research, and never more than now, when the MRC has become rather chaotic (let's hope the new CEO can sort it out). There was, therefore, real consternation when Wellcome announced a while ago its intention to stop giving project and programme grants altogether. Instead it would give a few Wellcome Trust Investigator Awards to prominent people. That sounds like the Howard Hughes approach, and runs a big risk of "to them that hath shall be given".

The awards have just been announced, and there is a good account by Colin Macilwain in Science [pdf]. UCL did reasonably well with four awards, but four is not many for a place the size of UCL. Colin Macilwain hits the nail on the head: "While this is great news for the 27 new Wellcome Investigators who will share £57 million, hundreds of university-based researchers stand to lose Wellcome funds as the trust phases out some existing programs to pay for the new category of investigators".

There were 750 applications, but on the basis of CV alone, they were pared down to a long-list of 173. The panels then cut this down to a short-list of 55. Up to this point no external referees were used, quite unlike the normal process for award of grants. This seems to me to have been an enormous mistake. No panel, however distinguished, can have the knowledge to distinguish the good from the bad in areas outside their own work. It is only human nature to favour the sort of work you do yourself. The 55 shortlisted people were interviewed, but again by a panel with an even narrower range of expertise. Macilwain again: "Applications for MRC grants have gone up 'markedly' since the Wellcome ones closed," he says: "We still see that as unresolved." Leszek Borysiewicz, vice-chancellor of the University of Cambridge, which won four awards, believes the impact will be positive: "Universities will adapt to this way of funding research." It certainly isn't obvious to most people how Cambridge or UCL will "adapt" to funding of only four people. The Cancer Research Campaign UK has recently made the same mistake.

One problem is that any scheme of this sort will inevitably favour big groups, most of whom are well-funded already. Since there is some reason to believe that small groups are more productive (see also University Alliance report), it isn't obvious that this is a good way to go. I was lucky enough to get 45 minutes with the director of the Wellcome Trust, Mark Walport, to put these views. He didn't agree with all I said, but he did listen. One of the things that I put to him was a small statistical calculation to illustrate the great danger of a plan that funds very few people. The funding rate was 3.6% of the original applications, and 15.6% of the long-listed applications. Let's suppose, as a rough approximation, that the 173 long-listed applications were all of roughly equal merit. No doubt that won't be exactly true, but I suspect it might be more nearly true than the expert panels will admit. A quick calculation in Mathcad gives this, if we assume a 1 in 8 chance of success for each application.
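The original calculation was done in Mathcad; here is a minimal sketch of the same arithmetic in Python, assuming (as in the calculation above) a 1-in-8 chance of success per application. The general formula, and the conclusions drawn from these numbers, follow below.

```python
# Binomial sketch of grant-success odds, assuming p = 1/8 per application.
from math import comb

p = 0.125  # assumed success rate for each application

def binom_pmf(r, n, p):
    """Probability of exactly r successes in n independent applications."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

for n in (8, 4):
    p0 = binom_pmf(0, n, p)
    p1 = binom_pmf(1, n, p)
    p2_plus = 1 - p0 - p1
    print(f"n = {n}: P(no grant) = {p0:.2f}, P(one grant) = {p1:.2f}, "
          f"P(two or more) = {p2_plus:.2f}")

# Waiting time to the first success (geometric distribution):
mean_applications = 1 / p          # 8 applications on average
p_nine_or_more = (1 - p) ** 8      # chance that the first 8 all fail
print(f"mean applications before a success: {mean_applications:.0f}")
print(f"P(9 or more applications needed): {p_nine_or_more:.2f}")
```

Running it reproduces the figures quoted in the discussion below: for 8 applications, roughly a 34% chance of no grant, 39% of exactly one, and 26% of two or more; for 4 applications, nearly 60% chance of nothing; and about a one-in-three chance of needing 9 or more applications before the first success.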
Distribution of the number of successful applications

Suppose $ n $ grant applications are submitted. For example, the same grant is submitted $ n $ times to selection boards of equal quality, OR $ n $ different grants of equal merit are submitted to the same board. Define $ p $ = probability of success at each application. Under these assumptions, it is a simple binomial distribution problem. According to the binomial distribution, the probability of getting $ r $ successful applications in $ n $ attempts is \[ P(r)=\frac{n!}{r!\left(n-r\right)!}\; p^{r} \left(1-p \right)^{n-r} \] For a success rate of 1 in 8, $ p = 0.125 $, so if you make $ n = 8 $ applications, the probability that $ r $ of them will succeed is shown in the graph. Despite equal merit, almost as many people end up with no grant at all as get one grant. And 26% of people will get two or more grants. Of course it would take an entire year to write 8 applications. If we take a more realistic case of making four applications we have $ n = 4 $ (and $ p = 0.125 $, as before). In this case the graph comes out as below. You have a nearly 60% chance of getting nothing at all, and only a 1 in 3 chance of getting one grant.

These results arise regardless of merit, purely as a consequence of random chance. They are disastrous, and especially disastrous for the smaller, better-value groups for which a gap in funding can mean loss of vital expertise. It also has the consequence that scientists have to spend most of their time not doing science, but writing grant applications. The mean number of applications before a success is 8, and a third of people will have to write 9 or more applications before they get funding. This makes very little sense. Grant-awarding panels are faced with the near-impossible task of ranking many similar grants. The peer review system is breaking down, just as it has already broken down for journal publications. I think these considerations demolish the argument for funding a small number of 'stars'.

The public might expect that the person making the application would take an active part in the research. Too often, now, they spend most of their time writing grant applications. What we need is more responsive-mode smallish programme grants and a maximum on the size of groups. We should be thinking about the following changes.

Limit the number of papers that an individual can publish. This would increase quality, it would reduce the impossible load on peer reviewers and it would reduce costs.

Limit the size of labs so that more small groups are encouraged. This would increase both quality and value for money. More (and so smaller) grants are essential for innovation and productivity.

Move towards self-publishing on the web so the cost of publishing becomes very low rather than the present extortionate costs. It would also mean that negative results could be published easily and that methods could be described in proper detail.

The entire debate is now on YouTube.

24 January 2012. The eminent mathematician, Tim Gowers, has a rather hard-hitting blog on open access and scientific publishing, Elsevier – my part in its downfall. I'm right with him. Although his post lacks the detailed numbers of mine, it shows that mathematicians have exactly the same problems as the rest of us.

11 April 2012. Thanks to Twitter, I came across a remarkably prescient article in the Guardian, in 2001: Science world in revolt at power of the journal owners, by James Meek.
Elsevier have been getting away with murder for quite a while.

I got invited to give an after-dinner talk on open access at Cumberland Lodge. It was for the retreat of our GEE Department (that is the catchy brand name we've had since 2007: I'm in the equally memorable NPP). I think it stands for Genetics, Evolution and Environment. The talk seemed to stir up a lot of interest: the discussions ran on to the next day. It was clear that younger people are still as infatuated with Nature and Science as ever. And that, of course, is the fault of their elders. The only way that I can see is to abandon the impact factor as a way of judging people. It should have gone years ago, and good people have never used it. They read the papers. Access to research will never be free until we think of a way to break the hegemony of Nature, Science and a handful of others. Stephen Curry has made some suggestions. Probably it will take action from above. The Wellcome Trust has made a good start. And so has Harvard. We should follow their lead (see also Stephen Curry's take on Harvard). And don't forget to sign up for the Elsevier boycott. Over 10,000 academics have already signed. Tim Gowers' initiative took off remarkably.

The brilliant mathematician, Tim Gowers, started a real revolt against old-fashioned publishers who are desperately trying to maintain extortionate profits in a world that has changed entirely. In his 2012 post, Elsevier: my part in its downfall, he declared that he would no longer publish in, or act as referee for, any journal published by Elsevier. Please follow his lead and sign an undertaking to that effect: 14,614 people have already signed.

Gowers has now gone further. He's made substantial progress in penetrating the wall of secrecy with which predatory publishers (of which Elsevier is not the only example) seek to prevent anyone knowing about the profitable racket they are operating. Even the confidentiality agreements, which they force universities to sign, are themselves confidential. In a new post, Tim Gowers has provided more shocking facts about the prices paid by universities. Please look at Elsevier journals — some facts. The jaw-dropping 2011 sum of €1.25 million paid by UCL alone is now already well out-of-date. It's now £1,381,380. He gives figures for many other Russell Group universities too. He also publishes some of the obstructive letters that he got in the process of trying to get hold of the numbers. It's a wonderful aspect of the web that it's easy to shame those who deserve to be shamed. I very much hope the matter is taken to the Information Commissioner, and that a precedent is set that it's totally unacceptable to keep secret what a university pays for services.

Tagged Academia, acupuncture, George Monbiot, Mark Walport, Onora O'Neill, open access, peer review, Public interest, publishing | 26 Comments
Magnetic field: Definition and Solved Examples

Magnetic field concepts and definitions are explored here with plenty of examples with detailed answers. The magnetic field is a challenging area of introductory physics for K-12 and college students. To overcome this difficulty you can solve tens of questions with detailed answers in Examcenter.

You are familiar with the concept of the electric field. In that topic, you saw that the space surrounding any charged particle acquires a property that allows it to exert a force on any other electrically charged particle within it. To describe that property, the notion of the electric field $\vec E$, which is a vector quantity, is introduced.

There is also a mysterious property in the space around a magnet that can induce magnetic properties in unmagnetized iron objects and attract or repel other magnets. For example, when one of the poles of a bar magnet is moved into the vicinity of a compass needle which is aligned in the north-south direction, the needle starts to turn. In this situation, if the magnet is removed, the needle again points in the original north-south direction. This experiment shows that there is a property in the space around a magnet. This property, which can exert a force on a tiny compass needle, is called the magnetic field and is denoted by $\vec B$. A magnetic field can be created by a magnet, or produced by a moving charged particle or a current in a wire. In this section, we will not concentrate on the creation of a magnetic field. Here we will describe the interaction of a moving charge with an external magnetic field.

The magnetic field is a vector quantity, like the electric field, so analogous to any vector quantity it has a magnitude and a direction. In the following, we first deal with its direction and then define the magnitude and its unit.

Direction of magnetic field: We saw that when a compass needle is put near a magnet, the needle turns and points in a particular direction. If the magnet is aligned in another direction, the compass will also turn and point in another direction. The magnetic field, by definition, at any position is along the compass needle that has come to rest at that point, with its direction taken to run from the south pole to the north pole of the needle. In other words, the direction of the magnetic field $\vec B$ at any point in space is defined as the direction in which the north pole of a compass needle at that point points.

Magnetic field lines: The electric field is shown by electric field lines. Just as for the electric field, we can also picture magnetic fields by magnetic field lines. These lines are drawn so that at each point the direction of the magnetic field is tangent to the field line at that point, which can be traced by a compass needle as shown in the figure below. The field line at any position is parallel to the magnetic field at that point. The density of lines at any point of space represents the strength of the magnetic field at that point. Another important note is that since the direction of $\vec B$ at each point is unique, the field lines never intersect.

In the following figure, the magnetic field lines of a permanent bar magnet are shown. Outside the magnet, these lines in general point away from N poles and toward S poles. Inside the magnet the lines pass from the S pole to the N pole, that is, in the opposite direction to outside the magnet. Magnetic field patterns can be shown by iron filings sprinkled on paper near a magnet, as shown in the figure for various configurations of the poles.
Important note: unlike electric charges, strong evidence shows that there are no isolated magnetic poles in nature. In other words, magnetic poles always come in pairs and can't be isolated. For example, breaking a bar magnet leads to two complete bar magnets.

Magnetic force: The existence of a magnetic field at a particular point in space can be quantified, similar to the electric field's definition, by measuring the magnetic force $\vec F_B$ exerted on a specific test particle at that point. But this situation is quite different from its electric-force counterpart, since experiments show that the magnetic force is proportional to the velocity $\vec v$ of the test particle, unlike the electric force, which can be exerted on a charged particle at rest. In the next section, we explain the additional complexity that appears in magnetic forces. Another way to quantify the magnetic field is to measure the magnetic force on a wire carrying a current. With the aid of these two measurements, one can define the magnetic field at each point in space.

Force on a moving charged particle in a magnetic field: Experiment shows that when a charged particle $q$ with velocity $\vec v$ moves through a magnetic field $\vec B$ (so that its direction of motion is not parallel to the field), it experiences a force which is, as shown in the figure, perpendicular to the directions of both $\vec v$ and $\vec B$. This force is called the magnetic force.

Experiment shows that the magnitude of the force acting on a moving charged particle in a magnetic field $\vec B$ depends on the following factors:

Electric charge ($q$): the larger the electric charge $q$, the larger the magnetic force acting on it, i.e. $F \propto q$.

Velocity of the moving charge ($\vec v$): the larger the velocity, the greater the magnetic force acting on it, i.e. $F\propto v$.

Magnetic field ($\vec B$): the larger the magnetic field, the larger the force acting on the charge, i.e. $F\propto B$.

The angle between the magnetic field and the velocity vector $\vec v$: the magnetic force on a moving charged particle is proportional to $\sin \theta$, i.e. $F\propto \sin\theta$.

These experimental results can be summarized in a single equation as follows \[F=kqvB\, \sin\theta \] where $k$ is a proportionality constant. If $F$ is in newtons, the charge $q$ in coulombs, the velocity in meters per second, and the magnetic field in teslas, this constant of proportionality is $k=1$. Therefore, in SI units, the magnetic force on a moving charged particle may be rewritten, in vector form, as \[\vec F=q\vec v \times \vec B\]

Direction of the force acting on a moving charge: The direction of the magnetic force acting on a positively charged particle moving with velocity $\vec v$ through the magnetic field $\vec B$ is determined by the right-hand rule as follows. Point the fingers of your right hand along the direction of the velocity $\vec v$. Curl them toward the magnetic field $\vec B$. Your extended thumb points in the direction of the magnetic force, i.e. of $\vec v\times\vec B$. The force on a negative charge is in the opposite direction of the force on a positive charge (since the vector product is anticommutative, that is, $\vec A\times\vec B=-\vec B\times \vec A$).

Alternative rule for finding the direction of $\vec v\times\vec B$: point the fingers of your right hand in the direction of $\vec B$ and your thumb in the direction of $\vec v$. The palm shows the outward direction of the magnetic force $\vec F_B$ on a positive charge. The force on a negative charge is in the opposite direction.
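To make the right-hand-rule bookkeeping concrete, here is a minimal numerical sketch of $\vec F=q\vec v \times \vec B$; the charge, speed and field values below are made up for illustration.

```python
# A small numerical check of F = q v x B (illustrative values only).
import numpy as np

q = 1.6e-19                      # charge in coulombs (a proton, say)
v = np.array([2.0e5, 0.0, 0.0])  # velocity in m/s, along +x
B = np.array([0.0, 0.5, 0.0])    # magnetic field in teslas, along +y

F = q * np.cross(v, B)           # magnetic force in newtons

print(F)             # ~ [0, 0, 1.6e-14]: force along +z, as the right-hand rule predicts
print(np.dot(F, v))  # 0.0: the force is always perpendicular to the velocity
```

Swapping the sign of q (an electron, say) flips the force to the $-z$ direction, which is the anticommutativity point made above.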
The equation $\vec F=q\vec v \times \vec B$ defines the magnetic field $\vec B$ in terms of the force exerted on a moving charged particle. The SI unit of magnetic field magnitude is the tesla ($T$). In simple words, when a particle of charge $1\, \mathrm C$ moves with a velocity of $1\, \mathrm {m/s}$ perpendicular to a magnetic field of $1\,\mathrm T$, it experiences a force of one newton: \[\mathrm {1\, T}=1\, \mathrm{\frac{N}{C\cdot m/s}}\] Since a coulomb per second is one ampere, by the definition of electric current, we also have \[\mathrm {1\,T}=1\,\mathrm{\frac{N}{A\cdot m}}\] Like the coulomb, the SI unit of electric charge, the tesla is a very large unit of magnetic field magnitude. In practice, the magnetic field magnitudes encountered in everyday life range from a few milliteslas to one or two teslas. The magnetic field magnitude of powerful permanent magnets is about $0.1\,\mathrm T$ to $0.5\,\mathrm T$, the Earth's magnetic field is of the order of $10^{-4}\,\mathrm T$, and industrial or laboratory magnets have strengths of the order of $1\,\mathrm T$ or $2\,\mathrm T$. Because of this, another unit of $B$, the gauss, is used, which is related to the SI unit tesla as \[\mathrm {1\,G=10^{-4}\,T}\] Remember that the gauss is not an SI unit, so you must convert it to teslas when making calculations.

The equation $\vec F=q\vec v\times \vec B$ shows that when a particle moves parallel ($\theta=0$) or antiparallel ($\theta =\pi$) to a magnetic field, there is no force on it. But if it moves perpendicular to the field it experiences the maximum force $qvB$.

Example: a charged particle of $q=4\,\mathrm {\mu C}$ moves through a uniform magnetic field of $B=100\,\mathrm G$ with velocity $2\times 10^{3}\,\mathrm {m/s}$. The angle between $\vec B$ and $\vec v$ is $30{}^\circ$. Find the magnitude of the force acting on the charge.

Solution: the magnitude of the magnetic force acting on a moving charged particle is found by \begin{align*} |\vec F|&=|q|vB\, \sin \theta\\ &=4\times 10^{-6}\times 2\times 10^{3}\times 10^{-2}\,\sin 30{}^\circ\\ &=4\times 10^{-5}\, \mathrm N \end{align*}

Example: The magnetic field $\vec B=0.5\,\hat i\ \mathrm T$ acts on a proton moving in the $xy$ plane with velocity $\vec v=10^5 \hat i+10^5 \hat j\,\mathrm {m/s}$. Find the force acting on the proton.

From the equation $\vec F=q\vec v\times \vec B$, the force on the particle is \begin{align*} \vec F&=\left(1.6\times 10^{-19}\right)\left(10^{5}\hat i+10^{5}\hat j\right)\times \left(0.5\,\hat i\right)\\ &=8\times 10^{-15}\,\left(\hat j \times \hat i\right)\\ &=8\times 10^{-15}\, \mathrm {N} \left(-\hat k\right) \end{align*} where we used $\hat i\times\hat i=0$ and $\hat j\times\hat i=-\hat k$. Therefore, the magnetic force is in the negative $z$ direction.

Example: at the moment when the velocity vector of an electron of charge $e=-1.6\times 10^{-19}\,\mathrm {C}$ in the magnetic field $\vec B=5\times 10^{-4}\,\hat k\, \mathrm T$ is $\vec v=2\times 10^{7}\,\left(\hat i+\hat j\right)\,\mathrm {m/s}$, find the magnetic force on the electron.
The magnetic force on a charged particle is given by \begin{align*} \vec F&=q\vec v\times \vec B\\ &=\left(-1.6\times 10^{-19}\right)\left| \begin{array}{ccc} \hat i & \hat j & \hat k\\ 2\times10^{7} & 2\times 10^{7} & 0\\ 0 & 0 & 5\times 10^{-4} \end{array} \right|\\ &=-1.6\times 10^{-15}\,\left(\hat i-\hat j\right)\, \mathrm N \end{align*} The magnitude of the magnetic force is the square root of the sum of the squares of its components: \[F_B=1.6\times 10^{-15}\sqrt{1^2+(-1)^2}=1.6\times 10^{-15}\times \sqrt 2=2.3\times 10^{-15}\,\mathrm N\]

Example: A proton (mass = $1.66\times 10^{-27}\,\mathrm {kg}$) is circulating in the plane of the page due to a magnetic field perpendicular to the page and directed into it. The radius of its circular motion is $\mathrm {9\,cm}$ and its speed is $\mathrm {1.6\,km/s}$. Find the value of the magnetic field and the direction of the motion.

When a particle moves in a circular path it experiences a centripetal force, which in this case is provided by the magnetic force acting on the particle. Using Newton's second law for circular motion we have \[\Sigma \vec F_r=m\vec a_r=\frac{mv^2}r \left(-\hat r\right)\] where $\hat r$ is the radially outward unit vector (so $-\hat r$ points toward the center of the circle) and $r$ is the radius of the circle. Since the magnetic force must be toward the center and the angle between $\vec B$ and $\vec v$ is $\theta=90{}^\circ$, we have \begin{align*} qvB=\frac{mv^2}{r} \Rightarrow B&=\frac{mv}{qr}\\ &=\frac{\left(1.66\times 10^{-27}\,\mathrm {kg}\right)\left(1.6\times 10^{3}\,\mathrm {m/s}\right)}{\left(1.6\times 10^{-19}\,\mathrm{C}\right)\left(9\times 10^{-2}\,\mathrm m\right)}\\ &\approx 1.8\times 10^{-4}\,\mathrm T = 0.18\,\mathrm{mT} \end{align*} Using the right-hand rule, we can check the direction of the motion around the circle: pointing the fingers of the right hand along a counterclockwise tangent to the circle and curling them into the page (the direction of $\vec B$) makes the thumb point radially inward, which is indeed the direction required of the magnetic force. Thus, the proton circles counterclockwise around the path.

Motion of a charged particle in electric and magnetic fields: If a charged particle is placed in a region of space containing both electric and magnetic fields, two forces, electric and magnetic, act on it; their resultant is called the Lorentz force and is given by \[\vec F=q\left(\vec E+\vec v \times \vec B\right)\] Since the magnetic force is always perpendicular to the velocity $\vec v$, it cannot change the magnitude of the velocity but only its direction. Therefore, in the motion of a charged particle in the presence of electric and magnetic fields, only the electric field does work on the particle.

Example: show that the kinetic energy of a particle in a magnetic field is constant.

A quantity is constant when its time rate of change is zero. The time rate of change of energy is called power $P$. Recall that the power delivered to a particle with velocity $\vec v$ by a force $\vec F$ is given by $P=\vec F \cdot \vec v$. By combining these we obtain \begin{gather*} \frac{dK}{dt}=P=\vec F_B \cdot \vec v\\ \Rightarrow \frac{dK}{dt}=q\left(\vec v\times \vec B\right)\cdot \vec v=0\Rightarrow K=\mathrm{constant} \end{gather*} In the last step, we have used the fact that $\vec v \times \vec B$ is a vector perpendicular to $\vec v$, so the dot product of this vector and $\vec v$ is zero, since the angle between them is $90{}^\circ$.

Example: there is a uniform magnetic field $\vec B$ into the page. If a particle of charge $q$ and mass $m$ enters this region perpendicular to the magnetic field $\vec B$, what is the path of the motion?
At the moment the particle enters the field, a force perpendicular to its velocity acts on it. In response to the magnetic force, the particle changes the direction of its velocity, but the magnetic force always remains perpendicular to the velocity. Recall from mechanics that when the force is always perpendicular to the velocity, the path of the particle is a circle. This magnetic force serves as a centripetal force toward the center of the circle, so using Newton's second law we can obtain the radius of the circle as follows \begin{gather*} F_r=ma_r=\frac{mv^{2}}{r}\\ \Rightarrow qvB=\frac{mv^{2}}{r} \quad \Rightarrow \quad r=\frac{mv}{qB} \end{gather*} The period of the motion is equal to the circumference of the circle divided by the speed of the particle \[T=\frac{2\pi r}{v}=\frac{2\pi m}{qB}\] The angular speed of the motion, which is called the cyclotron frequency since charged particles circulate in a special type of accelerator called a cyclotron at this frequency, is obtained as \[\omega=\frac v r =\frac{qB}{m}\] Example: in the previous example, if the initial velocity of the particle is not perpendicular to the magnetic field $\vec B=B\hat k$, find the path of the motion of the particle. Let the magnetic field be along the $z$ direction and let the charged particle enter the region at point $A$ in the $xy$ plane. Let the particle's velocity lie in the $yz$ plane and make an angle $\theta$ with the $z$ axis. Its decomposition along the $y$ and $z$ directions is \[\vec v=v_0\,\cos \theta\, \hat k+v_0\,\sin \theta\, \hat j\] where $v_0$ is the initial speed of the particle at the moment of entering the field. Therefore, the magnetic force on the particle is \begin{align*} \vec F_B&=q\vec v\times \vec B\\ &=q\left(v_0\,\cos \theta\, \hat k+v_0\,\sin \theta\, \hat j\right)\times B\hat k\\ &=qv_0B\,\left(\cos \theta\, \hat k\times \hat k+\sin \theta\, \hat j\times \hat k\right)\\ &=qB\underbrace{v_0\,\sin \theta}_{v_{\perp}}\,\hat i \end{align*} where $v_{\perp}$ is the velocity component perpendicular to $\vec B$. Similar to the previous example, this force can only change the direction of the particle's motion, producing a circular path with the radius and period below \[r=\frac{m\left(v_0\,\sin \theta\right)}{qB} \quad , \quad T=\frac{2\pi m}{qB}\] As you can see, the $y$ component of the velocity causes the particle to circulate around a circle, while the $z$ component of the velocity, which is parallel to the magnetic field, experiences no force and moves with uniform motion along the $z$ axis (it remains constant). Such a path is called a helical path of radius $r$. Therefore, when a moving charge has components parallel ($v_{\parallel}$) and perpendicular ($v_{\perp}$) to the magnetic field, its motion is a helical path. Ali Nemati
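The worked examples above are easy to check numerically. Below is a minimal sketch (plain Python with NumPy; the numerical values are simply those quoted in the examples, so this is a verification aid rather than part of the original text) that evaluates $\vec F=q\vec v\times \vec B$ for the electron example and the field, period and cyclotron frequency for the circulating proton.

```python
import numpy as np

# Electron example: F = q v x B
q_e = -1.6e-19                          # electron charge (C)
v = 2e7 * np.array([1.0, 1.0, 0.0])     # 2e7 (i + j) m/s
B = np.array([0.0, 0.0, 5e-4])          # 5e-4 k tesla
F = q_e * np.cross(v, B)
print(F)                  # [-1.6e-15  1.6e-15  0.]  ->  -1.6e-15 (i - j) N
print(np.linalg.norm(F))  # ~2.26e-15 N

# Circulating proton: B = m v / (q r), T = 2 pi m / (q B), omega = q B / m
m_p, q_p = 1.66e-27, 1.6e-19    # proton mass (kg), charge (C)
speed, radius = 1.6e3, 9e-2     # 1.6 km/s, 9 cm
B_mag = m_p * speed / (q_p * radius)
period = 2 * np.pi * m_p / (q_p * B_mag)
omega = q_p * B_mag / m_p
print(B_mag, period, omega)     # ~1.8e-4 T, ~3.5e-4 s, ~1.8e4 rad/s
```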
CommonCrawl
How is it possible for a singleton to exist if ∅ is a subset of every set? The question arises from the following statements: * " $\varnothing$ is a subset of every set. This fact (that $\varnothing \subseteq A$ for any A) is "vacuously true" (...) " (Enderton - Elements of Set Theory) Okay, so I understand that the empty set is included in every other set. So far so good. * "$\varnothing \in \{\varnothing\}$" (Enderton) From here, it's possible to see that while $\varnothing$ doesn't contain elements at all, it is the element of the set containing the empty set. * "For any two sets there exists a set that contains both of them and nothing else" (Halmos - Naive Set Theory) Okay, here is where I get into trouble. Since $\varnothing \in \{\varnothing\}$ , it follows that the empty set can be considered a member of another set. Namely, $(\forall A)(\varnothing \in A) $. Or that's what I assume. If it's true, some weird things happen. Putting it shortly, since the empty set will be an element of every set, the axiom of pairing won't be true: For every sets $a$ and $b$, a set $A$ will also contain the empty set. So (following Halmos' definition) "for any two sets there exists a set that contains both of them and nothing else" won't be true because a third element $\varnothing $ will always be present. The same reasoning can be applied to the idea of a singleton: I think it's not possible for a set of a single element to exist if there's always an empty set right there, next to the so-called "unique element". Could be wrong about this one, too. Any further explanation of the problems that arise will be, in my opinion, wasted time if the premise that caused them is false, so I'll leave it there until someone proves me that the statement is right. Else, the question is answered. TL;DR: Is $(\forall A)(\varnothing \in A) $ true? elementary-set-theory Henning Makholm Kevin LanguascoKevin Languasco $\begingroup$ No, it is not true. How did you come to the conclusion? $\endgroup$ – Tobias Kildetoft Mar 20 '14 at 8:57 $\begingroup$ @TobiasKildetoft The empty set is the element of the set containing the empty set, and this one belongs to {{{}}} and so on. It's accepted that the empty set if a subset of every other set, but just like {} belongs to {{}}, why won't it belong to the other sets if it's included as well? $\endgroup$ – Kevin Languasco Mar 20 '14 at 9:05 $\begingroup$ You should distinguish two different cases the first when we consider $\emptyset$ as a subset and in this case $\emptyset\subset A,$ for all $A$ is true and the second case when we consider $\emptyset$ as an element and in this case $\emptyset\in A$ for all $A$ isn't true. $\endgroup$ – user63181 Mar 20 '14 at 9:10 $\begingroup$ Element$\ne$subset. $\endgroup$ – Martín-Blas Pérez Pinilla Mar 20 '14 at 9:16 $\begingroup$ I would like to point out that "For any two sets there is a set that contains both of them and nothing else" isn't about union. If you have two sets $A$ and $B$, then the set spoken of here isn't $A\cup B$, it's $\{A, B\}$. $\endgroup$ – Arthur Mar 20 '14 at 10:34 The empty set is always a subset, but not always an element in a given set. Say the set $A$ consists of the elements $a_1, a_2, \dots, a_n$. It could be infinite too. Then you have another set which contains the empty set $B =\{ \emptyset \}$. It is clear that $\emptyset \not \in A$ and $\emptyset \in B$. 
The statement "For any two sets there exists a set that contains both of them and nothing else" doesn't mean that $A$ also includes the empty set, it means there is some set $C$ that contains everything in $A$ and $B$, for example $$C = A \cup B = \{a_1, \dots, a_n, \emptyset\}$$ for which it is true that $\emptyset \in C$. It says nothing about whether the empty set is a member of $A$ or not. naslundx $\begingroup$ So, is the only set that contains $\emptyset$ as an element $\{ \emptyset \}$ ? In this case, B $\endgroup$ – Kevin Languasco Mar 20 '14 at 9:20 $\begingroup$ Not the only one, obviously $C$ too. $\endgroup$ – naslundx Mar 20 '14 at 9:23 $\begingroup$ Oh of course, you're right. So any set can contain $\{ \emptyset \} $, but not $\emptyset $ directly (only by inclusion as a subset), unless this set is a collection that includes $\{ \emptyset \} $ and, sometimes, other sets. Is this right? $\endgroup$ – Kevin Languasco Mar 20 '14 at 9:34 $\begingroup$ Any set has $\emptyset$ as a subset, simply by picking out exactly 0 of the members. But $\emptyset$, or $\{\emptyset\}$, or $\{\{\emptyset\}\}$, ... may or may not be a member of a set. As you said, it depends on whether it is included or not. $\endgroup$ – naslundx Mar 20 '14 at 9:37 $\begingroup$ Well that answers my question. It's so clear now, I don't know why the empty set confused me when talking about belonging and inclusion. Thank you! $\endgroup$ – Kevin Languasco Mar 20 '14 at 10:23 You are confusing the concept of membership with the concept of subset. They are not the same. The empty set $\varnothing$ is a subset of every set, but it is not necessarily a member of a given set. If $B = \{ \varnothing \}$ (the singleton consisting of the empty set), then both $\varnothing \subseteq B$ and $\varnothing \in B$. Suppose $A = \{1, 2, 3\}$. Then $$A \cup B = A \cup \{ \varnothing \} = \{ \varnothing, 1, 2, 3 \} \ne A.$$ Your misunderstanding is that you believe that for any set $A$, $A \cup \{\varnothing\} = A$ since $\varnothing \subseteq A$. But as I have pointed out, the property of subset is not the same as membership: for example, if $A = \{1, 2, 3\}$ and $B = \{1, 2 \}$, then $B \subset A$ but $B \not\in A$, because in order for the latter to be true, $A$ would need to contain an element which is the set $B$; e.g., if $C = \{1, 2, 3, \{1,2\}\}$, then $B \in C$. heropup $\begingroup$ Thank you! I did know the difference between belonging and inclusion, but for some reason the empty set gave me trouble. Thanks for clarifying, again $\endgroup$ – Kevin Languasco Mar 20 '14 at 10:21
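The membership-versus-subset distinction discussed in these answers can also be checked mechanically. Here is a small, purely illustrative Python sketch (it uses frozenset because Python's mutable sets cannot contain other sets; the particular sets are just the examples from the answers above):

```python
A = {1, 2, 3}
B = {frozenset()}              # the singleton {∅}
empty = frozenset()            # ∅

print(empty <= A)              # True:  ∅ is a subset of every set
print(empty in A)              # False: ∅ is not a member of A
print(empty <= B, empty in B)  # True True: ∅ is both a subset and a member of {∅}

C = A | B                      # {∅, 1, 2, 3}
print(empty in C, C == A)      # True False: adding ∅ as a member changes A
```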
CommonCrawl
arXiv.org > math > arXiv:1401.7823 Mathematics > Group Theory Title:Universal sequences for the order-automorphisms of the rationals Authors:J. Hyde, J. Jonusas, J. D. Mitchell, Y. H. Peresse (Submitted on 30 Jan 2014 (v1), last revised 14 Mar 2016 (this version, v4)) Abstract: In this paper, we consider the group Aut$(\mathbb{Q}, \leq)$ of order-automorphisms of the rational numbers, proving a result analogous to a theorem of Galvin's for the symmetric group. In an announcement, Khélif states that every countable subset of Aut$(\mathbb{Q}, \leq)$ is contained in an $N$-generated subgroup of Aut$(\mathbb{Q}, \leq)$ for some fixed $N\in\mathbb{N}$. We show that the least such $N$ is $2$. Moreover, for every countable subset of Aut$(\mathbb{Q}, \leq)$, we show that every element can be given as a prescribed product of two generators without using their inverses. More precisely, suppose that $a$ and $b$ freely generate the free semigroup $\{a,b\}^+$ consisting of the non-empty words over $a$ and $b$. Then we show that there exists a sequence of words $w_1, w_2,\ldots$ over $\{a,b\}$ such that for every sequence $f_1, f_2, \ldots\in\,$Aut$(\mathbb{Q}, \leq)$ there is a homomorphism $\phi:\{a,b\}^{+}\to$ Aut$(\mathbb{Q},\leq)$ where $(w_i)\phi=f_i$ for every $i$. As a corollary to the main theorem in this paper, we obtain a result of Droste and Holland showing that the strong cofinality of Aut$(\mathbb{Q}, \leq)$ is uncountable, or equivalently that Aut$(\mathbb{Q}, \leq)$ has uncountable cofinality and Bergman's property. Comments: Updated to clarify some parts of the proof Subjects: Group Theory (math.GR) MSC classes: 20B27, 20B07 (primary) 20E15 (secondary) DOI: 10.1112/jlms/jdw015 Cite as: arXiv:1401.7823 [math.GR] (or arXiv:1401.7823v4 [math.GR] for this version) From: James Mitchell [view email] [v2] Mon, 24 Feb 2014 16:56:09 UTC (14 KB) [v3] Thu, 10 Apr 2014 15:56:31 UTC (14 KB) [v4] Mon, 14 Mar 2016 09:49:31 UTC (16 KB) math.GR
CommonCrawl
Predictor flipping sign in regression with no multicollinearity I'm running a multiple regression model with 4 predictors. The problem is that when I put predictor A together with the others its sign becomes negative (whereas in simple regression the sign is positive). I also found the other predictor (B) that causes the change in sign. An important addition is that A alone is positive but not significant, with B it becomes negative and significantly improves the R-squared of the model. I checked the VIF and found no sign of multicollinearity (maximum VIF is 1.60). Also, the correlation between A and B is not incredibly high (only 0.6). I have the following questions: Could you explain to me why there is this change in sign combining the two predictors even if they are not multicollinear? Is it OK to leave them both in the model, or should I choose between the two? Having both of them makes A significant and improves the R-squared and Adjusted R-Squared. How do I interpret this result in simple words? I checked these other questions (1 and 2) and found no clear answer for my case. regression multiple-regression multicollinearity gung♦ ForinstanceForinstance $\begingroup$ You don't need "incredibly high" correlation between two predictors for this to happen, any correlation $\neq 0$ can do it, although in some informal sense larger correlations are more likely to than smaller ones. Correlations with all the other predictors matter too. $\endgroup$ – jbowman Apr 5 '18 at 16:06 $\begingroup$ Thanks. Correlations with the other predictors are low, and this happens also when the two predictors are alone. But the VIF is very low.. $\endgroup$ – Forinstance Apr 5 '18 at 16:19 As @jbowman notes, you don't need an "incredibly high" correlation to cause the sign to flip. How far a coefficient can move is a function of the correlation, and whether the sign 'flips' is only a matter of whether the coefficient moved towards 0, and how far away it was beforehand. Multicollinearity is a pretty strict criterion for correlation. By conventional rule of thumb, the VIF should be $\ge 10$ to claim that there is multicollinearity. If we restrict ourselves to the pairwise correlation, and two models, one with a single variable, and the second with both, it's easier to see how this plays out. The VIF is 1/tolerance, and tolerance is $1-R^2$, so a VIF of $10$ corresponds to a pairwise correlation of $r \approx .95$. In your case, working forwards, $r=.6$, $r^2=.36$, tolerance $= .64$, $VIF = 1.6$. What the $1.6$ means is that the variance of the sampling distribution is $1.6\times$ wider than it would have been if the variables had been perfectly uncorrelated. Thus, the standard error is $1.25\times$ as wide as it would be. That is, there is very little effect on the power of the test of this coefficient due to the collinearity. As a result, people quite reasonably say you don't have to worry about collinearity in cases like yours. But that isn't the same thing as saying that you can't have an effect of the omitted variable bias, $r = .6$ is still a reasonably strong correlation. To get a clearer understanding of how the sign could flip, it might help you to read my answer here: Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression? The issue just isn't really restricted to multicollinearity, it can occur with any amount of correlation, if the conditions are right. To address your subsequent questions, it's fine to have both in the model. 
You would just say that A is correlated with B such that the marginal relationship between A and Y is positive, but the relationship is negative after controlling for B. gung♦ $\begingroup$ +1 for the answer and useful link. one more question VIF $\geq 10$ any reference? $\endgroup$ – hxd1011 Apr 5 '18 at 20:11 $\begingroup$ @hxd1011, not per se. There are a lot of these rules of thumb; they often seem to be 10. My theory is that it's because we have 10 fingers. $\endgroup$ – gung♦ Apr 5 '18 at 20:12 $\begingroup$ Great answer! Thanks. Just a small doubt left. How do I interpret the fact that the variable is not significant alone, but significant when with the predictor that causes the switch in sign? $\endgroup$ – Forinstance Apr 6 '18 at 8:19 $\begingroup$ @Forinstance, see: How can adding a 2nd IV make the 1st IV significant? $\endgroup$ – gung♦ Apr 6 '18 at 14:21
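To make the mechanism concrete, here is a small simulated illustration (hypothetical data generated for this purpose, not the asker's data; plain Python with NumPy). Predictor A has a positive marginal slope on Y, but its coefficient becomes clearly negative once the correlated predictor B enters the model, even though the correlation between A and B is only about 0.6 and the VIF is far below 10:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

B = rng.normal(size=n)
A = 0.6 * B + 0.8 * rng.normal(size=n)       # corr(A, B) ~ 0.6
Y = -1.0 * A + 2.0 * B + rng.normal(size=n)  # true effect of A is negative

def ols(predictors, y):
    X = np.column_stack([np.ones(len(y))] + list(predictors))  # add intercept
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(np.corrcoef(A, B)[0, 1])  # ~0.6
print(ols([A], Y))              # slope of A alone: positive (~ +0.2)
print(ols([A, B], Y))           # with B added: A ~ -1.0, B ~ +2.0
```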
CommonCrawl
Treatment of Rivaroxaban versus Aspirin for Non-disabling Cerebrovascular Events (TRACE): study protocol for a randomized controlled trial Study protocol Fang Yang1, Wenrui Jiang2, Ya Bai1, Junliang Han1, Xuedong Liu1, Guangyun Zhang1 & Gang Zhao1 Transient ischemic attack (TIA) or minor ischemic stroke represents the largest group of cerebrovascular events, and these patients have a high risk of early recurrent stroke. For decades, anticoagulation therapy has been used only cautiously in them, because it is likely to increase the risk of intra-/extra-cranial hemorrhagic complications. However, rivaroxaban, a new oral anticoagulant, has recently been shown to be as effective as traditional anticoagulants while carrying a significantly lower risk of intracranial hemorrhage. Therefore, we hypothesized that patients may benefit from rivaroxaban if treated soon after TIA or minor stroke, and designed this adequately powered randomized study, TRACE. Methods and design The Treatment of Rivaroxaban versus Aspirin in Non-disabling Cerebrovascular Events (TRACE) study is a randomized, double-blind clinical trial with a target enrollment of 4400 patients. A 14-day regimen of rivaroxaban 10 mg daily or a 14-day regimen of aspirin 100 mg daily will be administered to randomized participants with acute TIA or minor stroke, defined as National Institutes of Health Stroke Scale scores ≤3. The primary efficacy end point is the percentage of patients with any stroke (ischemic or hemorrhagic) at 14 days. Study visits will be performed on the day of randomization, day 14 and day 90. Even though the new oral anticoagulants seem to be both safe and effective, few clinical trials have been carried out to test their effect on non-disabling cerebrovascular events. Treatment with rivaroxaban may prevent more cerebrovascular events with an acceptable risk profile after TIA or minor stroke, compared with aspirin, thus helping to improve the outcome of the disease. Trial registration: ClinicalTrials.gov NCT01923818 Transient ischemic attack (TIA) and acute minor ischemic stroke (MIS), defined as a National Institutes of Health Stroke Scale (NIHSS) score ≤3 [1], represent the largest group of patients with cerebrovascular disease. A total of 150,000 suspected TIAs and MIS are referred to secondary care for assessment and investigation in England alone each year [2], and approximately 300,000 TIAs are diagnosed each year in the United States [3]. Though TIA and MIS are commonly referred to as non-disabling cerebrovascular events, they often portend a high risk of recurrent disabling stroke. Previous studies reported [4, 5] that the risk of subsequent stroke was as high as 10.5–18.5 % at 90 days after TIA or MIS, over half of which occurred in the first week. Some even believe that the recurrent stroke risk is likely to be much higher than commonly thought: about one third of patients with ischemic stroke have had an earlier TIA or MIS [6]. Thrombolysis therapy, antiplatelet therapy and anticoagulant therapy play important roles in the treatment of ischemic cerebrovascular disease. Intravenous tissue plasminogen activator (t-PA) was identified as the only pharmacological treatment for acute ischemic stroke approved by the US FDA in 1996 [7]; however, MIS patients were usually excluded from it [8]. About one third of patients who did not receive intravenous t-PA treatment because their stroke symptoms were mild on hospital arrival had a poor final stroke outcome [9, 10].
For long time, aspirin has been considered as the standard antiplatelet therapy for ischemic cerebrovascular diseases. Two large clinical trials indicated that over 2–4 weeks treatment with aspirin after acute ischemic stroke reduced the risk of recurrent ischemic stroke by 30 % with a small increase in intracranial hemorrhage [11, 12]. Previous trials [13, 14] on secondary prevention of noncardiogenic ischemic stroke had reported a combination of clopidogrel and aspirin had no further benefit compared with aspirin only. However recently, CHANCE study [15] excitingly reported that, not associated with increased hemorrhage events as compared with aspirin monotherapy, dual antiplatelet therapy (clopidogrel and aspirin) reduced the risk of recurrent stroke in patients with TIA and acute MIS. The results shed a light on exploiting possibly more effective medical therapy for patients suffered non-disabling cerebrovascular diseases. Over 50 years, anticoagulants have been used to treat patients with acute ischemic stroke for preventing early recurrent stroke and improving neurological outcomes with the cited reasons including: (1) to halt neurological worsening; (2) to prevent early recurrent embolization; (3) to improve neurological outcomes. Although, traditional anticoagulant has showed effectiveness of recurrent stroke prevention, its use was very limited in patients with ischemic cerebrovascular events because of higher incidence of bleeding events [16–19]. Proper treatment for non-disabling cerebrovascular events should be effective, as well as safe and convenient. Recently, new oral anticoagulants have received extensive attentions because of the following advantages, 1) directly targeting the coagulation cascade with rapid onset/offset of action; 2) fewer side effects (especially lower rates of major hemorrhage); 3) lower risks for drug-drug interactions; 4) more predictable responses [20]. One of the new anticoagulants, rivaroxaban, a direct factor Xa inhibitor, has been approved in US for prevention of deep-vein thrombosis (DVT) and venous thromboembolism (VTE) in surgery and for prophylaxis of the risk of stroke in people with abnormal heart rhythm (non-valvular atrial fibrillation) at 2011, and for treatment of DVT and pulmonary embolism at 2012. Recent published trials [21, 22] confirmed that rivaroxaban reduced the mortality of severe cardiovascular events and improved outcome with less fatal bleeding in the patients with atrial fibrillation and in the secondary prevention of patients with acute coronary syndrome. Moreover, rivaroxaban showed the best cost-effectiveness of stroke prevention in patients with nonvalvular atrial fibrillation within the 3 novel oral anticoagulants, i.e. apixaban, dabigatran and rivaroxaban [23]. Till now, no specific acute therapy is available for the vast majority of patients with acute non-disabling cerebrovascular events other than antiplatelet therapy [15]. This new anticoagulant agent, rivaroxaban could be a more effective, as well as safe and convenient option. Therefore, in this study we will enroll Chinese patients with TIA and acute MIS, who would show particularly high risk for recurrent ischemia, low risk for hemorrhage and relatively stable systematic conditions after cerebrovascular events onset. And they will be randomly treated with rivaroxaban (10 mg daily) or aspirin (100 mg daily) to assess the safety and efficacy of the new medication. 
Study objective The Treatment of Rivaroxaban versus Aspirin in Non-disabling Cerebrovascular Events (TRACE) study (www.clinicaltrials.gov identifier NCT01923818) is a randomized, double-blind, multicenter, controlled clinical trial in 4400 Chinese patients with acute TIA or minor stroke. The primary objective of this trial is to determine whether rivaroxaban is safe, when added to standard care, and can reduce the risk of any stroke (both ischemic and hemorrhagic) when initiated within 24 h of symptom onset in high-risk patients with TIA or MIS. Totally 4400 patients will be recruited through 60 emergency departments of general hospitals in China. Two subtypes of patients would be enrolled: I, acute non-disabling ischemic stroke (<24 h of symptoms onset); II, acute TIA (<24 h of symptoms onset). Informed consent would be supported by a patient (or next to kin) information leaflet in Chinese. A consulted meeting would be developed by a study physician to ensure patients and their families understand the study procedure and consent to participation in the trial [24]. All patients would receive standard care based on the recommendations of AHA/ASA guidelines [25, 26]. For patients with markedly elevated blood pressure, a reasonable goal would be lowering blood pressure by 15 % during the first 24 h. After the first several days, the ideal target blood pressure would be <140/90 mm Hg. Patients with dyslipidemia would be administrated with statin therapy according to low-density lipoprotein cholesterol level. Those patients already taking statins at the time of onset of TIA or ischemic stroke would continue their statin therapy. Patients with diabetes would receive glycemic control according to plasma glucose level. Patients would be suggested early mobilization and several lifestyle modifications, include salt restriction, weight loss, the consumption of a diet rich in fruits, vegetables, and low-fat dairy products, regular aerobic physical activity, smoke quitting and limited alcohol consumption. The study has been approved by the Medical Ethical Reviewing Committee of the Fourth Military Medical University Medical Center. This clinical trial will be conducted in accordance with the principles laid down by the 18th World Medical Assembly (Helsinki, 1964) and all applicable amendments laid down by the World Medical Assemblies and the International Conference on Harmonization guidelines for Good Clinical Practice. The trial would include (Table 1) male and female patients ≥18 years of age who have an acute MIS or TIA and can be treated with intervention medication within 24 h of symptoms onset. Symptom onset is defined by the "last see normal" principle. The patients could be enrolled if he/she had an acute MIS with NIHSS ≤3 at the time of randomization or had a TIA onset caused by focal brain ischemia with resolution within 24 h of symptom onset and moderate to high risk of stroke recurrence (ABCD2 score ≥4 at the time of randomization). 
Table 1 Inclusion and exclusion criteria Patients would be ineligible for the study if they had a diagnosis of hemorrhage or other pathology, such as vascular malformation, tumor, abscess, or other major non-ischemic brain diseases (e.g., multiple sclerosis), on baseline computed tomography (CT) brain scanning or magnetic resonance imaging (MRI) + diffusion weighted imaging (DWI) brain scanning; had modified Rankin Scale (mRS) score >2 at randomization (premorbid historical assessment); had NIHSS ≥4 at randomization; had a clear indication for anticoagulation (presumed cardiac source of embolus, e.g. atrial fibrillation, prosthetic cardiac valves known or suspected endocarditis); had contraindication to investigational medications; are currently treated (last dose given within 10 days before randomization) with heparin therapy or oral anti-coagulation; had history of intracranial hemorrhage; had gastrointestinal bleed or major surgery within 3 months; were planning or likely to undergo revascularization (any angioplasty or vascular surgery) within the next 3 months; had TIA or MIS induced by angiography or surgery; had severe non-cardiovascular comorbidity with life expectancy <3 months; were women of childbearing age not practicing reliable contraception who did not have a documented negative pregnancy test result. Study procedures Figure 1 shows the trial procedures. Participants with suspected TIA or MIS would firstly receive head CT to exclude the intracranial hemorrhage and head MRI + DWI to confirm the area of infarction. A certified, trained and licensed physician investigator would be required to confirm the diagnosis of TIA or MIS and to calculate the ABCD2 score for subjects with TIA or NIHSS score for subjects with MIS. Once a standardized, structured interview was performed, the data would be recorded in the case report form for each patient. All clinical data, biological samples and radiological images would be sent to the central study site where a cerebrovascular neurologist would review the data. Demographic, medical, social, and behavioral variables would be determined along with baseline medications. Anthropometry would be conducted using standardized equipment calibrated on a daily basis. Study flowchart Patients meeting these criteria and offering informed consent would be randomized into 2 arms: Receiving a 100-mg dose of aspirin and placebo rivaroxaban Receiving a 10-mg dose of rivaroxaban and placebo aspirin Patients would receive certain study medication from day 1 to day 14. The first dose would be given within 24 h of symptom onset. From day 15 to day 90, all patients would receive 100-mg dose of aspirin as standard antiplatelet therapy. Study visits would be performed on the day of randomization at day 14 and day 90. At randomization and during follow-up visits, the following information would be collected: a neurologic evaluation (mRS and NIHSS); a physical examination, including measurement of weight (kilogram) and vital signs (supine systolic and diastolic blood pressure, heart rate); laboratory tests, a neurological dysfunctions assessment, and concomitant medications and adverse events. For ensuring the safety of patients during the study, the trial should be stopped if the probability that rate of intracranial hemorrhage exceeded 5 % or the rate of other hemorrhagic events is more than 10 %. The progress of the study would be monitored by the data and safety monitoring board for ensuring the high standards of ethics and patient safety. 
Study outcomes The primary efficacy end point is percentage of patients with any new stroke events (ischemic or hemorrhage), including fatal stroke, at 14 days. Secondary outcome measures include, Percentage of patients at 14 days and 90 days with new clinical vascular events (ischemic stroke/hemorrhagic stroke/TIA/myocardial infarction/vascular death) as a cluster and evaluated individually; mRS score, dichotomized at percentage with score 0–2 versus score 3–6, at 14 days and 90 days follow-up; Changes in NIHSS scores at 14 days and 90-days follow-up; Efficacy end point would also be analyzed stratified by etiologic subtypes (nonintracranial artery diseases versus intracranial artery diseases), by time randomization (<12 h versus ≥12 h), by qualifying event (TIA versus MIS), and by age (dichotomized at a 5-year cut point closest to median age). Safety end points include, Moderate to severe bleeding event, according to the Global Utilization of Streptokinase and Tissue Plasminogen Activator for occluded Coronary Arteries definition, including fatal bleeding, intracranial hemorrhage and other hemorrhagic events required transfusion of blood, even causing hemodynamic compromise requiring intervention at 90 days; Total mortality at 14-days follow-up; Adverse events/severe adverse events reported by the investigators in 14-days follow-up. Randomization, allocation concealment and blinding Subject numbers would be assigned sequentially as each patient enters the study. After receiving an informed consent, patients fulfilling the inclusion criteria would be randomized by computer-generated random numbers (Microsoft Excel 2010, Redmond, WA, USA) at statistics research office, Fourth Military Medical University. A clinical research associate would make opaque blinded envelopes (with consecutive numbers) and deliver them to each participating center. Random allocation would be performed within 24 h after a patient enrolled. The block size and treatment-assignment table would not be available to the researchers until the end of the trial. The study medication would be stored under the conditions specified on the label in a locked, safe area of the pharmacy department to prevent unauthorized access. Both the study medication and the placebo would be indistinguishable; they would be manufactured by the same company and similar in appearance, organoleptic characteristics, and presentation. In the event of emergency, an investigator (HJL) and a clinical pharmacist (JYY) would decide whether it's necessary to unblind the subject's treatment assignment using the unblinding envelopes provided to the hospital. If unblinding was necessary, the investigator must record the reason for unblinding, as well as the date and time of the event. In each participating center, an independent researcher would be assigned to collect the trial paper and electronic records. In the central trial coordinating office, an independent research assistant (ZZR) would have access to all unblinded data and maintain confidentiality of patient records. Statistical considerations The primary null hypothesis of this trial is as follows: in patients with TIA or MIS treated with aspirin 100 mg per day, there would be no difference in 14-days risk of stroke (ischemic or hemorrhagic) in those treated with a 14-days regimen of rivaroxaban 10 mg per day, when therapy were initiated within 24 h of symptom onset. 
The minimum necessary sample size in the trial is established by the requirement to detect the smallest expected clinically meaningful treatment difference comparing the treatment with placebo. According to the results of previous clinical studies [4, 5], we speculated the 14-day risk of stroke recurrence in the placebo (aspirin) group was about 10 % among TIA or MIS patients treated with aspirin within 24 h of symptom onset. The results from our preliminary study were used to calculate the sample size (unpublished data). A relative risk reduction of 28 % (relative risk with addition of rivaroxaban is 0.72) was the smallest difference we would attempt to detect. The following equation was used for calculating the sample size: $$ \mathrm{n}=\frac{1}{2}{\left(\frac{u_{\alpha }+{u}_{\beta }}{{ \sin}^{-1}\sqrt{p1}-{ \sin}^{-1}\sqrt{p2}}\right)}^2 $$ With a two-sided 5 % significance level (α) and a 90 % power (1-β), p1 of 0.10, p2 of 0.072, the minimal sample size per group was estimated to be 2095 patients. About 4400 participants in total 2 groups was a final estimated sample size with 5 % dropouts (medication nonadherence). Missing values would be remained missing, and patients would be censored at their last follow-up assessment (time of clinical event, end of study, or last visit before loss to follow-up). All analyses would be intention to treat. In these analyses we would compare conventional-dose of rivaroxaban versus aspirin. Kaplan-Meier estimates of the cumulative risk of a stroke (ischemic or hemorrhagic) event during a 14 days' follow-up, with hazard ratios and 95 % confidence intervals calculated using Cox proportional hazards methods and the log-rank test to evaluate the treatment effect would be reported. The Chi-square test of association would be used to compare groups at baseline as appropriate. Logistic regression would be used to determine the significance of the results obtained. All statistics would be two-sided with p <0.05 considered significant. The progress of the study would be monitored by the Data and Safety Monitoring Board (DSMB) to ensure the high standards of ethics and patient safety. The DSMB would monitor the trial via scheduled and unscheduled reviews, supervise stopping rules and unmasking, and maintain the confidentiality of internal discussions and validity of the reports. An Executive Committee meeting would be organized to make major decisions. Quality control inspectors would be responsible for inspecting the quality of research sites and study data periodically, supervising any quality problem during the trial, and reporting to the Executive Committee. The members of the Steering Committee, including two project directors, would be members of the Executive Committee and would convene monthly (teleconferences or physical meetings) to review the status of the trial and available blinded data; they would take appropriate action regarding the conduct of the study. An Adjudication Committee charter including membership, role, and responsibilities would be approved before the start of the trial by the Adjudication Committee and the Executive Committee. This committee would be composed of Academic Members, including an independent statistician, who were not otherwise participating in the trial. TIA and MIS have commonly been referred as non-disabling cerebrovascular events and often portend disabling stroke. 
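As a quick numerical check of the sample-size formula quoted above (a sketch only; it uses standard normal quantiles for $u_{\alpha}$ and $u_{\beta}$ and the $p_1=0.10$, $p_2=0.072$ values given in the protocol), the arithmetic can be reproduced in a few lines of Python:

```python
from math import asin, sqrt, ceil
from statistics import NormalDist

u_alpha = NormalDist().inv_cdf(1 - 0.05 / 2)  # two-sided 5% -> ~1.96
u_beta = NormalDist().inv_cdf(0.90)           # 90% power    -> ~1.28
p1, p2 = 0.10, 0.072                          # assumed 14-day stroke risks

n_per_group = 0.5 * ((u_alpha + u_beta) / (asin(sqrt(p1)) - asin(sqrt(p2)))) ** 2
print(ceil(n_per_group))              # ~2094-2095 per group (protocol reports 2095)
print(ceil(2 * n_per_group * 1.05))   # ~4400 in total after a 5% dropout allowance
```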
Although previous study showed that rapid assessment and early treatment after a non-disabling cerebrovascular event resulted in a much lower risk of recurrent stroke [25], few established or effective therapies were used in clinical work. Aspirin is the current standard antiplatelet therapy to prevent recurrent stroke in patients with acute cerebrovascular event, but the effect is modest, and moreover weakened by a small increased risk of intracranial hemorrhage. Traditional anticoagulant, if administrated immediately after acute ischemic stroke, could decrease risk of recurrent stroke compared to patients untreated or received antiplatelet agent [12, 27]. However, since increased rate of bleeding complications, the usefulness of anticoagulation is still in dispute. There're few clinical trials to test the effect of new oral anticoagulants on non-disabling cerebrovascular events, even though these agents seem to be safe and effective. Rivaroxaban (a direct factor Xa inhibitor), one of the new anticoagulants, has received attention widely. Recently it has been confirmed effective in prevention and treatment of DVT, VTE and pulmonary embolism, and showed improved outcome in patients with atrial fibrillation and with acute coronary syndrome [20–22]. However, effects of rivaroxaban for prevention of the early recurrent risk after TIA and acute MIS are still unsettled. Anticoagulant therapy, with the new drug rivaroxaban, might prevent more cerebrovascular events with an acceptable risk profile after TIA or MIS compared with mono-antiplatelet therapy, thus would be helpful to improve the outcome of the disease. We designed TRACE trial as a randomized, double-blind, multicenter, controlled clinical trial in China. We would assess the hypothesis that a 14-days rivaroxaban regimen followed to aspirin administration is superior to aspirin alone for the treatment of high-risk patients with acute nondisabling cerebrovascular event. The trial was registered at Clinicaltrials.org and the study is open for recruitment. DWI: Diffusion weighted imaging DVT: Deep-vein thrombosis MIS: Minor ischemic stroke mRS: Modified Rankin Scale NIHSS: National Institutes of Health Stroke Scale t-PA: Tissue plasminogen activator VTE: Brott T, Adams Jr HP, Olinger CP, Marler JR, Barsan WG, Biller J, et al. Measurements of acute cerebral infarction: a clinical examination scale. Stroke. 1989;20(7):864–70. Giles MF, Rothwell PM. Substantial underestimation of the need for outpatient services for TIA and minor stroke. Age Ageing. 2007;36(6):676–80. Edlow JA, Kim S, Pelletier AJ, Camargo Jr CA. National study on emergency department visits for transient ischemic attack, 1992–2001. Acad Emerg Med. 2006;13(6):666–72. Coull AJ, Lovett JK, Rothwell PM, Oxford Vascular S. Population based study of early risk of stroke after transient ischaemic attack or minor stroke: implications for public education and organisation of services. BMJ. 2004;328(7435):326. Johnston SC, Gress DR, Browner WS, Sidney S. Short-term prognosis after emergency department diagnosis of TIA. JAMA. 2000;284(22):2901–6. Rothwell PM, Buchan A, Johnston SC. Recent advances in management of transient ischaemic attacks and minor ischaemic strokes. Lancet Neurol. 2006;5(4):323–31. Graham GD. Tissue plasminogen activator for acute ischemic stroke in clinical practice: a meta-analysis of safety data. Stroke. 2003;34(12):2847–50. Kleindorfer D, Kissela B, Schneider A, Woo D, Khoury J, Miller R, et al. 
Eligibility for recombinant tissue plasminogen activator in acute ischemic stroke: a population-based study. Stroke. 2004;35(2):e27–29. De Keyser J, Gdovinova Z, Uyttenboogaart M, Vroomen PC, Luijckx GJ. Intravenous alteplase for stroke: beyond the guidelines and in particular clinical situations. Stroke. 2007;38(9):2612–8. Smith EE, Abdullah AR, Petkovska I, Rosenthal E, Koroshetz WJ, Schwamm LH. Poor outcomes in patients who do not receive intravenous tissue plasminogen activator because of mild or improving ischemic stroke. Stroke. 2005;36(11):2497–9. Chen ZM, Group CC. CAST: randomised placebo-controlled trial of early aspirin use in 20,000 patients with acute ischaemic stroke. CAST (Chinese Acute Stroke Trial) Collaborative Group. Lancet. 1997;349(9066):1641–9. International Stroke Trial Collaborative Group. The International Stroke Trial (IST): a randomised trial of aspirin, subcutaneous heparin, both, or neither among 19435 patients with acute ischaemic stroke. International Stroke Trial Collaborative Group. Lancet. 1997;349(9065):1569–81. Bhatt DL, Fox KA, Hacke W, Berger PB, Black HR, Boden WE, et al. Clopidogrel and aspirin versus aspirin alone for the prevention of atherothrombotic events. N Engl J Med. 2006;354(16):1706–17. S.P.S. Investigators, Benavente OR, Hart RG, McClure LA, Szychowski JM, Coffey CS, et al. Effects of clopidogrel added to aspirin in patients with recent lacunar stroke. N Engl J Med. 2012;367(9):817–25. Wang Y, Wang Y, Zhao X, Liu L, Wang D, Wang C, et al. Clopidogrel with aspirin in acute minor stroke or transient ischemic attack. N Engl J Med. 2013;369(1):11–9. Esprit. Oral anticoagulation in patients after cerebral ischemia of arterial origin and risk of intracranial hemorrhage. Stroke. 2003;34(6):e45–46. Chimowitz MI, Lynn MJ, Howlett-Smith H, Stern BJ, Hertzberg VS, Frankel MR, et al. Comparison of warfarin and aspirin for symptomatic intracranial arterial stenosis. N Engl J Med. 2005;352(13):1305–16. Mohr JP, Thompson JL, Lazar RM, Levin B, Sacco RL, Furie KL, et al. A comparison of warfarin and aspirin for the prevention of recurrent ischemic stroke. N Engl J Med. 2001;345(20):1444–51. Sacco RL, Prabhakaran S, Thompson JL, Murphy A, Sciacca RR, Levin B, et al. Comparison of warfarin versus aspirin for the prevention of recurrent stroke or death: subgroup analyses from the Warfarin-Aspirin Recurrent Stroke Study. Cerebrovasc Dis. 2006;22(1):4–12. Furie KL, Goldstein LB, Albers GW, Khatri P, Neyens R, Turakhia MP, et al. Oral antithrombotic agents for the prevention of stroke in nonvalvular atrial fibrillation: a science advisory for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2012;43(12):3442–53. Patel MR, Mahaffey KW, Garg J, Pan G, Singer DE, Hacke W, et al. Rivaroxaban versus warfarin in nonvalvular atrial fibrillation. N Engl J Med. 2011;365(10):883–91. Granger CB, Alexander JH, McMurray JJ, Lopes RD, Hylek EM, Hanna M, et al. Apixaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2011;365(11):981–92. Harrington AR, Armstrong EP, Nolan Jr PE, Malone DC. Cost-effectiveness of apixaban, dabigatran, rivaroxaban, and warfarin for stroke prevention in atrial fibrillation. Stroke. 2013;44(6):1676–81. Allmark P, Mason S. Improving the quality of consent to randomised controlled trials by using continuous consent and clinician training in the consent process. J Med Ethics. 2006;32(8):439–43. Jauch EC, Saver JL, Adams Jr HP, Bruno A, Connors JJ, Demaerschalk BM, et al. 
Guidelines for the early management of patients with acute ischemic stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2013;44(3):870–947. Kernan WN, Ovbiagele B, Black HR, Bravata DM, Chimowitz MI, Ezekowitz MD, et al. Guidelines for the prevention of stroke in patients with stroke and transient ischemic attack: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2014;45(7):2160–236. The Publications Committee for the Trial of ORG 10172 in Acute Stroke Treatment (TOAST) Investigators. Low molecular weight heparinoid, ORG 10172 (danaparoid), and outcome after acute ischemic stroke: a randomized controlled trial. JAMA. 1998;279(16):1265–72. Funding for the preliminary trial of TRACE was provided by Discipline Construction Program of Xijing Hospital. Funding for TRACE was provided by Key Scientific and Technological Project of Shaanxi province. Department of Neurology, Xijing Hospital, No. 15 West Changle Road, Xi'an, 710032, China Fang Yang, Ya Bai, Junliang Han, Xuedong Liu, Guangyun Zhang & Gang Zhao Emergency Department, Xijing Hospital, No. 15 West Changle Road, Xi'an, 710032, China Wenrui Jiang Fang Yang Ya Bai Junliang Han Xuedong Liu Guangyun Zhang Gang Zhao Correspondence to Gang Zhao. ZG: the principal investigator for this project who led the conceptualization, design, funding applications and made a strategic decision of this research protocol. YF and JWR: co-led the conceptualization, design, development, and implementation of this research protocol and contributed to the writing of this manuscript. BY and HJL: led the development of the data management protocol and implementation of this research protocol. LXD: contributed to the design, development and statistical analysis plan. ZGY: assisted in the design of this protocol. All authors read and approved of the manuscript. Fang Yang, Wenrui Jiang and Ya Bai contributed equally to this work. Yang, F., Jiang, W., Bai, Y. et al. Treatment of Rivaroxaban versus Aspirin for Non-disabling Cerebrovascular Events (TRACE): study protocol for a randomized controlled trial. BMC Neurol 15, 195 (2015). https://doi.org/10.1186/s12883-015-0453-7 Acute minor ischemic stroke
CommonCrawl
MSC Classifications MSC 2010: Measure and integration 28Exx 9 results in 28Exx How robustly can you predict the future? Real functions Topological and differentiable algebraic systems Noncompact transformation groups Miscellaneous topics in measure theory Sean Cox, Matthew Elpers Journal: Canadian Journal of Mathematics, First View Published online by Cambridge University Press: 07 September 2022, pp. 1-23
Anderson, Haosui Duanmu, Aaron Smith Journal: Canadian Mathematical Bulletin / Volume 64 / Issue 3 / September 2021 Print publication: September 2021 Yuval Peres and Perla Sousi showed that the mixing times and average mixing times of reversible Markov chains on finite state spaces are equal up to some universal multiplicative constant. We use tools from nonstandard analysis to extend this result to reversible Markov chains on compact state spaces that satisfy the strong Feller property. CHARACTERIZATIONS OF THE BOREL $\unicode[STIX]{x1D70E}$-FIELDS OF THE FUZZY NUMBER SPACE TAI-HE FAN, MENG-KE BIAN Journal: The ANZIAM Journal / Volume 58 / Issue 3-4 / April 2017 Print publication: April 2017 In this paper, we characterize Borel $\unicode[STIX]{x1D70E}$-fields of the set of all fuzzy numbers endowed with different metrics. The main result is that the Borel $\unicode[STIX]{x1D70E}$-fields with respect to all known separable metrics are identical. This Borel field is the Borel $\unicode[STIX]{x1D70E}$-field making all level cut functions of fuzzy mappings from any measurable space to the fuzzy number space measurable with respect to the Hausdorff metric on the cut sets. The relation between the Borel $\unicode[STIX]{x1D70E}$-field with respect to the supremum metric $d_{\infty }$ is also demonstrated. We prove that the Borel field is induced by a separable and complete metric. A global characterization of measurability of fuzzy-valued functions is given via the main result. Applications to fuzzy-valued integrals are given, and an approximation method is presented for integrals of fuzzy-valued functions. Finally, an example is given to illustrate the applications of these results in economics. This example shows that the results in this paper are basic to the theory of fuzzy-valued functions, such as the fuzzy version of Lebesgue-like integrals of fuzzy-valued functions, and are useful in applied fields. First-Order Convergence and Roots† Graph theory DEMETRES CHRISTOFIDES, DANIEL KRÁL' Journal: Combinatorics, Probability and Computing / Volume 25 / Issue 2 / March 2016 Published online by Cambridge University Press: 24 February 2015, pp. 213-221 Nešetřil and Ossona de Mendez introduced the notion of first-order convergence, which unifies the notions of convergence for sparse and dense graphs. They asked whether, if (Gi)i∈ℕ is a sequence of graphs with M being their first-order limit and v is a vertex of M, then there exists a sequence (vi)i∈ℕ of vertices such that the graphs Gi rooted at vi converge to M rooted at v. We show that this holds for almost all vertices v of M, and we give an example showing that the statement need not hold for all vertices. APPROXIMATING THE KOHLRAUSCH FUNCTION BY SUMS OF EXPONENTIALS Integral equations, integral transforms Numerical approximation and computational geometry Computational aspects MIN ZHONG, R. J. LOY, R. S. ANDERSSEN Journal: The ANZIAM Journal / Volume 54 / Issue 4 / April 2013 Published online by Cambridge University Press: 04 September 2013, pp. 306-323 The Kohlrausch functions $\exp (- {t}^{\beta } )$, with $\beta \in (0, 1)$, which are important in a wide range of physical, chemical and biological applications, correspond to specific realizations of completely monotone functions. In this paper, using nonuniform grids and midpoint estimates, constructive procedures are formulated and analysed for the Kohlrausch functions. Sharper estimates are discussed to improve the approximation results. 
Numerical results and representative approximations are presented to illustrate the effectiveness of the proposed method. SOLUTIONS OF A GOŁA̧B–SCHINZEL-TYPE FUNCTIONAL EQUATION BOUNDED ON 'BIG' SETS IN AN ABSTRACT SENSE ELIZA JABŁOŃSKA Journal: Bulletin of the Australian Mathematical Society / Volume 81 / Issue 3 / June 2010 Published online by Cambridge University Press: 05 March 2010, pp. 430-441 Print publication: June 2010 It is well known that an exponential real function, which is Lebesgue measurable (Baire measurable, respectively) or bounded on a set of positive Lebesgue measure (of the second category with the Baire property, respectively), is continuous. Here we consider bounded on 'big' set solutions of an equation generalizing the exponential equation as well as the Goła̧b–Schinzel equation. Moreover, we unify results into a more general and abstract case. Measure generation by Euler functionals Combinatorial probability Geometric probability and stochastic geometry R. V. Ambartzumian Journal: Advances in Applied Probability / Volume 27 / Issue 3 / September 1995 Guided by analogy with Euler's spherical excess formula, we define a finite-additive functional on bounded convex polygons in ℝ2 (the Euler functional). Under certain smoothness assumptions, we find some sufficient conditions when this functional can be extended to a planar signed measure. A dual reformulation of these conditions leads to signed measures in the space of lines in ℝ2. In this way we obtain two sets of conditions which ensure that a segment function corresponds to a signed measure in the space of lines. The latter conditions are also necessary.
CommonCrawl
Solutions to Systems of Linear Inequalities in Two Variables 24.1: A Silly Riddle (10 minutes) Building On HSA-REI.C.6 Building Towards HSA-REI.D.12 Graph It This warm-up reminds students about systems of equations and their solutions. Students recall that a solution to a linear equation in two variables is any pair of numbers that makes the equation true, and that a solution to a system of two equations in two variables is a pair of numbers that make both equations true. The given system has a solution that is hard to find mentally, but can be calculated algebraically or by using graphing technology. Making graphing technology available gives students an opportunity to choose appropriate tools strategically (MP5). Here is a riddle: "I am thinking of two numbers that add up to 5.678. The difference between them is 9.876. What are the two numbers?" Name any pair of numbers whose sum is 5.678. Name any pair of numbers whose difference is 9.876. The riddle can be represented with two equations. Write the equations. Solve the riddle. Explain or show your reasoning.
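For reference while facilitating, the warm-up system can be solved exactly; a minimal sketch (Python with NumPy, included here only as a facilitation aid and not part of the student-facing task) gives the pair of numbers:

```python
import numpy as np

# x + y = 5.678 and x - y = 9.876
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([5.678, 9.876])
x, y = np.linalg.solve(A, b)
print(x, y)  # 7.777 and -2.099
```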
Give students a moment to read the task statement and look at the two graphs. Ask students:
"What are the two constraints in this situation?" (A length constraint and a cost constraint.)
"Which graph represents which constraint? How do you know?" (The first graph represents the length constraint. Possible explanations: The graph intersects the vertical and horizontal axes at approximately 9.5, which means that if the quilter bought 0 yards of one color, he will need at least 9.5 yards of the other color. The length constraint says "at least 9.5 yards," so the lengths must include values greater than 9.5, which is shown by the shaded region of the first graph. The second graph represents the cost constraint. If the quilter bought 0 yards of the light color, he could buy up to \(\frac{110}{13}\), or about 8.5, yards of the dark color fabric. This corresponds to the vertical intercept of the second graph. The cost constraint says "up to $110," so the lengths must be below certain limits. The solution region of the second graph shows values below a boundary.)

Arrange students in groups of 2 and provide access to graphing technology, in case it is requested. Ask students to pause briefly after they have written both inequalities. Verify that the inequalities students have written accurately represent the constraints before they proceed to the rest of the activity.

Engagement: Develop Effort and Persistence. Encourage and support opportunities for peer interactions. During the time spent on the two questions in the launch, invite students to brainstorm with a partner before sharing with the whole class. Display sentence frames that elicit descriptive observations, such as "I notice that . . .", as well as frames that support interpretation and representation, such as "_____ represents _____ because . . . ". Supports accessibility for: Language; Social-emotional skills

To make a quilt, a quilter is buying fabric in two colors, light and dark. He needs at least 9.5 yards of fabric in total. The light color costs $9 a yard. The dark color costs $13 a yard. The quilter can spend up to $110 on fabric. Attribution: Quilt, by pixel1. Public Domain. pixabay. Source.

Here are two graphs that represent the two constraints.
Description: Inequality graphed on a coordinate plane, origin O. Horizontal axis, light fabric, yards, scale from 0 to 16, by 4's. Vertical axis, dark fabric, yards, scale from 0 to 12, by 4's. Line starts on the vertical axis at approximately 9 point 5, goes through about 4 comma 5 point 5, and ends on the horizontal axis at 9 point 5. The region above the line is shaded.
Description: Inequality graphed on a coordinate plane, origin O. Horizontal axis, light fabric, yards, scale from 0 to 16, by 4's. Vertical axis, dark fabric, yards, scale from 0 to 12, by 4's. Line starts on the vertical axis at approximately 8 point 5, goes through about 6 comma 4, and ends on the horizontal axis at 12. The region below the line is shaded.

Write an inequality to represent the length constraint. Let \(x\) represent the yards of light fabric and \(y\) represent the yards of dark fabric.
Select all the pairs that satisfy the length constraint. \((5,5)\) \((2.5, 4.5)\) \((12,10)\)
Write an inequality to represent the cost constraint.
Select all the pairs that satisfy the cost constraint. \((10,1)\)
Explain why \((2,2)\) satisfies the cost constraint, but not the length constraint.
Find at least one pair of numbers that satisfies both constraints. Be prepared to explain how you know.
What does the pair of numbers represent in this situation? Focus the discussion on the last two questions. Select previously identified students to share how they identified a pair of values that meet both constraints. Sequence their presentation in the order listed in the Activity Narrative, which is from less systematic to more systematic. Select students who graphed both inequalities on the same plane to display their graphs for all to see. If no students did this on their own, display the embedded applet in the online materials for all to see. Consider selecting first one inequality, then the other, then both simultaneously. (The Desmos graphs show both variables being restricted to positive values, but students are not expected to do this when graphing. Consider discussing with students why it makes sense in this situation to disregard negative values.) Discuss the meaning of a pair of values that satisfy both inequalities. Emphasize that, in this situation, it refers to the amount of fabric of each color that meets both the length and cost requirements. Explain to students that the two inequalities representing the constraints in the same situation form a system of linear inequalities. In a system of linear equations, the solutions can be represented by one or more points where the graphs of the equations intersect. The solutions to a system of inequalities are all points in the region where the graphs of the two inequalities overlap, because those points represent all pairs of values that make both inequalities true. 24.3: Remember These Situations? (10 minutes) HSA-CED.A.3 MLR8: Discussion Supports Graphing technology In this activity, students write systems of inequalities that represent situations and find the solutions by graphing. Students have encountered the same situations and constraints in a previous lesson, so the main work here is on thinking about each pair of constraints as a system, finding the solution region of each inequality, and identifying a point in the region where the graphs overlap as a solution to the system. Arrange students in groups of 2–4. Explain to students that they will now revisit some situations they have seen and graphed in an earlier lesson. Assign one situation to each group member (or allow them to choose one situation) and ask them to answer the questions. Provide access to graphing technology. Give students 5-6 minutes of quiet work time, and then time to discuss their responses and graph with their group. Speaking: MLR8 Discussion Supports. Use this routine to support small-group discussion. At the appropriate time, give students 2–3 minutes to plan what they will say when they present their responses and graph to their group. Encourage students to consider what details are important to share and to think about how they will explain their reasoning using mathematical language. Design Principle(s): Support sense-making; Maximize meta-awareness Action and Expression: Internalize Executive Functions. To support development of organizational skills, check in with students within the first 2–3 minutes of work time. Look for students who are organizing their information, understanding the need for two equations, and strategizing as to how to overlay them onto a single graph. Supports accessibility for: Memory; Organization Here are some situations you have seen before. Answer the questions for one situation. A customer opens a checking account and a savings account at a bank. 
They will deposit a maximum of $600, some in the checking account and some in the savings account. (They might not deposit all of it and keep some of the money as cash.) The bank requires a minimum balance of $50 in the savings account. It does not matter how much money is kept in the checking account. Two kinds of tickets to an outdoor concert were sold: lawn tickets and seat tickets. Fewer than 400 tickets in total were sold. Lawn tickets cost $30 each and seat tickets cost $50 each. The organizers want to make at least $14,000 from ticket sales. An advertising agency offers two packages for small businesses who need advertising services. A basic package includes only design services. A premium package includes design and promotion. The agency's goal is to sell at least 60 packages in total. The basic advertising package has a value of $1,000 and the premium package has a value of $2,500. The goal of the agency is to sell more than $60,000 worth of small-business advertising packages. 1. Write a system of inequalities to represent the constraints. Specify what each variable represents. 2. Use technology to graph the inequalities and sketch the solution regions. Include labels and scales for the axes. 3. Identify a solution to the system. Explain what the numbers mean in the situation. Students may need reminding what "a solution to the system" would be in these specific contexts. (First find a point where the total money in the two accounts totals less than or equal to $600. Now make sure the money in the saving account is also at least $50.) Much of the discussion would have happened in small groups. During the whole-class discussion, emphasize the meaning of a point in the region where two graphs of linear inequalities overlap. Make sure students understand that all the points in that region represent values that simultaneously meet both constraints in the situation. If time permits, ask students: "Why does it make sense to think of the two inequalities in each situation as a system and find the solutions to the system, instead of only to individual inequalities?" (If both constraints in the situation must be met, then we need to find values that satisfy both inequalities.) 24.4: Scavenger Hunt (15 minutes) This optional activity reinforces the idea that the solutions to a system of inequalities can be effectively represented by a region on the graphs of the inequalities in the system. The activity is designed to be completed without the use of graphing technology. Ask students to put away any devices. Members of a high school math club are doing a scavenger hunt. Three items are hidden in the park, which is a rectangle that measures 50 meters by 20 meters. The clues are written as systems of inequalities. One system has no solutions. The locations of the items can be narrowed down by solving the systems. A coordinate plane can be used to describe the solutions. Can you find the hidden items? Sketch a graph to show where each item could be hidden. Clue 1: \(\qquad y>14\\ \qquad x<10\) Clue 2: \(\qquad x+y<20\\ \qquad x>6\) Clue 3: \(\qquad y<\text-2x+20\\ \qquad y < \text-2x+10\) Clue 4: \(\qquad y \ge x+10\\ \qquad x > y\) Two non-negative numbers \(x\) and \(y\) satisfy \(x + y \leq 1\). Find a second inequality, also using \(x\) and \(y\) values greater than or equal to zero, to make a system of inequalities with exactly one solution. Find as many ways to answer this question as you can. 
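A note for reference (not part of the student-facing task): the fourth clue's system can be checked algebraically to have no solutions. If \(y \ge x+10\) and \(x > y\) both held, then \(x > y \ge x+10\), which gives \(0 > 10\), a contradiction. So no point satisfies both inequalities in Clue 4, which is consistent with its two parallel boundary lines whose shaded regions do not overlap.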
Some students may have trouble interpreting the graph of the fourth system, wondering if a point in either of the shaded regions on the graph could be where an item is hidden. Ask them to pick a point on the graph and consider whether it satisfies the first inequality, and then whether it satisfies the second inequality. Remind them that a solution to a system needs to satisfy both. Invite students to share their graphs and strategies for finding the solution regions. In particular, discuss how they found out which system had no solutions. Remind students that a system of linear equations has no solutions if the graphs of the equations are two parallel lines that never intersect. Explain that a system of linear inequalities has no solutions if their regions are bound by two parallel lines and the solution region of each one is on the "outside" of the parallel lines, as is the case with the last given system. To help students make connections between systems of equations and systems of inequalities, display the following graphs for all to see. Ask students: "How are the two sets of graphs alike?" (They have the same two lines. They can tell us about the solutions to individual equations or inequalities, as well as the solutions to systems.) "How are they different?" (The first set of graphs show two regions that overlap, bounded by dotted lines. The second set shows two intersecting lines and the lines are solid. One set represents the solutions to a system of linear inequalities.) "How can we tell the number of solutions from each set of graphs?" (The graphs representing a system of equations shows one point of intersection, so there is only one solution. The graphs representing a system of inequalities show one region of overlap, but there are many points in that region. This means that there are many solutions.) 24.5: Cool-down - Oh Good, Another Riddle (5 minutes) In this lesson, we used two linear inequalities in two variables to represent the constraints in a situation. Each pair of inequalities forms a system of inequalities. A solution to the system is any \((x,y)\) pair that makes both inequalities true, or any pair of values that simultaneously meet both constraints in the situation. The solution to the system is often best represented by a region on a graph. Suppose there are two numbers, \(x\) and \(y\), and there are two things we know about them: The value of one number is more than double the value of the other. The sum of the two numbers is less than 10. We can represent these constraints with a system of inequalities. \(\begin {cases} y > 2x\\ x+y <10 \end {cases}\) There are many possible pairs of numbers that meet the first constraint, for example: 1 and 3, or 4 and 9. The same can be said about the second constraint, for example: 1 and 3, or 2.4 and 7.5. The pair \(x=1\) and \(y=3\) meets both constraints, so it is a solution to the system. The pair \(x=4\) and \(y=9\) meets the first constraint but not the second (\(9 >2(4)\) is a true statement, but \(4+9<10\) is not true.) Remember that graphing is a great way to show all the possible solutions to an inequality, so let's graph the solution region for each inequality.​​​​​​ Description: <p>A graph of an inequality on a coordinate plane, origin O. Each axis from negative 10 to 5, by 5's. Dashed line starts below x axis and right of y axis, goes through negative 2 point 5 comma negative 5, 0 comma 0, and 2 point 5 comma 5. 
The region above the dashed line is shaded.
Description: A graph of an inequality on a coordinate plane, origin O. Each axis from negative 10 to 5, by 5's. A dashed line starts on the y axis at 10, goes through 5 comma 5, and ends on the x axis at 10. The region below the dashed line is shaded.

Because we are looking for a pair of numbers that meet both constraints or make both inequalities true at the same time, we want to find points that are in the solution regions of both graphs. To do that, we can graph both inequalities on the same coordinate plane. The solution set to the system of inequalities is represented by the region where the two graphs overlap.
Description: A graph of two intersecting inequalities on a coordinate plane, origin O. Each axis from negative 10 to 5, by 5's. The first dashed line starts below the x axis and right of the y axis, goes through negative 2 point 5 comma negative 5, 0 comma 0, and 2 point 5 comma 5. The region above the dashed line is shaded. The second dashed line starts on the y axis at 10, goes through 5 comma 5, and ends on the x axis at 10. The region below the dashed line is shaded.

© 2019 Illustrative Mathematics®. Licensed under the Creative Commons Attribution 4.0 license. This book includes public domain images or openly licensed images that are copyrighted by their respective owners. Openly licensed images remain under the terms of their respective licenses. See the image attribution section for more information.
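As an optional supplement (not part of the licensed materials above), the overlap region described in this lesson summary can also be displayed with a few lines of code; the sketch below shades the points satisfying both \(y>2x\) and \(x+y<10\) on axes matching the graphs described.

```python
# Supplementary sketch (not part of the curriculum): shade the region where
# both y > 2x and x + y < 10 hold, mirroring the "overlap" discussion above.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 5, 400)
y = np.linspace(-10, 5, 400)
X, Y = np.meshgrid(x, y)

both = (Y > 2 * X) & (X + Y < 10)          # True where both inequalities hold

plt.contourf(X, Y, both.astype(int), levels=[0.5, 1.5], colors=["#9ecae1"])
plt.plot(x, 2 * x, "--", label="y = 2x")
plt.plot(x, 10 - x, "--", label="x + y = 10")
plt.axhline(0, color="gray", linewidth=0.5)
plt.axvline(0, color="gray", linewidth=0.5)
plt.xlim(-10, 5)
plt.ylim(-10, 5)
plt.legend()
plt.title("Solutions to the system y > 2x and x + y < 10")
plt.show()
```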
CommonCrawl
2 editions of The Probable Universe found in the catalog.
The Probable Universe: An Owner's Guide to Quantum Physics, by M. Y. Han. Published October 1992 by McGraw-Hill.
We show that a unique, most probable and stable solution for the wavefunction of the universe exists, with a very small cosmological constant \(\Lambda_{1}\simeq\big(\frac{\pi}{l_{p}N}\big)^{2}\). Author: Brett McInnes.
That reflects our conviction that, given a small, pre-Copernican universe, God's existence is much more probable than atheism. This assumes that the prior or intrinsic probability of theism or atheism is exactly the same; otherwise Manley's argument collapses.
The fine-tuning of the universe for the existence of life is fascinating to me. That's why I was so excited to find A Fortunate Universe: Life in a Finely Tuned Cosmos, co-authored by astrophysicists Geraint Lewis and Luke Barnes, under my Christmas tree. This book provides the most up-to-date scientific evidence for the fine-tuning of the universe for life.
The Probable Universe, by M. Y. Han.
In "The Probable Universe", Dr. Han is able to relate to non-scientific audiences the remarkable potential of quantum physics and how its staggering technological implications will affect our daily lives in the coming century. Han aims to narrow the knowledge gap that exists between career physicists who are well-versed in quantum theory and the general public. Author: M. Y. Han.
John Gribbin is a great popular science writer, and in this book he has done a great job. The book is the complement to Rare Earth by Ward and Brownlee; the Gribbin book takes a further step and asks for technologically intelligent life, not only complex life, and puts more emphasis on the astronomical aspects in the light of recent breakthroughs, such as our very special place in the galaxy.
The Probable Universe is an interactive audio-visual installation generating an infinite combination of projected worlds in a physical environment using an industrial robotic arm. In order to present the concept we have scripted 4 films using generative visual and audio components.
Alternate Universe (often abbreviated as "AU") is a descriptor used to characterize fanworks which change one or more elements of the source work's canon. It may also be known as a multiverse. The various universes within a multiverse are called "parallel universes", "other universes" or "alternate universes".
The Artful Universe; Pi in the Sky; Perché il mondo è matematico; Impossibility; The Origin of the Universe; Between Inner Space and Outer Space; The Universe that Discovered Itself; The Book of Nothing; The Constants of Nature: From Alpha to Omega; The Infinite Book: A Short Guide to the Boundless; Probable universes; Anthropic universes.
Ancient Vedic texts speak of multiverses. One such text, a Purana, states: "There are innumerable universes besides this one, and although they are unlimitedly large, they move about like atoms in You. Therefore You are called unlimited."
Universe Books: EXOPOLITICS: POLITICS, GOVERNMENT AND LAW IN THE UNIVERSE - The book that founded Exopolitics and was time-traveled by the secret DARPA/CIA quantum access program from back to ; Universe Books: The Omniverse: Transdimensional Intelligence, Time Travel, the Afterlife, and the Secret Colony on Mars by Alfred Lambremont Webre.
Of course, this is an incredibly huge question with no definite answers, but I'll give it a try nonetheless. First of all, religions aren't discrete entities - what one Christian believes might be wildly at odds with what another Christian in the…
No matter how improbable an individual universe is, the probability that it exists if a multiverse exists is effectively %. So it's significant that we have six arguments that entail a multiverse explanation of observations is more probable than the God. Mike Dooley Co-founder and Author. Mike Dooley is a New York Times best-selling author, metaphysical teacher, and creator of the wildly popular "Notes from the Universe" whose acclaimed books—including The Top Ten Things Dead People Want to Tell YOU, A Beginner's Guide to the Universe, and Infinite Possibilities: The Art of Living Your Dreams—have been published worldwide in I have heard a lot of atheists say that it is not probable for their to be a God. Lets assume for a second that there is no God, then what explanation is more probable for the following questions. -What caused the universe to exist out of nothing. You can say there was no beginning but that is not the same as saying there is no cause. If you still claim in no cause can you honestly believe. The Five Ages Of The Universe is a popular science book written by Professor Fred Adams and Gregory Laughlin first published in It discusses the history, present state, and probable future of the universe, according to cosmologists' current n: Having excluded all metaphysical speculations as idle and a waste of time, he goes on to affirm his belief in the existence of God, the moral order of the universe, the reality of both mind and matter, the reality of causal relations, and the possibility of probable knowledge in the field of the natural sciences. Remote Viewing Secrets Revealed Audio Book () Explains the basic mechanisms involved in Remote Viewing and its modus operandi. Training Module 4. Audio Session () Trains you to easily refocus your mind to a much deeper level of your inner awareness where Remote Viewing is a natural ability: deep Theta. Synopsis. You Are the Universe is a philosophical work which attempts to give answers to questions pertaining to the origin of the universe, time, space, matter and the origin and meaning of consciousness and the marrying of science and spirituality in daily lives. The book challenges the assumption that consciousness is a byproduct of matter claiming that matter is actually an experience in Author: Deepak Chopra, Menas Kafatos. The Seth Material. For all Seth related posts on my website click here. The following originally copied and pasted from nationmaster encyclopedia. In lateJane Roberts and her husband, Robert Butts, experimented with a Ouija board as part of Roberts' research for a book on extra-sensory ing to Roberts and Butts, on December 2, they began to receive coherent. It was a medium, or channeller, as they are now called, who first mentioned parallel, possible or probable worlds, universes or systems of reality inhabited by. Now the topic of the universe can not in any way be described as being dry, but this book is a brilliant attempt to explain everything you would want to know about the universe. The book discusses the beginning and possible end of the universe, the planets, stars and other celestial objects that frequent it/5(). In The Universe, today's most influential science writers explain the science behind our evolving understanding of the universe and everything in it, including the cutting edge research and discoveries that are shaping our knowledge. Lee Smolin reveals how math and cosmology are helping us create a theory of the whole universe. 
The universe in the book is huge and dominated by the Radch Empire, which uses artificial intelligence in human bodies to bolster its fighting forces. These creatures are called ancillaries, and Leckie's novel tells the story of one who survived the mysterious disappearance of. Universe Books is a publisher of children's books. Some of the books published by Universe Books include First Steps in Paint: A New and Simple Way to Learn How to Paint Step by Step, Crystal Tales, Maya and the Town That Loved a Tree, and Frankenstein. The Elegant Universe: Pt 1. Combining the laws of the universe in one theory that explains it all is the Holy Grail. Premiered Octo AT 10PM on PBS The Elegant Universe: Pt 2. probable adj adjective: Describes a noun or pronoun--for example, "a tall girl," "an interesting book," "a big house." (most likely) 그럴듯한, 그럴싸한 how probable is the universe. - English Only forum it is possible that vs it is probable that - English Only forum. A renowned physicist and author of the bestseller One Two ThreeInfinity, George Gamow is one of the founders of the Big Bang theory. Modern Science Made Easy By one of the leading physicists of the twentieth century, George Gamow's One, Two, Three Infinity is one of the most memorable popular books on physics, mathematics, and science generally ever written, famous for Brand: Dover Publications. A book that strives not only to present the universe for what it is, but also to explain why it is the way it is, and how we know it — that's an ambitious goal if I've ever heard one. But. cosmic history, from the universe's fiery origin in the Big Bang to the silent, stately flight of galaxies through the intergalactic night." (National Research Council) Order in the Universe Cosmology is the study of the evolution of the universe from its first moments to the present. If our universe were but one member of a multiverse of randomly ordered worlds, then it is vastly more probable that we should be observing a much smaller orderly universe. The odds of our solar system's being formed instantly by random collisions of particles is, according to Penrose, about 10(60), a vast number, but inconceivably. Abstract: It has recently been suggested, by Firouzjahi, Sarangi, and Tye, that string-motivated modifications of the Hartle-Hawking wave function predict that our Universe came into existence from "nothing" with a de Sitter-like spacetime geometry and a spacetime curvature similar to that of "low-scale" models of Inflation. This means, however, that the Universe was quite large at by:. Weird Shape made probable | Steven Universe. See more pdf Universe' images on Know Your Meme! -- COMPLETE -- Pdf a book filled with Steven Universe memes and fanart because I've got over on my phone and I really need to free up space. I had this exact theory, and I'm so glad that someone put it into words!.The Urantia Book. Paper The Universe of Universes. () THE immensity of the far-flung creation of the Universal Father is download pdf beyond the grasp of finite imagination; the enormousness of the master universe staggers the concept of even my order of being. But the mortal mind can be taught much about the plan and arrangement of the universes; you can know something of their. Comic book fans know it's ebook as easy to topple ebook forces as they have the Nova Prime to guide them. With the MCU looking to expand its universe, a theory is that we could have an entirely separate movie based on Xandar where the fallout from Thanos' attack is shown. 
CommonCrawl
Gender-based Decision Making in Marketing Channel Choice – Evidence of Maize Supply Chains in Southern Ethiopia Girma Gezimu Gebre ORCID: orcid.org/0000-0003-4875-88251,5, Hiroshi Isoda2, Yuichiro Amekawa3, Dil Bahadur Rahut4,6, Hisako Nomura2 & Takaaki Watanabe2 Human Ecology volume 49, pages 443–451 (2021)Cite this article We examine factors affecting the choice of marketing channels for maize among male, female, and joint decision-making farm households using data from households in Dawuro zone, southern Ethiopia. Econometric results suggest that female and joint decision-makers are more likely to sell maize to consumers or retailers in the main local market where the maize price is higher than to wholesale merchants directly from the farm. Individual decision-makers (male or female) who grow improved maize varieties are more likely to sell to wholesalers directly from the farm. This may be an indication of the effectiveness of joint decisions over individual decisions related to the market price. We also found that improved maize varieties distributed to farmers in the study area are of poor quality and that there is a lack of modern storage facilities so that farmers have to sell immediately after harvest during the lower price season. Thus, there is a need for policies promoting the distribution of high-quality maize seeds and encouraging investments in the establishment of modern maize storage facilities in the study area. Maize is Ethiopia's dominant cereal crop in terms of production and number of farms. Averaged over the period 2006 to 2017, 9.5 million smallholder farmers grew maize, over 21% of the total cereal crop production area of the country. Taken together, these farmers produce an annual average of 6.3 million tons, about 30% of total cereal production in Ethiopia (CSA 2015, 2017). Maize accounts for 17–20% of the national per capita calorie intake (Abate et al. 2015). The unit cost of calories from maize is the cheapest among all major cereals in Ethiopia, making it the most important cereal crop, particularly for economically less endowed households (Rashid et al. 2010; Berhane et al. 2011; FAO 2015). Maize is the main staple food for consumers and a critical source of income for smallholder farm households in Ethiopia. Like many sub-Saharan African countries, maize marketing chains in Ethiopia are relatively long and involve many intermediaries including collectors, wholesalers, or retailers who rarely provide marketing services besides transport and storage. Almost all maize grain reaches consumers without processing. Maize farming households do not receive a reasonable price for their maize harvest because of high transaction costs resulting from poor road access, lack of formal grades and standards, price information asymmetry, high transportation costs, and the presence of intermediaries (Rashid et al. 2010; FAO 2015; Abate et al. 2015; World Bank 2018), although transaction costs vary across individuals or households according to the type of marketing channel utilized (Hill and Marcella 2014). Female decision-making farm households face many gender-specific constraints in accessing markets. They tend not to have the same socio-political networks as male decision-making farm households. Men are more likely to be approached by traders or other intermediaries who assume they are the primary decision makers, while women do not have time to search out new market opportunities as they are preoccupied busy with both productive and reproductive household activities. 
As a result, female-headed households are less successful than male-headed households at accessing new market opportunities (Morrison et al. 2007; Barham and Chitemi 2009). Choice of marketing channel is also determined by the amount of maize being sold (Fafchamps and Hill 2005), which impacts male and female led farm households differentially. There is evidence that female farm households sell smaller quantities at the local market and receive lower prices while male farm households sell bulk quantities and travel to distant markets to secure higher prices (Aregu et al. 2011; FAO 2011; Amani 2014; Eerdewijk and Danielsen 2015). Previous studies identify two major reasons for this: first, female-headed households may have fewer productive resources than their male counterparts, so they produce smaller quantities and lack pack animals or money for transport of their produce to distant markets (Fafchamps and Hill 2005; Vigneri and Holmes 2009; Aregu et al. 2011; FAO 2011; Amani 2014; Eerdewijk and Danielsen 2015). Second, they often allocate only a small portion of their resources to marketable crops (De Brauw 2015) as they are responsible for family provisioning (Doss 2002). Smallholder maize farming is a familial system, usually employing one or two household members (Gebre et al. 2019). We focus in this study on who makes the marketing decisions in the household. The neoclassical/mainstream economic theory usually regards all the members of the households as having the same preferences. However, agricultural households do not always agree on decisions and women and men do not always have the same preferences (Wilson 1991; Agarwal 1997; Meinzen-Dick et al. 2011). Marketing decisions vary among households; in some cases, male and female family members (generally husband and wife) make decisions jointly, while in other cases one or the other make decisions independently (Aregu et al. 2011). We investigate factors affecting maize marketing channel choice by dividing sampled households into three categories for comparison: male, female, and joint decision-making households, to establish any gender differences regarding maize marketing channel choice and the significance this might have on maize producing household well being. Study Area, Data Collection, and Sampling Techniques The principal crops in Dawuro include ensete (Ensete ventricosum), teff (Eragrostis tef), maize, sorghum, wheat, barley, coffee, beans, peas, spices, vegetables, and fruits. The Dawuro zone has ample potential, but farm productivity is very low because of limitations inherent in traditional means of production, dependence on natural rainfall, and poor market access (Abebe 2014). While both men and women engage in agricultural activities, female-headed farm households are particularly vulnerable because of lack of access to farmland, shortage of farm labour, and whether or not they have draft animals for cultivation. We collected our data for this study through a household survey, key informant interviews, and focus group discussions conducted in two rounds. In the first round (April-June 2018), we conducted a survey to collect household-level data, and in the second round (June-July in 2019), we conducted key informant interviews and focus group discussions to supplement the survey data. We used multistage sampling techniques to select smallholder maize farm households for the study. 
In the first stage, we selected four districts—Loma (including Zisa), Mareka, Esara, and Tocha (Kachi and Tarcha zuriya) based on their maize production and marketing potentials (Fig. 1). In the second stage, we selected 6–8 kebele (peasant associations; Footnote 1) from each district where maize is grown as the major staple food for consumption and income. In the third stage, we selected an average of 20 maize-growing households to survey from each kebele, for a total of 560 smallholder maize farm households. Since male and female family members work either separately or jointly on the maize farm, we interviewed the person most responsible for production, consumption, and marketing decisions in the household using a semi-structured questionnaire (Gebre et al. 2020).
Fig. 1: Map of the study area (Dawuro zone) in southern Ethiopia. Source: Authors' sketch using GPS data (2018).
We identified each household as male, female, or joint decision-making based on survey data. All household respondents were asked 20 gender-disaggregated questions (see Appendix). The first 12 pertained to the ownership of farmland and other farm-related assets, maize production decisions, and maize production activities such as variety choice, farm preparation, planting, fertilizer use, weeding, harvesting, and collection. The remaining eight questions related to decision-making about the amount of maize allocated between home consumption and sale, the person in the household responsible for the sale of maize, choice of buyer, price decisions, and utilization of money from the sale of maize. All responses indicated whether decisions were made by men or women, or jointly. Separately, we asked an additional family member supplemental questions, for example, who makes decisions about maize production, consumption, and sale in the market, to ensure we had an accurate description of intra-household gender dynamics. In a few cases, men and women from the same household gave different answers to the same questions, in which case we asked them jointly so that they could reach a consensus. Finally, we used principal component analysis (Footnote 2) to group all responses into the three household decision-making categories: male, female, or joint. In each kebele we identified key informants such as agricultural experts, community elders, or maize farmers based on information provided by kebele-level agricultural development agents who work closely with farmers, community elders, and other agricultural experts. We conducted three separate focus group discussions in each sampled district (male, female, and joint decision-making groups) to supplement the household survey data collected. We registered the names, addresses, and identity numbers of surveyed households along with their survey responses.
Farm households' choice of marketing channels can be modeled using a random utility framework (Greene 2012) that assumes the choice of a particular maize marketing channel from a set of alternative options is based on its expected utility. Following Greene (2012) and Musara et al. (2018), the \(i^{th}\) decision-maker in the household is faced with N (= 4) market channel choices: own distribution (direct sale to consumers), collectors, wholesalers, or retailers. Then the utility U of decision-maker i making choice j is given as:
$$U_{ij}=\beta_{j}X_{ij}+\varepsilon_{ij} \qquad \forall j \in N$$
The vector of variables X contains attributes of both the market choice j and the decision-maker i.
A random utility \(U_{ij}\) for an individual decision-maker choosing a particular alternative is a linear function of the attributes of the individual decision-maker and of the alternatives (\(X_{ij}\)), a vector of channel-specific parameters (\(\beta_{j}\)), and a stochastic error (\(\varepsilon_{ij}\)). If a decision-maker in the household makes choice j in particular, then we assume that \(U_{ij}\) is the maximum among the N alternative utilities. Hence, the probability that choice j is made is:
$$P_{ij}=\Pr\big(U_{ij}>U_{ik},\ \forall k\ne j\big)$$
Empirical Frameworks
Given that sampled maize farmers in the study areas have more than two alternative market channel choices, we applied the multinomial logit (MNL) model to estimate factors affecting maize marketing channel choice (see, e.g., Deressa et al. 2009; Panda and Steerkumar 2012; Arinloye et al. 2014; Ndoro et al. 2015; Musara et al. 2018). It is computationally simpler than the alternatives of multinomial probit, nested logit, and random parameter (mixed) logit models. Following Greene (2012), and assuming that the probability that the \(i^{th}\) decision-maker in the household chooses the \(j^{th}\) of the 4 channels is \(p_{ij}\), the probability that a decision-maker chooses alternative j takes the form:
$$P_{ij}=\frac{\exp(\beta_{j}x_{i})}{1+\sum\limits_{j=1}^{4}\exp(\beta_{j}x_{i})} \qquad \text{for } j=1,2,3\ \&\ 4$$
where \(x_{i}\) is a vector of explanatory variables of the \(i^{th}\) decision-maker, \(\beta_{j}\) is the vector of coefficients associated with alternative j, and 4 is the number of market channels in the choice set. The parameter estimates from the multinomial logit regression are difficult to interpret. It is tempting to associate \(\beta_{j}\) with the \(j^{th}\) outcome, but that could be misleading: the coefficients simply give the direction of the effect of the explanatory variables on the response (choice) variable, and thus represent neither the actual magnitude of change nor the probabilities associated with each independent variable. By differentiating Eq. (3) with respect to the explanatory variables, we identify the marginal effects of individual characteristics on the probabilities, which can be estimated as:
Since this assumption is critical, the validity test for IIA is required, for which we used the test developed by Hausman and McFadden (1984), which suggests that if a subset of the choice set is truly irrelevant, then omitting it from the model altogether will not change parameter estimates systematically (Greene 2012). The test result showed no evidence of deviation from the IIA assumption. Hence, there is no need for the trial of other alternative models such as nested logit, random parameter (mixed) logit, or multinomial probit in this study. Descriptions of Variables We identified four major channels through which smallholder farm households in Dawuro zone sold maize during the 2017/18 cropping season: (i) direct sale to consumers in the local market,Footnote 3 (ii) retailers who purchase in the main market,Footnote 4 (iii) wholesalers who purchase in nearby towns,Footnote 5 and (iv) collectors who purchase at the farm gate. Their respective marketing channels are: Producer → Consumer Producer → Retailer → Consumer Producer → Wholesaler → Retailer → Consumer Producer → Collector → Wholesaler → Retailer → Consumer Hence, the response variable in the empirical estimation is maize farmers' marketing channel options i to iv. For ease of explanation, here, we present the four patterns of maize supply chain in the order of the increasing numbers of intermediaries from (i) to (iv). However, the priority of maize farm households' marketing channel choice depends on their utility maximization with consideration for a combination of market margins, amount sold, transaction costs, pests, and disease resistance of maize after harvest, gender of decision-makers, trust of buyers, among others (Table 1 in Appendix). The results of the pooled sample show that about 38% of the sample households sold their maize directly to consumers in the local market (Table 2 in Appendix). About 19% and 20% sold maize through retailer and wholesaler channels, respectively, while 23% sold through the collectors. About 35% of male decision-making households sold maize through collectors whereas 44% and 39% of female and joint decision-making households sold it directly to consumers in the local market, respectively. On average, the age of household heads was 42.6 years with the highest average age in households selling maize through wholesalers (43.7 years), followed by those through collectors (43.5 years). In comparison, households that directly sell maize to consumers in the local market owned on average fewer livestock, had a lower rate of improved maize seed application, and allocated a smaller area of their farmland to maize production. The average age of the household head is highest among joint decision-maker households, particularly those selling maize through collectors, followed by those through consumers (Table 3 in Appendix), and is lowest among female decision-making households. The average years of the household head's education is higher among female decision-making households than those of male and joint decision-makers. The average number of adult male and adult female family members is highest in male decision-making households, followed by joint decision-making households, and lowest in female decision-making households. The average number of livestock owned is highest in male decision-making households, followed by joint and female decision-making households. 
On average, households using improved maize seed, the area of farmland for maize, and the amount of maize sold to the market are higher in male decision-making households than female and joint decision-making farm households. However the average unit price received from maize sale is highest in joint decision-making households while the average unit marketing cost incurred by maize farm households is highest in female decision-making households. Econometric Results To estimate the MNL model first we began by normalizing one category, usually referred to as the base category, in this case 'consumers in the local market' since most sampled farm households choose direct sale to consumers in the local market. We then tested for potential endogeneity or any situation in which explanatory variables are correlated with residuals. Other studies have suggested that access to credit service, market information, contact with extension agents, and participation in social events be assumed as endogenous variables in a choice model (e.g., Deressa et al. 2009; Mmbando et al. 2016). For our test we adopted a two-stage approach that involves the use of predicted values of potentially endogenous variables (Wooldridge 2012). Probit models for access to credit, market information, contact with extension agents, and participation in social events are specified in the first stage. We then used the predicted values of these variables in the second stage of estimating factors affecting farm households' choice of maize marketing channel. The test failed to reject the null hypothesis, suggesting there is no significant correlation between explanatory variables and residuals. Subsequently, we fitted the Ordinary Least Square model and tested for multicollinearity by using the Variance Inflation Factor (VIF). The VIFs for all the explanatory variables are less than 10 (1.02–1.64), which suggests that there is no serious multicollinearity problem among the explanatory variables included in the model. Finally, we ran the model and tested for the validity of the independence of irrelevant alternative (IIA) assumption by using the Hausman specification test. The test failed to reject the null hypothesis of the independence of the maize marketing channel choice options, suggesting that the MNL specification is appropriate for modeling the maize marketing channel choice of the smallholder maize farm households. We ran pooled and separate sample models to determine the effect of gender on maize marketing channel choice. In both models, the likelihood ratios indicated by statistics are significant at 1.0% probability, suggesting that both models have a strong explanatory power. The pooled model explains 23.32% of the variation in the market choice among the sampled maize producers. The separate model explains 18.56% of variation in the market choice among male, 48.49% among female, and 39.67% among joint decision-making households, respectively. As indicated earlier, the coefficient estimates of the MNL model provide only the direction of the effect of the regressor variables on the response variable, i.e., estimated coefficients do not represent the actual magnitude of change or probabilities. Thus, we report and discuss the average marginal effects from the MNL, which helps measure the expected change in the probability of a particular market channel choice being made with respect to a unit change in regressor variables. 
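As an illustration of the estimation workflow described above (MNL fit, average marginal effects, and VIF screening), a sketch in Python follows. It is not the authors' code; the variable and file names are hypothetical, and the statsmodels library is used as one possible implementation.

```python
# Illustrative sketch of the estimation steps described above (not the authors' code).
# Assumes a DataFrame with a categorical outcome `channel`
# (0 = consumers/local market as base, 1 = retailer, 2 = wholesaler, 3 = collector)
# and hypothetical explanatory variables.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X_vars = ["age", "education", "adult_males", "adult_females",
          "livestock", "improved_seed", "maize_area", "qty_sold"]

df = pd.read_csv("maize_households.csv")      # hypothetical file
X = sm.add_constant(df[X_vars])
y = df["channel"]

# Multicollinearity screening with VIF (ignore the constant's row;
# values below 10 are usually taken as acceptable)
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif)

# Multinomial logit; the first category (local market) is the reference outcome
mnl = sm.MNLogit(y, X).fit(method="newton", maxiter=100)
print(mnl.summary())

# Average marginal effects, which are easier to interpret than raw coefficients
ame = mnl.get_margeff(at="overall")
print(ame.summary())
```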
In all the cases, the estimated coefficients should be compared with the base category of direct sale to consumers in the local market. In the pooled sample model, the gender of female and joint decision-makers is included to examine the relative positions of an individual or a joint decision-making pair within a farm household while the gender of male decision-makers is considered as a reference group for the analysis. The result indicates that female decision-makers reduce the probability that maize producing households would sell maize to collectors at the farm gate by 9.1% (Table 4 in Appendix). Meanwhile, they had a higher probability of selling maize in the local market by 13.7%. There are three possible explanations for this result in view of the gender-specific constraints and women's marketing behaviors in the study area. First, women dominate local market sales by negotiating with buyers who are themselves often local women buying maize for their family consumption. The women peddlers in the local village market have strong social bonds with customers living in the neighborhood. They frequently visit a local market (usually once a week) as they do not have time to travel to the main market located far from the local community, given that they spend significant time each day in obligatory household activities. Second, most women prefer to occasionally sell small quantities in the local market while most men prefer to sell one-shot bulk quantities in more distant markets (Aregu et al. 2011). This behavior is mainly linked to the price volatility of maize in the study area as confirmed by agricultural experts, community elders, and farmers; they could be wary of incurring a significant loss by selling in bulk when prices drop. Considering women's responsibility for family sustainability, they may wish to minimize the risk. This explanation can thus be linked to the generally more risk adverse behavior of women than men (Eckel and Grossman 2008). Lastly, women producers could be less visited by collectors, who may assume that men are primary agricultural producers in the village. This explanation is linked to the notion that men are more likely approached by traders than women for their agricultural products (Barham and Chitemi 2009). Another result is that joint decision-making households are 13.8% less likely to sell maize to wholesalers in their nearby town. Conversely, they are 13.3% more likely to sell maize in the local market. According to Nyikahadzoi et al. (2010), collective marketing reduces the cost of getting the product to markets and helps improve farmers' bargaining power. The result may thus suggest that joint decision-making households selling their maize products in the local market tend to incur lower transaction costs than do either male or female decision-making households. Another explanation may be related to the tendency for women to play a leading role when they make local marketing decisions. This is exemplified by the following interview narrative provided by a male farmer: When I sell maize in the local market, I always go there with my wife to help her for transportation and, most importantly, security. I always prefer not to engage in sales activities because most of the buyers there are women and they always charge a lower price for me by saying, 'you are man, you are the main producer, so do not get involved in this women's activity'. As a cultural norm, it is no good for us to argue with a woman in the local market. 
Hence, I let my wife talk to them and she easily negotiates with them for better prices. His wife in turn confirmed her husband's view: "I am in charge of selling maize in the local market. However, we [wife and husband] are handling money from the sales together." A further explanation of women's relatively louder voice in the joint decision-making households could be linked to the price volatility of maize as well as quantitative requirements of the wholesale market. Wholesalers usually require large quantities of produce for purchase although the future price of maize is usually unpredictable for smallholder farmers. Thus, men and women who make joint decisions tend to sell maize in the local market as it caters for much smaller quantities than the wholesale market. In this connection, women who have stronger bonds with customers in the local market are better positioned to take advantage of such relationships for their maize marketing, hence helping to maximize their household economic welfare. The number of adult male and female family members influences the choice of maize marketing channel. The addition of an adult female in the household decreases the probability by 2.8% that the household would sell maize to retailers in the main market and increases the probability of selling it to consumers in the local market by 2% (cf. Aregu et al. 2011). The addition of an adult male in the household increases the probability by 2.6% that they sell maize to collectors at the farm gate; however, they would decrease the probability of selling to consumers in the local market by 1.7% (cf. Amani 2014 for Burkina Faso and Rwanda). Growing improved maize varieties increases the probability that producers sell maize to retailers in the main market and to collectors at the farm gate by 5.5% each; however, it decreases the probability by 6.3% (as compared to growing traditional maize varieties) that they sell them to wholesalers. These results are linked to the quality of improved maize seeds used by farmers and the storage capacity of traders involved. According to the Ethiopian Seed Association (2014), a lack of quality seed is one of the critical constraints to increasing production and productivity in Ethiopia (see also Gebre et al. 2019). On the other hand, the FAO (2015) and World Bank (2018) note that maize traders in Ethiopia face constraints in the capacity of their storage facilities. Maize traders in our study lack capital to invest in large modern maize storage. Compared to other traders, wholesalers are able to store maize for much longer, up to 2 ~ 3 months depending on the market price. Our results indicate that producers are more likely to sell improved maize to retailers or collectors than wholesalers as retailers and collectors sell products immediately after they purchase them. In the study area there are no quality standards nor grades in the maize market, and collectors mix improved maize with traditional maize varieties. Most wholesalers receive maize from collectors, often in such a mixture. The area of farmland allocated to maize production increases the probability that the farm household sells maize to retailers in the main market rather while it decreases the probability of selling directly to consumers in the local market (cf. Amaya and Alwayng 2011). The amount of maize households sell influences their choice of marketing channel. 
Our results indicate that a one-quintal increase in the amount of maize sold increases the probability by 12% that the household sells maize to collectors at the farm gate (with 'sale to consumers in the local market' being the base market for comparison). This could be related both to the relatively small amounts required in the local market and to poor access to roads and trucks to transport produce to market. However, it decreases the probability by 1.9% that the household sells maize to wholesalers. An increase in the price of maize increases the probability that farm households sell maize to consumers in the local market while it decreases the probability that they sell maize to wholesalers or collectors. Since rational farmers would prefer to sell produce in the market where they can reap the most benefit (Mmbando et al. 2016), an increase in the cost of marketing increases the probability that producers sell maize to wholesalers or collectors rather than directly to consumers in the local market (cf. Masuku et al. 2001). The age of the household head increases the probability that female and joint decision-making households sell maize to collectors at the farm gate (Table 5). Older farmers (who are most likely the household head) sell farm produce to a closer market (cf. Amaya and Alwang 2011; Mmbando et al. 2016). As age increases, they lose interest in traveling (even to the local market) and shift to focus on selling produce at the farm gate. The number of adult females in male decision-making households is negatively associated with selling to retailers in the main market and positively associated with selling to consumers in the local market. In female decision-making households, the number of adult females is positively associated with selling directly to consumers in the local market but negatively associated with selling to wholesalers. In joint decision-making households, an increase in the number of adult females increases the probability by 5.5% that joint decision-makers sell directly to consumers in the local market. The finding might be related to household production capacity. For some agricultural activities such as plowing with oxen and planting, male and female labour is not interchangeable. Plowing with oxen is culturally considered a male task in the study area (Gebre et al. 2020). Thus, given the gendered division of labour for agricultural production, a higher number of adult females in the household (with the number of working-age adults in the household held constant) may lead to diminished household farm output. This in turn leads to less marketable produce. Women who prefer to sell smaller quantities are more likely to sell in the local market. Given the gender division of labour in agriculture, a higher number of adult male family members could provide more household production. In all three decision-making types of households, an increased number of adult men leads to more sales to collectors. In contrast, an increase in the number of adult males in the male decision-making household decreases the probability of household maize sales to consumers by 3.7% since local market exchanges are dominated by women. In our sample, the majority of female decision-makers are household heads. The adult males in these households are usually their adult sons, who may prefer to sell maize to wholesalers and claim the income. 
Our results show that an adult male added to a female decision-making household increases the probability of household maize sale to consumers or collectors by 4.3% and 5.7%, respectively, whereas the probability of selling maize to wholesalers would decrease by 10.1%. Similarly, an increase in the number of adult males in joint decision-making households increases the possibility of conflict between men and women over selling maize and controlling the income. A woman who jointly makes decisions with a man in the household would be unwilling for him to sell maize in distant markets. Our results indicate that in fact they generally agree to sell to collectors at the farm gate and share control over the income from the sale. The number of livestock owned by male decision-making households increases the probability that they would sell maize to wholesalers in town (cf. Aregu et al. 2011). Planting improved maize varieties decreases the probability that male decision-making households sell to wholesalers in town and increases the probability that they sell to collectors at their farm gate. For female and joint decision-making households, growing improved maize varieties increases the probability that they sell maize to collectors at the farm gate and retailers in the main market, respectively. Our results also indicate that growing improved varieties results in a decrease in the probability that female and joint decision-makers sell maize to consumers, by 12.8% and 11.5%, respectively; instead, they tend to sell to collectors at the farm gate. However, collectors in the study area are aware of the farmers' lack of storage facilities and set a lower price than the market in order to take advantage of the farmers' need to sell directly after harvest. They also set prices according to their social relationship with the farmer. Male decision-makers with good connections to maize collectors may receive a relatively higher price than female decision-makers. Joint decision-makers are more likely to sell to retailers in the main market where prices are higher. The area of farmland allocated to maize increases the probability that female and joint decision-making households sell maize to collectors at the farm gate (cf. Amaya and Alwang 2011) since they usually lack trucks to transport their produce to main/distant markets. Access to credit services increases the probability that male and joint decision-making households sell maize to collectors at the farm gate. They may receive credit from collectors in advance of maize sales, as nationwide evidence suggests (Rashid et al. 2010; Abate et al. 2015; World Bank 2018). Female decision-makers may fear the risk of debt default associated with receiving advance credit from collectors and hence rely on selling directly to consumers in the local market. Conclusion and Recommendation We explore the factors that affect marketing channel choice by comparing three gender-based decision-making household categories: male, female, and joint. Our econometric analyses have four key findings. 
First, compared with male decision-making households, female decision-making households have a lower probability of selling maize to collectors at the farm gate, and a higher probability of selling to consumers in the local market, for three possible reasons: 1) female sellers at the local market have greater bargaining power than men, as buyers there prefer to buy from females following customary gender norms; 2) women tend to prefer occasionally selling small quantities in the local market to selling bulk quantities to collectors at the farm gate; 3) females are visited less often by collectors than males, as collectors, following customary norms, may assume that men are the primary producers or decision makers in the household. Second, joint decision-making households are less likely to sell maize to wholesalers, and more likely to sell to consumers in the local market. In the study area, women tend to be more decisive than men in joint decision-making when choosing the venue for their maize sale between the local market and the wholesale market, because wholesalers mainly engage in bulk purchasing while the future market price of maize is too unpredictable for maize producers to ascertain exactly how much they can sell at one time. Accordingly, joint decision-makers tend to sell maize in the local market, which caters to the buying of smaller quantities and where female peddlers maintain dominant customary market exchange relationships with other women. Since women in both female and joint decision-making households are largely in charge of maize sales in the local market, there is a need for policies that aim to support women in accessing new output markets for maize. Moreover, female farm households are less successful than male farm households at searching for and accessing new market opportunities for their farm outputs, as women are obliged to engage in both agricultural productive and household maintenance activities. Third, in all three decision-making types of households, an increase in the number of adult females in the household increases the probability of selling maize in the local market while an increased number of adult men leads to more sales to collectors at the farm gate. This is related to the gendered division of labour for agricultural production, since a larger farm output for sale is associated with more available male labour. Households with more maize output for sale are more likely to sell it to collectors who buy in bulk, whereas households with less maize output for sale are more likely to sell it in the local market in smaller amounts. Fourth, male and female decision-making households that grow improved maize varieties are more likely to sell to collectors at the farm gate due to the quality of improved maize seed and the storage capacity of traders in the study area. Improved maize seeds distributed to the farmers of the study area are susceptible to insect and disease pests, and policies and programs should be directed at developing and disseminating insect- and disease-resistant maize varieties. Further, policies are needed to promote investment in modern storage facilities, as maize traders in the study area lack the capital to do so. The smallest administrative unit, followed by the Woreda (district). Principal Component Analysis is a dimensionality-reduction method that is often used to reduce the dimensionality of large data sets by transforming a large set of variables into a smaller one which still contains most of the information of the large set. 
'Local markets' are in the vicinity of customers' households and are smaller than the main market. 'Main markets' in small towns and main cities are where traders such as retailers regularly conduct business, and where a larger amount of produce is available. 'Nearby town' refers to the main city or a main town of the district where most of the wholesalers live, such as Gessa Chare, Waka, Essera (Bale and Dali), Tocha (Wara & Aba), and Tarcha towns. Hence, this study considered only one town from each district, although there is more than one main market in each district of the study area. Abate G. T., Kasa L., Minot N., Rashid S., Lemma S., and Warner J. (2015). Maize Value Chain in Ethiopia: Structure, Conduct, and Performance. International Food Policy Research Institute (IFPRI), Washington DC. Markets, Trade, and Institutions Division. Abebe Z.T. (2014). The Potentials of Local Institutions for Sustainable Rural Livelihoods: The Case of Farming Households in Dawuro Zone, Ethiopia. Public Policy and Administration Review, 2(2), pp. 95–129. Agarwal, B. (1997). 'Bargaining' and Gender Relations: Within and Beyond the Household. Feminist Economics, 3(1), 1–51. Amani, S. (2014). Smallholder Farmers' Marketing Choices. P4P Global Learning Series. Management Systems International. https://documents.wfp.org/stellent/groups/public/documents/reports/WFP285379.pdf?_ga=1.113473857.77531865.1488831444. Accessed on September 15, 2019. Amaya, N. and Alwang, J. (2011). Access to information and farmers' market choice: The case of potato in highland Bolivia. Journal of Agriculture, Food Systems, and Community Development, 1(4), 35–53. Aregu, L., Puskur, R. and Bishop-Sambrook, C. (2011). The role of gender in crop value chain in Ethiopia. Paper presented at the Gender and Market Oriented Agriculture (AgriGender 2011) Workshop, Addis Ababa, Ethiopia, 31st January–2nd February 2011. Nairobi, Kenya: ILRI. Arinloye, D.-D. A. A., Pascucci, S., Linnemann, A. R., Coulibaly, O. N., Hagelaar, G., & Omta, O. S. W. F. (2014). Marketing Channel Selection by Smallholder Farmers. Journal of Food Products Marketing, 21(4), 337–357. https://doi.org/10.1080/10454446.2013.856052 Barham, J., & Chitemi, C. (2009). Collective action initiatives to improve marketing performance: Lessons from farmer groups in Tanzania. Food Policy, 34(1), 53–59. Berhane, G., Paulos Z., Tafere, K., and Tamiru S. (2011). Food Grain Consumption and Calorie Intake Patterns in Ethiopia. Ethiopia Strategic Support Program II Working Paper 23. Addis Ababa, Ethiopia: International Food Policy Research Institute. Central Statistical Agency (CSA) of Ethiopia (2015). Agricultural sample survey: Time-series data for national and regional level (from 1995/96 (1988 E.C) to 2014/15 (2007 E.C)). http://www.csa.gov.et/images/general/news/agss_time_series%20report. Accessed on August 10, 2019. Central Statistical Agency of Ethiopia (2017). Agricultural sample survey of 2016/2017 (2009 E.C). Area and production of major crops. Report on private peasant holdings, meher season. Vol. I. http://www.csa.gov.et/ehioinfo-internal?download=771:report-on-area-and-production-of-major-crops-2009-e-c-meher-season. De Brauw, A. (2015). Gender, control, and crop choice in northern Mozambique. Agricultural Economics, 46(3), 435–448. Deressa, T. T., Hassan, R. M., Ringler, C., Alemu, T., & Yesuf, M. (2009). Determinants of farmers' choice of adaptation methods to climate change in the Nile Basin of Ethiopia. Global Environmental Change, 19(2), 248–255. 
https://doi.org/10.1016/j.gloenvcha.2009.01.002 Doss, C. (2002). Men's crops? Women's crops? The gender patterns of cropping in Ghana. World Development, 30(11), 1987-2000. Eckel C.C & Grossman J. P (2008). Men, women and Risk Aversion: Experimental Evidence. Handbook of Experimental Economics Results, 1(113);1061–1073. Eerdewijk V. A., and Danielsen C., (2015). Gender Matters in Farm Power. Royal Tropical Institute. Ethiopian Seed Association (2014). Hybrid maize seed production manual. https://ethiopianseedassociation.files.wordpress.com/2015/05/hybrid-maize-seed-production-manual.pdf. Accessed on May 17, 2019. Fafchamps M, Hill RV (2005) Selling at the farmgate or traveling to market. Am J Agric Econ 87(3):717–734 Gebre G. G., Isoda, H., Rahut, B. D., Amekawa, Y., & Nomura, H. (2020). Gender Gaps in Market Participation among Individual and Joint Decision-Making Farm Households: Evidence from Southern Ethiopia. European Journal of Development Research. Gebre G.G., Isoda H., Rahut B. D., Amekawa Y., & Nomura H., 2019. Gender differences in the adoption of agricultural technology: The case of improved maize varieties in southern Ethiopia. Women's Studies International Forum. 76, 102264. https://doi.org/10.1016/j.wsif.2019.102264 Greene W. H (2012) Econometric analysis (7th (International) ed.), 978–0–273–75356–8, New York University. Pearson. Hausman, J., & McFadden, D. (1984). Specification Tests for the Multinomial Logit Model. Econometrica, 52(5), 1219. https://doi.org/10.2307/1910997 Hill R.V., Vigneri M. (2014) Mainstreaming Gender Sensitivity in Cash Crop Market Supply Chains. In: Quisumbing A., Meinzen-Dick R., Raney T., Croppenstedt A., Behrman J., Peterman A. (eds) Gender in Agriculture. Springer, Dordrecht. Masuku, M. B., Makhura, M. T., & Rwelarmira, J. K. (2001). Factors Affecting Marketing Decisions in the Maize Supply Chain among Smallholder Farmers in Swaziland. Agrekon, 40(4), 698–707. https://doi.org/10.1080/03031853.2001.952498 Meinzen-Dick, R., Quisumbing A., Julia Behrman J., Biermayr-Jenzano P., Wilde V., Noordeloos M., Ragasa C., & Beintema N., (2011). Engendering Agricultural Research, Development, and Extension: Priority Setting, Research &Development, Extension, Adoption, Evaluation. Washington: IFPRI. Mmbando, F., Wale, E., Baiyegunhi, L., & Darroch, M. (2016). The Choice of Marketing Channel by Maize and Pigeonpea Smallholder Farmers: Evidence from the Northern and Eastern Zones of Tanzania. Agrekon, 55(3), 254–277. https://doi.org/10.1080/03031853.2016.1203803 Morrison A, Raju D, Sinha N (2007) Gender equality, poverty, and economic growth. Policy research working paper 4349. World Bank, Washington, DC. Musara, J. P., Musemwa, L., Mutenje, M., Mushunje, A., & Pfukwa, C. (2018). Market participation and marketing channel preferences by small scale sorghum farmers in semi-arid Zimbabwe. Agrekon, 57(1), 64–77. https://doi.org/10.1080/03031853.2018.1454334 Ndoro, J. T., Mudhara, M., & Chimonyo, M. (2015). Farmers' choice of cattle marketing channels under transaction cost in rural South Africa: a multinomial logit model. African Journal of Range & Forage Science, 32(4), 243–252. https://doi.org/10.2989/10220119.2014.959056 Nyikahadzoi K., Siziba Sh., Sagary N., Njuki J., & Adekunle A.A. (2010). Promoting Effective Collective Marketing in the Context of Integrated Agricultural Research for Development in Sub Saharan Africa. Learning Publics Journal of Agriculture and Environmental Studies Vol 2 (1). Panda, K. R. 
and Sreekumar (2012). Marketing Channel Choice and Marketing Efficiency Assessment in Agribusiness. Journal of International Food & Agribusiness Marketing, 24(3), 213–230. https://doi.org/10.1080/08974438.2012.691812 Rashid S., Getnet K. and Lemma S. (2010). Maize Value Chain Potential in Ethiopia: Constraints and opportunities for enhancing the system. International Food Policy Research Institute (IFPRI). http://ebrary.ifpri.org/utils/getfile/collection/p15738coll2/id/5371/filename/5372.pdf. Accessed on June 23, 2019. The Food and Agriculture Organization (FAO) (2011). The state of food and agriculture. Women in Agriculture: Closing the Gender Gap for Development. Rome. The Food and Agriculture Organization (FAO) (2015). Analysis of price incentives for Maize in Ethiopia. Technical notes series, MAFAP, by Wakeyo M.B., Lanos B., Rome. http://www.fao.org/3/a-i4527e.pdf. Accessed on February 12, 201. Vigneri M, Holmes R (2009). When being more productive still doesn't pay: gender inequality and socio-economic constraints in Ghana's cocoa sector. Paper presented at the FAO-IFAD-ILO workshop on gaps, trends, and current research in gender dimensions of agricultural and rural employment, Rome. Wilson, G. (1991). Thoughts on the Cooperative Conflict Model of the Household in Relation to Economic Method. IDS Bulletin, 22(1), 31–36. https://doi.org/10.1111/j.1759-5436.1991.mp22001005.x Wooldridge, J. M. (2012). Introductory Econometrics: A Modern Approach. 4th ed. World Bank (2018). Cereal Market Performance in Ethiopia: Policy implications for improving investments in maize and wheat value chains. We would like to express our sincere gratitude to the International Maize and Wheat Improvement Center (CIMMYT) for supporting our study through the Stress Tolerant Maize for Africa (STMA) project, which is funded by the Bill and Melinda Gates Foundation (Grant No. OPP1134248). Faculty of Environment, Gender and Development Studies, Hawassa University, Hawassa, Ethiopia: Girma Gezimu Gebre. Faculty of Agriculture, Kyushu University, Kyushu, Japan: Hiroshi Isoda, Hisako Nomura & Takaaki Watanabe. College of International Relations, Ritsumeikan University, Kyoto, Japan: Yuichiro Amekawa. Asian Development Bank Institute, Tokyo, Japan: Dil Bahadur Rahut. Department of Agricultural and Resource Economics, Graduate School of Bioresource and Bioenvironmental Sciences, Kyushu University, Fukuoka, Japan. International Maize and Wheat Improvement Center (CIMMYT), El Batan, Mexico. Correspondence to Girma Gezimu Gebre. Interviewees were informed about the research goals and they signed a form of consent. 
Gebre, G.G., Isoda, H., Amekawa, Y. et al. Gender-based Decision Making in Marketing Channel Choice – Evidence of Maize Supply Chains in Southern Ethiopia. Hum Ecol 49, 443–451 (2021). https://doi.org/10.1007/s10745-021-00252-x
CommonCrawl
How to Use the Implicit Differentiation Calculator? How the Derivative Calculator Works Implicit Differentiation What is Implicit Differentiation? Implicit Derivative Implicit Differentiation and Chain Rule How to Do Implicit Differentiation? The process is explained with a step-by-step explanation. Implicit Differentiation Formula Important Notes on Implicit Differentiation: The Implicit Differentiation Calculator displays the derivative of a given function with respect to a variable. STUDYQUERIES's Implicit Differentiation Calculator makes calculations faster, and the derivative of an implicit function is displayed in a fraction of a second. To use the Implicit Differentiation calculator, follow these steps: Step 1: Enter the equation in the given input field Step 2: Click "Submit" to get the derivative of the function Step 3: The derivative will be displayed in a new window The following section explains how the Derivative Calculator works for those with a technical background. A parser analyzes the mathematical function first. Specifically, it converts it into a form that can be understood by a computer, namely a tree. In order to do this, the Derivative Calculator must respect the order of operations. It is a specialty of mathematical expressions that sometimes the multiplication sign is omitted; for example, we write "5x" instead of "5·x". The Derivative Calculator must detect these cases and insert the multiplication sign. JavaScript is used to implement the parser, which is based on the Shunting-yard algorithm. Transforming the tree into LaTeX code allows for quick feedback while typing. MathJax handles the display in the browser. By clicking the "Go!" button, the Derivative Calculator sends the mathematical function and the settings (differentiation variable and order) to the server, where they are analyzed again. The function is now transformed into a form that the computer algebra system Maxima can understand. Maxima actually computes the derivative of the mathematical function. According to the commonly known differentiation rules, it applies a number of rules to simplify the function and calculate the derivatives. Maxima's output is transformed once again into LaTeX and then presented to the user. Displaying the steps of the calculation is more complicated, since the Derivative Calculator isn't entirely dependent on Maxima for this. The derivatives must be calculated manually step by step. JavaScript has been used to implement the differentiation rules (product rule, quotient rule, chain rule, …). There is also a table of derivative functions for trigonometric functions and the square root, logarithm, and exponential functions. Each calculation step involves a differentiation operation or rewrite. For example, constant factors are removed from differentiation operations and sums are split up (sum rule). This is done using Maxima, as well as general simplifications. To enable highlighting, the LaTeX representations of the resulting mathematical expressions are tagged in the HTML code. The "Check answer" feature has to determine whether two mathematical expressions are equivalent. Utilizing Maxima, their difference is computed and simplified as much as possible. For instance, this involves writing trigonometric and hyperbolic functions in their exponential forms. The task is solved if it can be demonstrated that the difference simplifies to zero. If not, a probabilistic algorithm is applied that evaluates and compares both functions at random locations. 
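To make that last step concrete, here is a minimal Python/SymPy sketch of such a probabilistic equivalence check. It is only an analogy: the calculator described above relies on Maxima and JavaScript, and the tolerance, sampling range and number of samples below are arbitrary choices.

import random
import sympy as sp

def expressions_equivalent(expr_a, expr_b, n_samples=20, tol=1e-9):
    """Decide whether two expressions are (probably) equivalent."""
    a, b = sp.sympify(expr_a), sp.sympify(expr_b)
    diff = sp.simplify(a - b)
    if diff == 0:
        return True                                   # proven equal symbolically
    free = sorted(diff.free_symbols, key=lambda s: s.name)
    for _ in range(n_samples):
        point = {s: random.uniform(-3, 3) for s in free}
        if abs(complex(diff.evalf(subs=point))) > tol:
            return False                              # a counterexample was found
    return True                                       # equal at every sampled point

print(expressions_equivalent("2*sin(x)*cos(x)", "sin(2*x)"))   # True
print(expressions_equivalent("x**2", "x**3"))                  # False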
Implicit differentiation is the process of finding the derivative of an implicit function. There are two types of functions: explicit functions and implicit functions. An explicit function is of the form \(y = f(x)\), with the dependent variable \(y\) on one side of the equation. But it is not always necessary to have \(y\) on one side of the equation. For example, consider the following functions: $$x^2 + y = 2$$ $$xy + \sin (xy) = 0$$ In the first case, though \(y\) is not alone on one side of the equation, we can still solve it to write \(y = 2 - x^2\), and it is an explicit function. But in the second case, we cannot solve the equation easily for \(y\); this type of function is called an implicit function, and on this page we are going to see how to find the derivative of an implicit function by using the process of implicit differentiation. Implicit differentiation is the process of differentiating an implicit function. An implicit function is a function that can be expressed as \(f(x, y) = 0\), i.e., it cannot easily be solved for \(y\), or it cannot easily be put into the form \(y = f(x)\). Let us consider an example of finding \(\mathbf{\frac{dy}{dx}}\) given the function \(xy = 5\). Let us find \(\mathbf{\frac{dy}{dx}}\) in two ways: solving it for \(y\), and without solving it for \(y\). Method 1: $$xy = 5$$ $$y = \frac{5}{x}$$ $$y = 5(x^{-1})$$ Differentiating both sides with respect to \(x\): \(\mathbf{\frac{dy}{dx} = 5(-1x^{-2}) = \frac{-5}{x^2}}\) Method 2: Differentiating both sides with respect to \(x\): $$\mathbf{\frac{d}{dx}(xy) = \frac{d}{dx}(5)}$$ Using the product rule on the left side, $$x \frac{d}{dx}(y) + y \frac{d}{dx}(x) = \frac{d}{dx}(5)$$ $$x (\frac{dy}{dx}) + y (1) = 0$$ $$x(\frac{dy}{dx}) = -y$$ $$\frac{dy}{dx} = -\frac{y}{x}$$ From \(xy = 5\), we can write \(y = \frac{5}{x}\). $$\frac{dy}{dx} = -\frac{(\frac{5}{x})}{x} = \frac{-5}{x^2}$$ In Method 1, we converted the implicit function into an explicit function and found the derivative using the power rule. But in Method 2, we differentiated both sides with respect to \(x\) by considering \(y\) as a function of \(x\), and this type of differentiation is called implicit differentiation. But for some functions, like \(xy + \sin (xy) = 0\), writing them as an explicit function (Method 1) is not possible. In such cases, implicit differentiation (Method 2) is the only way to find the derivative. The derivative that is found by using the process of implicit differentiation is called the implicit derivative. For example, the derivative \(\frac{dy}{dx}\) found in Method 2 (in the above example) at first was \(\frac{dy}{dx} = \frac{-y}{x}\), and it is called the implicit derivative. An implicit derivative is usually in terms of both \(x\) and \(y\). The chain rule of differentiation plays an important role while finding the derivative of an implicit function. The chain rule says $$\frac{d}{dx}f(g(x)) = f'(g(x)) \cdot g'(x)$$ Whenever we come across the derivative of \(y\) terms with respect to \(x\), the chain rule comes into play, and because of the chain rule we multiply the actual derivative (given by the derivative formulas) by \(\frac{dy}{dx}\). Here are some examples to understand the role of the chain rule in implicit differentiation. 
$$\frac{d}{dx}(y^2) = 2y \frac{dy}{dx}$$ $$\frac{d}{dx}(\sin y) = \cos y \frac{dy}{dx}$$ $$\frac{d}{dx}(\ln y) = \frac{1}{y}\cdot\frac{dy}{dx}$$ $$\frac{d}{dx}(\tan^{-1}y) = \frac{1}{1 + y^2} \cdot \frac{dy}{dx}$$ In other words, wherever \(y\) is being differentiated, write \(\frac{dy}{dx}\) there as well. It is suggested to go through these examples again and again as they are very helpful in doing implicit differentiation. In the process of implicit differentiation, we cannot directly start with \(\frac{dy}{dx}\), as an implicit function is not of the form \(y = f(x)\); instead, it is of the form \(f(x, y) = 0\). Note that we should be aware of the derivative rules such as the power rule, product rule, quotient rule, chain rule, etc., before learning the process of implicit differentiation. Here is the flowchart of the steps for performing implicit differentiation. Now, these steps are explained with an example where we are going to find the implicit derivative \(\frac{dy}{dx}\) if the function is \(y + \sin y = \sin x\). Step 1: Differentiate every term on both sides with respect to \(x\). Then we get \(\frac{d}{dx}(y) + \frac{d}{dx}(\sin y) = \frac{d}{dx}(\sin x)\). Step 2: Apply the derivative formulas to find the derivatives and also apply the chain rule. (All \(x\) terms should be directly differentiated using the derivative formulas; but while differentiating the \(y\) terms, multiply the actual derivative by \(\frac{dy}{dx}\).) In this example, \(\frac{d}{dx} (\sin x) = \cos x\) whereas \(\frac{d}{dx} (\sin y) = \cos y (\frac{dy}{dx})\). Then the above step becomes: \(\frac{dy}{dx} + (\cos y) (\frac{dy}{dx}) = \cos x\) Step 3: Solve it for \(\frac{dy}{dx}\). Taking \(\frac{dy}{dx}\) as the common factor: \(\frac{dy}{dx} (1 + \cos y) = \cos x\) \(\frac{dy}{dx} = \frac{\cos x}{1 + \cos y}\) This is the implicit derivative. We have seen the steps to perform implicit differentiation. Did we come across any particular formula along the way? No! There is no particular formula for implicit differentiation; rather, we perform the steps explained in the above flowchart to find the implicit derivative. Implicit differentiation is the process of finding \(\mathbf{\frac{dy}{dx}}\) when the function is of the form \(f(x, y) = 0\). To find the implicit derivative \(\mathbf{\frac{dy}{dx}}\), just differentiate on both sides and solve for \(\mathbf{\frac{dy}{dx}}\). But in this process, write \(\mathbf{\frac{dy}{dx}}\) wherever we are differentiating \(y\). All derivative formulas and techniques are to be used in the process of implicit differentiation as well. How do you calculate implicit differentiation? Take the derivative of every variable. Whenever you take the derivative of \(y\), multiply by \(\mathbf{\frac{dy}{dx}}\). Solve the resulting equation for \(\mathbf{\frac{dy}{dx}}\). Does Mathway do implicit differentiation? Enter the function you want to find the derivative of in the editor. The Derivative Calculator supports solving first, second, ..., fourth derivatives, as well as implicit differentiation and finding the zeros/roots. You can also get a better visual and understanding of the function by using our graphing tool. What is an implicit differentiation example? For example, \(x^2+y^2=1\). Implicit differentiation helps us find \(\mathbf{\frac{dy}{dx}}\) even for relationships like that. This is done using the chain rule and viewing \(y\) as an implicit function of \(x\). 
For example, according to the chain rule, the derivative of \(y^2\) would be \(2y⋅\mathbf{\frac{dy}{dx}}\). How do you calculate differentiation? Some of the general differentiation formulas are: Power Rule: \(\frac{d}{dx}(x^n) = nx^{n-1}\). Derivative of a constant \(a\): \(\frac{d}{dx}(a) = 0\).
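For readers who want to check such results programmatically, the following is a small Python/SymPy sketch (it is not part of the calculator described above). It treats \(y\) as a function of \(x\) and reproduces the worked example \(y + \sin y = \sin x\) as well as the FAQ example \(x^2 + y^2 = 1\).

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)        # treat y as an implicit function of x
dy = sp.Derivative(y, x)

def implicit_derivative(lhs, rhs):
    """Differentiate lhs = rhs with respect to x and solve for dy/dx."""
    return sp.solve(sp.diff(lhs - rhs, x), dy)[0]

print(implicit_derivative(y + sp.sin(y), sp.sin(x)))   # cos(x)/(cos(y(x)) + 1)
print(implicit_derivative(x**2 + y**2, 1))             # -x/y(x)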
CommonCrawl
Electrical performance of PEDOT:PSS-based textile electrodes for wearable ECG monitoring: a comparative study Reinel Castrillón1, Jairo J. Pérez2 & Henry Andrade-Caicedo ORCID: orcid.org/0000-0002-5924-26672 BioMedical Engineering OnLine volume 17, Article number: 38 (2018) Wearable textile electrodes for the detection of biopotentials are a promising tool for the monitoring and early diagnosis of chronic diseases. We present a comparative study of the electrical characteristics of four textile electrodes manufactured from common fabrics treated with a conductive polymer, a commercial fabric, and disposable Ag/AgCl electrodes. These characteristics will allow identifying the performance of the materials when used as ECG electrodes. The electrodes were subjected to different electrical tests, complemented with conductivity calculations and microscopic images, to determine their feasibility in the detection of ECG signals. We evaluated four electrical characteristics: contact impedance, electrode polarization, noise, and long-term performance. We analyzed PEDOT:PSS-treated fabrics based on cotton, cotton–polyester, lycra and polyester; also a commercial fabric made of silver-plated nylon, Shieldex® Med-Tex P130, and commercial Ag/AgCl electrodes. We calculated conductivity from the surface resistance and analyzed their surfaces at a microscopic level. Rwizard was used in the statistical analysis. The results showed that textile electrodes treated with PEDOT:PSS are suitable for the detection of ECG signals. The error in detecting features of the ECG signal was lower than 2%, and the electrodes kept working properly after 36 h of continuous use. Even though the contact impedance and the polarization level in textile electrodes were greater than in commercial electrodes, these parameters did not affect the acquisition of the ECG signals. Fabric conductivity calculations were consistent with the contact impedance. The imminent population growth is a major concern for public health systems worldwide. In most countries, hospital capacity is insufficient to treat patients in a timely manner. Traditional medicine is reactive rather than preventive, based on late responses to, in many cases, predictable conditions. Furthermore, this system is deficient in covering the persistent demand, especially from cardiovascular patients, who require continuous and frequent monitoring. According to the World Health Organization (WHO), cardiovascular diseases are listed as the globally leading cause of death, with about 17.5 million deaths in 2012, accounting for 31% of deaths worldwide. Consequently, early diagnosis of these diseases becomes an essential means for their prevention and treatment [1]. Electrocardiography (ECG) is one of the most popular techniques in clinical practice [2]. Technological advances have made it possible to include it in the daily life of patients. Modern biomedical systems allow the incorporation of high-performance ambulatory monitoring devices in commonly used elements such as clothing. These elements are known as wearable systems and belong to a strategic trend of technological devices that seek to improve health care promotion. They have enabled continuous wearable monitoring of several physiological signals at low cost, with easy manufacturability and comfort. A growing interest in alternative electrodes referred to as textile electrodes has been reported from different research groups [3]. 
The performance of textile electrodes has been evaluated in biological signals such as respiration and ECG monitoring and compared with commercial ECG electrodes [4]. However, these textile electrodes have limitations regarding noise reduction, polarization, durability and long-term performance that need to be overcome. Although ECG monitoring systems traditionally depend on the utilization of Ag/AgCl disposable electrodes, textile electrodes offer an alternative means to register electrical cardiac readings over time, yielding equivalent diagnostic information. Ag/AgCl electrodes are suitable for short periods of time. Afterward, they become uncomfortable due to the use of adhesives to enhance a firm attachment to the skin. They require the use of electrolytic gels that evaporate after a few hours [5]. Additionally, they could eventually generate harmful skin reactions [6]. These problems make the conventional Ag/AgCl electrode unsuitable for routine and long-term ECG measurements. Ag/AgCl electrodes have been extensively studied and tested, to the extent that the Association for the Advancement of Medical Instrumentation (AAMI) and the American National Standards Institute (ANSI) have proposed the standard "Disposable ECG Electrodes—ANSI/AAMI EC12:2000/(R)2010" containing the performance requirements and test methods for disposable electrodes used in electrocardiography. Nevertheless, there is no similar standard for electro-conductive textile electrodes [7]. This fact leads researchers to propose the most relevant and resourceful strategies in the characterization of textile electrodes. Dry electrodes enable long-term monitoring, which becomes relevant for specific health conditions, such as chronic diseases, fitness, and self-care. Current efforts aim at the development of ambulatory monitoring systems based on electrodes assembled from textile materials such as cotton, polyester, lycra, and silver-plated nylon. Fabrics made of those materials are commonly utilized in wearable systems, since they are treated with compounds, such as electroactive polymers, carbon structures and metal substrates [8], that allow their electrical conduction and hence the detection of biological potentials. Poly(3,4-ethylenedioxythiophene)-poly(styrene sulfonate), or in simplified form PEDOT:PSS [9, 10], is used to improve ionic conductivity, which reduces the effects of contact impedance in ECG signal acquisition. Most of the works reported in the literature are comparative studies between new textile electrodes and commercial reference electrodes. In addition, many of them focus on testing contact impedance and noise. Pani et al. [11] treated woven cotton and polyester fabrics with highly conductive PEDOT:PSS solutions and reported the effects on the conductivity and the affinity of the fabrics when using a different second dopant. Conductivity was reported for cotton = 424 mS/cm and polyester = 575 mS/cm and compared with commercial Ag/AgCl 3M electrodes in human ECG recordings. The results showed that the conductivity can be improved both by improving the quality of the treatment of the textiles with conductive polymers and by carefully designing the electrode, i.e. the distribution of the surface, the thickness, snap fastener and conductive yarn. They indicated that there is a decrease in the mismatch in the electrochemical impedance of the skin-electrode interface when using PEDOT:PSS; therefore, conductive gels can be avoided. 
In this work, we studied the electrical performance of five fabrics from the point of view of polarization, long-term performance, contact impedance and noise. Each electrode was evaluated based on four key features: contact impedance [12,13,14,15,16], polarization [17,18,19], noise level [7, 14, 20, 21] and long-term performance [6, 14, 17, 22]. Samples of the textile materials used in this study were kindly provided by the Department of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy. The fabrication process is described by Pani et al. [11]. Briefly, textile electrodes were made by treating conventional fabrics with a conductive solution of PEDOT:PSS dispersion Clevios PH 500 (Heraeus Clevios—Germany); the second dopant was glycerol 33%. Woven fabrics were used and immersed for at least 48 h at room temperature in the polymer solution. Fabrics were then taken out from the solution and drained off to remove the solution in excess. Samples were annealed for both water and dopants to evaporate, in order to avoid deterioration of the fabric mechanical properties. The conductive fabrics used in this study were: cotton, cotton–polyester (65% cotton, 35% polyester), lycra, and polyester; additionally, we included a commercial silver-plated nylon fabric, Shieldex® Med-Tex P130 (Statex—Germany) [23]. Figure 1 shows five electrodes manufactured following the process described by Pani et al. [11]. The fabrics were cut into pieces of 20 mm × 20 mm, which were sewn to a non-conductive synthetic leather with silver-coated yarn to obtain greater rigidity. The size of the electrodes, which is acceptable for ECG monitoring, was chosen to ensure reproducibility of the customized fabrication process. The use of a layer of rigid synthetic leather allowed improving the contact between the electrode and the skin by ensuring a uniform pressure, which is especially beneficial in the case of textile electrodes. Physical appearance of the electrodes used in the research. Each electrode was constructed from 2 cm × 2 cm pieces of fabric sewn to a non-conductive support using silver–nylon conducting yarns. The commercial Ag/AgCl electrode is used as the standard of comparison in each of the measurements. Finally, we fixed a metallic snap fastener to the synthetic leather and interfaced them with the same conductive yarn. In such a way, the snap fastener remained at the rear of the electrode without getting in touch with the skin. Figure 2 shows a closer view of the final aspect of the electrode, its structure, and components. The snap fastener was used to connect the electrodes to the ECG leads. In the experiments, we utilized Ag/AgCl disposable electrodes ref 2228 (3M, Germany) as the reference electrode in the ECG recording arrangement. This work focuses on electrode-skin interactions; other evaluation tests to characterize the physical properties of the electrodes were not conducted. Closer view of the final aspect of the textile electrode. Front, back and top view of a textile electrode made of lycra. On the right side, it is possible to discern the elements of the electrode: conductive fabric, synthetic leather, conductive yarn, and metallic snap fastener. Pani et al. [11] reported the values of conductivity for cotton and polyester. In this work, we calculated the conductivity for cotton–polyester and lycra as the reciprocal of the product of the surface (sheet) resistance and the thickness, considering the fabrics as thin films with uniform surfaces, commonly reported as two-dimensional entities. 
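As a simple numerical illustration of this calculation (a sketch only: the sheet resistance and thickness values below are placeholders, not measurements from this study), the conductivity of a thin, uniform fabric can be obtained as follows.

def conductivity_s_per_cm(sheet_resistance_ohm_per_sq, thickness_cm):
    # sigma = 1 / (R_s * t): R_s is the surface (sheet) resistance in ohm/square,
    # t is the fabric thickness in cm, and the result is in S/cm.
    return 1.0 / (sheet_resistance_ohm_per_sq * thickness_cm)

# Placeholder values chosen only to show the order of magnitude of the result
print(conductivity_s_per_cm(50.0, 0.05))   # 0.4 S/cm, i.e. 400 mS/cm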
Med-Tex P130 conductivity was not calculated because of its plated surface: the plating is intended to provide silver ion release for wound care, skin disorders, skin irritations, and burn victims, not uniform conductivity. Figure 3 shows optical micrographs of each type of fabric, where it is possible to appreciate the different types of weave. Optical micrographs were acquired using a 10× objective and an upright microscope (Eclipse Ci; Nikon). Optical micrographs of the fabrics used in the study. An upright microscope with a 10× objective was used. The scale bar corresponds to 100 µm. Our test protocol was previously approved by the Committee of Health Research Ethics of Universidad Pontificia Bolivariana (Colombia), as recorded in document R.R. N 80 17-12-2008. Data were obtained from 8 healthy, slim-built individuals between the ages of 18 and 30, four for each test (two men and two women). Our interest lay in the number of repeated measurements of each type of electrode rather than in a large number of individuals. Even though the evaluations were carried out with four participants, the noise was measured in eight individuals: four individuals chosen originally for the noise test and another four resulting from the first long-term performance measurements, since both tests followed the same protocol. We set the experiments in an in-parallel electrode configuration. We replaced the disposable electrodes at every test to avoid adding a new variable into the experiments. The volunteers were informed about the protocol to which they would be subjected. They were asked to be at rest for a period of 30 min before the test to homogenize their body conditions. Then, the area to be measured was shaved and cleaned with alcohol to improve the adhesion of the electrodes to the skin. An elastic waistband was used to attach the textile electrodes to the skin. Neither adhesives nor electrolytic gel were used. We used an R language based platform known as Rwizard [24] to perform all the statistical analysis. The main electronic equipment that we used in the study was: A virtual instrument, composed of a two-channel USB oscilloscope and a function generator, Handyscope HS5 (Sneek, The Netherlands). A device to measure low voltages, currents, and power, Cassy Lab (LD DIDACTIC GmbH, Hürth, Germany). A switching circuit driven by a microcontroller to measure the power terminals at different points of the circuit. An acquisition card based on an EVM ADS1298 device, a low-power, 24-bit, simultaneously sampling, eight-channel front-end for ECG and EEG applications (Texas Instruments, Texas, USA). A laptop to set up the electronic systems and to record the data. Measurement strategies are described in the following sections: Contact impedance measurements Contact impedance refers to the impedance at the skin-electrode interface. This test intends to quantify the ability of the electrode-skin contact to oppose the time-varying electric current produced by the material under test. We selected single and double dispersion Cole impedance models of first and second order to represent this parameter [25, 26]. We set a variable AC source at 5 V peak to peak (\( 5~V_{pp} \)) to sweep in a range of 0.1 Hz–10 kHz. Although the spectral components of ECG signals do not exceed 150 Hz, it is strategic to measure high frequencies to tune the models [27]. We used a variation of the method reported by Xie et al. [12]. 
The procedure involves determining the response of the electrodes when a sinusoidal voltage source is swept in frequency. This method is effective in measuring the absolute magnitude of the impedance; however, it does not allow the discrimination of the resistive and reactive components. To find such components individually, we performed the procedure based on the scheme in Fig. 4a. Instead of using a multimeter, we used a digital oscilloscope to estimate the magnitude and phase components of the contact impedance. Setup for contact impedance measurement \( (Z_{contact}) \). a Circuit to calculate combined impedance \(Z_{sum}\), equal to the sum of the tissue impedance (skin impedance and subcutaneous tissue) and the contact impedance due to electrode 2 (\(Z_{contact}\)). b Circuit to calculate impedance \(Z_{23}\), which corresponds to the sum of the tissue impedance (skin impedance and subcutaneous tissue), the contact impedance due to electrode 2, and the contact impedance due to electrode 3. \(V_g\) supplies the AC signal (\( 5~V_{pp} \)) to the circuit through electrode 3; \(V_{e2}\) corresponds to the voltage measured at electrode 2; the voltage \(V_r\) at the reference resistance \(R_{ref}\) is calculated with \(V_g\) as the reference, and \(V_{21}\) equals \(V_{e2} - V_r\). \(V_r\) and \(V_{e2}\) are measured in phasor form (magnitude and phase). The impedance is calculated as \(Z_{sum} = Z_{contact} + Z_{SB12}\), where \( Z_{contact}\) represents the contact impedance (skin/electrode) of a single electrode and \(Z_{SB12}\) represents the impedance of the subcutaneous tissue between electrodes one and two. It can also be calculated by the expression: $$\begin{aligned} Z_{sum}=Z_{contact}+Z_{SB12}=\frac{V_{21}}{I} \end{aligned}$$ where I is the current in the circuit and can be calculated as \(I=\frac{V_r}{R_{ref}}\) (both values known). The circuit of Fig. 4b allows determining the impedance \(Z_{12}\), which satisfies \(Z_{12}=2Z_{contact} + Z_{SB12}\). \(V_g\) supplies the AC signal (\( 5~V_{pp} \)) to the circuit through electrode 2; the voltage \(V_r\) is measured at the reference resistance \(R_{ref}\), and \(V_{21}\) is obtained by using \(V_{21}=V_g - V_r\). Thus, the impedance is calculated as: $$\begin{aligned} Z_{12}=2Z_{contact}+Z_{SB12}=\frac{V_{21}}{I} \quad\text{where}\; I=\frac{V_r}{R_{ref}} \end{aligned}$$ Finally, the contact impedance is calculated as \(Z_{contact} = Z_{12}-Z_{sum}\). Lissajous figures Given an input signal x(t) and a phase-shifted output signal y(t) such as: $$\begin{aligned} x(t)= \; & {} X_0\sin(\omega t) \nonumber \\ y(t)= \; & {} Y_0\sin(\omega t+\theta ) \end{aligned}$$ the Lissajous figure is generated when plotting \(x(t)\ \text{vs}\ y(t)\). The intersection of the figure with the y axis is identified and named \(y_0\). The phase shift between the waveforms is calculated by the expression: $$\begin{aligned} \theta =\arcsin \left( \frac{y_0}{Y_0}\right) \end{aligned}$$ Cross-correlation is a measure of the similarity between two series as a function of the displacement of one relative to the other. The cross-correlation between a discrete input signal x(n) and an output signal y(n) is given by the expression: $$\begin{aligned} r_{x,y}(l)=\sum _{n=-\infty }^{\infty }x(n)y(n-l) \end{aligned}$$ where l represents a shift in discrete time between the signals. The aim is to find the value of l such that the maximum correlation between the signals is obtained. 
From this value, the phase shift in degrees can be calculated from the equation: $$\begin{aligned} \theta =\frac{360\cdot l\cdot F}{F_s} \end{aligned}$$ where F corresponds to the frequency of the original wave and \(F_s\) to the sampling frequency at which the signals were acquired. We determined voltages in phasor form from their waveforms at the frequencies of interest. We designed a high-pass digital filter for eliminating the DC offset (Filter Designer App, Matlab, Mathworks, Inc.), with the following parameters: FIR, cut-off frequency = 0.05 Hz, order = 2000. The filter was applied off-line to both kinds of signals (acquired from textile and reference electrodes), so delays affected both equally. The magnitude was calculated by obtaining the peaks of each wave. The phase was calculated using the two techniques described above (Lissajous figures and cross-correlation). In order to compare the absolute impedance values of textile and disposable electrodes, we converted each data set of curves into a single scalar value. We used the AUC score as a comparison parameter, since the magnitude of the impedance in the magnitude–frequency plots decreases monotonically with increasing frequency for all the samples. The AUC score was previously used as a comparison criterion for spectral curves by Sarbaz et al. [28]. We used a multifactorial ANOVA (either two-way or repeated variables) in cases where the statistical assumptions of normality and homoscedasticity were applicable. For those cases when the assumptions were not satisfied, we performed nonparametric tests, such as Kruskal–Wallis and Wilcoxon. The main factors evaluated were: the material of the textile electrode (cotton, cotton–polyester, lycra, silver-plated nylon, and polyester), and their behavior relative to the reference (textile electrode vs Ag/AgCl electrodes). The measurement scheme for this test is shown in Fig. 5. Contact impedance measurement scheme. Measurements are performed simultaneously in both textile and Ag/AgCl electrodes. The switching circuit has been used to interchange the measurement pins in the different configurations as explained before. Control is exerted automatically from a software application. 
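The following Python sketch summarizes this analysis numerically (illustrative only: the phasor readings and the reference resistance are hypothetical values, not data from this study). It estimates the phase of one waveform relative to another from the cross-correlation lag, builds the complex impedance of each measurement configuration from its phasors, and subtracts the two configurations to isolate the contact impedance.

import numpy as np

def phase_deg_from_xcorr(x, y, f_signal, fs):
    # Lag l that maximizes the cross-correlation of y with x, converted to degrees.
    # With this argument order, a positive result means y is delayed with respect to x.
    r = np.correlate(y - np.mean(y), x - np.mean(x), mode='full')
    lag = np.argmax(r) - (len(x) - 1)
    return 360.0 * lag * f_signal / fs

def phasor(mag, phase_deg):
    return mag * np.exp(1j * np.deg2rad(phase_deg))

def config_impedance(v21_mag, v21_phase, vr_mag, vr_phase, r_ref):
    # Z = V21 / I with I = Vr / R_ref, all quantities treated as phasors
    current = phasor(vr_mag, vr_phase) / r_ref
    return phasor(v21_mag, v21_phase) / current

R_REF = 1000.0                                              # ohms (hypothetical)
z_sum = config_impedance(1.20, -12.0, 0.35, 0.0, R_REF)     # configuration of Fig. 4a
z_12 = config_impedance(2.10, -18.0, 0.30, 0.0, R_REF)      # configuration of Fig. 4b
z_contact = z_12 - z_sum
print(abs(z_contact), np.degrees(np.angle(z_contact)))      # magnitude (ohm), phase (deg)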
The potentials were registered once the patient remained one minute motionless to avoid instabilities in the skin-electrode interface, product of involuntary biomechanical movements. The skin-electrode interface is the largest source of interference due to polarization potentials. Polarization potentials are normally in the order of millivolts; however, when values exceed such order at the presence of action potential variations, the output of the amplifier is saturated, making the ECG signal difficult to extract and polarization potentials difficult to eliminate. We set two independent acquisition channels (Cassy Lab, Label Didactics., Ltd.) to perform simultaneous measurements of DC potentials. The measurements were carried out both, in textile and reference electrodes in the same muscle group. We programmed a series of measurements by using four electrodes (two textiles electrodes and two Ag/AgCl electrodes), as reported by Rattfalt et. al. [19]. We used three different points of each leg, (sampling frequency = 10 sps), during approximately 30 min. We calculated DC potentials for each type of electrode as the mean absolute difference between two consecutive samples. We used the standard average exchange ratio \(\bar{X}_i\) suggested by Rattfalt [18, 19]. $$\begin{aligned} &\bar{X}_i=\frac{\sum _t|X_i(t)-X_i(t+1)|}{N-1} \quad t=0,1,2,...,N-1 \nonumber \\ &\bar{X}=\frac{\sum _i\bar{X}_i}{n} \end{aligned}$$ where \(i\) denotes each particular individual; \(n\) the total number of individuals for each electrode type, and \(N\) the total number of samples. It is necessary to guarantee that the patient is motionless to avoid muscle signals product of involuntary movements. We used the interquartile range to eliminate the outliers in each set of observations. Each measurement series became a datum representing the average behavior of the electrode through the time. A two-way ANOVA analysis was used. We selected the type of electrode as the factor, the assumptions of normality (Kolmogorov–Smirnov, p = 0.2051) and homoscedasticity were satisfied (Levene test, p = 0.1149). The measurement scheme for this test is shown in Fig. 6. Open circuit polarization measurement scheme. Measurements were performed simultaneously in both textile and Ag/AgCl electrodes. We used a Cassy Lab sensor (Label Didactics., Ltd.) that allows to program and automatize a set of measurements from a software application at the PC Noise measurements These experiments intended to quantify the noise level due to external interference, biological signals different to ECG, artifacts, and measuring equipment. We performed simultaneous measurements of the textile and commercial Ag/AgCl electrodes. The experiments consisted in capturing the same 1-lead ECG using different pairs of textile electrodes. We performed the measurements using lead II, as suggested by Takamatsu [29]. The acquisition process was conducted for a period of five minutes, where textile electrodes were attached to the skin by an elastic waistband. Figure 7 depicts the location of the electrodes and the connection to the electronics acquisition card EVM ADS1298 (Texas Instruments). ECG signals measurement scheme. Measurements were performed simultaneously in both textile and Ag/AgCl electrodes. A circuit board based on the ADS1298 chip (Texas Instrument) was configured to acquire two channels at the same time We designed a digital filter (Filter Designer App, Matlab, Mathworks, Inc.) 
for removing undesired components from the power supply and their corresponding harmonics (2 stopband filters, FIR filters, stop frequencies = 60 and 120 Hz respectively, windowing method = Kaiser, \( \beta =0.5 \), order = 150, broadband = 10 Hz) and attenuating the frequency components out of the range of cardiac signals (passband filter, FIR filter, band pass = 0.05–150 Hz, windowing method = Kaiser, \( \beta =0.5 \), order = 150), as is suggested in [30]. We performed three methods to analyze the data: noise power, cross-correlation coefficient, and segmentation. Noise power quantifies the magnitude of the signal eliminated in the filtering process. The aim of this procedure is to identify which type of electrode has greater affectation by external interferences, biological noise, artifacts of muscle movement and breathing. The process involves determining the difference between the original and the filtered signal to calculate the average power. $$\begin{aligned} &E=|ECG_{original}-ECG_{filtered}| \nonumber \\ &\bar{P}=\frac{1}{N}\sum _{i=0}^{N-1}E^2 \end{aligned}$$ where E is the absolute value resulting from the difference between the original and filtered signals. \(\bar{P}\) is the noise power and N represents the number of samples. The second method is the Pearson cross-correlation coefficient. Since the cardiac signals were recorded simultaneously with both, the textile and disposable electrodes, in the same area of the volunteer's body, we expected two morphologically identical signals. However, they suffered a potential drift that was removed using a digital high pass filter described above. The normalized cross-correlation provides a value that expresses the similarity of two signals in terms of morphology; therefore, low values of the cross-correlation index suggest a large effect of noise on the ECG signals recorded by different electrodes. The third process involves the use of a segmentation algorithm, which detects and quantifies complete P–Q–R–S–T waves. It uses the continuous wavelet transform, discrete wavelet transform, and Pan and Tompkins algorithm for the classification of the ECG signal, as reported by Bustamante et al. [31]. The error rate is calculated by dividing the number of complete ECG segments registered with the experimental material, against the number of ECG segments captured simultaneously with Ag/AgCl commercial electrodes. Long-term performance The performance of the electrodes over time is affected by the wear of the material. We evaluated the degree of deterioration of the textile electrode quantifying its capacity to record complete ECG complexes that are morphologically similar to those recorded by Ag/AgCl electrodes. The signal acquisition process was the same as described in the noise measurement section. We evaluated each type of electrode (cotton, cotton–polyester, lycra, polyester and silver-plated nylon) for a period of 36 h on each of the four subjects. The volunteers continued with their daily lives but were asked to return to the laboratory to perform measurements spaced at 0, 1, 3, 7, 12, 24, 30 and 36 h. The measurements obtained at time 0 were added to the dataset for the noise analysis. During the entire process, we did not remove the textile electrodes from the patient's skin; nevertheless, we adjusted them against displacements on each partial measurement. Due to the duration of the experiment, we replaced the disposable electrodes on each measurement. 
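A rough Python sketch of the three noise metrics defined in the noise measurements section above (noise power, the Pearson cross-correlation coefficient and the segment error rate) is shown below; the signals and segment counts are synthetic placeholders rather than recordings from this study.

import numpy as np

def noise_power(ecg_original, ecg_filtered):
    # Mean power of the component removed by the filtering process
    e = np.abs(ecg_original - ecg_filtered)
    return np.mean(e ** 2)

def morphology_similarity(ecg_textile, ecg_reference):
    # Normalized (Pearson) cross-correlation between two simultaneous recordings
    return np.corrcoef(ecg_textile, ecg_reference)[0, 1]

def segment_error_rate(n_complete_textile, n_complete_reference):
    # Share of complete P-Q-R-S-T segments missed by the experimental electrode
    return 1.0 - n_complete_textile / n_complete_reference

fs = 500
t = np.arange(0, 5, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)                   # stand-in for a filtered ECG
raw = clean + 0.05 * np.random.randn(t.size)          # stand-in for the raw recording
print(noise_power(raw, clean), morphology_similarity(raw, clean))
print(segment_error_rate(392, 400))                   # 0.02, i.e. a 2% error rate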
We did not perform additional measurements, such as contact impedance, during these tests. The signal processing (filtering) was the same as described in the noise measurements section. As this study focuses on the performance of the fabrics, no special filtering or higher-order filter was needed. The selected range of frequencies lets the relevant components pass through and provides a high-fidelity tracing of the P–Q–R–S–T ECG wave. Consequently, we used the segmentation algorithm introduced above to extract the P–Q–R–S–T complex from the ECG trace and split it into single P–Q–R–S–T waves for individual analysis. Each ECG segment from the textile electrodes was compared with the segments captured simultaneously from the Ag/AgCl electrodes; the error rate was then calculated by dividing the number of complete ECG segments registered with the experimental material by the number of ECG segments captured simultaneously with the Ag/AgCl commercial electrodes. We analyzed the data through a multivariate ANOVA of repeated measures.

According to the data input given by Mestrovic (2016), the calculated conductivity value was 2.64 S/cm for the disposable commercial Ag/AgCl 3M electrodes used in this study, about two orders of magnitude above the values for cotton–polyester and lycra: 337 and 393 mS/cm, respectively [32]. It is clearly seen that lycra performs appreciably better than cotton–polyester, as evidence of the different affinity of the two materials for the PEDOT:PSS solution. The order of the conductivity values demonstrates that lycra indeed performs better than cotton–polyester in enhancing ionic transfer through the skin-electrode interface. The disposable electrodes have the highest conductivity among the samples, including those reported by Pani et al. The conductivity data are validated by Fig. 12b; however, the data reported by Pani do not correspond to an impedance analysis. It is clearly visible that the lycra and cotton–polyester electrodes have a similar performance. The conductivity of Med-Tex P130 was not calculated because it is not an isotropic material, i.e., it does not have uniform conductivity, since it is plated for silver ion release; in fact, we presume that the dispersion of the electrodes made of Med-Tex P130 is high.

Figure 3 shows the characteristics of the weaves of each fabric used in the study. Cotton and cotton–polyester exhibit a similar pattern: the fabric is made with thick yarns (approximately 200 µm) intertwined at short intervals, and only a few empty spaces between yarns are appreciable. Despite their physical similarities, cotton–polyester is closer to lycra than to cotton in terms of the contact impedance. Lycra presents thinner yarns than cotton (approximately 150 µm); the pattern of the fabric is linear, and the yarns are close together and tight, which leaves very few empty spaces. Indeed, lycra exhibits the lowest impedance values. Nylon–silver exhibits a pattern similar to lycra, and its network of intertwined yarns leaves few empty spaces; however, its contact impedance is higher. Polyester presents the thinnest yarns (approximately 100 µm), and they are very far apart, which causes many empty spaces. This may explain its poor performance in establishing a good skin-electrode interface, which results in the highest contact impedance values (the black line in Fig. 8).

Fig. 8 Contact impedance. a Average of the magnitude of the contact impedance of each material versus frequency. Each of the lines of the graph represents the average value of the magnitude of the contact impedance evaluated on the different test subjects.
b Box and whisker plot of the value of the area under the curve (AUC) for each of the materials. The AUC is a numerical value representing the area under the curve of the impedance spectral signals. c Shadow plot of the contact impedance magnitude versus frequency: the solid line represents the average value, and the shadow is the standard deviation of the values around the average. Textiles (blue), Ag/AgCl (red). d Box and whisker plot of the value of the area under the curve comparing textile versus Ag/AgCl electrodes

We recorded and plotted the data from the four experiments separately for each individual. We assessed the electrical performance of the textile electrodes by analyzing impedance magnitude, polarization variability, ECG morphology deviations, and proneness to electrical noise. We used statistical tools to compare the textile electrodes against the Ag/AgCl commercial electrodes. Figure 8a shows a Bode plot of the average impedance magnitude over the selected frequency range, from 0.1 to \(10^4\) Hz. The statistical analysis of the data showed that the assumptions of normality and homoscedasticity were not satisfied; thus, we used the Kruskal–Wallis test to interpret the data. Figure 8b presents a box plot with the distribution of the data. The figure was constructed from data taken from four test subjects at six test points, corresponding to 24 measurements per material, taken simultaneously with the reference electrodes. The data were analyzed using a non-parametric Kruskal–Wallis test yielding p = 0.8684 (confidence = 95%), indicating that there are no significant differences between electrode types. Figure 8c shows a shadow plot representing the average impedance of the textile materials (blue) compared to the reference Ag/AgCl electrodes (red); the shadow represents the standard deviation. Thus, it is possible to appreciate a clear difference in the impedance magnitude of the two groups. Figure 8d presents a comparison of the data distribution by groups. In total, 24 samples per material (120 measurements) and the same number of measurements with the reference electrodes were used. In this case, the assumptions of normality and homoscedasticity were not satisfied either; hence, it was necessary to perform a Wilcoxon test. The test indicated significant differences between treatments (p = \(2.2 \times 10^{-16}\), confidence = 95%): the textile electrodes generally presented higher contact impedances than the Ag/AgCl commercial electrodes, and they also showed higher dispersion.

We obtained minimal variability in the polarization potentials along the measurements (we only noticed small changes due to muscular activation). The average polarization level was 15.4 mV; the standard average exchange ratio calculated between consecutive measurements was 269.29 µV every 0.1 s. Under the same conditions, the Ag/AgCl electrodes showed an average polarization level of 2.54 mV (standard average exchange ratio = 163.56 µV every 0.1 s). Detailed results of this test are presented in Table 1.

Table 1 Measurements of the average polarization potential and of the variability between consecutive samples taken at time intervals of 0.1 s, registered on each of the four test subjects, with each type of textile material (bold values) compared with commercial Ag/AgCl electrodes (italic values)

Silver-plated nylon electrodes showed the lowest polarization potential value (p = 0.004035). Figure 9a depicts the average behavior over time; the results do not show significant differences among the other materials.
Figure 9b presents a general comparison of the electrical behavior of the textile and Ag/AgCl electrodes. The data did not meet the normality assumption; therefore, we performed a Wilcoxon test. It confirmed that the electrodes treated with PEDOT:PSS have higher polarization compared to Ag/AgCl.

Fig. 9 Average polarization potential drift for different materials and series of measurements. a Box and whisker plot of the average polarization drift of the different materials. b Box and whisker plot of the average polarization drift comparing textile vs Ag/AgCl electrodes

Statistical analysis showed significant differences in the behavior of the materials (\(p=0.01945\)), particularly that the lycra electrodes are less sensitive to noise than the silver-plated nylon electrodes, as depicted in Fig. 10a.

Fig. 10 Noise measurements. a Box and whisker plot of the average noise power quantified from the filtering process. b Box and whisker plot of the average noise power comparing textile vs Ag/AgCl electrodes. c Plot of the average Pearson cross-correlation coefficient calculated between ECG signals acquired simultaneously (textile vs Ag/AgCl). d Segment of an ECG signal simultaneously recorded with textile (blue) and Ag/AgCl (red) electrodes. e Average percentage of error in the detection of ECG signal segments.

Additionally, we compared the overall performance of the textile electrodes against the Ag/AgCl reference electrodes. For this purpose we performed a two-way ANOVA using the type of electrode (textile or disposable) as the factor; see Fig. 10b. The assumptions of normality (Shapiro–Wilk, p = 0.09372) and homoscedasticity (Levene, p = 0.2317) were satisfied. We observed that the disposable electrodes present significantly lower noise levels compared with the textile electrodes (p = \(9.17 \times 10^{-11}\)). The cross-correlation analysis, shown in Fig. 10c, quantifies the similarity between the signals recorded with the electrodes under test; an example of a simultaneously recorded segment is shown in Fig. 10d. In general, the correlation values are higher than 80%. Moreover, we observed no statistically significant differences between the performances of the textile electrodes. The ability of the electrodes to acquire ECG signals was tested with the segmentation algorithm. The error rate was determined and tabulated in Table 2 based on a Kruskal–Wallis test; complementary comparison results are presented in Fig. 10e. The test showed no significant differences between the different treatments (p = 0.9965). In Table 2, 92.5% of the measurements yield a percentage error of less than 2%; such values are within the tolerance of the algorithm (98%).

Table 2 Number of subjects in each error range versus electrode type

We performed eight measurements, lasting 5 min each, over a time interval of 36 h, and counted the number of complete P–Q–R–S–T waves using the segmentation algorithm. Figure 11 shows the results for each type of material.

Fig. 11 Long-term performance: segmentation error percentage versus time. Each of the box diagrams represents the percentage of error in the determination of ECG signals of the textile electrodes with respect to the commercial electrodes, in the different measurement time periods. Most of the graphs show that the median of the measurements is below 5%. a Cotton electrodes. b Cotton–polyester electrodes. c Lycra electrodes. d Nylon–silver electrodes. e Polyester electrodes.

There is no evidence to establish differences in the number of complete ECG signals detected during the test. In general, after 36 h, the quality of the captured ECG signals is similar to that obtained at 0 h.
Except for the very last measurement with the silver-plated nylon electrodes, the average percentage error was in all cases less than 5%. The silver-plated nylon electrodes are the only ones that presented an apparent relationship between the increase in the percentage error and time.

We propose a single-dispersion Cole impedance model for the textile electrodes treated with PEDOT:PSS (Fig. 12). The impedance parameters obtained for this model were \(R_\infty =35.065\;\text{k}\Omega \), \(R_1=3.701\;\text{M}\Omega \), \(C_1=15.129\;\text{nF}\) and \(\alpha _1=0.8397\). This static model is only intended to represent a simplification of the data acquired in this study. The model can be adjusted by increasing the number of tests and individuals, and can thus be tuned to a specific set of data. The high variability of biological systems increases the difficulty of obtaining deterministic and predictive models; therefore, reference models become a valid alternative for preliminary examinations.

Fig. 12 Circuit model of contact impedance for textile electrodes. a Single dispersion Cole impedance model. b The continuous cyan line fits the model; the other lines are the values of impedance magnitude obtained in the research

The main results of the research are summarized in Table 3.

Table 3 Summary of the main findings of the investigation

ECG signals were correctly registered by a set of textile electrodes treated with a conductive polymer (PEDOT:PSS). The materials under test were cotton, cotton–polyester (65% cotton, 35% polyester), lycra, polyester, and MEDTEX P-130. No gel, substrate or adhesive material was used to improve the ionic conductivity. The experiments confirmed that these materials can be used in the fabrication of wearable sensors for daily use. The materials tested are suitable for applications where the use of disposable electrodes is not practical.

The fabrics in Fig. 3 contain fibers with a coating of PEDOT:PSS with no visible deterioration at the microscopic level. Based on the conductivity calculations above, we determined that combining the highly conductive PEDOT:PSS solution with cotton, lycra, cotton–polyester, and polyester provides an acceptable surface resistance for medical applications, especially the monitoring of ionic transfer. All conductivity-related properties are connected with the fibrous structure of the fabric. The resistance is associated with the contact resistance between neighboring yarns and the numerous contact points at the crossings between yarns in each fabric [33]. Figure 3b, c show that the manner in which the yarns of lycra and cotton–polyester are arranged on the surface of the fabrics does not significantly impact their resistance, as observed in the impedance responses. However, when they are compared to Fig. 3d, the impedance differs notably; indeed, the gaps between the yarns increase the impedance as a consequence of the material resistance. From the observations of the fibers, we identified that the anisotropy may result from the different numbers of warp and weft yarns per unit length. It is visible that the current does not spread uniformly over the surface. As we did not control the orientation of the fabric surface at the interface, we cannot make claims about the direction of current flow along the yarns or about the detailed effects of the gaps between them.
However, the multi-directional arrangement of the yarns shows better results in the impedance analysis and conductivity calculations; we presume that the interlacing yarns lead to a higher current capture, owing to the isotropy of the electroconductive properties of the fabrics shown in Fig. 3b, c. Nylon–silver and polyester showed the worst performance due to their anisotropic structure. We determined that the materials treated with PEDOT:PSS presented no statistically significant differences in acquiring ECG signals. The MEDTEX P-130 based electrodes only presented a better performance in the polarization tests, and otherwise showed a slight tendency toward poorer performance in ECG signal acquisition. The lycra-based textile electrodes exhibited highly reliable behavior, reflected in lower mean impedance values and lower dispersion across the different repetitions.

Notwithstanding the multiple factors that influence the skin impedance, and the high variability present in the contact impedance data even for the same individual, we confirmed that the contact impedance is higher in textile electrodes than in commercial electrodes. One of the factors that could strongly influence these results is the effective area of the textile electrodes (4 cm²), an analysis of which was out of the scope of this work. The effective electrode area affects the skin-electrode interface and its impedance, which actively influences the acquired ECG signal. In fact, the relationship between the effective area of the electrode and the contact impedance was studied by Puurtinen et al. [20]. It is also well known that high contact impedance can be balanced with high and ultra-high input impedance electronic systems [34, 35]. It is important to highlight that all the tests reported in this paper were performed with dry electrodes. We did not use any type of gel or electrolyte that helps to improve the conductivity of the skin-electrode interface; this could explain the high values of contact impedance with respect to the Ag/AgCl electrodes.

Practical applications in which electrodes are incorporated to acquire bioelectric signals are intended to fit into the wardrobe in a natural way for the end user. Wearable devices should not become an invasive element that is difficult to use and manipulate. This work aimed to elucidate which material provides advantages for the manufacture of garments that allow the continuous monitoring of electrocardiographic signals. However, since there is no significant difference between the electrical characteristics of the materials, it is relevant to conduct studies focused on the mechanical properties of the textile materials. Besides, strategies should be sought to improve the fit of the electrode to the skin, in order to improve the effective area of contact.

Previous work demonstrated that textile electrodes have a significant effect on charge transfer. This is due to the complex contact area created by the woven structure, and because the surface of the electrode is not completely parallel to the skin. Many variables inherent to textile materials contribute to the high variability of the contact impedance, for instance the number of fibers per cross-section, fiber properties, conductive polymer adhesion to the fibers, fiber density, and hairiness [36]. Figure 3 shows the physical appearance of the surface of the different fabrics used in this study. The woven structures apparently influence the contact impedance through the contact area formed with the skin.
Polyester shows the highest impedance value, which may correspond to the empty spaces that remain between the fibers. On the other hand, lycra has the lowest impedance values, which may reflect a better contact area created by its tightly arranged fibers.

Textile electrodes treated with PEDOT:PSS have a higher polarization level than the conventional electrodes and the MEDTEX P-130 based electrodes (mean value: 15.4 mV). The results did not show substantial changes under conditions of complete rest; however, they were slightly affected by artifacts generated by muscle activation. The polarization effect tends to become uniform over time (variability < 0.3 mV every 0.1 s), which contributes to its elimination through the use of analog electronic circuits.

In the segmentation process, the error rates were generally under 2%. Signals taken with the textile electrodes showed a greater presence of noise than those taken with the commercial (Ag/AgCl) electrodes. Nonetheless, all the segments of the ECG signal were identified properly. The MEDTEX P-130 electrodes are more sensitive to noise than the other textile electrodes.

Long-term performance measurements showed that after 36 h the electrodes treated with PEDOT:PSS continue to perform adequately, i.e., the ECG signals were clearly identified. The MEDTEX P-130 based electrodes showed deterioration of the ECG signal during the test, likely as a consequence of the interaction with biological fluids. There is no evidence relating the misreadings to changes in the properties of the electrodes over time; rather, they can be attributed to poor contact with the skin as a result of movement, displacement of the material, or momentary disconnection during the test. Due to the duration of the test, the data were only visualized off-line, after the segmentation process. The duration of the long-term performance measurements was conditioned by the availability of the laboratories and the volunteers. Takamatsu et al. [29] reported reliable results after 72 h using textile electrodes with similar features.

We propose, as future work, the development of monitoring systems using wearable sensors that incorporate PEDOT:PSS treated electrodes. Investigations should focus on the behavior of textile electrodes over long periods of time, the behavior of the material after washing, noise and artifacts during physical activity, and the effects of sweating on the quality of the ECG. Bearing in mind that the material is intended to be used in wearable devices, a future study contemplating dynamic tests is necessary. It is also important to determine how the effective area, the morphology, the thickness of the polymer on the fabric, and other characteristics can modify the contact impedance of the skin electrodes and affect the quality of the acquired ECG signals.

Contact impedance, polarization, and noise level tests showed significant differences in favor of the Ag/AgCl commercial electrodes. We found that fabrics treated with PEDOT:PSS such as cotton–polyester (65% cotton, 35% polyester), lycra and polyester are suitable for activities that do not involve diagnosis. Moreover, they have a considerable advantage over disposable electrodes, which must be replaced at least every 24 h. Long-term performance tests demonstrated that fabrics treated with PEDOT:PSS remain functional after 36 h of continued use.
They allowed ECG signals to be acquired as at the beginning of the test; however, the electrodes constructed with silver-plated nylon showed considerable deterioration of the ECG signal after 24 h. We measured noise levels under 5% on the fabrics treated with PEDOT:PSS and on silver-plated nylon; such fabrics could be used as primary sensing elements of an ECG monitoring system. The average polarization level measured on the textiles under test was 15.4 mV. The polarization levels were constant and only affected by involuntary muscle movements performed by the individuals; such values can be removed by DC coupling mechanisms or by digital filtering. Silver-plated nylon electrodes showed better performance than the electrodes treated with PEDOT:PSS when compared against the behavior of the Ag/AgCl electrodes. None of the tests yielded statistically significant evidence to determine that any one PEDOT:PSS treated material used in the textile electrodes performs better than the others. Therefore, in the development of wearable ECG acquisition systems, the type of material is not a decisive aspect from the point of view of signal quality and electrical behavior. Future studies should focus on the mechanical characterization of the materials to obtain adequate coupling to the skin, orienting the applications toward systems for athletes and people in rehabilitation or at risk of heart disease, prevention systems, the generation of early warnings, and the promotion of self-care.

World Health Organization. Cardiovascular diseases (CVDs). 2016. http://www.who.int/mediacentre/factsheets/fs317/en/. Accessed 29 Jan 2016. Taji B, Shirmohammadi S, Groza V, Bolic M. An ECG monitoring system using conductive fabric. In: 2013 IEEE international symposium on medical measurements and applications proceedings (MeMeA). New York: IEEE; 2013. p. 309–14. Raj D, Ha-Brookshire JE. How do they create Superpower? An exploration of knowledge-creation processes and work environments in the wearable technology industry. Int J Fash Des Technol Educ. 2016;9(1):82–93. Fiedler P, Biller S, Griebel S, Haueisen J. Impedance pneumography using textile electrodes. In: 2012 annual international conference of the IEEE engineering in medicine and biology society (EMBC). New York: IEEE; 2012. p. 1606–9. Tronstad C, Johnsen GK, Grimnes S, Martinsen OG. A study on electrode gels for skin conductance measurements. Physiol Meas. 2010;31(10):1395. Ask P, Öberg PA, Ödman S, Tenland T, Skogh M. ECG electrodes: a study of electrical and mechanical long-term properties. Acta Anaesthesiologica Scandinavica. 1979;23(2):189–206. Marozas V, Petrenas A, Daukantas S, Lukosevicius A. A comparison of conductive textile-based and silver/silver chloride gel electrodes in exercise electrocardiogram recordings. J Electrocardiol. 2011;44(2):189–94. Carpi F, De Rossi D. Electroactive polymer-based devices for e-textiles in biomedicine. IEEE Trans Inf Technol Biomed. 2005;9(3):295–318. Pani D, Dessi A, Gusai E, Saenz-Cogollo JF, Barabino G, Fraboni B, Bonfiglio A. Evaluation of novel textile electrodes for ECG signals monitoring based on PEDOT:PSS-treated woven fabrics. In: 2015 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC); 2015. p. 3197–200. Pani D, Dessi A, Gusai E, Saenz Cogollo JF, Barabino G, Fraboni B, Bonfiglio A. Fully textile, PEDOT:PSS based electrodes for wearable ECG monitoring systems. IEEE Trans Biomed Eng. 2016;63(3):540–9.
Pani D, Dessi A, Saenz-Cogollo JF, Barabino G, Fraboni B, Bonfiglio A. Fully textile, PEDOT: PSS based electrodes for wearable ECG monitoring systems. IEEE Trans Biomed Eng. 2016;63(3):540–9. Xie L, Yang G, Xu L, Seoane F, Chen Q, Zheng L. Characterization of dry biopotential electrodes. In: Proceeding of the 35th annual international conference of the IEEE engineering in medicine and biology society. IEEE Engineering in Medicine and Biology Society. Annual Conference, vol. 2013. Osaka; 2013. p. 1478–81. https://doi.org/10.1109/EMBC.2013.6609791. https://pdfs.semanticscholar.org/f2c7/16a5b367c95d05394099686205117c88c090.pdf. Mestrovic MA, Helmer RJN, Kyratzis L, Kumar D. Preliminary study of dry knitted fabric electrodes for physiological monitoring. In: Proceeding of the 3rd international conference on intelligent sensors, sensor networks and information, 2007. ISSNIP 2007, p. 601–6. https://doi.org/10.1109/ISSNIP.2007.4496911. Oh TI, Yoon S, Kim TE, Wi H, Kim KJ, Woo EJ, Sadleir RJ. Nanofiber web textile dry electrodes for long-term biopotential recording. IEEE Trans Biomed Circuits Syst. 2013;7(2):204–11. Beckmann L, Neuhaus C, Medrano G, Jungbecker N, Walter M, Gries T, Leonhardt S. Characterization of textile electrodes and conductors using standardized measurement setups. Physiol Meas. 2010;31(2):233. Chen Y, Pei W, Chen S, Wu X, Zhao S, Wang H, Chen H. Poly(3,4-ethylenedioxythiophene) (PEDOT) as interface material for improving electrochemical performance of microneedles array-based dry electrode. Sens Actuators B Chem. 2013;188:747–56. Patterson RP. The electrical characteristics of some commercial ECG electrodes. J Electrocardiol. 1978;11(1):23–6. Rattfält L, Lindén M, Hult P, Berglin L, Ask P. Electrical characteristics of conductive yarns and textile electrodes for medical applications. Med Biol Eng Comput. 2007;45(12):1251–7. Rattfält L, Björefors F, Nilsson D, Wang X, Norberg P, Ask P. Properties of screen printed electrocardiography smartware electrodes investigated in an electro-chemical cell. Biomed Eng Online. 2013;12(1):64. Puurtinen MM, Komulainen SM, Kauppinen PK, Malmivuo JAV, Hyttinen JAK. Measurement of noise and impedance of dry and wet textile electrodes, and textile electrodes with hydrogel. In: conference proceedings : 28th annual international conference of the IEEE engineering in medicine and biology society. IEEE engineering in medicine and biology society. Conference 1; 2006. p. 6012–5. Pola T, Vanhala J. Textile electrodes in ECG measurement. 3rd international conference onIntelligent sensors, sensor networks and information, ISSNIP 2007; 2007. p. 635–9. Baba A, Burke M. Measurement of the electrical properties of ungelled ECG electrodes. Int J Biol Biomed Eng. 2008;2(3):89–97. GmbH SPV. Shieldex®Med-tex P130. https://goo.gl/KGK1Hj. [En línea; accedido18-Septiembre-2017]. https://goo.gl/KGK1Hj. 2013. Accessed 2 Aug 2013. Guisande C, et al. Rwizard software. Universidad de Vigo. España. 2014. http://www.ipez.es/RWizard. Accessed 16 Nov 2017. Freeborn TJ, Maundy B, Elwakil AS. Cole impedance extractions from the step-response of a current excited fruit sample. Comput Electron Agric. 2013;98:100–8. Freeborn TJ. A survey of fractional-order circuit models for biology and biomedicine. IEEE J Emerg Sel Topics Circuits Syst. 2013;3(3):416–24. Vanlerberghe F, De Volder M, de Beeck MO, Penders J, Reynaerts D, Puers R, Van Hoof C. 2-Scale topography dry electrode for biopotential measurements. 
In: 2011 annual international conference of the IEEE engineering in medicine and biology society, EMBC; 2011. p. 1892–5. Sarbaz Y, Towhidkhah F, Mosavari V, Janani A, Soltanzadeh A. Separating Parkinsonian patients from normal persons using handwriting features. J Mech Med Biol. 2013;13(03):1350030. Takamatsu S, Lonjaret T, Crisp D, Badier JM, Malliaras GG, Ismailova E. Direct patterning of organic conductors on knitted textiles for long-term electrocardiography. Sci Rep. 2015;5:1–7. https://doi.org/10.1038/srep15003. Kligfield P, Gettes LS, Bailey JJ, Childers R, Deal BJ, Hancock EW, van Herpen G, Kors JA, Macfarlane P, Mirvis DM, Pahlm O, Rautaharju P, Wagner GS. Recommendations for the standardization and interpretation of the electrocardiogram. Part I: the electrocardiogram and its technology. A scientific statement from the American Heart Association Electrocardiography and Arrhythmias Committee, Council on Clinical Cardiology; the American College of Cardiology Foundation; and the Heart Rhythm Society. J Am Coll Cardiol. 2007;49(10):1109–27. Bustamante Arcila C, Duque Vallejo S, Orozco-Duque A, Bustamante Osorno J. Development of a segmentation algorithm for ECG signals, simultaneously applying continuous and discrete wavelet transform. In: Image, signal processing, and artificial vision (STSIVA), 2012 XVII symposium of; 2012. p. 44–9. https://doi.org/10.1109/STSIVA.2012.6340555. Mestrovic M. Characterisation and biomedical application of fabric sensors. Master of Engineering thesis, RMIT University; 2007. https://researchbank.rmit.edu.au/eserv/rmit:14607/Mestrovic.pdf. Tokarska M, Frydrysiak M, Zieba J. Electrical properties of flat textile material as inhomogeneous and anisotropic structure. J Mater Sci Mater Electron. 2013;24(12):5061–8. Gargiulo G, Bifulco P, Cesarelli M, Ruffo M, Romano M, Calvo RA, Jin C, van Schaik A. An ultra-high input impedance ECG amplifier for long-term monitoring of athletes. Med Device (Auckland, NZ). 2010;3:1–9. Chi YM, Maier C, Cauwenberghs G. Ultra-high input impedance, low noise integrated amplifier for noncontact biopotential sensing. IEEE J Emerg Sel Topics Circuits Syst. 2011;1(4):526–35. Priniotakis G, Westbroek P, Van Langenhove L, Hertleer C. Electrochemical impedance spectroscopy as an objective method for characterization of textile electrodes. Trans Inst Meas Control. 2007;29(3–4):271–81.

HA-C and JJP were responsible for writing the manuscript. HA-C, JJP and RC were responsible for planning the experiments. HA-C was responsible for overall planning of the study. RC was responsible for planning and carrying out the experiments. All authors read and approved the final manuscript. The authors gratefully acknowledge the researchers at the University of Cagliari in Italy, especially Dr. José Francisco Saenz, for providing the materials used in this work and for the conceptual and logistical support. Likewise, we are grateful to the members of the Center of Bioengineering at the Universidad Pontificia Bolivariana in Colombia for the methodological support and project financing. Thanks to the faculties of Health Sciences and Engineering at the Universidad Católica de Oriente in Colombia for accompanying the project and providing the laboratories where the tests were performed and the measurement equipment used.
Finally, special thanks to the volunteers who served as test subjects in this investigation and who endured long working hours without receiving any remuneration.

At the time of their initial briefing, all study participants were informed of the likelihood that the data would be part of a publication. All subjects gave informed consent and were briefed both verbally and in written form before the measurements were taken, in accordance with the regulations of the local ethics committee (Committee on Health Research Ethics, Universidad Pontificia Bolivariana).

Mobile Computation and Ubiquitous Research Group GIMU, Universidad Católica de Oriente, Sector 3 Cra 46-40 B-50, Rionegro, Colombia
Reinel Castrillón
Centro de Bioingeniería, Facultad de Ingeniería Eléctrica y Electrónica, Universidad Pontificia Bolivariana, Circular 1 #70-01, Medellin, 050031, Colombia
Jairo J. Pérez & Henry Andrade-Caicedo
Correspondence to Reinel Castrillón.

Castrillón, R., Pérez, J.J. & Andrade-Caicedo, H. Electrical performance of PEDOT:PSS-based textile electrodes for wearable ECG monitoring: a comparative study. BioMed Eng OnLine 17, 38 (2018). https://doi.org/10.1186/s12938-018-0469-5

Keywords: Textile electrodes; PEDOT:PSS; Electric characterization; Contact impedance
General Relativity and Gravitation, May 2019, 51:63

Black hole thermodynamics: general relativity and beyond

Sudipta Sarkar

Editor's Choice (Invited Report: Introduction to Current Research)

Black holes have often provided profound insights into the nature of gravity and the structure of space–time. The study of the mathematical properties of black objects is a major research theme of contemporary theoretical physics. This review presents a comprehensive survey of the various versions of the first and second laws of black hole mechanics in general relativity and beyond. The emphasis is on understanding how these laws can constrain the physics beyond general relativity.

"The black holes of nature are the most perfect macroscopic objects there are in the universe: the only elements in their construction are our concepts of space and time." .... Subrahmanyan Chandrasekhar.

It is appropriate to start a review of black hole thermodynamics with the above quotation by S. Chandrasekhar. The quote brings out the fundamental characteristic of a black hole: universality. The properties of a black hole are (almost) independent of the details of the collapsing matter, and this universality is ultimately related to the fact that black holes could be the thermodynamic limit of underlying quantum gravitational degrees of freedom. Therefore, the classical and semi-classical properties of black holes are expected to provide important clues about the nature of quantum gravity. A significant obstacle in constructing a theory of quantum gravity is the absence of any experimental or observational result. The only "test" we can imagine is the theoretical and mathematical consistency of the approach. The understanding of the fundamental laws of black hole mechanics could therefore be a necessary (if not sufficient) constraint on any theory of quantum gravity.

The modern understanding of the properties of black holes starts with the resolution of the "Schwarzschild singularity" using Kruskal–Szekeres coordinates [1, 2]. These coordinates cover the entire spacetime manifold of the maximally extended vacuum spherically symmetric solution of Einstein's field equations and are well behaved everywhere outside the physical singularity at the origin, in particular at the position \(r = 2M\). The next important step was the discovery of the rotating asymptotically flat vacuum black hole solution by Roy Kerr [3]. The solution exhibits various interesting and generic properties of a stationary black hole in general relativity. The existence of the ergosphere and of superradiance shows how to extract energy and angular momentum from a black hole. The study of these phenomena led to a significant result: the area of the black hole can never be decreased by such processes. For example, using the Penrose process it is possible to extract energy from the black hole, and as a result the mass of the black hole decreases. At the same time, the process slows down the rotation, and the net effect is always an increase of the area.

Then comes the famous work by Hawking [4], which analyzes the general properties of a black hole independently of the symmetry of a particular solution. This work contains several important theorems: the topology theorem, the strong rigidity theorem and, most importantly, the area theorem. The area theorem is a remarkable result which asserts that the area of the event horizon cannot decrease as long as the matter obeys a specific energy condition.
This is a highly nontrivial statement about the dynamics of black holes in general relativity. Consider the collision of two black holes, which generates a burst of gravitational waves carrying energy from the black holes to infinity. The area theorem constrains the efficiency of this process and limits the amount of radiated energy, so that the area of the final black hole is always greater than the sum of the individual black hole areas before the collision [5]. In this sense, the area theorem is a statement of the limitation on converting black hole mass into energy, akin to the second law of thermodynamics. Immediately after this result, Hawking, Bardeen, and Carter wrote down the four laws of black hole mechanics [6], and, as is well known, these laws have an intriguing resemblance to the laws of thermodynamics. Interestingly, the paper treated this resemblance as only a formal analogy. The real step towards black hole thermodynamics was taken by Bekenstein [7, 8], who proposed that we should take the area theorem seriously: the area of the black hole is indeed related to the thermodynamic entropy of the event horizon. The basis of this claim was somewhat heuristic: Gedanken experiments estimating the loss of information due to the presence of the horizon. The arguments show that the entropy is proportional to the area of the event horizon, and therefore the area theorem is a consequence of the second law of thermodynamics. These results mark the real beginning of black hole thermodynamics, and the analogy became a robust correspondence with the discovery of Hawking radiation [9], which fixed the proportionality constant between area and entropy. The final expressions for the Hawking temperature and the Bekenstein entropy of the horizon of a 4-dimensional Schwarzschild black hole of mass M and area A are $$\begin{aligned} T_H =\frac{ \hbar \, c^3 }{8 \pi G k_B\, M}; \,\,\, S = \frac{c^3 k_B\, A}{4 G \hbar }. \end{aligned}$$ It is evident from the appearance of the Planck constant and Newton's constant that an understanding of these expressions requires some form of quantum gravity. It may even be possible to proclaim that these are the leading-order results of a theory of quantum gravity.

This review aims to address the issue of the general applicability of the laws of black hole thermodynamics. In particular, we will try to answer the following questions: How far can the laws of black hole mechanics be generalized beyond general relativity? Can we constrain possible extensions of general relativity using black hole (BH) mechanics? What exactly have we learned so far about quantum gravity from BH mechanics? The last question is indeed the toughest and probably remains unanswered in this review, except for some rudimentary remarks at the end. But, given the recent developments in black hole physics, it is possible to provide reasonable answers to the first two questions. The discussion in the review will be mostly classical, and we will assume the applicability of the classical energy conditions, in particular the null energy condition. The primary focus is a comprehensive discussion of the physical process law and the second law. We will not consider issues related to semiclassical gravity, in particular Hawking radiation and the trans-Planckian problem. Another vital omission is the information loss paradox. We will also restrict ourselves to the mechanics and thermodynamics of the event horizon only.
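To set the scales in the expressions above (a standard order-of-magnitude estimate, added here for orientation and not part of the original text), consider a solar-mass black hole: $$\begin{aligned} M \simeq 2\times 10^{30}\,\mathrm{kg}\;\Rightarrow \; T_H = \frac{\hbar c^3}{8\pi G k_B M} \approx 6\times 10^{-8}\,\mathrm{K}, \qquad \frac{S}{k_B} = \frac{c^3 A}{4 G \hbar } \approx 10^{77}, \end{aligned}$$ with horizon area \(A = 16\pi G^2 M^2/c^4 \approx 10^{8}\,\mathrm{m}^2\). The minute temperature explains why Hawking radiation from astrophysical black holes is unobservable, while the enormous entropy hints at a vast number of underlying quantum gravitational microstates.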
2 The various versions of the first law

The first law of black hole mechanics has several avatars, and we need to distinguish the different formulations of the first law. In ordinary thermodynamics, the first law is the statement of the conservation of energy. The total energy cannot be destroyed or created, but can always be converted into another form of energy. The statement is mathematically described by the difference equation \(\Delta U = Q - W\): the change of the internal energy U of the system is equal to the difference between the heat Q supplied to the system and the work W done by the system. The conservation of energy is built into the dynamics of general relativity. So, what we mean by the first law is really the Clausius theorem, which involves the notion of entropy. Consider a system undergoing a quasi-static change which is subjected to an infinitesimal amount of heat \(\delta Q\) from the surroundings. The heat change is an inexact differential, and therefore the total heat Q is not a state function. It is then assured that there exists a state function called the "entropy" S such that the temperature of the system acts as an integrating factor relating the change in entropy to the heat supplied as \(dS = \delta Q / T\). The Clausius theorem ensures the existence of the state function entropy associated with a thermodynamic equilibrium state of the system. Note that all changes are considered to be quasi-stationary, always infinitesimally close to an equilibrium state.

In the case of a black hole, we need to be careful before applying these concepts. To begin, the obvious choice of an equilibrium state is a stationary black hole. So, let us first define the notion of a stationary black hole in general relativity. To define the event horizon of a black hole, we require information about the asymptotic structure. Suppose we consider an asymptotically flat space–time, such that the asymptotic structure is the same as that of flat space–time. Then, the event horizon is defined as the complement of the past of future null infinity. This is a global definition, and to find the location of the event horizon we require knowledge of the entire space–time. This is not a very convenient concept. For example, if one is looking for the signature of the formation of an event horizon in the computer codes of numerical relativity, she has to wait for an infinite time! As a result, alternative notions like the apparent horizon and quasi-local horizons may be much better suited to such an analysis. Nevertheless, the event horizon can be very useful because it is a null surface and the causal boundary between the two regions of space–time called the inside and the outside of the black hole. As a result, at least intuitively, it makes sense to assign an entropy to the null event horizon. The definition of the event horizon does not need any symmetries of the underlying space–time.

Now, consider the particular case when the space–time is stationary and contains a timelike Killing vector. Such a timelike Killing vector provides a related concept called the Killing horizon. A Killing horizon is a surface where the timelike Killing field becomes null. An example of such a surface would be the Rindler horizon in flat space–time. It is easy to check that the boost Killing field indeed becomes null at the location of the Rindler horizon. This example shows that the Killing horizon may be entirely unrelated to the event horizon.
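As a quick check of the last two statements (a textbook computation, included here for convenience), in Minkowski space–time the boost Killing field and its norm are $$\begin{aligned} \xi ^a = x\, (\partial _t)^a + t\, (\partial _x)^a , \qquad \xi _a \xi ^a = -x^2 + t^2 , \end{aligned}$$ so \(\xi ^a\) is timelike in the Rindler wedge \(x > |t|\) and becomes null precisely on the surfaces \(x = \pm t\). These surfaces constitute the Rindler horizon, a perfectly good Killing horizon in a space–time that has no event horizon at all.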
The Rindler accelerated horizon is a Killing horizon but not an event horizon. Next, consider an event horizon in a stationary space–time. Then, it is the strong rigidity theorem [4] which asserts that the event horizon in a stationary space–time is also a Killing horizon. The strong rigidity theorem is a powerful result, and the proof requires the Einstein field equations and some technical assumptions, like the analyticity of the space–time. Generalizing the proof beyond \(3+1\) dimensions needs more sophisticated mathematical machinery [11, 12]. The strong rigidity theorem is only proven for general relativity; therefore, for black holes in various modified gravity theories, we have to consider it as an assumption.

The derivation of the equilibrium state version of the first law starts with a stationary event horizon which is also a Killing horizon. For simplicity, the D dimensional spacetime is assumed to be asymptotically flat. We will also assume that the physical space–time can be extended to include a bifurcation surface in the past, where the timelike Killing field vanishes. The existence of a bifurcation surface ensures that the surface gravity is constant along all directions on the horizon [13]. We will consider the bifurcation surface to be regular, i.e., all the fields have a smooth limit from the outside to the bifurcation surface. This is a nontrivial assumption, and there are theories, e.g., Einstein–Aether theory [14], in which such an assumption does not hold. Given all this, we now write down the expression of the ADM mass, or in this case the Komar mass, as $$\begin{aligned} M = -\frac{1}{8 \pi } \int _{S} \nabla ^a \xi ^b \, dS_{ab}. \end{aligned}$$ The integration is at asymptotic spatial infinity, and the Killing field is normalized as \(\xi _a \xi ^a = -1\) asymptotically. For the black hole spacetime, let us consider a spacelike hypersurface \(\Sigma \) which extends from infinity to the horizon. The surface has two boundaries, one at infinity and the other at the horizon. Using the Stokes theorem and Einstein's field equations \(G_{ab} = 8 \pi T_{ab}\), we can then express the Komar mass as $$\begin{aligned} M = -2 \int _{\Sigma } \left( T^{a}_{b} \xi ^b - \frac{1}{D - 2} T \xi ^a \right) d\Sigma _a + \frac{1}{8 \pi } \int _{H} \nabla ^a \xi ^b \, dS_{ab}, \end{aligned}$$ where T denotes the trace of the energy–momentum tensor. Let us further assume that we are only considering a vacuum solution, and therefore \(T_{ab} = 0\). Also, there is no angular momentum, and the space–time is static. Then the first integral vanishes. The remaining integral is at the inner boundary of the surface \(\Sigma \), where it meets the horizon. For the static spacetime, we can evaluate this integral and write the final expression as [6] $$\begin{aligned} M = \left( \frac{D - 2}{D - 3}\right) T_H S, \end{aligned}$$ where \(T_H\) is the Hawking temperature and S is the Bekenstein entropy of the black hole. The expressions for \(T_H\) and S contain the Planck constant \(\hbar \), but the product is independent of \(\hbar \). This equation is a particular case of what is known as the Smarr formula [15]. Although the derivation of this equation is straightforward, the interpretation is a bit subtle. The equation relates an asymptotic quantity, the ADM/Komar mass, to quantities defined on the horizon.
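As a concrete check of the Smarr formula (a standard example, in units \(G = c = \hbar = k_B = 1\)), take the four-dimensional Schwarzschild solution: $$\begin{aligned} T_H = \frac{1}{8 \pi M}, \qquad S = \frac{A}{4} = 4 \pi M^2 \quad \Longrightarrow \quad \left( \frac{D-2}{D-3}\right) T_H S \, \bigg |_{D=4} = 2 \times \frac{4 \pi M^2}{8 \pi M} = M , \end{aligned}$$ in agreement with the formula above.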
If we accept the use of thermodynamic concepts, the Smarr formula may be regarded as the equation of state at thermodynamic equilibrium relating the energy M, the temperature \(T_H\) and the entropy S. Also, the derivation does not work for \(D = 3\), indicating the absence of asymptotically flat vacuum black hole solutions in lower dimensions. Note that there is no physical process by which we can change the ADM or Komar mass; this formula is only valid for a strictly static and vacuum space–time. Therefore, instead of a physical change, let us now consider a virtual change of the quantities: two Schwarzschild black hole solutions in D dimensions with masses M and \( M + \Delta M\) in the space of solutions of general relativity. The variation \(\Delta M\) thus represents a virtual change, again only in the space of static, vacuum and asymptotically flat black hole solutions of general relativity. Then, the variation of the Smarr formula gives $$\begin{aligned} \Delta M = \left( \frac{D - 2}{D - 3}\right) \left( T_H \Delta S + S \Delta T_H \right) . \end{aligned}$$ Let us evaluate the r.h.s. of the above equation for a Schwarzschild black hole in D dimensions. Set \(\left( G = \hbar = k_B = c = 1\right) \); the metric is then $$\begin{aligned} ds^2 = -\left( 1- \frac{C}{r^{D-3}}\right) dt^2 + \frac{dr^2}{\left( 1- \frac{C}{r^{D-3}}\right) }+r^2d\Omega ^2. \end{aligned}$$ The constant C is a function of the ADM mass M of the space–time, and if \(D=4\) we have \(C=2 M\). The horizon is located at \(r_h = C^{\frac{1}{D-3}}\) and the surface gravity is \(\kappa = ((D-3)/2)C^{-\frac{1}{D-3}}\). The expressions for the Hawking temperature and the Bekenstein entropy are then $$\begin{aligned} T_H = \frac{\kappa }{2 \pi } = \frac{D-3}{4 \pi } C^{-\frac{1}{D-3}}; \,\,\, S = \frac{A_{D-2} C^{\frac{D-2}{D-3}}}{4 }, \end{aligned}$$ where \(A_{D-2}\) denotes the area of a unit \((D-2)\)-sphere. Using these expressions, it is easy to verify that $$\begin{aligned} \Delta M = \left( \frac{D - 2}{D - 3}\right) \left( T_H \Delta S + S \Delta T_H \right) = T_H \Delta S. \end{aligned}$$ This is the simplest derivation of the equilibrium state version of the first law of black hole mechanics. This derivation can be generalized in several ways. If we include matter, e.g., an electrovacuum solution, there will be additional work terms. But the most interesting generalization is to theories with higher curvature terms in the action. The area law fails generically for higher curvature gravity [18, 19, 20, 21, 22], and the entropy is proportional to a different local geometric quantity evaluated on the horizon. In fact, the black hole entropy in any diffeomorphism invariant theory of gravity turns out to be the Noether charge of the Killing isometry which generates the horizon [21, 22]. Before discussing the derivation of this "Wald entropy", we will first try to understand intuitively why and how the area law fails beyond general relativity, using a generalized version of the original argument by Bekenstein [7, 8].

There are several motivations for considering a higher curvature theory. As a typical example, consider the perturbative quantization of gravity, which leads to a nonrenormalizable quantum theory and is confronted by uncontrollable infinities.
If we treat such a nonrenormalizable theory as a low-energy effective field theory, adding new counter-terms and couplings at each new loop order, then the effective Lagrangian of gravity can be expressed as $$\begin{aligned} \mathcal{L} = \frac{1}{16 \pi G} \left( R + \alpha \, \mathcal{O}(R^2) + \beta \, \mathcal{O}(R^3) + \cdots \right) , \end{aligned}$$ where \(\alpha , \beta , \ldots \) are the new parameters of the theory, with appropriate dimensions of length. At the level of the effective theory, all terms consistent with diffeomorphism invariance can appear, but from a phenomenological point of view only the subset of terms which leads to a well-behaved classical theory is desirable. In this case, the motivation for having these higher curvature terms comes from the idea that the Einstein–Hilbert action is only the first term in the expansion of the low energy effective action, and higher order terms arise from quantum corrections to the Einstein–Hilbert action functional [23], which will, of course, depend on the nature of the microscopic theory. In particular, such higher curvature terms also arise in the effective low energy actions of some string theories [24, 25]. The detailed structure of these terms will depend on the specifics of the underlying quantum gravity theory. If we turn on these higher curvature corrections, the field equation will be modified, and the area theorem may not hold anymore. But, for specific higher curvature terms, we can still obtain exact black hole solutions as in the case of GR.

Now, consider the simplest case of spherical symmetry and assume that a set of identical particles with the same mass m is collapsing in D dimensions to form a black hole of mass M. If each of these particles carries one bit of information (in whatever form, e.g., information about their internal states), then the total loss of information due to the formation of the black hole will be \( \sim M/m\). Classically, this can be arbitrarily large, but quantum mechanically there is a lower bound on the mass of each constituent particle, because we want the Compton wavelength of these particles to be less than the radius of the hole \(r_h\). Then, the maximum loss of information will be \(\sim M r_h\), and this is a measure of the entropy of the hole. Note that we have not used any information about the field equation yet, so this is a completely off-shell result. The field equation provides the relationship between the mass M and the horizon radius. Let us now treat the specific case of general relativity. If we solve the vacuum Einstein equations for spherical symmetry, we obtain the usual Schwarzschild solution with \(M \sim r_{h}^{D-3}\), and this leads to a black hole entropy proportional to \(r_{h}^{D-2}\), the area of the horizon. Next comes modified gravity: with higher curvature terms we have new dimensionful constants at our disposal, and therefore there could be a more complicated relationship between the mass and the horizon radius. For example, if we restrict ourselves to curvature-squared correction terms with a coupling constant \(\alpha \), we could have a relationship like \( M \sim r_{h}^{D-3} + \alpha \, r_{h}^{D-5}\), and the second term results in a sub-leading correction to the black hole entropy. This simple illustration shows how the presence of new dimensionful constants in modified gravity theories leads to a possible modification of the black hole entropy.
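Spelled out (a small elaboration of the estimate above, not part of the original text), the heuristic bound \(S \sim M r_h\) combined with \( M \sim r_{h}^{D-3} + \alpha \, r_{h}^{D-5}\) gives $$\begin{aligned} S \;\sim \; M\, r_h \;\sim \; r_{h}^{D-2}\left( 1 + \frac{\alpha }{r_{h}^{2}} \right) , \end{aligned}$$ i.e., the leading area-law term plus a correction suppressed by the ratio of the higher curvature coupling to the square of the horizon radius, which is qualitatively the structure of the curvature corrections to the Wald entropy discussed next.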
The simplest way to derive the first law for any higher curvature theory would be to start with a suitable modification of the definition of the Komar mass in Eq. (2). For example, if we are working with the m-th Lovelock class of action functionals with Lagrangian \(\mathcal{L}^{(m)}\), the appropriate definition of the Komar mass will be [16] $$\begin{aligned} M = -\frac{1}{8 \pi } \int _{S} P_{abcd} \nabla ^c \xi ^d \, dS_{ab}, \end{aligned}$$ where the tensor \(P_{abcd}\) has the symmetries of the Riemann curvature tensor and is defined as $$\begin{aligned} P^{abcd} = \frac{\partial \mathcal{L}^{(m)}}{\partial R_{abcd}}. \end{aligned}$$ For Lovelock gravity, this tensor also has the property \(\nabla _i P^{abcd} = 0; \,\, i = a, b, c, d\). Using this expression and the properties of the Killing vector, it is possible to derive a Smarr formula [17] exactly as in the case of GR, but with the entropy given by $$\begin{aligned} S_w = - 2\pi \int _{\mathcal{B}} P^{abcd} \epsilon _{ab} \epsilon _{cd} \sqrt{h} \,d^{D-2} x, \end{aligned}$$ where \(\epsilon _{ab} \) is the bi-normal to the bifurcation surface \(\mathcal{B}\). As in the case of general relativity, the entropy obeys a Clausius theorem \( T \Delta S = \Delta M\) for infinitesimal variations in the space of static vacuum solutions. This simple derivation can be made more rigorous by using the Noether charge formalism of Wald and collaborators [18, 19, 20, 21, 22]. The crucial input to the derivation is diffeomorphism invariance in the presence of an inner boundary. The bulk part of the Hamiltonian vanishes on-shell, and the two boundary terms (one at the horizon and the other at the outer boundary) are related to each other. Then, for variations in the space of stationary solutions, we get the first law as the Clausius theorem. The construction of the Wald entropy formula crucially depends on the existence of a bifurcation surface. However, as pointed out in [26], the Wald entropy remains unaffected even when it is evaluated on an arbitrary cross-section of a stationary event horizon, provided the bifurcation surface is regular. The Noether charge construction also has several ambiguities, but these ambiguities do not affect the Wald entropy in the case of stationary black holes [22, 26]. However, if the horizon is involved in a dynamical process, i.e., for nonstationary black holes, the Wald entropy formula no longer holds and turns out to be ambiguous up to the addition of terms proportional to the expansion and shear of the dynamical event horizon.

Fig. 1 A pictorial depiction of the geometry considered in the physical process version of the first law. The green line depicts the evolution of the unperturbed stationary event horizon, while the black curve denotes the evolution of the perturbed dynamical event horizon. The change in area is calculated between the two slices \(\lambda =0\) (the bifurcation surface) and \(\lambda =\lambda _f\) (a stationary final slice) along the black curve (color figure online)

Having discussed the equilibrium state version of the first law, let us now focus on another version of the first law for black holes: the physical process law. This version of the first law involves the direct computation of the horizon area change when a flux of matter perturbs the horizon [27, 28, 29] (henceforth referred to as PPFL).
Unlike the equilibrium state version, the PPFL is local and does not require information about the asymptotic structure of the space–time; it is therefore expected to hold for a wide class of horizons (see Fig. 1). Consequently, after some initial debate regarding the applicability of the PPFL in the context of Rindler space–time [29], it was demonstrated, following [30, 31], that the physical process version of the first law indeed holds for the Rindler horizon in flat space–time or, for that matter, for any bifurcate Killing horizon. Consider a situation in which a black hole is perturbed by a matter influx with stress-energy tensor \(T_{ab}\) and finally settles down to a new stationary state in the future. Then the PPFL, which determines the change of the horizon area \(A_{\mathrm{H}}\), is given by $$\begin{aligned} \frac{\kappa }{2\pi }\delta \left( \frac{A_{\mathrm{H}}}{4}\right) =\int _{\mathcal{H}} T_{ab}\,\xi ^{a}\, d\Sigma ^b~. \end{aligned}$$ Here \(d\Sigma ^b = k^{b} \,dA\,d\lambda \) is the surface area element and \(k^a = \left( \partial / \partial \lambda \right) ^a\) stands for the null generator of the horizon. The integration is over the dynamical event horizon, and the affine parameter \(\lambda \) varies from the bifurcation surface (set at \(\lambda = 0\)) to the future stationary cross section at \(\lambda = \lambda _f\). Also, the background event horizon is a Killing horizon, with the Killing field \(\xi ^a\) being null on the background horizon. On the background horizon surface, it is related to the affinely parametrized horizon generator (\(k^a\)) as \(\xi ^a = \lambda \kappa k^a\), where \(\kappa \) is the surface gravity of the background Killing horizon. It is important to note that the derivation of the above result crucially hinges on the fact that the terms quadratic in the expansion and shear of the null generator \(k^{a}\) can be neglected, since the process is assumed to be sufficiently close to stationarity. This approximation ensures that there will be no caustic formation in the range of integration. For the PPFL, the variation \(\delta A_{\mathrm{H}}\) represents the physical change in the area of the black hole due to the accretion of matter; here we are therefore considering a genuinely dynamical situation. The physical process first law thus relates the total change of entropy due to the matter flux from the bifurcation surface to a final state. If we assume that the black hole horizon is stable under perturbation, then the future state can always be taken to be stationary, with vanishing expansion and shear, and the initial cross-section can be set at the bifurcation surface (\(\lambda =0\)). The choice of these initial and final states is necessary for this derivation of the physical process first law, to make some boundary terms vanish. The derivation can be generalized to obtain the expression for the entropy change between two arbitrary nonequilibrium cross sections of the dynamical event horizon. The additional boundary terms that then appear in Eq. (13) are related to the energy of the horizon membrane arising in the context of the black hole membrane paradigm [32]. To elaborate on the derivation of the PPFL, let us start by describing the horizon geometry and setting up the notation and conventions. We will follow the derivation as presented in [32].
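Before setting up the general formalism, a simple consistency check of the PPFL may be useful (a standard example, in units \(G = c = 1\), added here for illustration). Let a slow, spherically symmetric flux carry a total Killing energy \(\delta M\) into a Schwarzschild black hole of mass M, so that the right-hand side of Eq. (13) equals \(\delta M\). With \(\kappa = 1/4M\) and \(A_{\mathrm{H}} = 16 \pi M^2\), the left-hand side gives $$\begin{aligned} \frac{\kappa }{2\pi }\,\delta \left( \frac{A_{\mathrm{H}}}{4}\right) = \frac{1}{8\pi M}\,\delta \left( 4\pi M^{2}\right) = \delta M , \end{aligned}$$ so the horizon area grows exactly as required for the final stationary Schwarzschild black hole of mass \(M + \delta M\).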
3 General structure of PPFL The event horizon H of a stationary black hole in D spacetime dimensions is a null hypersurface generated by a null vector field \(k^a=(\partial / \partial \lambda )^a\), with \(\lambda \) being an affine parameter. The cross section (\({\mathcal {H}}\)) of the event horizon, which is a co-dimension two, spacelike surface, can be taken to be a \(\lambda = \text {constant}\) slice. Being a co-dimension two surface, \({\mathcal {H}}\) possesses two normal directions. One of them is the null normal \(k^{a}\) and the other corresponds to an auxiliary null vector \(l^{a}\) defined on \({\mathcal {H}}\) such that \(k_a l^a =-1\). Then, the induced metric on the horizon cross section takes the form \(h_{ab} = g_{ab}+k_a l_b + k_b l_a\). Taking \(x^A\) to be the coordinates on \({\mathcal {H}}\), \((\lambda ,x^A)\) spans the horizon. We define the expansion and shear of the horizon to be the trace and traceless symmetric parts of the extrinsic curvature, denoted as \((\theta _k,\sigma ^k_{ab})\) and \((\theta _l,\sigma ^l_{ab})\) with respect to \(k^a\) and \(l^a\) respectively. Taking h to be the determinant of the induced metric \(h_{ab}\), the expansion \(\theta _k\) of the horizon can be written as, $$\begin{aligned} \theta _k = \frac{1}{\sqrt{h}} \frac{d}{d\lambda }\sqrt{h}. \end{aligned}$$ Then, the evolution of \(\theta _k\) along the horizon with respect to the affine parameter \(\lambda \) is governed by the Raychaudhuri equation, $$\begin{aligned} \frac{d\theta _k}{d\lambda } = -\frac{1}{D-2}\theta _k^2 -\sigma ^k_{ab}\sigma ^{ab}_k - R_{ab}k^a k^b. \end{aligned}$$ As mentioned before, an important notion that will play a significant role throughout our discussion is the bifurcation surface. A bifurcation surface is a \((D-2)\) dimensional spacelike surface \({\mathcal {B}}\), on which the Killing field \(\xi ^a\) identically vanishes. Also, \({\mathcal {B}}\) is the surface on which the past and future horizons intersect. For our purpose, it is convenient to choose \({\mathcal {B}}\) to be at \(\lambda =0\). This choice can always be made due to the freedom to choose the parametrization of the horizon. The bifurcation surface is not a part of a black hole space–time formed by the gravitational collapse of an object. However, if the geodesics that generate the horizon are complete to the past, one can always have a bifurcation surface at some earlier \(\lambda \). This can be realized by the maximal extension of the black hole space–time. For instance, no notion of a bifurcation surface exists in the Schwarzschild space–time. Nevertheless, in its maximal extension, i.e., in the Kruskal space–time, the 2-sphere at \(U=0, V=0\) represents a bifurcation surface, as indicated in Eq. (2).

[Fig. 2 caption] The point \({\mathcal {B}}\;(U=0,V=0)\), a \((D-2)\)-dimensional cross-section of the horizon, represents the bifurcation surface, where \(\theta _k = \theta _l=0\).

A simple calculation leads to the following expressions for the expansion coefficients along k and l, $$\begin{aligned} \theta _k \propto U;\qquad \theta _l \propto V. \end{aligned}$$ Hence, at the future horizon \(U = 0\), the expansion \(\theta _k = 0\), and at the past horizon, where \(V = 0\), we have \(\theta _l = 0\). At the bifurcation surface \((U=0,V=0)\) both \(\theta _k\) and \(\theta _l\) vanish. Also, the shears can be shown to vanish on \({\mathcal {B}}\).
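A minimal sketch of this calculation (not in the original), using one common convention for the Kruskal coordinates, \(UV = \left( 1 - r/2M\right) e^{r/2M}\): differentiating this relation gives
$$\begin{aligned} \partial _V r = -\frac{4M^2}{r}\,e^{-r/2M}\, U \propto U, \qquad \partial _U r = -\frac{4M^2}{r}\,e^{-r/2M}\, V \propto V, \end{aligned}$$
and since the cross sections are round spheres of area radius r, the expansion along each null direction is proportional to \((2/r)\,\partial r\) (up to the normalization of the null vectors, which does not affect where the expansions vanish). Hence \(\theta _k \propto U\) vanishes on the future horizon \(U=0\), \(\theta _l \propto V\) vanishes on the past horizon \(V=0\), and both vanish at the bifurcation sphere \(U=V=0\).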
Under a perturbation of this stationary background, \(\theta _k\) and \(\sigma _k\) are of first order in the perturbation, i.e., \({\mathcal {O}}(\epsilon )\), while \(\theta _l\) and \(\sigma _l\) are of zeroth order, with \(\epsilon \) referring to the strength of the perturbation everywhere on the future event horizon. However, since \(\theta _l\) and \(\sigma _l\) vanish at the bifurcation surface of the stationary black hole, at \({\mathcal {B}}\) itself they must both be at least \({\mathcal {O}}(\epsilon )\). This result is a property of the bifurcation surface itself, independent of the physical theory one considers. Hence it also generalizes beyond general relativity and holds for higher curvature theories as well. In summary we have \(\theta _k,\, \sigma _k,\, R_{ab}k^{a}k^{b} \sim {\mathcal {O}}(\epsilon )\) everywhere on the horizon, and \(\theta _l,\sigma _l \sim {\mathcal {O}}(\epsilon )\) at \({\mathcal {B}}\). As a result, terms like \(\theta _k \theta _l \sim {\mathcal {O}}(\epsilon ^2)\) only at the bifurcation surface (Fig. 2). Having defined the geometry of the horizon, we are now set to illustrate the physical process version of the first law for an arbitrary diffeomorphism invariant theory of gravity in its most general form. In order to discuss the PPFL, we need to define some suitable notion of entropy for the horizon. Since we consider theories beyond general relativity, the Bekenstein area law no longer holds. Nevertheless, whatever the expression for the entropy might be, it must be some local functional integrated over the horizon. Hence we start by considering the following expression for the horizon entropy: $$\begin{aligned} S = \frac{1}{4} \int _{{\mathcal {H}}} (1+\rho ) \sqrt{h}\;d^{D-2}x , \end{aligned}$$ where \(\rho \) is some entropy density constructed locally on the horizon, which may contain the higher curvature contributions. The area–entropy relation in the general relativity limit is recovered by setting \(\rho =0\). If the black hole is stationary, the entropy should coincide with the Wald formula, but in the non-stationary case it can differ. The field equation in such a general theory can always be written as, $$\begin{aligned} G_{ab}+H_{ab} = 8\pi T_{ab}, \end{aligned}$$ where the term \(H_{ab} \) represents the deviation from general relativity. Let us now compute the variation of the entropy along the horizon generator \(k^{a}\), in response to some influx of matter, $$\begin{aligned} \delta S(\rho )&= \frac{1}{4}\int _{{\mathcal {H}}} d^{D-2}x \int \frac{d}{d\lambda }[(1+\rho )\sqrt{h}]\;d\lambda \nonumber \\&= \frac{1}{4}\int _{{\mathcal {H}}} \sqrt{h} \, d^{D-2}x \int d\lambda \, \Theta _{k}, \end{aligned}$$ where \(\Theta _{k}= \theta _k + \rho \theta _k + \frac{d\rho }{d\lambda }\). In the case of general relativity, \(\Theta _k\) is simply the expansion of the horizon generators. Otherwise, it can be interpreted as the change in entropy per unit area; we will call it the generalized expansion.
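For completeness, here is the one-line identity behind the last step (a sketch, not in the original), using \(\theta _k = (1/\sqrt{h})\,d\sqrt{h}/d\lambda \):
$$\begin{aligned} \frac{d}{d\lambda }\left[ (1+\rho )\sqrt{h}\right] = \sqrt{h}\,\frac{d\rho }{d\lambda } + (1+\rho )\sqrt{h}\,\theta _k = \sqrt{h}\left( \theta _k + \rho \,\theta _k + \frac{d\rho }{d\lambda }\right) = \sqrt{h}\,\Theta _k , \end{aligned}$$
which is exactly the definition of the generalized expansion quoted above.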
The entropy change can be further simplified by integrating by parts, and finally the change in entropy between two cross sections at \(\lambda _1 \) and \(\lambda _2\) takes the form, $$\begin{aligned} \delta S(\rho )&=\frac{1}{4}\left( \int dA\; \lambda \Theta _{k}\right) _{\lambda _1}^{\lambda _2} + 2\pi \int dA \;d\lambda \; \lambda \, T_{ab}k^a k^b \nonumber \\&\quad +\frac{1}{4}\int dA\; d\lambda \; \lambda \left[ -\left( \frac{D-3}{D-2}\right) (1+\rho )\theta _k^2 +(1+\rho )\sigma ^2 \right] \nonumber \\&\quad - \frac{1}{4} \int dA\;d\lambda \; \lambda \left( \frac{d^2\rho }{d\lambda ^2} + 2\theta _k \frac{d\rho }{d\lambda } -\rho R_{ab}k^a k^b + H_{ab}k^a k^b\right) . \end{aligned}$$ To derive this equation, we have used the Raychaudhuri equation as well as the field equation of the form \(G_{ab} + H_{ab} = 8 \pi T_{ab}\). We would like to emphasize that Eq. (20) represents the most general form of the variation of the entropy along the null generator: no assumption regarding the strength of the perturbation or the range of integration has been made in the derivation. We can now discuss the change in entropy at various orders of perturbation. Since terms like \(\theta _k^2,\,\sigma _k^2\) and \(\theta _k (d\rho /d\lambda )\) are of \({\mathcal {O}}(\epsilon ^2)\), they do not contribute to the first order variation. Hence, truncating the general result up to first order in perturbation, we find that the first order change in entropy takes the form, $$\begin{aligned} \delta S^{(1)}(\rho )&= \frac{1}{4}\left( \int dA\; \lambda \Theta _{k}\right) _{\lambda _1}^{\lambda _2} + 2\pi \int dA \;d\lambda \; \lambda T_{ab}k^a k^b \nonumber \\&\quad - \frac{1}{4} \int dA\;d\lambda \; \lambda \left( \frac{d^2\rho }{d\lambda ^2} -\rho R_{ab}k^a k^b + H_{ab}k^a k^b\right) . \end{aligned}$$ Let us evaluate the last integrand of the above equation for a simple model, say general relativity with an f(R) correction, for which the full field equation takes the form, $$\begin{aligned} G_{ab} + \alpha \left( f'(R)R_{ab}-\frac{1}{2}g_{ab}f(R)+g_{ab}\square f'(R) -\nabla _a \nabla _b f'(R) \right) = 8\pi T_{ab}. \end{aligned}$$ We now need an expression for the horizon entropy in f(R) gravity. Using the Wald entropy formula for stationary black holes, Eq. (12), one finds that \(\alpha f'(R)\) represents the modification to the entropy density over and above the Einstein–Hilbert expression, i.e., \(\rho = \alpha f'(R)\), where the prime denotes the first derivative with respect to the Ricci scalar R. Also, one can always rewrite the field equation for f(R) theory in the form of Eq. (18), with $$\begin{aligned} H_{ab}k^a k^b= \alpha \left( f'(R)R_{ab}k^a k^b - k^a k^b \nabla _a\nabla _b f'(R)\right) . \end{aligned}$$ Substitution of the above expression for \(H_{ab}k^a k^b\) results in the following identity for f(R) theories with \(\rho = \alpha f'(R)\), $$\begin{aligned} \int dA\;d\lambda \; \lambda \left( \frac{d^2\rho }{d\lambda ^2} -\rho R_{ab}k^a k^b + H_{ab}k^a k^b\right) =0. \end{aligned}$$ Eq. (24) is a property of the entropy density. If this equation is valid, our expression for the change in entropy up to first order, as given in Eq. (21), takes a much simpler form. Motivated by this result, we argue that Eq. (24) could be a general property of the entropy density, holding in an arbitrary diffeomorphism invariant theory of gravity, at least up to first order in the perturbation.
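The f(R) identity can be verified in two lines (a sketch, not in the original). Along the affinely parametrized generators, \(d/d\lambda = k^a\nabla _a\) and \(k^b\nabla _b k^a = 0\), so with \(\rho = \alpha f'(R)\),
$$\begin{aligned} \frac{d^2\rho }{d\lambda ^2} = \alpha \, k^a k^b \nabla _a \nabla _b f'(R), \qquad -\rho R_{ab}k^ak^b + H_{ab}k^ak^b = -\alpha \, k^a k^b \nabla _a \nabla _b f'(R), \end{aligned}$$
so the integrand in Eq. (24) in fact vanishes pointwise on the horizon, not just after integration.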
In fact, for f(R) gravity (including general relativity), it holds as an exact identity if we choose the entropy density as \(\rho = \alpha f'(R)\). Hence, in general, we demand that Eq. (24) holds in the form, $$\begin{aligned} \int dA\;d\lambda \; \lambda \left( \frac{d^2\rho }{d\lambda ^2} -\rho R_{ab}k^a k^b + H_{ab}k^a k^b\right) = {\mathcal {O}}(\epsilon ^2). \end{aligned}$$ If this is valid for a theory of gravity, the first order variation of the entropy simplifies to, $$\begin{aligned} \delta S^{(1)}(\rho ) =\frac{1}{4}\left( \int dA\; \lambda ~\Theta _{k}\right) _{\lambda _1}^{\lambda _2} + 2\pi \int dA \;d\lambda \; \lambda \, T_{ab}k^a k^b \end{aligned}$$ This is the linearized change of the entropy between two arbitrary cross sections \(\lambda = \lambda _1\) and \(\lambda = \lambda _2\), provided the condition in Eq. (25) holds. The first term on the r.h.s. is a boundary term and can be interpreted as a change of the energy \(\delta E\) associated with the horizon membrane. Then we have a version of the physical process law of the form \(T \delta S = \delta E + \delta Q\) [32]. Now, let us spell out two more assumptions which we will use:

1. The horizon possesses a regular bifurcation surface in the asymptotic past, which is set at \(\lambda = 0\) in our coordinate system.
2. The horizon is stable under perturbation and eventually settles down to a new stationary black hole, so all Lie derivatives with respect to the horizon generators vanish in the asymptotic future.

The second assumption is motivated by the cosmic censorship conjecture, which asserts that the black hole horizon must be stable under perturbation, so that expansion and shear vanish in the asymptotic future at \(\lambda = \lambda _f\). This is in principle similar to the assertion that a thermodynamic system with dissipation ultimately reaches an equilibrium state. This is a desirable property of the black hole horizon. Moreover, while deriving the physical process first law, we are already neglecting higher order terms, and this requires that small perturbations remain small throughout the region of interest. This is equivalent to not having any caustic formation on any portion of the dynamical horizon. Under these assumptions, the boundary term does not contribute when integrated from the bifurcation surface to a stationary slice: the lower limit vanishes because the bifurcation surface is set at \(\lambda =0\), and on the final stationary cross-section all temporal derivatives vanish (footnote 3). Ultimately, what we are left with is, $$\begin{aligned} \delta S&= 2 \pi \int _{\lambda = 0}^{\lambda _f}\lambda ~d\lambda \, dA \,T_{ab}k^{a}k^{b}~. \end{aligned}$$ Subsequently, identifying the background Killing field as \(\xi ^a = \lambda \, \kappa \,k^a\), one can rewrite the above equation as, $$\begin{aligned} \frac{\kappa }{2\pi }\delta S=\int _{\mathcal{H}} T_{ab}\,\xi ^{a}\, d\Sigma ^b~. \end{aligned}$$ This completes the standard derivation of what is known as the integrated version of the physical process first law. If the matter field satisfies the null energy condition, then \(T_{ab}k^{a}k^{b}\ge 0\). As a consequence, the total change in entropy between the boundary slices is also positive semi-definite. The validity of Eq. (25) is an important requirement for the validity of the physical process law. As of now, there is no general proof of the condition Eq. (25). In the case of f(R) gravity, the condition Eq. (25) holds as an exact identity, leading to the physical process law for such a theory [33].
The same can be established for Einstein–Gauss–Bonnet and the Lovelock class of theories [34, 35, 36]. But there is still no general proof of this condition. We will discuss more on this in the later sections. In comparison with the equilibrium state version of the first law, the PPFL is local and independent of the asymptotic structure of the space–time. The relationship between these two versions is not straightforward. In the next section, we would like to understand how these two approaches are related to each other. 4 Equilibrium state version and physical process law The equilibrium state version compares two nearby stationary solutions in the phase space which differ infinitesimally in ADM mass, and relates the change of ADM mass \(\Delta M\) to the entropy variation \(\Delta S\) as, $$\begin{aligned} \frac{\kappa }{2 \pi } \Delta S = \Delta M. \end{aligned}$$ The variation \(\Delta \) is to be understood in the space of solutions. To understand the relationship with the PPFL, consider a time-dependent black hole solution: say, for simplicity, a spherically symmetric Vaidya black hole which is accreting radiation. The metric for such a space–time is [37], $$\begin{aligned} ds^2 = - \left( 1 - \frac{ 2 M(v) }{ r} \right) dv^2 + 2 dv \,dr + r^2 d\Omega ^2. \end{aligned}$$ The Vaidya space–time is an excellent setting in which to study the physical process first law. The area of the event horizon is increasing due to the flux of the infalling matter. The rate of change of the time-dependent mass M(v) represents the energy entering the horizon. But, although M(v) is changing with time, the ADM mass of the space–time, evaluated at spacelike infinity, is constant: \(M_{\text {ADM}} = M(v \rightarrow \infty )\). In fact, by definition, there is no physical process which can change the ADM mass of the space–time. Therefore, the relationship between the PPFL and the equilibrium state version is somewhat subtle. To understand the relationship, we consider the Vaidya space–time as a perturbation over a stationary black hole of ADM mass m. Therefore, we assume \(M(v) = m + \epsilon \, f(v)\). The parameter \(\epsilon \) signifies the smallness of the perturbation. Note that the background space–time with ADM mass m is used only as a reference; it does not have any physical meaning beyond this. In the absence of the perturbation, the final ADM mass would be the same as m. Therefore, we may consider the process as a transition from a black hole of ADM mass m to another with ADM mass \(M_{\text {ADM}}\), and this allows us to relate the PPFL to the equilibrium state version. In the case of an ordinary thermodynamic system, the entropy is a state function, and its change is independent of the path. Therefore, we can calculate the change of entropy due to some nonequilibrium irreversible process between two equilibrium states by using a completely different reversible path in phase space. In black hole mechanics, the equilibrium state version can be thought of as the change of entropy along a reversible path in the space of solutions, whereas the PPFL is a direct irreversible process. The equality of the entropy change for both these processes shows that the black hole entropy is indeed behaving like a true thermodynamic entropy [28]. Having understood the relationship between these two versions of the first law for black holes, we will now study the ambiguities of Wald's construction and how the PPFL is affected by such ambiguities.
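To make the comparison concrete, here is a small worked example (not in the original). For the linearized Vaidya perturbation, write \(f_\infty \) for the late-time value of \(f(v)\), with \(f(v)\rightarrow 0\) in the far past; the change in mass between the reference and final black holes is then \(\Delta M = \epsilon f_\infty \). Using the Schwarzschild relations \(A_{\mathrm{H}} = 16\pi M^2\) and \(\kappa = 1/4m\) at leading order,
$$\begin{aligned} \Delta S = \frac{\Delta A_{\mathrm{H}}}{4} \simeq 8\pi m\,\epsilon f_\infty , \qquad \frac{\kappa }{2\pi }\,\Delta S \simeq \frac{1}{8\pi m}\, 8\pi m\,\epsilon f_\infty = \epsilon f_\infty = \Delta M, \end{aligned}$$
so the PPFL computed along the dynamical horizon and the equilibrium state comparison between the initial and final Schwarzschild solutions give the same answer at this order.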
5 Physical process first law and ambiguities in black hole entropy The entropy of a stationary black hole with a regular bifurcation surface in an arbitrary diffeomorphism invariant theory of gravity is given by Wald's formula [21] as, $$\begin{aligned} S_{W} = -2\pi \int _{{\mathcal {B}}} \frac{\partial L}{\partial R_{abcd}}\epsilon _{ab}\epsilon _{cd} \,\sqrt{h} \,d^{D-2}x =\frac{1}{4} \int _{{\mathcal {B}}} (1+\rho _w) \sqrt{h} \,d^{D-2}x \end{aligned}$$ where \(\epsilon _{ab} = k_a l_b -k_b l_a\) is the bi-normal of the bifurcation surface and \(\rho _w\) represents the contribution from higher curvature terms. As discussed in [22, 26], the ambiguities in the Noether charge construction do not affect the Wald entropy in the case of a stationary black hole. However, if the horizon is involved in a dynamical process, i.e., for nonstationary black holes, the Wald entropy formula no longer holds and turns out to be ambiguous up to the addition of terms of the form, $$\begin{aligned} \Delta S_w = \int \Omega ~dA , \end{aligned}$$ where \(\Omega =(p \theta _k \theta _l + q \sigma _k\sigma _l)\) and \(\sigma _k \sigma _l = \sigma _{ab}^{k}\sigma ^{ab}_l\). Note that the terms in \(\Omega \) contain an equal number of k and l indices and hence combine to produce a boost invariant object, although they individually transform non-trivially under boosts. The coefficients p and q are entirely arbitrary and cannot be determined from the equilibrium state version of the first law. Comparing Eq. (17) and Eq. (31) and taking into account the ambiguities, we can identify \(\rho = \rho _w + \Omega \). This identification essentially means that the black hole entropy for a non-stationary horizon slice can always be expressed as the expression obtained from Wald's formula plus ambiguities. So, let us refer to \(\rho \) as the black hole entropy with higher curvature corrections and to \(\rho _w\) as the Wald entropy. Note that the black hole entropy \(\rho \) coincides with the Wald entropy only in the stationary limit. Now, we would like to ask a definite question: how does the physical process law get affected by the ambiguities in the Noether charge construction? We will show that, as in the case of the stationary version, the physical process law for linear perturbations is also independent of these ambiguities, provided we consider the entropy change from the past bifurcation surface to the final stationary cross-section. To see this, we write the difference between the change in black hole entropy and the change in Wald entropy up to first order in expansion and shear. A straightforward calculation using Eq. (21) for \(\rho \) and \(\rho _w\) shows, $$\begin{aligned} \Delta S^{(1)}(\rho ) -\Delta S^{(1)}(\rho _w) = \frac{1}{4}\int dA \lambda \left( \frac{d\Omega }{d\lambda } +\Omega \theta _k \; \right) \Bigg |_{\lambda _1}^{\lambda _2}- \frac{1}{4} \int dA\, d\lambda \left( \lambda \frac{d^2\Omega }{d\lambda ^2}\right) , \end{aligned}$$ where we have neglected the terms \(\Omega \, R_{ab} k^a k^b\) and \(\Omega \,\theta _k\) which are of \({\mathcal {O}}(\epsilon ^2)\) and do not contribute to the first order variation. Simplifying it further, one obtains, $$\begin{aligned} \Delta S^{(1)}(\rho ) -\Delta S^{(1)}(\rho _w) = \frac{1}{4}\int dA\; \Omega \Big |_{\lambda = 0}^{\lambda _f}. \end{aligned}$$ The above equation expresses the difference between the change in black hole entropy and the change in Wald entropy as a boundary term evaluated between the bifurcation surface at \(\lambda = 0\) and the final stationary cross section at \(\lambda = \lambda _f\). Then, as discussed in the previous section, terms like \(\theta _k \theta _l\) are of second order in perturbation at the bifurcation surface, so \(\Omega \) there is \({\mathcal {O}}(\epsilon ^2)\) and does not contribute to the linear order calculation. The contribution from the upper limit also vanishes, as the expansion \(\theta _k\) is zero on a future stationary cross-section. Hence, up to first order, the ambiguities do not affect the PPFL when integrated from a bifurcation surface to a stationary slice. This is analogous to the case of the equilibrium state version of the first law, as proven in [22, 26]. The integrated version of the physical process law, and therefore the net change of the entropy, is independent of the ambiguities in the Wald entropy construction. In summary, given a particular theory, if there is a choice of entropy density \(\rho \) which obeys the condition Eq. (25), then \(\rho + \Omega \) will also obey the same condition. So, Eq. (25) is independent of the ambiguities as long as we are integrating from a past bifurcation surface to a stationary future cross-section. If it holds for \(\rho _w\), it will hold for \(\rho \) also. This result, however, does not hold when second-order perturbations are considered. Beyond first order, the difference between the change in black hole entropy and the change in Wald entropy is given by a boundary term plus a bulk integral. As a result, any conclusion about the change of black hole entropy beyond linearized perturbations requires the resolution of these ambiguities. Similarly, if we demand that an instantaneous second law (the entropy increasing at every cross-section) holds beyond general relativity, we need to fix the ambiguities and find the appropriate black hole entropy [38, 39]. Then it is also possible to study the higher order perturbations and obtain the transport coefficients related to the horizon [40]. 6 Linearized version of the second law We have seen in the last section that the integrated version of the physical process law is insensitive to the ambiguities in the Wald entropy. Therefore, to fix the ambiguities, let us consider the linearized version of the second law, where we seek to evaluate the instantaneous change of the entropy due to the flux of matter. To start, we consider the expression for the change of the entropy, $$\begin{aligned} \Delta S(\rho ) = \frac{1}{4}\int _{{\mathcal {H}}} \sqrt{h} \, d^{D-2}x \int d\lambda \, \Theta _{k}. \end{aligned}$$ If we can prove that the generalized expansion is always positive on any cross-section of the horizon, we have an instantaneous increase theorem for the black hole entropy. Before proceeding with the calculation, let us ponder the implications of such a result. In ordinary thermodynamics, the entropy is generally defined for an equilibrium state. So, it makes sense to obtain the change of entropy between two equilibrium states of the thermodynamic system. The integrated version of the physical process law is the equivalent calculation for black holes. But the area theorem in GR shows that the area/entropy is a locally increasing function, and there is a local version of the second law at a non-stationary cross-section of the black hole horizon. This is indeed stronger than the global increase of the entropy.
Using the local second law, we can define an entropy current associated with the horizon which has positive divergence. The existence of such a current may imply a hydrodynamical picture of black hole mechanics, as envisaged in fluid–gravity duality [41] (footnote 4). To present the derivation, first consider general relativity, for which \(\Theta _k = \theta _k / 4\) (we absorb the overall factor of 1/4 from the entropy into the generalized expansion). Then, the Raychaudhuri equation and the Einstein equation imply, $$\begin{aligned} \frac{d \Theta _{k}}{d\lambda } = - 2 \pi \, T_{ab} k^a k^b + {\mathcal {O}}(\epsilon ^2), \end{aligned}$$ where the higher order terms involve the squares of the expansion and shear of the horizon generators. The matter flux itself is of \({\mathcal {O}}(\epsilon )\). So, if the matter obeys the null energy condition, i.e., \(T_{ab} k^a k^b \ge 0\), and the higher order terms are essentially small, then we have \( d \Theta _{k} / d\lambda \le 0\). So, the expansion is decreasing at every cross section. Next, we recall the assumption about the stability of the black hole, which asserts that \(\Theta _k \rightarrow 0\) in the asymptotic future. This boundary condition immediately gives \( \Theta _k \ge 0\) at every slice on the horizon, with equality only in the asymptotic stationary future. As a result, the area is increasing locally on the horizon. Note the importance of the boundary condition in deriving this result. The assumption that the expansion vanishes in the asymptotic future ensures the stability of the black hole under perturbation. There are several aspects to this assumption. It can be argued using Penrose's theorem that the generators of the event horizon have no future end point and, as a result, there is no caustic in the future. If the expansion is negative at any instant, it will further decrease and ultimately lead to a caustic, invalidating Penrose's result [10]. This is also related to the cosmic censorship hypothesis [42]. We want to generalize the same result to a theory of gravity with higher curvature correction terms. To do this, let us write the black hole entropy for a non-stationary cross section as \( \rho = \rho _w + p\, \theta _k \theta _l + q\, \sigma _k\sigma _l \). Then, the evolution equation of the generalized expansion at the linearized order of the perturbation becomes $$\begin{aligned} \frac{d \Theta _{k}}{d\lambda } = - 2 \pi \, T_{ab} k^a k^b + \frac{1}{4}\left( \frac{d^2\rho }{d\lambda ^2} -\rho R_{ab}k^a k^b + H_{ab}k^a k^b\right) + {\mathcal {O}}(\epsilon ^2). \end{aligned}$$ While discussing the integrated version, we showed that the integrated version of the physical process first law is independent of the ambiguities of the Wald entropy. This is because the integral of the second term on the r.h.s. of the above equation is of higher order when we integrate from the past bifurcation surface to the future stationary slice. So, if \(\rho \) obeys the integrated version of the first law, so does \(\rho _w\), as both of these obey Eq. (25). On the other hand, the formulation of the local increase law requires the integrand of Eq. (25) itself to be of higher order. Therefore, the condition for the validity of the linearized increase law is, $$\begin{aligned} \frac{d^2\rho }{d\lambda ^2} -\rho R_{ab}k^a k^b + H_{ab}k^a k^b = {\mathcal {O}}(\epsilon ^2). \end{aligned}$$ Therefore, we set the first order part of the l.h.s. of the above equation to zero and determine the ambiguity coefficients p and q.
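The payoff of imposing this condition can be summarized in one line (a sketch, not in the original): once Eq. (38) holds, the evolution equation above reduces to \(d\Theta _k/d\lambda = -2\pi T_{ab}k^ak^b + {\mathcal {O}}(\epsilon ^2)\), and integrating from an arbitrary cross-section \(\lambda \) to the stationary future slice, where \(\Theta _k(\lambda _f) = 0\), gives
$$\begin{aligned} \Theta _k(\lambda ) = 2\pi \int _{\lambda }^{\lambda _f} T_{ab}k^a k^b \, d\lambda ' + {\mathcal {O}}(\epsilon ^2) \ge 0, \end{aligned}$$
so the entropy is instantaneously non-decreasing at every cross-section, provided the matter satisfies the null energy condition.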
This is just one condition, but at the linearized order, expansion and shear are independent of each other, and as a result we have two independent conditions from the above equation. As an example, consider a theory of gravity described by the Lagrangian, $$\begin{aligned} L = \frac{1}{16 \pi } \left( R + \beta \, R_{ab} R^{ab}\right) . \end{aligned}$$ The Wald formula in Eq. (31) for this theory gives \(\rho _w = - 2 \beta R_{ab} k^a l^b\). Then the requirement of the validity of the condition Eq. (38) gives \( p = - 1/2\) and \( q = 0\) [38], and the black hole entropy becomes, $$\begin{aligned} S = \frac{1}{4}\int dA \left[ 1 - 2\beta \, \left( R_{ab}k^a l^b - \frac{1}{2} \theta _k \theta _l \right) \right] . \end{aligned}$$ The evolution equation of the generalized expansion for this entropy is, $$\begin{aligned} \frac{d \Theta _{k}}{d\lambda } = - 2 \pi \, T_{ab} k^a k^b + {\mathcal {O}}(\epsilon ^2). \end{aligned}$$ This equation is exactly analogous to the linearized Raychaudhuri equation in general relativity. This is a nontrivial result: the entropy functional is modified by the higher curvature corrections and the field equation is also different, but the evolution equation of the entropy for linearized perturbations retains the same form. Therefore, if we use the future stability condition, i.e., \(\Theta _k \rightarrow 0\) in the future, we have the instantaneous increase of the entropy at every cross section of the horizon, \(\Theta _k \left( \lambda \right) \ge 0\), provided the matter obeys the null energy condition. Similarly, consider a more general theory of gravity in D dimensions with the Lagrangian, $$\begin{aligned} L = \frac{1}{16 \pi } \left( R + \alpha \,R^2 + \beta \, R_{ab} R^{ab} + \gamma \, \mathcal{L_{GB}} \right) , \end{aligned}$$ where \(\mathcal{L_{GB}} = R^2 - 4 R_{ab} R^{ab} + R_{abcd} R^{abcd}\) is the so-called Gauss–Bonnet correction term. The black hole entropy for such a theory can be obtained by fixing the ambiguities using the linearized second law, and the result is, $$\begin{aligned} S = \frac{1}{4}\int dA \left[ 1 + \left( 2\alpha R- 2 \beta \, \left( R_{ab}k^a l^b - \frac{1}{2} \theta _k \theta _l \right) +2 \gamma \, ^{(D-2)}R \right) \right] , \end{aligned}$$ where \(^{(D-2)}R\) is the intrinsic Ricci scalar associated with the horizon cross section. This can be generalized to any theory, and it is always possible to fix the ambiguities from the linearized second law so that we have a local increase theorem at every cross-section of the nonstationary event horizon [38, 39, 47]. If we now consider a black hole in an asymptotically anti-de Sitter space–time, then, after fixing the ambiguities, the black hole entropy becomes identical in form to the holographic entanglement entropy of the boundary gauge theory [38, 39]. The holographic entanglement entropy [43] is a proposal which relates the entropy of the boundary gauge theory to the area of certain minimal surfaces in the bulk space–time (which obeys Einstein's equations), within the context of gauge–gravity duality. The original proposal has been generalized to higher curvature theories [44, 45, 46], and the entanglement entropy density of the boundary theory is given as \(\rho = \rho _w + a\, \theta _k \theta _l + b\, \sigma _k\sigma _l \). The part \(\rho _w\) is of the same form as the Wald entropy for black holes, and the coefficients a and b depend on the choice of gravity theory in the bulk; for general relativity \( a = b = 0\).
The expansions and shears here correspond to those of a codimension-two surface which is anchored to a region of the boundary. Note that, a priori, this entanglement entropy is not related to the entropy of the black hole in the bulk. Also, this entropy has no ambiguities, and the coefficients a and b can be calculated using AdS/CFT [46]. Our calculations show that if we consider a nonstationary black hole in the bulk and demand that the black hole entropy obeys the linearized second law, then \( p = a\) and \( q = b\) [39]. It is indeed remarkable that the entropy for black holes in AdS space–time which obeys the linearized second law turns out to be related to the holographic entanglement entropy. It seems that the validity of black hole thermodynamics is already encoded in the holographic principle; the holographic entanglement entropy satisfies the linearized second law while the Wald entropy does not. Let us summarize the main results: a theory of gravity which has black hole solutions will obey the integrated version of the physical process law if Eq. (25) holds. Given a theory and an expression for the black hole entropy, we can always verify the validity of this condition. Also, the condition Eq. (25) is independent of the ambiguities of the Noether charge construction, as long as we are integrating from an initial bifurcation surface to a future stationary cross-section. Therefore, in any theory, if the Wald entropy \(\rho _w\) satisfies this condition, so does the black hole entropy \(\rho \). On the other hand, the local increase law depends on the validity of Eq. (38), which is sensitive to the ambiguities. Hence, there is only a particular choice of the ambiguity coefficients p and q for which the local increase law for linearized fluctuations holds. Remarkably, such a choice makes the black hole entropy identical in form to the holographic entanglement entropy. 7 Beyond the linearized second law The next obvious question is to find the full evolution equation of the entropy. For general relativity, the full Raychaudhuri equation, Eq. (15), with the null energy condition still gives \(d\theta _k / d\lambda <0\), and this leads to the area theorem. Beyond general relativity, the calculation is straightforward and gives an evolution equation for the generalized expansion \(\Theta _k\). We will only present the final result; for more details about the derivation, refer to [40]. We consider a D dimensional Einstein–Gauss–Bonnet theory; the entropy is then given by, $$\begin{aligned} S = \frac{1}{4}\int dA \left( 1 + 2 \gamma \, ^{(D-2)}R \right) . \end{aligned}$$ This entropy can be obtained by setting \( \alpha = \beta = 0\) in the expression Eq. (43).
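For orientation, here is a simple evaluation (not in the original): for a five-dimensional spherically symmetric horizon of radius \(r_h\), the cross section is a round 3-sphere with \(^{(3)}R = 6/r_h^2\), so Eq. (44) gives
$$\begin{aligned} S = \frac{A_{\mathrm{H}}}{4}\left( 1 + \frac{12\gamma }{r_h^2}\right) , \end{aligned}$$
showing explicitly how the Gauss–Bonnet term corrects the area law, with the correction becoming important for small black holes.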
Then, the full evolution equation of the generalized expansion is, $$\begin{aligned} 4 \frac{d\Theta _k}{d\lambda }= & {} -\frac{\theta ^{(k)2}}{D-2}-\sigma ^{(k)ab}\sigma ^{(k)}_{ab}-6\gamma \frac{(D-4)\theta ^{(k)2}{\mathcal {R}}}{(D-2)^2}-2\gamma \sigma ^{(k)ab}\sigma ^{(k)}_{ab}{\mathcal {R}}\nonumber \\&-\,4\gamma \frac{(D-8)\theta ^{(k)}\sigma ^{(k)ab}{\mathcal {R}}_{ab}}{(D-2)} +8\gamma \sigma ^{(k)a}_c\sigma ^{(k)cb}{\mathcal {R}}_{ab}-4\gamma {\mathcal {R}}_{fabp}~\sigma ^{(k)ab}\sigma ^{(k)pf}\nonumber \\&+\,2\gamma \Bigg [2\bigg (D_c\beta ^c\bigg )\bigg (K^{(k)}_{ab}K^{(k)ab}\bigg )-4\bigg (D_c\beta ^b\bigg )\bigg (K^{(k)}_{ab}K^{(k)ac}\bigg )+2\beta ^c\beta _cK^{(k)}_{ab}K^{(k)ab}\nonumber \\&-\,4\beta _cK^{(k)}_{ab}\beta ^bK^{(k)ac}\Bigg ] +4\gamma \bigg [2\bigg (D^b\beta ^f\bigg )\bigg (K^{(k)}K^{(k)}_{bf}\bigg )-2\bigg (D_a\beta ^a\bigg )\bigg (K^{(k)}\bigg )^2\nonumber \\&+\, 2h^{ab}\beta ^cK^{(k)}_{ac}\beta _bK^{(k)}-h^{ab}\beta _a\beta _b(K^{(k)})^2\bigg ] +4\gamma R_{kk}\frac{(D-3)(D-4)\theta ^{(n)}\theta ^{(k)}}{(D-2)^2} \nonumber \\&-4\gamma h^{ac}h^{bd}R_{kckd}\frac{(D-4)\theta ^{(k)}\sigma ^{(n)}_{ab}}{D-2}-4\gamma h^{ac}h^{bd}R_{kckd}\frac{(D-4)\theta ^{(n)}\sigma ^{(k)}_{ab}}{D-2}\nonumber \\&+\,8\gamma h^{ac}h^{bd}R_{kckd}\sigma ^{(k)}_{af}\sigma ^{(n)f}_b-4\gamma R_{kk}\sigma ^{(k)}_{ab}\sigma ^{(n)ab}-8\pi G\, T_{kk} \nonumber \\&+\,\gamma \, \text {(total derivatives)}. \end{aligned}$$ To comprehend this formidable equation, let us first spell out the notation. \(K^{(i)}_{ab}\) is the extrinsic curvature of the horizon cross section with respect to the null normal \(i = k , l\); the superscript (n) in the equation likewise refers to the auxiliary null normal \(l^a\). We have also used the notations \(R_{k c k d} = R_{\mu c \rho d } k^\mu k^\rho \), \( \beta _a = - l^\mu \nabla _a k_\mu \), etc. Setting \( \gamma = 0\), we recover the familiar null Raychaudhuri equation. Otherwise, this equation is the thermodynamic generalization of the null Raychaudhuri equation. The expansion and shear of the horizon generators, i.e., \(\theta ^{(k)}\) and \(\sigma ^{(k)}_{ab}\), vanish on the background stationary horizon and therefore are at least of linear order in the perturbation. But the expansion of the auxiliary null vector, \(\theta ^{(l)}\), is nonzero even on the stationary horizon. The total derivative terms involve spatial derivatives of the extrinsic curvatures and are of second order in the perturbation. If we keep only the terms linear in the perturbation, we obtain: $$\begin{aligned} \frac{ d \Theta _k}{d \lambda } = - 2 \pi \, T_{ab}k^a k^b + \mathcal{O}(\epsilon ^2). \end{aligned}$$ This is the equation which gives us the linearized version of the physical process law. The full equation will give an exact expression for the change of horizon entropy. We would like to apply this equation to understand the full evolution of the horizon entropy. Due to the complicated structure of the terms, it is difficult to draw any conclusion in general. So, to make sense of this equation, we will now specialize to the case of spherically symmetric second-order perturbations about a static black hole background with a maximally symmetric horizon cross-section [38, 39]. Then, in an order-by-order calculation in \(\theta ^{(k)}\) and \(\sigma ^{(k)}_{ab}\), we obtain the following up to second order: $$\begin{aligned} \frac{ d \Theta _k}{d \lambda } = - 2 \pi \, T_{ab}k^a k^b - \zeta \, \theta _{k}^{2}, \end{aligned}$$ where the quantity \(\zeta \) is to be evaluated on the background horizon. There is no shear because we have assumed spherically symmetric perturbations only.
Now consider a situation where the stationary black hole is perturbed by some matter flux, and we examine the second law after the matter has already entered the black hole. In that case, the above evolution equation has no contribution from the matter stress-energy tensor, and the evolution is driven solely by the \(\theta _{k}^{2}\) term. In such a situation, if we demand that the entropy is increasing, we have to fix the sign of the coefficient of \(\theta _{k}^{2}\), the quantity \(\zeta \). We evaluate the coefficient in the stationary background and impose the condition that the overall sign in front of \(\theta _{k}^{2}\) is negative. This immediately gives us a bound on the parameters of the theory under consideration. To illustrate this, we now consider specific cases. First consider the case when the background is a spherically symmetric solution of Einstein–Gauss–Bonnet (EGB) gravity with metric, $$\begin{aligned} ds^2 = -f(r) \, dt^2 + \frac{dr^2}{f(r)} + r^2 \, d\Omega ^{2}_{D-2}. \end{aligned}$$ The expression for \(\zeta \) is now given by, $$\begin{aligned} \zeta =\frac{1}{D-2}+\frac{(D-4)\gamma }{(D-2)^2} \left[ 6\,^{(D-2)}R - \frac{2\,(D-3)(D-2) f'(r)}{ r} \right] . \end{aligned}$$ As discussed earlier, we will evaluate \(\zeta \) for different backgrounds and determine bounds on the coefficient \(\gamma \) from the constraint \(\zeta > 0\). For EGB gravity, we first consider the 5-dimensional spherically symmetric, asymptotically flat Boulware–Deser (BD) [25] black hole as the background, for which the horizon radius \(r_h\) is related to the mass M as \(r_{h}^{2}+2\gamma =M\), and the existence of an event horizon demands \(r_h^2 > 0\). Now, evaluating \(\zeta \) for the above background at the horizon \(r =r_{h}\) and imposing that \(\zeta > 0\), we obtain the condition \(M>2 |\gamma |\, \text {if}~ M>0\). To understand this better, note that we require \(M>2\gamma \) to avoid the naked singularity of the black hole solution for \(\gamma >0\). Thus, in this case, for a spherically symmetric black hole, \(\zeta \) will be positive and hence the second law will be automatically satisfied. The condition for the validity of the second law is the same as that for having a regular event horizon. Also, for \(\gamma > 0\), it is possible to make \(r_h\) as small as we like by tuning the mass M. But when \(\gamma \) is negative (a situation that appears to be disfavoured by string theory, see [25, 48] and references therein), \(r_h\) cannot be made arbitrarily small, and this would suggest that these black holes cannot be formed continuously from a zero temperature set up. Notice that we could have concluded the same without the second law if M is taken to be positive; however, our current argument does not need this assumption. Due to this pathology, it would appear that the negative Gauss–Bonnet coupling case would be ruled out in a theory with no cosmological constant. The case of the 5-dimensional AdS black hole solution of EGB gravity with cosmological constant \( \Lambda = - (D-1)(D-2)/2 l^2 \) as the background is more interesting. Now the horizon can have planar, spherical or hyperbolic cross sections. We will first consider a black brane solution with a planar horizon. Then we obtain \(\zeta = \left[ 1-2(D-1)\lambda _{GB}\right] /(D-2)\), where we have introduced a rescaled coupling in D dimensions as \(\lambda _{GB} l^2=(D-3)(D-4)\gamma \).
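As a rough cross-check of the expression for \(\zeta \) above (not in the original, and only to leading order in the coupling): for a planar horizon the intrinsic curvature vanishes, \(^{(D-2)}R = 0\), and using the leading-order (general relativity) AdS brane profile \(f(r) = (r^2/l^2)\left[ 1 - (r_h/r)^{D-1}\right] \), which gives \(f'(r_h) = (D-1)r_h/l^2\), one finds
$$\begin{aligned} \zeta = \frac{1}{D-2} - \frac{2(D-1)(D-3)(D-4)\gamma }{(D-2)\, l^2} = \frac{1-2(D-1)\lambda _{GB}}{D-2}, \end{aligned}$$
which reproduces the quoted result after using \(\lambda _{GB}\, l^2 = (D-3)(D-4)\gamma \).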
Again, demanding positivity of \(\zeta \), we get [39], $$\begin{aligned} \lambda _{GB}< \frac{1}{2(D-1)}\,. \end{aligned}$$ Remarkably, in \(D=5\) this coincides with the bound which has to be imposed to avoid an instability in the sound channel analysis of quasi-normal modes of a black hole in EGB theory, which is taken to be the holographic dual of a conformal gauge theory. It was shown in [49] that when \(\lambda _{GB} > 1/8\) the Schrödinger potential develops a well which can support unstable quasi-normal modes in the sound channel. It is quite interesting to see that the second law knows about this instability. Another interesting case corresponds to the hyperbolic horizon. In this case, the intrinsic Ricci scalar is negative, and if we also assume that \(\gamma > 0\), then there is an obvious bound on the higher curvature coupling beyond which the entropy itself becomes negative and thereby loses any thermodynamic interpretation. This bound in general D dimensions is found to be \(\lambda _{GB} < D(D-4) / 4 (D-2)^2\). If the analysis of the second law has any usefulness, it must provide a more stringent bound on the coupling \(\gamma \), and that is indeed the case. To analyze the case of hyperbolic horizons, we will only consider the so-called zero mass limit. In the context of holographic entanglement entropy, these topological black holes play an important role, as shown in [44, 50, 51]. One can relate the entanglement entropy across a sphere to the thermal entropy in the \(R \times H^{D-2}\) geometry by a conformal transformation. For holographic CFTs, the entanglement entropy across a spherical region at the boundary is then obtained by evaluating the Wald entropy of these topological black holes, since they are dual to the field theory placed on \(R \times H^{D-2}\). In our context, imposing \(\zeta > 0\), it turns out that the zero mass limit gives the most stringent bound on the coupling \(\lambda _{GB}\), given by [39], $$\begin{aligned} \lambda _{GB}< \frac{9}{100}. \end{aligned}$$ First, note that this bound on \(\lambda _{GB}\) is independent of the dimension. Also, comparing with the bound in Eq. (50), we can easily see that up to \(D=6\) the bound in Eq. (51) is the strongest, but from \(D=7\) onwards Eq. (50) is the strongest one. Next, in five dimensions, the bound in Eq. (51) quite curiously coincides with the tensor channel causality constraint [52, 53, 54]. For \(D>5\), the bound Eq. (51) from the second law is stronger than the causality constraints. In principle, it is possible to repeat this analysis for any higher curvature gravity theory to obtain similar bounds on the higher curvature couplings, provided we have an exact stationary black hole solution as the background [39]. These bounds will be necessary if we demand that the second law of thermodynamics holds for an observer outside the horizon. Any quantum theory of gravity which reproduces such higher curvature corrections and also aims to explain the microscopic origin of black hole entropy must satisfy these bounds. We can also constrain various interesting gravity theories in 4 dimensions by our method. In 4 dimensions, our method is the only one available to constrain these theories, where the causality-based analysis [55] is insufficient.
For example, for critical gravity theories in \(D=4\) [56], analyzing black holes in an AdS background we obtain the bound on the coupling \(\alpha _{c}\): \(-\frac{1}{2}\le \alpha _{c} \le \frac{1}{12}.\) Also, for New Massive Gravity in \(D=3\) [57, 58] we obtain the bound on the coupling \(\sigma \) as \(-3\le \sigma \le \frac{9}{25}\) (footnote 5). In conclusion, these results show that the validity of a local increase law for black hole entropy can constrain the parameters of the higher curvature terms. Any theory of gravity which does not obey these bounds will have a severe problem with the second law in the presence of a black hole. Interestingly, there are works which suggest that higher curvature gravity does not make sense as a stand-alone classical theory. Consider Einstein–Gauss–Bonnet gravity in dimensions greater than four. The theory has exact shock wave solutions which can lead to a negative Shapiro time delay. This can be used to create a time machine: a closed timelike curve without any violation of energy conditions [55]. As a result, such higher curvature theories have badly behaved causal properties for either sign of the higher curvature coupling. Hence, it is proposed that these theories can only make sense as effective theories, and any finite truncation of the gravitational action functional will lead to pathological problems. This result is criticized in [59], where gravitons propagating in smooth black hole space–times are considered. It is shown that for a small enough black hole, gravitons of appropriate polarisation and small impact parameter can indeed experience a negative time delay, but this cannot be used to build a time machine. This is because the required initial data surface is not everywhere spacelike, and therefore the initial value problem is not well-posed. Nevertheless, the result of [55] is quite significant and needs careful understanding. Similar conclusions can be obtained about the validity of the classical second law for black hole mergers in the Lovelock class of theories [60, 61]. In such theories, it is possible to construct scenarios involving the merger of two black holes in which the entropy instantaneously decreases. But it is also argued that the second law is not violated in the regime where Einstein–Gauss–Bonnet theory holds as an effective theory and black holes can be treated thermodynamically [62]. 8 Conclusions and open problems Black hole thermodynamics provides a powerful constraint on any proposal to understand the quantum gravitational origin of black hole entropy. The area law has motivated significant progress in theoretical physics, most importantly the holographic principle. Similarly, the pioneering work by Jacobson [63], in which he considered the concept of local Rindler horizons and showed that the Einstein field equations can be derived from thermodynamic considerations, hints at a deep thermodynamic origin of the full dynamics of gravity. Similar results have been proven in a more general context by Padmanabhan and collaborators. They have shown that the field equations of any higher curvature gravity theory admit an intriguing thermodynamic interpretation [64, 65]. Interestingly, the result is also valid beyond black hole horizons, for any null surface in space–time [66]. These fascinating results lead to an alternative approach, the "emergent gravity paradigm", to understanding the dynamics of gravity [67].
There is also a local gravitational first law of thermodynamics formulated using local stretched light cones in the neighbourhood of any event [68]. This result indicates that certain geometric surfaces, stretched future light cones, which exist near every point in every space–time, also behave as if they are endowed with thermodynamic properties. All these results seem to suggest that the thermodynamic properties of space–time extend beyond the usual black hole event horizon. The derivation of a full second law beyond general relativity remains an important open problem. Ideally, we would like to follow a nonperturbative approach and find a suitable generalization of the area theorem with some restriction on the higher curvature parameters. This requires understanding the thermodynamic Raychaudhuri equation, like Eq. (45), for an arbitrary theory of gravity. This is a formidable but straightforward problem. We would also like to understand the relationship between holographic entanglement entropy and black hole entropy. The area theorem may have some interesting holographic interpretations. Holographic entanglement entropy has been shown to obey various nontrivial inequalities. One of these is the strong subadditivity condition (SSA), which is a fundamental property of entanglement entropy in any quantum field theory and a central theorem of quantum information theory. It is known that the violation of SSA for the boundary theory is connected with the violation of the null energy condition in the bulk space–time [69, 70, 71]. Since the null energy condition is a requirement for the validity of the Hawking area theorem, it is expected that there exists a strong connection between the area theorem for black holes and SSA for holographic entanglement entropy. This relationship may provide us with a better understanding of the scope and applicability of the holographic principle. We end this review with a quotation by Arthur Eddington: "The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations - then so much the worse for Maxwell's equations. If it is found to be contradicted by observation - well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation." The same can be said for any theory of gravity which has a black hole solution.

Footnotes

1. As a side remark, let me point out that the Chandrasekhar mass formula also contains all the fundamental constants, but we do not associate that result with quantum gravity.
2. A proof that the event horizon in an asymptotically flat space–time is a null surface is given in [4] and is also discussed in detail in [10].
3. This requires that the generalized expansion \(\Theta _k\) goes to zero in the future faster than the time scale \(1/\lambda \) [29].
4. We thank Shiraz Minwalla for suggesting this.
5. The lower bound in both these cases comes from demanding the positivity of the entropy.

Acknowledgements

This review is based on work done in collaboration with Aron Wall, Srijit Bhattacharjee, Arpan Bhattacharyya, Aninda Sinha, Fairoos C, Akash K Mishra, Avirup Ghosh, Sumanta Chakraborty, and Maulik Parikh. The author thanks Amitabh Virmani for his encouragement to write this review.
Special thanks to Ted Jacobson, Aron Wall, Aninda Sinha and Maulik Parikh for sharing their deep insights about black hole physics. SS also acknowledges many constructive comments from the referees on the previous draft of this review. The research of SS is supported by the Department of Science and Technology, Government of India under the Fast Track Scheme for Young Scientists (YSS/2015/001346).

References

Kruskal, M.D.: Phys. Rev. 119, 1743 (1960). https://doi.org/10.1103/PhysRev.119.1743
Szekeres, G.: Publ. Math. Debr. 7, 285 (1960)
Kerr, R.P.: Phys. Rev. Lett. 11, 237 (1963). https://doi.org/10.1103/PhysRevLett.11.237
Hawking, S.W.: Commun. Math. Phys. 25, 152 (1972). https://doi.org/10.1007/BF01877517
Hawking, S.W.: Phys. Rev. Lett. 26, 1344 (1971). https://doi.org/10.1103/PhysRevLett.26.1344
Bardeen, J.M., Carter, B., Hawking, S.W.: Commun. Math. Phys. 31, 161 (1973). https://doi.org/10.1007/BF01645742
Bekenstein, J.D.: Lett. Nuovo Cim. 4, 737 (1972). https://doi.org/10.1007/BF02757029
Bekenstein, J.D.: Phys. Rev. D 7, 2333 (1973). https://doi.org/10.1103/PhysRevD.7.2333
Hawking, S.W.: Commun. Math. Phys. 43, 199 (1975). Erratum: Commun. Math. Phys. 46, 206 (1976). https://doi.org/10.1007/BF02345020
Townsend, P.K.: arXiv:gr-qc/9707012
Hollands, S., Ishibashi, A., Wald, R.M.: Commun. Math. Phys. 271, 699 (2007). https://doi.org/10.1007/s00220-007-0216-4. arXiv:gr-qc/0605106
Moncrief, V., Isenberg, J.: Class. Quant. Grav. 25, 195015 (2008). https://doi.org/10.1088/0264-9381/25/19/195015. arXiv:0805.1451 [gr-qc]
Kay, B.S., Wald, R.M.: Phys. Rep. 207, 49 (1991). https://doi.org/10.1016/0370-1573(91)90015-E
Foster, B.Z.: Phys. Rev. D 73, 024005 (2006). https://doi.org/10.1103/PhysRevD.73.024005. arXiv:gr-qc/0509121
Smarr, L.: Phys. Rev. Lett. 30, 71 (1973). Erratum: Phys. Rev. Lett. 30, 521 (1973). https://doi.org/10.1103/PhysRevLett.30.71
Kastor, D.: Class. Quant. Grav. 25, 175007 (2008). https://doi.org/10.1088/0264-9381/25/17/175007. arXiv:0804.1832 [hep-th]
Liberati, S., Pacilio, C.: Phys. Rev. D 93, 084044 (2016). https://doi.org/10.1103/PhysRevD.93.084044. arXiv:1511.05446 [gr-qc]
Visser, M.: Phys. Rev. D 48, 583 (1993). https://doi.org/10.1103/PhysRevD.48.583. arXiv:hep-th/9303029
Jacobson, T., Myers, R.C.: Phys. Rev. Lett. 70, 3684 (1993). https://doi.org/10.1103/PhysRevLett.70.3684. arXiv:hep-th/9305016
Visser, M.: Phys. Rev. D 48, 5697 (1993). https://doi.org/10.1103/PhysRevD.48.5697. arXiv:hep-th/9307194
Wald, R.M.: Phys. Rev. D 48, R3427 (1993). https://doi.org/10.1103/PhysRevD.48.R3427. arXiv:gr-qc/9307038
Iyer, V., Wald, R.M.: Phys. Rev. D 50, 846 (1994). https://doi.org/10.1103/PhysRevD.50.846. arXiv:gr-qc/9403028
Deser, S., van Nieuwenhuizen, P.: Phys. Rev. D 10, 401 (1974). https://doi.org/10.1103/PhysRevD.10.401
Zwiebach, B.: Phys. Lett. 156B, 315 (1985). https://doi.org/10.1016/0370-2693(85)91616-8
Boulware, D.G., Deser, S.: Phys. Rev. Lett. 55, 2656 (1985). https://doi.org/10.1103/PhysRevLett.55.2656
Jacobson, T., Kang, G., Myers, R.C.: Phys. Rev. D 49, 6587 (1994). https://doi.org/10.1103/PhysRevD.49.6587. arXiv:gr-qc/9312023
Hawking, S.W., Hartle, J.B.: Commun. Math. Phys. 27, 283 (1972). https://doi.org/10.1007/BF01645515
Wald, R.M.: Quantum Field Theory in Curved Space-Time and Black Hole Thermodynamics. Chicago Lectures in Physics, 1st edn. University of Chicago Press, Chicago (1994)
Jacobson, T., Parentani, R.: Found. Phys. 33, 323 (2003). https://doi.org/10.1023/A:1023785123428. arXiv:gr-qc/0302099
Amsel, A.J., Marolf, D., Virmani, A.: Phys. Rev. D 77, 024011 (2008). https://doi.org/10.1103/PhysRevD.77.024011. arXiv:0708.2738 [gr-qc]
Bhattacharjee, S., Sarkar, S.: Phys. Rev. D 91, 024024 (2015). https://doi.org/10.1103/PhysRevD.91.024024. arXiv:1412.1287 [gr-qc]
Mishra, A., Chakraborty, S., Ghosh, A., Sarkar, S.: JHEP 1809, 034 (2018). https://doi.org/10.1007/JHEP09(2018)034. arXiv:1709.08925 [gr-qc]
Chatterjee, A., Sarkar, S.: Phys. Rev. Lett. 108, 091301 (2012). https://doi.org/10.1103/PhysRevLett.108.091301. arXiv:1111.3021 [gr-qc]
Kolekar, S., Padmanabhan, T., Sarkar, S.: Phys. Rev. D 86, 021501 (2012). https://doi.org/10.1103/PhysRevD.86.021501. arXiv:1201.2947 [gr-qc]
Sarkar, S., Wall, A.C.: Phys. Rev. D 88, 044017 (2013). https://doi.org/10.1103/PhysRevD.88.044017. arXiv:1306.1623 [gr-qc]
Vaidya, P.C.: Phys. Rev. 83, 10 (1951). https://doi.org/10.1103/PhysRev.83.10
Bhattacharjee, S., Sarkar, S., Wall, A.C.: Phys. Rev. D 92, 064006 (2015). https://doi.org/10.1103/PhysRevD.92.064006. arXiv:1504.04706 [gr-qc]
Bhattacharjee, S., Bhattacharyya, A., Sarkar, S., Sinha, A.: Phys. Rev. D 93, 104045 (2016). https://doi.org/10.1103/PhysRevD.93.104045. arXiv:1508.01658 [hep-th]
Fairoos, C., Ghosh, A., Sarkar, S.: Phys. Rev. D 98, 024036 (2018). https://doi.org/10.1103/PhysRevD.98.024036. arXiv:1802.00177 [gr-qc]
Bhattacharyya, S., Hubeny, V.E., Loganayagam, R., Mandal, G., Minwalla, S., Morita, T., Rangamani, M., Reall, H.S.: JHEP 0806, 055 (2008). https://doi.org/10.1088/1126-6708/2008/06/055. arXiv:0803.2526 [hep-th]
Wald, R.M.: General Relativity. https://doi.org/10.7208/chicago/9780226870373.001.0001
Ryu, S., Takayanagi, T.: Phys. Rev. Lett. 96, 181602 (2006). https://doi.org/10.1103/PhysRevLett.96.181602. arXiv:hep-th/0603001
Casini, H., Huerta, M., Myers, R.C.: JHEP 1105, 036 (2011). https://doi.org/10.1007/JHEP05(2011)036. arXiv:1102.0440 [hep-th]
Bhattacharyya, A., Sharma, M.: JHEP 1410, 130 (2014). https://doi.org/10.1007/JHEP10(2014)130. arXiv:1405.3511 [hep-th]
Dong, X.: JHEP 1401, 044 (2014). https://doi.org/10.1007/JHEP01(2014)044. arXiv:1310.5713 [hep-th]
Wall, A.C.: Int. J. Mod. Phys. D 24, 1544014 (2015). https://doi.org/10.1142/S0218271815440149. arXiv:1504.08040 [gr-qc]
Buchel, A., Myers, R.C., Sinha, A.: Beyond eta/s = 1/4 pi. JHEP 0903, 084 (2009). arXiv:0812.2521 [hep-th]
Buchel, A., Myers, R.C.: Causality of holographic hydrodynamics. JHEP 0908, 016 (2009). arXiv:0906.2922 [hep-th]
Myers, R.C., Sinha, A.: Phys. Rev. D 82, 046006 (2010). arXiv:1006.1263 [hep-th]
Myers, R.C., Sinha, A.: JHEP 1101, 125 (2011). arXiv:1011.5819 [hep-th]
Hofman, D.M., Maldacena, J.: Conformal collider physics: energy and charge correlations. JHEP 0805, 012 (2008). arXiv:0803.1467 [hep-th]
Brigante, M., Liu, H., Myers, R.C., Shenker, S., Yaida, S.: The viscosity bound and causality violation. Phys. Rev. Lett. 100, 191601 (2008). arXiv:0802.3318 [hep-th]
Buchel, A., Escobedo, J., Myers, R.C., Paulos, M.F., Sinha, A., Smolkin, M.: Holographic GB gravity in arbitrary dimensions. JHEP 1003, 111 (2010). arXiv:0911.4257 [hep-th]
Camanho, X.O., Edelstein, J.D., Maldacena, J., Zhiboedov, A.: Causality constraints on corrections to the graviton three-point coupling. arXiv:1407.5597 [hep-th]
Lu, H., Pope, C.N.: Critical gravity in four dimensions. Phys. Rev. Lett. 106, 181302 (2011). arXiv:1101.1971 [hep-th]
Bergshoeff, E.A., Hohm, O., Townsend, P.K.: Massive gravity in three dimensions. Phys. Rev. Lett. 102, 201301 (2009). arXiv:0901.1766 [hep-th]
Grumiller, D., Hohm, O.: AdS(3)/LCFT(2): correlators in new massive gravity. Phys. Lett. B 686, 264 (2010). arXiv:0911.4274 [hep-th]
Papallo, G., Reall, H.S.: JHEP 1511, 109 (2015). https://doi.org/10.1007/JHEP11(2015)109. arXiv:1508.05303 [gr-qc]
Liko, T.: Phys. Rev. D 77, 064004 (2008). https://doi.org/10.1103/PhysRevD.77.064004. arXiv:0705.1518 [gr-qc]
Chatterjee, S., Parikh, M.: Class. Quant. Grav. 31, 155007 (2014). https://doi.org/10.1088/0264-9381/31/15/155007. arXiv:1312.1323 [hep-th]
Jacobson, T.: Phys. Rev. Lett. 75, 1260 (1995). https://doi.org/10.1103/PhysRevLett.75.1260. arXiv:gr-qc/9504004
Padmanabhan, T.: AIP Conf. Proc. 1241, 93 (2010). https://doi.org/10.1063/1.3462738. arXiv:0911.1403 [gr-qc]
Padmanabhan, T.: Rep. Prog. Phys. 73, 046901 (2010). https://doi.org/10.1088/0034-4885/73/4/046901. arXiv:0911.5004 [gr-qc]
Chakraborty, S., Parattu, K., Padmanabhan, T.: JHEP 1510, 097 (2015). https://doi.org/10.1007/JHEP10(2015)097. arXiv:1505.05297 [gr-qc]
Padmanabhan, T.: Mod. Phys. Lett. A 30, 1540007 (2015). https://doi.org/10.1142/S0217732315400076. arXiv:1410.6285 [gr-qc]
Parikh, M., Sarkar, S., Svesko, A.: arXiv:1801.07306 [gr-qc]
Allais, A., Tonni, E.: JHEP 1201, 102 (2012). https://doi.org/10.1007/JHEP01(2012)102. arXiv:1110.1607 [hep-th]
Callan, R., He, J.Y., Headrick, M.: JHEP 1206, 081 (2012). https://doi.org/10.1007/JHEP06(2012)081. arXiv:1204.2309 [hep-th]
Caceres, E., Kundu, A., Pedraza, J.F., Tangarife, W.: JHEP 1401, 084 (2014). https://doi.org/10.1007/JHEP01(2014)084. arXiv:1304.3398 [hep-th]

© Springer Science+Business Media, LLC, part of Springer Nature 2019. Author affiliation: Indian Institute of Technology Gandhinagar, India.
Sarkar, S.: Gen. Relativ. Gravit. 51, 63 (2019). https://doi.org/10.1007/s10714-019-2545-y. Accepted 23 April 2019.
CommonCrawl
What are NTC Thermistors?
NTC stands for "Negative Temperature Coefficient". NTC thermistors are resistors with a negative temperature coefficient, which means that their resistance decreases as temperature increases. They are primarily used as resistive temperature sensors and as current-limiting devices. Their temperature sensitivity coefficient is about five times greater than that of silicon temperature sensors (silistors) and about ten times greater than that of resistance temperature detectors (RTDs). NTC sensors are typically used over a range from −55 to +200 °C.
The non-linear relationship between resistance and temperature exhibited by NTC resistors posed a great challenge when analog circuits were used to measure temperature accurately. The rapid development of digital circuits solved that problem by enabling precise values to be computed, either by interpolating lookup tables or by solving equations that approximate a typical NTC curve.
NTC Thermistor Definition
An NTC thermistor is a thermally sensitive resistor whose resistance exhibits a large, precise, and predictable decrease as the core temperature of the resistor increases over the operating temperature range.
Characteristics of NTC Thermistors
Unlike RTDs (resistance temperature detectors), which are made from metals, NTC thermistors are generally made of ceramics or polymers. The different materials used in their manufacture result in different temperature responses, as well as other differences in performance.
Temperature response
Most NTC thermistors are suitable for use within a temperature range between −55 and 200 °C, where they give their most precise readings. There are special families of NTC thermistors that can be used at temperatures approaching absolute zero (−273.15 °C), as well as others designed specifically for use above 150 °C. The temperature sensitivity of an NTC sensor is expressed as a "percentage change per degree C" or "percentage change per degree K". Depending on the materials used and the specifics of the production process, typical temperature sensitivities range from −3 % to −6 % per °C.
Characteristic NTC curve
As the characteristic curve (not reproduced here) shows, NTC thermistors have a much steeper resistance-temperature slope than platinum-alloy RTDs, which translates into better temperature sensitivity. Even so, RTDs remain the most accurate sensors, with an accuracy of about ±0.5 % of the measured temperature, and they are useful over the temperature range between −200 and 800 °C, a much wider range than that of NTC temperature sensors.
Comparison to other temperature sensors
Compared to RTDs, NTC thermistors are smaller, respond faster, and are more resistant to shock and vibration, all at a lower cost. They are slightly less precise than RTDs. Their precision is comparable to that of thermocouples; however, thermocouples can withstand very high temperatures (on the order of 600 °C) and are used in such applications instead of NTC thermistors. Even so, NTC thermistors provide greater sensitivity, stability, and accuracy than thermocouples at lower temperatures, and they require less additional circuitry, which lowers the total cost. The cost is reduced further by the lack of need for the signal-conditioning circuits (amplifiers, level translators, etc.) that are often required with RTDs and always required with thermocouples.
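As mentioned in the introduction, a digital system usually converts a measured NTC resistance to temperature either by solving an approximation equation or by interpolating a calibration lookup table. The following is a minimal sketch of the lookup-table approach in Python; the calibration points and the function name are hypothetical, chosen only for illustration.

```python
# Piecewise-linear interpolation of an NTC calibration lookup table.
# The (resistance, temperature) pairs below are hypothetical calibration
# points for a nominal 10 kOhm NTC; replace them with measured values.
CAL_TABLE = [
    (32650.0, 0.0),    # ohms at  0 °C
    (19900.0, 10.0),   # ohms at 10 °C
    (12490.0, 20.0),   # ohms at 20 °C
    (10000.0, 25.0),   # ohms at 25 °C
    (8057.0,  30.0),   # ohms at 30 °C
    (5327.0,  40.0),   # ohms at 40 °C
    (3603.0,  50.0),   # ohms at 50 °C
]

def temperature_from_resistance(r_ohm):
    """Return temperature in °C by linear interpolation of the table.

    Resistance decreases with temperature, so the table is ordered from
    high resistance (cold) to low resistance (hot).
    """
    if r_ohm >= CAL_TABLE[0][0]:
        return CAL_TABLE[0][1]      # clamp below the calibrated range
    if r_ohm <= CAL_TABLE[-1][0]:
        return CAL_TABLE[-1][1]     # clamp above the calibrated range
    for (r_hi, t_lo), (r_lo, t_hi) in zip(CAL_TABLE, CAL_TABLE[1:]):
        if r_lo <= r_ohm <= r_hi:
            frac = (r_hi - r_ohm) / (r_hi - r_lo)
            return t_lo + frac * (t_hi - t_lo)

print(temperature_from_resistance(11000.0))  # roughly 23 °C with this table
```

The density of the table and the interpolation order would be chosen to meet the accuracy requirement of the application, as discussed in the curve-selection section below.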
Self-heating effect
The self-heating effect is a phenomenon that takes place whenever a current flows through the NTC thermistor. Since the thermistor is basically a resistor, it dissipates power as heat when current flows through it. This heat is generated in the thermistor core and affects the precision of the measurements. The extent to which this happens depends on the amount of current flowing, the environment (whether it is a liquid or a gas, whether there is any flow over the NTC sensor, and so on), the temperature coefficient of the thermistor, the thermistor's total area, and so on. The fact that the resistance of the NTC sensor, and therefore the current through it, depends on the environment is often exploited in liquid-presence detectors such as those found in storage tanks.
The heat capacity represents the amount of heat required to increase the temperature of the thermistor by 1 °C and is usually expressed in mJ/°C. Knowing the precise heat capacity is of great importance when using an NTC thermistor as an inrush-current limiting device, as it defines the response speed of the sensor.
Curve Selection and Calculation
The thermistor selection process must take into account the thermistor's dissipation constant, thermal time constant, resistance value, resistance-temperature curve, and tolerances, to mention the most important factors. Since the relationship between resistance and temperature (the R-T curve) is highly nonlinear, approximations have to be used in practical system designs.
First-order approximation
The simplest approximation to use is the first-order approximation, which states that:
$$\Delta R = k \cdot \Delta T$$
where k is the negative temperature coefficient, ΔT is the temperature difference, and ΔR is the resistance change resulting from that temperature change. This first-order approximation is only valid over a very narrow temperature range, and can only be used where k is nearly constant throughout the whole range.
Beta formula
Another equation gives satisfactory results, being accurate to ±1 °C over the range of 0 to +100 °C. It depends on a single material constant β, which can be obtained by measurement. The equation can be written as:
$$R(T) = R(T_0) \cdot e^{\beta \left(\frac{1}{T} - \frac{1}{T_0}\right)}$$
where R(T) is the resistance at temperature T in kelvin and R(T_0) is the resistance at the reference temperature T_0 (also in kelvin). The Beta formula requires a two-point calibration, and it is typically no more accurate than ±5 °C over the complete useful range of the NTC thermistor.
Steinhart-Hart equation
The best approximation known to date is the Steinhart-Hart formula, published in 1968:
$$\frac{1}{T} = A + B \cdot \ln(R) + C \cdot \left(\ln(R)\right)^3$$
where R is the resistance at temperature T in kelvin, ln(R) is its natural logarithm, and A, B, and C are coefficients derived from experimental measurements. These coefficients are usually published by thermistor vendors as part of the datasheet. The Steinhart-Hart formula is typically accurate to around ±0.15 °C over the range of −50 to +150 °C, which is sufficient for most applications. If superior accuracy is required, the temperature range must be narrowed, and accuracy better than ±0.01 °C over the range of 0 to +100 °C is achievable.
Choosing the right approximation
The choice of formula used to derive temperature from a resistance measurement needs to be based on the available computing power as well as the actual tolerance requirements.
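To make that trade-off concrete, the sketch below converts a resistance reading to temperature with both the Beta formula and the Steinhart-Hart equation. It is a minimal illustration only: the nominal resistance, the β value, and the A, B, C coefficients are hypothetical placeholders standing in for values taken from a real datasheet or from calibration.

```python
import math

# Hypothetical parameters for a nominal 10 kOhm (at 25 °C) NTC thermistor.
# Real values must come from the vendor datasheet or from calibration.
R0   = 10_000.0          # resistance at the reference temperature [ohm]
T0   = 25.0 + 273.15     # reference temperature [K]
BETA = 3950.0            # material constant beta [K]

# Hypothetical Steinhart-Hart coefficients for the same part.
A = 1.009249522e-3
B = 2.378405444e-4
C = 2.019202697e-7

def temp_beta(r_ohm):
    """Temperature in °C from the Beta formula R(T) = R0 * exp(beta*(1/T - 1/T0))."""
    inv_t = 1.0 / T0 + math.log(r_ohm / R0) / BETA
    return 1.0 / inv_t - 273.15

def temp_steinhart_hart(r_ohm):
    """Temperature in °C from 1/T = A + B*ln(R) + C*(ln(R))**3."""
    ln_r = math.log(r_ohm)
    inv_t = A + B * ln_r + C * ln_r**3
    return 1.0 / inv_t - 273.15

r_measured = 8_000.0  # example resistance reading [ohm]
print(f"Beta formula:   {temp_beta(r_measured):.2f} °C")
print(f"Steinhart-Hart: {temp_steinhart_hart(r_measured):.2f} °C")
```

On a small microcontroller with limited floating-point support, the Beta formula (or a lookup table like the earlier sketch) is often sufficient; when the tolerance budget is tight, the Steinhart-Hart form is worth the extra computation.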
In some applications a first-order approximation is more than enough, while in others not even the Steinhart-Hart equation fulfills the requirements, and the thermistor has to be calibrated point by point, taking a large number of measurements and building a lookup table.
Construction and Properties of NTC Thermistors
Materials typically involved in the fabrication of NTC resistors are platinum, nickel, cobalt, iron, and oxides of silicon, used as pure elements or as ceramics and polymers. NTC thermistors can be classified into three groups, depending on the production process used.
Bead thermistors
These NTC thermistors are made from platinum-alloy lead wires sintered directly into the ceramic body. They generally offer faster response times, better stability, and operation at higher temperatures than disk and chip NTC sensors; however, they are more fragile. It is common to seal them in glass to protect them from mechanical damage during assembly and to improve their measurement stability. Typical sizes range from 0.075 to 5 mm in diameter.
Disk and chip thermistors
These NTC thermistors have metallized surface contacts. They are larger and, as a result, have slower reaction times than bead-type NTC resistors. However, because of their size, they have a higher dissipation constant (the power required to raise their temperature by 1 °C). Since the power dissipated by the thermistor is proportional to the square of the current, they can handle higher currents much better than bead-type thermistors. Disk-type thermistors are made by pressing a blend of oxide powders into a round die and then sintering at high temperatures. Chips are usually fabricated by a tape-casting process in which a slurry of material is spread out as a thick film, dried, and cut into shape. Typical sizes range from 0.25 to 25 mm in diameter.
Glass-encapsulated NTC thermistors
These are NTC temperature sensors sealed in an airtight glass bubble. They are designed for use at temperatures above 150 °C, or for printed-circuit-board mounting where ruggedness is a must. Encapsulating a thermistor in glass improves its stability and protects it from the environment. They are made by hermetically sealing bead-type NTC resistors into a glass container. Typical sizes range from 0.4 to 10 mm in diameter.
Applications
NTC thermistors are used in a broad spectrum of applications. They are used to measure, control, and compensate for temperature. They can also be used to detect the absence or presence of a liquid, as current-limiting devices in power-supply circuits, for temperature monitoring in automotive applications, and much more. NTC sensors can be divided into three groups, depending on the electrical characteristic exploited in an application.
Resistance-temperature characteristic
Applications based on the resistance-temperature characteristic include temperature measurement, control, and compensation. They also include situations in which the temperature of the NTC sensor is related to some other physical phenomenon. This group of applications requires that the thermistor operate in a zero-power condition, meaning that the current through it is kept as low as possible to avoid heating the probe.
Current-time characteristic
Applications based on the current-time characteristic include time delay, inrush-current limiting, surge suppression, and many more. These characteristics are related to the heat capacity and dissipation constant of the NTC thermistor used. The circuit usually relies on the NTC thermistor heating up due to the current passing through it; at some point this triggers a change in the circuit, depending on the application, as illustrated in the sketch below.
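To give a feel for the numbers involved, here is a minimal sketch of that heating behaviour. It treats the thermistor as a lumped thermal mass, using only the heat capacity and dissipation constant described above; the component values are hypothetical, and the constant-power assumption is a deliberate simplification (in a real circuit the dissipated power changes as the resistance drops).

```python
# Lumped-model estimate of NTC self-heating: the steady-state temperature
# rise is P / delta and the thermal time constant is C_th / delta, where
# delta is the dissipation constant and C_th the heat capacity.
# All component values below are hypothetical, for illustration only.

P_DISS    = 0.050       # dissipated electrical power [W] (assumed constant)
DELTA     = 0.0075      # dissipation constant [W/°C] (7.5 mW/°C in still air)
C_TH      = 0.015       # heat capacity [J/°C] (15 mJ/°C)
T_AMBIENT = 25.0        # ambient temperature [°C]

steady_rise = P_DISS / DELTA   # °C above ambient once equilibrium is reached
tau = C_TH / DELTA             # thermal time constant [s]

print(f"Steady-state self-heating: {steady_rise:.1f} °C above ambient")
print(f"Thermal time constant:     {tau:.1f} s")

# Simple time stepping of dT/dt = (P - delta*(T - T_ambient)) / C_th
t, dt, T = 0.0, 0.05, T_AMBIENT
while t < 5 * tau:
    T += dt * (P_DISS - DELTA * (T - T_AMBIENT)) / C_TH
    t += dt
print(f"Temperature after {t:.1f} s:  {T:.1f} °C")
```

In an actual inrush-limiting circuit the power term would itself depend on the falling resistance, so this estimate only indicates the order of magnitude of the warm-up time.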
Voltage-current characteristic
Applications based on the voltage-current characteristic of a thermistor generally involve changes in the environmental conditions, or circuit variations, that shift the operating point along a given curve in the circuit. Depending on the application, this can be used for current limiting, temperature compensation, or temperature measurement.
NTC Thermistor Symbol
The following symbol is used for a negative temperature coefficient thermistor, according to the IEC standard.
[Symbol: NTC thermistor (IEC standard)]